Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2022/01/13 03:26:15 UTC

[GitHub] [hadoop] Jackson-Wang-7 opened a new pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundanc…

Jackson-Wang-7 opened a new pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880


   …y striped blocks.
   
   <!--
     Thanks for sending a pull request!
       1. If this is your first time, please read our contributor guidelines: https://cwiki.apache.org/confluence/display/HADOOP/How+To+Contribute
       2. Make sure your PR title starts with JIRA issue id, e.g., 'HADOOP-17799. Your PR title ...'.
   -->
   
   ### Description of PR
    If two or more blocks exist in the same rack, a unique data block may be added to the exactlyOne processing list when a redundant striped block is chosen for deletion.
    ```java
    storages.remove(cur);
    if (storages.isEmpty()) {
      rackMap.remove(rack);
    }
    if (moreThanOne.remove(cur)) {
      if (storages.size() == 1) {
        // These three lines move the remaining storage into exactlyOne:
        final DatanodeStorageInfo remaining = storages.get(0);
        moreThanOne.remove(remaining);
        exactlyOne.add(remaining);
      }
    } else {
      exactlyOne.remove(cur);
    }
    ```
   
    In this case, the moreThanOne list may not contain the remaining block. The remaining block should not be deleted, yet it is added to the exactlyOne list and is subsequently deleted.
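    
    For illustration, here is a minimal sketch of the kind of guard that avoids this (a sketch only, not necessarily the merged diff): reclassify the remaining storage only when moreThanOne actually contained it, so a storage holding a unique internal block of the striped group never becomes a deletion candidate.
    
    ```java
    if (moreThanOne.remove(cur)) {
      if (storages.size() == 1) {
        final DatanodeStorageInfo remaining = storages.get(0);
        // Guard (sketch): only move `remaining` into the deletion-candidate
        // list if it really was a moreThanOne entry; otherwise it holds a
        // unique block and must stay out of exactlyOne.
        if (moreThanOne.remove(remaining)) {
          exactlyOne.add(remaining);
        }
      }
    } else {
      exactlyOne.remove(cur);
    }
    ```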
   
   ### How was this patch tested?
    The test case is as follows (EC 6+3):
    blk_-xxx009 in rack /d1/r1
    blk_-xxx008 in rack /d1/r1
    blk_-xxx008 in rack /d1/r2
    blk_-xxx008 in rack /d1/r3
    blk_-xxx007 in rack /d1/r4
    blk_-xxx006 in rack /d2/r1
    blk_-xxx005 in rack /d2/r2
    blk_-xxx004 in rack /d2/r3
    blk_-xxx003 in rack /d2/r4
    blk_-xxx002 in rack /d2/r5
    blk_-xxx001 in rack /d2/r6
    After the FBR (full block report) is triggered and the redundant data blocks are added to the invalidate list, blk_-xxx008 in rack /d1/r1 and blk_-xxx008 in rack /d1/r2 need to be deleted, while blk_-xxx009 must remain HEALTHY.
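    
    To make the expected bookkeeping concrete, below is a self-contained toy model of the pre-fix behavior (plain strings stand in for DatanodeStorageInfo; the class and identifiers are illustrative, not the HDFS API). Only rack /d1/r1 holds two storages, so the blk_-xxx008 copy there is the sole moreThanOne candidate, and deleting it leaves the unique blk_-xxx009 as the "remaining" storage:
    
    ```java
    import java.util.ArrayList;
    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    
    public class StripedInvalidationSketch {
      public static void main(String[] args) {
        // Rack /d1/r1 holds the unique blk_009 plus one redundant blk_008 copy;
        // the other blk_008 copies sit alone in their own racks.
        Map<String, List<String>> rackMap = new HashMap<>();
        rackMap.put("/d1/r1", new ArrayList<>(List.of("blk_009@r1", "blk_008@r1")));
        rackMap.put("/d1/r2", new ArrayList<>(List.of("blk_008@r2")));
        rackMap.put("/d1/r3", new ArrayList<>(List.of("blk_008@r3")));
    
        // Deletion candidates are only the redundant blk_008 copies.
        List<String> moreThanOne = new ArrayList<>(List.of("blk_008@r1"));
        List<String> exactlyOne = new ArrayList<>(List.of("blk_008@r2", "blk_008@r3"));
    
        // Choose blk_008@r1 for deletion and adjust the sets (unguarded pre-fix logic).
        String cur = "blk_008@r1";
        List<String> storages = rackMap.get("/d1/r1");
        storages.remove(cur);
        if (moreThanOne.remove(cur) && storages.size() == 1) {
          String remaining = storages.get(0);  // blk_009@r1, the unique block
          moreThanOne.remove(remaining);       // no-op: it was never a candidate
          exactlyOne.add(remaining);           // BUG: the unique block becomes deletable
        }
        System.out.println("exactlyOne = " + exactlyOne);  // includes blk_009@r1
      }
    }
    ```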
   
   ### For code changes:
   
    - [ ] Does the title of this PR start with the corresponding JIRA issue id (e.g. 'HADOOP-17799. Your PR title ...')?
   - [ ] Object storage: have the integration tests been executed and the endpoint declared according to the connector-specific documentation?
   - [ ] If adding new dependencies to the code, are these dependencies licensed in a way that is compatible for inclusion under [ASF 2.0](http://www.apache.org/legal/resolved.html#category-a)?
   - [ ] If applicable, have you updated the `LICENSE`, `LICENSE-binary`, `NOTICE-binary` files?
   
   




[GitHub] [hadoop] tasanuma commented on pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
tasanuma commented on pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#issuecomment-1013125929


   Thanks for fixing this issue, @Jackson-Wang-7. Thanks for reviewing it, @tomscut.




[GitHub] [hadoop] Jackson-Wang-7 commented on a change in pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
Jackson-Wang-7 commented on a change in pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#discussion_r784471042



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test for the chooseReplicaToDelete are processed based on
+   * EC and STRIPED Policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<String, List<DatanodeStorageInfo>>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    striptedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // storages[0] and storages[1] are in first set as their rack has two
+    // replica nodes, while storages[2] and dataNodes[5] are in second set.
+    assertEquals(1, first.size());
+    assertEquals(2, second.size());
+    List<StorageType> excessTypes = new ArrayList<>();
+    {
+      // test returning null
+      excessTypes.add(StorageType.SSD);
+      assertNull(((BlockPlacementPolicyDefault) striptedPolicy)
+          .chooseReplicaToDelete(first, second, excessTypes, rackMap));
+    }
+    excessTypes.add(StorageType.DEFAULT);
+    DatanodeStorageInfo chosen = ((BlockPlacementPolicyDefault) striptedPolicy)
+        .chooseReplicaToDelete(first, second, excessTypes, rackMap);
+    // Within all storages, storages[5] with least free space
+    assertEquals(chosen, storages[0]);
+    striptedPolicy.adjustSetsWithChosenReplica(rackMap, first, second, chosen);
+    assertEquals(0, first.size());
+    assertEquals(2, second.size());
+
+    // Within first set, storages[1] with less free space

Review comment:
       I have modified it. Please review.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test for the chooseReplicaToDelete are processed based on
+   * EC and STRIPED Policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<String, List<DatanodeStorageInfo>>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    striptedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // storages[0] and storages[1] are in first set as their rack has two

Review comment:
       I have modified it. Please review.






[GitHub] [hadoop] Jackson-Wang-7 commented on a change in pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
Jackson-Wang-7 commented on a change in pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#discussion_r784471878



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test for the chooseReplicaToDelete are processed based on
+   * EC and STRIPED Policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<String, List<DatanodeStorageInfo>>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    striptedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // storages[0] and storages[1] are in first set as their rack has two
+    // replica nodes, while storages[2] and dataNodes[5] are in second set.
+    assertEquals(1, first.size());
+    assertEquals(2, second.size());
+    List<StorageType> excessTypes = new ArrayList<>();
+    {
+      // test returning null
+      excessTypes.add(StorageType.SSD);
+      assertNull(((BlockPlacementPolicyDefault) striptedPolicy)
+          .chooseReplicaToDelete(first, second, excessTypes, rackMap));
+    }
+    excessTypes.add(StorageType.DEFAULT);
+    DatanodeStorageInfo chosen = ((BlockPlacementPolicyDefault) striptedPolicy)
+        .chooseReplicaToDelete(first, second, excessTypes, rackMap);
+    // Within all storages, storages[5] with least free space

Review comment:
       > @Jackson-Wang-7 Thanks for submitting the PR. It makes sense to me. Could you fix the check style issues?
   
    I have fixed the checkstyle issues. Could you check whether there is anything else to address?
   






[GitHub] [hadoop] tomscut commented on a change in pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
tomscut commented on a change in pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#discussion_r784007211



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test for the chooseReplicaToDelete are processed based on
+   * EC and STRIPED Policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<String, List<DatanodeStorageInfo>>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    striptedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // storages[0] and storages[1] are in first set as their rack has two

Review comment:
       The whole thing looks good to me. We should modify several comments.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test for the chooseReplicaToDelete are processed based on
+   * EC and STRIPED Policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<String, List<DatanodeStorageInfo>>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    striptedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // storages[0] and storages[1] are in first set as their rack has two
+    // replica nodes, while storages[2] and dataNodes[5] are in second set.
+    assertEquals(1, first.size());
+    assertEquals(2, second.size());
+    List<StorageType> excessTypes = new ArrayList<>();
+    {
+      // test returning null
+      excessTypes.add(StorageType.SSD);
+      assertNull(((BlockPlacementPolicyDefault) striptedPolicy)
+          .chooseReplicaToDelete(first, second, excessTypes, rackMap));
+    }
+    excessTypes.add(StorageType.DEFAULT);
+    DatanodeStorageInfo chosen = ((BlockPlacementPolicyDefault) striptedPolicy)
+        .chooseReplicaToDelete(first, second, excessTypes, rackMap);
+    // Within all storages, storages[5] with least free space

Review comment:
       Update this comment.

##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test for the chooseReplicaToDelete are processed based on
+   * EC and STRIPED Policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<String, List<DatanodeStorageInfo>>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    striptedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // storages[0] and storages[1] are in first set as their rack has two
+    // replica nodes, while storages[2] and dataNodes[5] are in second set.
+    assertEquals(1, first.size());
+    assertEquals(2, second.size());
+    List<StorageType> excessTypes = new ArrayList<>();
+    {
+      // test returning null
+      excessTypes.add(StorageType.SSD);
+      assertNull(((BlockPlacementPolicyDefault) striptedPolicy)
+          .chooseReplicaToDelete(first, second, excessTypes, rackMap));
+    }
+    excessTypes.add(StorageType.DEFAULT);
+    DatanodeStorageInfo chosen = ((BlockPlacementPolicyDefault) striptedPolicy)
+        .chooseReplicaToDelete(first, second, excessTypes, rackMap);
+    // Within all storages, storages[5] with least free space
+    assertEquals(chosen, storages[0]);
+    striptedPolicy.adjustSetsWithChosenReplica(rackMap, first, second, chosen);
+    assertEquals(0, first.size());
+    assertEquals(2, second.size());
+
+    // Within first set, storages[1] with less free space

Review comment:
       Update this comment.






[GitHub] [hadoop] tasanuma commented on pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
tasanuma commented on pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#issuecomment-1012048946


   @Jackson-Wang-7 Thanks for submitting the PR. It makes sense to me. Could you fix the check style issues?




[GitHub] [hadoop] tomscut commented on a change in pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
tomscut commented on a change in pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#discussion_r784007211



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test for the chooseReplicaToDelete are processed based on
+   * EC and STRIPED Policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<String, List<DatanodeStorageInfo>>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    striptedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // storages[0] and storages[1] are in first set as their rack has two

Review comment:
       Good catch. The whole thing looks good to me. We should modify several comments.






[GitHub] [hadoop] Jackson-Wang-7 commented on pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
Jackson-Wang-7 commented on pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#issuecomment-1011754937


    @Hexiaoqiao @ayushtkn Please take a look at this PR. We have already started testing with this change.




[GitHub] [hadoop] hadoop-yetus commented on pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#issuecomment-1013025485


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m  0s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  40m  6s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 52s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 38s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  6s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 45s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m 31s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 44s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m 14s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  30m 19s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 42s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 42s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 28s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 56s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 33s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   1m  0s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 27s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   4m  0s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  29m  4s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 363m 29s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/3/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 51s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 488m 27s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.sps.TestExternalStoragePolicySatisfier |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/3/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3880 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux fd74107f605c 4.15.0-153-generic #160-Ubuntu SMP Thu Jul 29 06:54:29 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 87ea3b1418a34ed393cf51ebd074c251ac011e1e |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/3/testReport/ |
   | Max. process+thread count | 2204 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/3/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hadoop-yetus commented on pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#issuecomment-1012898965


   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 44s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m 10s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 28s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   1m  1s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 27s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 35s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 18s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 30s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 19s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 21s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 21s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 11s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 11s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | +1 :green_heart: |  checkstyle  |   0m 52s |  |  the patch passed  |
   | +1 :green_heart: |  mvnsite  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 24s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 31s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  22m 35s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 227m 55s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/2/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 45s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 328m  1s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.TestRollingUpgrade |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/2/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3880 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux eb976fdc1cb0 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 22121dfd0f72d84894610037bef0c039f006c2fb |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/2/testReport/ |
   | Max. process+thread count | 3463 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/2/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] tasanuma merged pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
tasanuma merged pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880


   




[GitHub] [hadoop] Jackson-Wang-7 commented on a change in pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
Jackson-Wang-7 commented on a change in pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#discussion_r784470983



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/test/java/org/apache/hadoop/hdfs/server/blockmanagement/TestReplicationPolicy.java
##########
@@ -1018,6 +1018,69 @@ public void testChooseReplicaToDelete() throws Exception {
     assertEquals(chosen, storages[1]);
   }
 
+  /**
+   * Test for the chooseReplicaToDelete are processed based on
+   * EC and STRIPED Policy.
+   */
+  @Test
+  public void testStripedChooseReplicaToDelete() throws Exception {
+    List<DatanodeStorageInfo> replicaList = new ArrayList<>();
+    List<DatanodeStorageInfo> candidate = new ArrayList<>();
+    final Map<String, List<DatanodeStorageInfo>> rackMap
+        = new HashMap<String, List<DatanodeStorageInfo>>();
+
+    replicaList.add(storages[0]);
+    replicaList.add(storages[1]);
+    replicaList.add(storages[2]);
+    replicaList.add(storages[4]);
+
+    candidate.add(storages[0]);
+    candidate.add(storages[2]);
+    candidate.add(storages[4]);
+
+    // Refresh the last update time for all the datanodes
+    for (int i = 0; i < dataNodes.length; i++) {
+      DFSTestUtil.resetLastUpdatesWithOffset(dataNodes[i], 0);
+    }
+
+    List<DatanodeStorageInfo> first = new ArrayList<>();
+    List<DatanodeStorageInfo> second = new ArrayList<>();
+    striptedPolicy.splitNodesWithRack(replicaList, candidate, rackMap, first,
+        second);
+    // storages[0] and storages[1] are in first set as their rack has two
+    // replica nodes, while storages[2] and dataNodes[5] are in second set.
+    assertEquals(1, first.size());
+    assertEquals(2, second.size());
+    List<StorageType> excessTypes = new ArrayList<>();
+    {
+      // test returning null
+      excessTypes.add(StorageType.SSD);
+      assertNull(((BlockPlacementPolicyDefault) striptedPolicy)
+          .chooseReplicaToDelete(first, second, excessTypes, rackMap));
+    }
+    excessTypes.add(StorageType.DEFAULT);
+    DatanodeStorageInfo chosen = ((BlockPlacementPolicyDefault) striptedPolicy)
+        .chooseReplicaToDelete(first, second, excessTypes, rackMap);
+    // Within all storages, storages[5] with least free space

Review comment:
       I have modified it. Please review.






[GitHub] [hadoop] hadoop-yetus commented on pull request #3880: HDFS-16420. Avoid deleting unique data blocks when deleting redundancy striped blocks.

Posted by GitBox <gi...@apache.org>.
hadoop-yetus commented on pull request #3880:
URL: https://github.com/apache/hadoop/pull/3880#issuecomment-1011931066


   :confetti_ball: **+1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   0m 45s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  0s |  |  codespell was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | +1 :green_heart: |  test4tests  |   0m  0s |  |  The patch appears to include 2 new or modified test files.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  33m  9s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 27s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  compile  |   1m 22s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  checkstyle  |   0m 59s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 28s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  1s |  |  trunk passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 37s |  |  trunk passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 19s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  22m 27s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 18s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 12s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  javac  |   1m 12s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 53s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 2 new + 131 unchanged - 0 fixed = 133 total (was 131)  |
   | +1 :green_heart: |  mvnsite  |   1m 15s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 52s |  |  the patch passed with JDK Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07  |
   | +1 :green_heart: |  spotbugs  |   3m 25s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  23m  3s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | +1 :green_heart: |  unit  | 233m 27s |  |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 334m 10s |  |  |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.41 ServerAPI=1.41 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/3880 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell |
   | uname | Linux 7f4a32876506 4.15.0-112-generic #113-Ubuntu SMP Thu Jul 9 23:41:39 UTC 2020 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / d26ee090c4316e9ac73120dd0e39b30f6593b186 |
   | Default Java | Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.13+8-Ubuntu-0ubuntu1.20.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_312-8u312-b07-0ubuntu1~20.04-b07 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/1/testReport/ |
   | Max. process+thread count | 3582 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-3880/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0-SNAPSHOT https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   

