Posted to common-issues@hadoop.apache.org by GitBox <gi...@apache.org> on 2021/03/22 03:42:39 UTC

[GitHub] [hadoop] jianghuazhu commented on a change in pull request #2782: HDFS-15901. Solve the problem of DN repeated block reports occupying too many RPCs during Safemode.

jianghuazhu commented on a change in pull request #2782:
URL: https://github.com/apache/hadoop/pull/2782#discussion_r598407210



##########
File path: hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/blockmanagement/BlockManager.java
##########
@@ -2603,6 +2603,24 @@ public long requestBlockReportLeaseId(DatanodeRegistration nodeReg) {
       LOG.warn("Failed to find datanode {}", nodeReg);
       return 0;
     }
+
+    // During safemode, DataNodes are only allowed to report all data once.
+    if (namesystem.isInStartupSafeMode()) {
+      boolean allReported = true;
+      for (DatanodeStorageInfo storageInfo : node.getStorageInfos()) {
+        if (storageInfo.getBlockReportCount() < 1) {
+          allReported = false;
+          break;
+        }
+      }
+
+      if (allReported) {
+        LOG.info("The DataNode has reported all blocks and does not need " +

Review comment:
       @jojochuang, thank you for your reply. I have submitted a new solution, and it has been tested.
   Log output:
   2021-03-22 11:37:34,264 [main] INFO blockmanagement.BlockManager (BlockManager.java:requestBlockReportLeaseId(2618))-The datanode DatanodeRegistration(1.1.1.1:9866, datanodeUuid=f854d421-701a-4afb-b8df-8be2e84aeb2c, infoPort=9864, infoSecurePort=9865, ipcPort=9867, storageInfo=null) has reported all blocks and does not need to be reported again during SafeMode.
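   The snippet in the diff above is truncated at the commented line, so for context here is a minimal standalone sketch of the "all storages reported" check it performs. The method name and the plain `int[]` of per-storage block report counts are stand-ins for the real `DatanodeStorageInfo` internals, not the actual BlockManager API:

```java
// Simplified sketch of the safemode check proposed in the diff above.
// Each storage is represented only by its block report count
// (DatanodeStorageInfo#getBlockReportCount in the real code).
public class SafeModeReportCheck {

    /** Returns true if every storage has completed at least one block report. */
    static boolean allStoragesReported(int[] blockReportCounts) {
        for (int count : blockReportCounts) {
            if (count < 1) {
                // At least one storage has never reported, so the
                // DataNode still needs a block report lease.
                return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // One storage has never reported: a lease should still be granted.
        System.out.println(allStoragesReported(new int[] {1, 0, 2}));
        // All storages reported at least once: the repeated report is skipped.
        System.out.println(allStoragesReported(new int[] {1, 1}));
    }
}
```

   In the patch, when this condition holds during startup safemode the NameNode declines to hand out a new lease (the diff's early `return` path), which is what prevents DataNodes from re-sending full block reports and tying up RPC handlers.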




-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org