Posted to common-issues@hadoop.apache.org by "hfutatzhanghb (via GitHub)" <gi...@apache.org> on 2023/02/15 06:43:00 UTC

[GitHub] [hadoop] hfutatzhanghb opened a new pull request, #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

hfutatzhanghb opened a new pull request, #5398:
URL: https://github.com/apache/hadoop/pull/5398

   The current logic of the IncrementalBlockReportManager#addRDBI method can lead to missing blocks when the datanodes in a pipeline are I/O busy.
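
   The removal logic at issue can be sketched with a toy model (all class and method names below are hypothetical stand-ins, not the real Hadoop code). Block equality ignores the generation stamp, as in HDFS, so a report carrying an older stamp can displace a pending report for a newer one:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Toy model of the per-storage pending-IBR map (all names hypothetical;
   // the real logic lives in IncrementalBlockReportManager#addRDBI).
   class AddRdbiSketch {
     // Simplified stand-in for org.apache.hadoop.hdfs.protocol.Block:
     // equality is on the block id only, not the generation stamp.
     static final class Block {
       final long id;
       final long gs;
       Block(long id, long gs) { this.id = id; this.gs = gs; }
       @Override public boolean equals(Object o) {
         return o instanceof Block && ((Block) o).id == id;
       }
       @Override public int hashCode() { return Long.hashCode(id); }
     }

     static Map<Block, Block> newPending() { return new HashMap<>(); }

     // Current addRDBI behavior: unconditionally drop any pending entry for
     // the same block id, then insert the new one -- even if its generation
     // stamp is older than the entry just dropped.
     static long addCurrent(Map<Block, Block> pending, Block rdbi) {
       pending.remove(rdbi);
       pending.put(rdbi, rdbi);
       return pending.get(rdbi).gs;
     }
   }
   ```

   Calling addCurrent with generation stamp 3 and then stamp 2 for the same block id leaves only the stamp-2 entry pending, which is the scenario this PR targets.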


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


---------------------------------------------------------------------
To unsubscribe, e-mail: common-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: common-issues-help@hadoop.apache.org


[GitHub] [hadoop] Hexiaoqiao commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "Hexiaoqiao (via GitHub)" <gi...@apache.org>.
Hexiaoqiao commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432498891

   Thanks for involving me here. It is an interesting issue. I am confused about some points of the description.
   
   > dn3 is writing blk_12345_002, but dn2 is blocked by the recoverClose method and does not send an ack to the client.
   
   Is this another fault injection, or is it part of this write flow?
   
   > dn3 writes blk_12345_003 successfully.
   > dn3 writes blk_12345_002 successfully and notifyNamenodeReceivedBlock.
   
   Here dn3 writes the same block replica twice; is that expected?
   
   Sorry, I haven't dug deeply into this logic yet; I will trace it for a while.
   @hfutatzhanghb Thanks again for your report and for offering a solution. 




[GitHub] [hadoop] zhangshuyan0 commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "zhangshuyan0 (via GitHub)" <gi...@apache.org>.
zhangshuyan0 commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432976667

   It's great to make sure the DN only reports the replica with the maximum timestamp. 
   Even though [HDFS-16146](https://issues.apache.org/jira/browse/HDFS-16146) has already been merged, is it still possible to miss blocks on trunk when the replication factor is 2? Would you mind adding a UT to reproduce this case?




[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hfutatzhanghb (via GitHub)" <gi...@apache.org>.
hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1430844228

   Hello, @ayushtkn. OK, I will try to construct a UT to reproduce this issue, and I will also try to describe the issue on this page.




[GitHub] [hadoop] Hexiaoqiao commented on a diff in pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "Hexiaoqiao (via GitHub)" <gi...@apache.org>.
Hexiaoqiao commented on code in PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#discussion_r1108243429


##########
hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/IncrementalBlockReportManager.java:
##########
@@ -252,7 +256,9 @@ synchronized void addRDBI(ReceivedDeletedBlockInfo rdbi,
     // Make sure another entry for the same block is first removed.
     // There may only be one such entry.
     for (PerStorageIBR perStorage : pendingIBRs.values()) {
-      if (perStorage.remove(rdbi.getBlock()) != null) {
+      ReceivedDeletedBlockInfo oldRdbi = perStorage.get(rdbi.getBlock());
+      if (oldRdbi != null && oldRdbi.getBlock().getGenerationStamp() < rdbi.getBlock().getGenerationStamp()

Review Comment:
   This fix still leaves one unexpected case: if the new entry's generation stamp is less than the old one, the new entry will still be put again at line 265 and overwrite the old one, right?
   How about the following patch? Also, it would be better to add a new unit test to verify this bugfix. Thanks.
   ```
   --- a/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/IncrementalBlockReportManager.java
   +++ b/hadoop-hdfs-project/hadoop-hdfs/src/main/java/org/apache/hadoop/hdfs/server/datanode/IncrementalBlockReportManager.java
   @@ -57,6 +57,14 @@
          this.dnMetrics = dnMetrics;
        }
    
   +    /**
   +     * Get block info from this IBR.
   +     * @return block info if it exists; otherwise, return null.
   +     */
   +    ReceivedDeletedBlockInfo get(Block block) {
   +      return blocks.getOrDefault(block, null);
   +    }
   +
        /**
         * Remove the given block from this IBR
         * @return true if the block was removed; otherwise, return false.
   @@ -252,8 +260,19 @@ synchronized void addRDBI(ReceivedDeletedBlockInfo rdbi,
        // Make sure another entry for the same block is first removed.
        // There may only be one such entry.
        for (PerStorageIBR perStorage : pendingIBRs.values()) {
   -      if (perStorage.remove(rdbi.getBlock()) != null) {
   -        break;
   +      ReceivedDeletedBlockInfo oldRdbi = perStorage.get(rdbi.getBlock());
   +      if (oldRdbi != null) {
   +        long oldGS = oldRdbi.getBlock().getGenerationStamp();
   +        long newGS = rdbi.getBlock().getGenerationStamp();
   +        // If the same block entry has existed and generation stamp less than
   +        // the new one, then remove it first. If generation stamp greater than
   +        // the new one, then keep it. Please reference HDFS-16922 for more
   +        // details.
   +        if (oldGS < newGS && perStorage.remove(rdbi.getBlock()) != null) {
   +          break;
   +        } else if (oldGS >= newGS) {
   +          return;
   +        }
          }
        }
   ```
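
   The remaining case raised above can be seen in a miniature model of the PR's first version of the change: the remove is guarded by the generation-stamp comparison, but the later put stays unconditional. Class and method names below are hypothetical, not the real Hadoop code:

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative miniature of the intermediate fix discussed in this review:
   // the remove only happens when the new generation stamp is larger, but the
   // subsequent put is unconditional. Block equality is on id only, as in HDFS.
   class GuardedRemoveSketch {
     static final class Block {
       final long id;
       final long gs;
       Block(long id, long gs) { this.id = id; this.gs = gs; }
       @Override public boolean equals(Object o) {
         return o instanceof Block && ((Block) o).id == id;
       }
       @Override public int hashCode() { return Long.hashCode(id); }
     }

     static Map<Block, Block> newPending() { return new HashMap<>(); }

     static long addRdbi(Map<Block, Block> pending, Block rdbi) {
       Block old = pending.get(rdbi);
       if (old != null && old.gs < rdbi.gs) {
         pending.remove(rdbi);       // guarded remove...
       }
       pending.put(rdbi, rdbi);      // ...but the put is unconditional
       return pending.get(rdbi).gs;
     }
   }
   ```

   Adding a stamp-2 entry after a stamp-3 entry still ends with stamp 2 pending; the fuller patch above avoids this by returning early when the old stamp is greater than or equal to the new one.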





[GitHub] [hadoop] Hexiaoqiao commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "Hexiaoqiao (via GitHub)" <gi...@apache.org>.
Hexiaoqiao commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432500654

   Addendum:
   
   > Requires a UT which can reproduce the said issue.
   
   What Ayushtkn means here is that we should add new unit tests (test source code, such as TestClientProtocolForPipelineRecovery in HDFS-16146 mentioned above). Thanks.




[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hfutatzhanghb (via GitHub)" <gi...@apache.org>.
hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1434306589

   Some UT logs are below:
   ```
   2023-02-17 16:36:12,647 [DataXceiver for client DFSClient_NONMAPREDUCE_-734351046_1 at /127.0.0.1:62549 [Receiving block BP-888771553-172.16.26.102-1676622957798:blk_1073741825_1001]] INFO  datanode.DataNode (DataXceiver.java:writeBlock(932)) - Received BP-888771553-172.16.26.102-1676622957798:blk_1073741825_1003 src: /127.0.0.1:62549 dest: /127.0.0.1:62490 volume: /Users/admin/IdeaProjects/hadoop_community/hadoop-hdfs-project/hadoop-hdfs/target/test/data/dfs/data/data5 of size 22
   
   2023-02-17 16:36:12,652 [DataXceiver for client DFSClient_NONMAPREDUCE_-734351046_1 at /127.0.0.1:62548 [Receiving block BP-888771553-172.16.26.102-1676622957798:blk_1073741825_1001]] INFO  datanode.DataNode (DataXceiver.java:writeBlock(932)) - Received BP-888771553-172.16.26.102-1676622957798:blk_1073741825_1002 src: /127.0.0.1:62548 dest: /127.0.0.1:624
   
   2023-02-17 16:36:16,633 [ibr-executor-0] WARN  datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:sendIBRs(211)) - zhb#sendIBRs reports length is 1, report is [DatanodeStorage[DS-30587342-b739-4417-a374-5b282565b03a,DISK,NORMAL][blk_1073741825_1002, status: RECEIVED_BLOCK, delHint: null]]
   
   2023-02-17 16:36:16,633 [ibr-executor-0] WARN  datanode.IncrementalBlockReportManager (IncrementalBlockReportManager.java:sendIBRs(211)) - zhb#sendIBRs reports length is 1, report is [DatanodeStorage[DS-b0d4b422-d757-4d1c-8ec7-a08f69a93f09,DISK,NORMAL][blk_1073741825_1002, status: RECEIVED_BLOCK, delHint: null]]
   ```
   As the above logs show, the datanode received blk_1073741825_1003 and blk_1073741825_1002 within one IBR interval, but it removed the ReceivedDeletedBlockInfo of blk_1073741825_1003.
   
   Should I open a new PR to upload the UT? 
   




[GitHub] [hadoop] hubble-insight commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hubble-insight (via GitHub)" <gi...@apache.org>.
hubble-insight commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1644922899

   "This fix seems only to reduce the probability of this issue occurring. If the incremental report of this block has already been sent to the NN and is no longer cached in pendingIBRs, then a subsequent report with a smaller GS will still be reported to the NN again."




[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hfutatzhanghb (via GitHub)" <gi...@apache.org>.
hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432442616

   Hi @jojochuang @Hexiaoqiao @zhangshuyan0, this PR seems to be another supplement to [HDFS-16146](https://issues.apache.org/jira/browse/HDFS-16146); could you please take a look at this? Thanks, all.




[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hfutatzhanghb (via GitHub)" <gi...@apache.org>.
hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1431546384

   > Requires a UT which can reproduce the said issue
   
   Hi, @ayushtkn. I can reproduce the issue with a UT on version 3.3.x, matching our production situation, but I cannot reproduce it on trunk because of [HDFS-16146](https://issues.apache.org/jira/browse/HDFS-16146). However, I think the patch in this PR is still useful for solving this problem. @Hexiaoqiao, could you please also take a look at this? Thanks.




[GitHub] [hadoop] zhangshuyan0 commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "zhangshuyan0 (via GitHub)" <gi...@apache.org>.
zhangshuyan0 commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432985896

   It's great to make sure the DN only reports the replica with the maximum timestamp.
   Even though [HDFS-16146](https://issues.apache.org/jira/browse/HDFS-16146) has already been merged, is it still possible to miss blocks on trunk when the replace policy is set to NEVER? Would you mind adding a UT to reproduce this case?




[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hfutatzhanghb (via GitHub)" <gi...@apache.org>.
hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1432581037

   > Thanks for involving me here. It is an interesting issue. I am confused about some points of the description.
   > 
   > > dn3 is writing blk_12345_002, but dn2 is blocked by the recoverClose method and does not send an ack to the client.
   > 
   > Is this another fault injection, or is it part of this write flow?
   > 
   > > dn3 writes blk_12345_003 successfully.
   > > dn3 writes blk_12345_002 successfully and notifyNamenodeReceivedBlock.
   > 
   > Here dn3 writes the same block replica twice; is that expected?
   > 
   > Sorry, I haven't dug deeply into this logic yet; I will trace it for a while. @hfutatzhanghb Thanks again for your report and for offering a solution.
   
   Hi, @Hexiaoqiao, thanks for your reply. 
   For question 1: dn2 is blocked in recoverClose() because of the datasetWriteLock acquisition in branch-3.3.2.
   For question 2: yes, dn3 writes the same block replica twice, but the two replicas have different generation stamps. When blk_12345_003 and blk_12345_002 are written within the same IBR interval, IncrementalBlockReportManager#addRDBI removes the report of blk_12345_003.
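
   The guard this thread converges on can be sketched by comparing generation stamps before both the remove and the insert, so the pending report with the larger stamp survives. This is a hypothetical miniature (illustrative names only, not the real Hadoop code):

   ```java
   import java.util.HashMap;
   import java.util.Map;

   // Illustrative sketch of a generation-stamp-guarded addRDBI: an incoming
   // entry with a smaller-or-equal stamp is ignored, so the pending report
   // for blk_12345_003 survives a later arrival of blk_12345_002.
   class MaxStampSketch {
     static final class Block {
       final long id;
       final long gs;
       Block(long id, long gs) { this.id = id; this.gs = gs; }
       @Override public boolean equals(Object o) {
         return o instanceof Block && ((Block) o).id == id;
       }
       @Override public int hashCode() { return Long.hashCode(id); }
     }

     static Map<Block, Block> newPending() { return new HashMap<>(); }

     // Returns the generation stamp left pending for this block id.
     static long addRdbi(Map<Block, Block> pending, Block rdbi) {
       Block old = pending.get(rdbi);
       if (old != null && old.gs >= rdbi.gs) {
         return old.gs;              // keep the newer pending report
       }
       pending.put(rdbi, rdbi);
       return rdbi.gs;
     }
   }
   ```

   With this guard, adding a stamp-3 entry and then a stamp-2 entry for the same block id leaves the stamp-3 report pending, matching the behavior described for question 2.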




[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hfutatzhanghb (via GitHub)" <gi...@apache.org>.
hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1433001412

   Hi, @zhangshuyan0, thanks for your reply. Yes, it is still possible to miss blocks on trunk when the replace policy is set to NEVER. I am developing a UT to reproduce this case. 




[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hfutatzhanghb (via GitHub)" <gi...@apache.org>.
hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1434300730

   Hi, @Hexiaoqiao @zhangshuyan0 @ayushtkn 




[GitHub] [hadoop] hadoop-yetus commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hadoop-yetus (via GitHub)" <gi...@apache.org>.
hadoop-yetus commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1431321727

   :broken_heart: **-1 overall**
   
   
   
   
   
   
   | Vote | Subsystem | Runtime |  Logfile | Comment |
   |:----:|----------:|--------:|:--------:|:-------:|
   | +0 :ok: |  reexec  |   1m 23s |  |  Docker mode activated.  |
   |||| _ Prechecks _ |
   | +1 :green_heart: |  dupname  |   0m  0s |  |  No case conflicting files found.  |
   | +0 :ok: |  codespell  |   0m  1s |  |  codespell was not available.  |
   | +0 :ok: |  detsecrets  |   0m  1s |  |  detect-secrets was not available.  |
   | +1 :green_heart: |  @author  |   0m  0s |  |  The patch does not contain any @author tags.  |
   | -1 :x: |  test4tests  |   0m  0s |  |  The patch doesn't appear to include any new or modified tests. Please justify why no new tests are needed for this patch. Also please list what manual steps were performed to verify this patch.  |
   |||| _ trunk Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |  46m 25s |  |  trunk passed  |
   | +1 :green_heart: |  compile  |   1m 31s |  |  trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  compile  |   1m 25s |  |  trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  checkstyle  |   1m  9s |  |  trunk passed  |
   | +1 :green_heart: |  mvnsite  |   1m 30s |  |  trunk passed  |
   | +1 :green_heart: |  javadoc  |   1m  7s |  |  trunk passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 28s |  |  trunk passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 36s |  |  trunk passed  |
   | +1 :green_heart: |  shadedclient  |  32m 38s |  |  branch has no errors when building and testing our client artifacts.  |
   |||| _ Patch Compile Tests _ |
   | +1 :green_heart: |  mvninstall  |   1m 31s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 23s |  |  the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javac  |   1m 23s |  |  the patch passed  |
   | +1 :green_heart: |  compile  |   1m 17s |  |  the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  javac  |   1m 17s |  |  the patch passed  |
   | +1 :green_heart: |  blanks  |   0m  0s |  |  The patch has no blanks issues.  |
   | -0 :warning: |  checkstyle  |   0m 54s | [/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5398/1/artifact/out/results-checkstyle-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs-project/hadoop-hdfs: The patch generated 1 new + 2 unchanged - 0 fixed = 3 total (was 2)  |
   | +1 :green_heart: |  mvnsite  |   2m 18s |  |  the patch passed  |
   | +1 :green_heart: |  javadoc  |   0m 57s |  |  the patch passed with JDK Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04  |
   | +1 :green_heart: |  javadoc  |   1m 30s |  |  the patch passed with JDK Private Build-1.8.0_352-8u352-ga-1~20.04-b08  |
   | +1 :green_heart: |  spotbugs  |   3m 40s |  |  the patch passed  |
   | +1 :green_heart: |  shadedclient  |  30m  2s |  |  patch has no errors when building and testing our client artifacts.  |
   |||| _ Other Tests _ |
   | -1 :x: |  unit  | 234m 21s | [/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt](https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5398/1/artifact/out/patch-unit-hadoop-hdfs-project_hadoop-hdfs.txt) |  hadoop-hdfs in the patch passed.  |
   | +1 :green_heart: |  asflicense  |   0m 47s |  |  The patch does not generate ASF License warnings.  |
   |  |   | 368m 37s |  |  |
   
   
   | Reason | Tests |
   |-------:|:------|
   | Failed junit tests | hadoop.hdfs.server.namenode.TestAuditLogger |
   |   | hadoop.hdfs.server.namenode.TestFSNamesystemLockReport |
   |   | hadoop.hdfs.server.datanode.TestDirectoryScanner |
   |   | hadoop.hdfs.server.namenode.TestAuditLogs |
   |   | hadoop.hdfs.server.namenode.TestFsck |
   
   
   | Subsystem | Report/Notes |
   |----------:|:-------------|
   | Docker | ClientAPI=1.42 ServerAPI=1.42 base: https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5398/1/artifact/out/Dockerfile |
   | GITHUB PR | https://github.com/apache/hadoop/pull/5398 |
   | Optional Tests | dupname asflicense compile javac javadoc mvninstall mvnsite unit shadedclient spotbugs checkstyle codespell detsecrets |
   | uname | Linux 7a1f03d8173f 4.15.0-200-generic #211-Ubuntu SMP Thu Nov 24 18:16:04 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux |
   | Build tool | maven |
   | Personality | dev-support/bin/hadoop.sh |
   | git revision | trunk / 15d7f88d86352c8b9d38f1eb3c865be101c33a0a |
   | Default Java | Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   | Multi-JDK versions | /usr/lib/jvm/java-11-openjdk-amd64:Ubuntu-11.0.17+8-post-Ubuntu-1ubuntu220.04 /usr/lib/jvm/java-8-openjdk-amd64:Private Build-1.8.0_352-8u352-ga-1~20.04-b08 |
   |  Test Results | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5398/1/testReport/ |
   | Max. process+thread count | 2092 (vs. ulimit of 5500) |
   | modules | C: hadoop-hdfs-project/hadoop-hdfs U: hadoop-hdfs-project/hadoop-hdfs |
   | Console output | https://ci-hadoop.apache.org/job/hadoop-multibranch/job/PR-5398/1/console |
   | versions | git=2.25.1 maven=3.6.3 spotbugs=4.2.2 |
   | Powered by | Apache Yetus 0.14.0 https://yetus.apache.org |
   
   
   This message was automatically generated.
   
   




[GitHub] [hadoop] hfutatzhanghb commented on pull request #5398: HDFS-16922. The logic of IncrementalBlockReportManager#addRDBI method may cause missing blocks when cluster is busy.

Posted by "hfutatzhanghb (via GitHub)" <gi...@apache.org>.
hfutatzhanghb commented on PR #5398:
URL: https://github.com/apache/hadoop/pull/5398#issuecomment-1430888618

   > Requires a UT which can reproduce the said issue
   
   Hi, @ayushtkn, I have updated the description of this issue. Please take a look. Thanks a lot.

