Posted to hdfs-issues@hadoop.apache.org by "Viraj Jasani (Jira)" <ji...@apache.org> on 2023/02/24 23:08:00 UTC
[jira] [Commented] (HDFS-16935) TestFsDatasetImpl.testReportBadBlocks brittle
[ https://issues.apache.org/jira/browse/HDFS-16935?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17693371#comment-17693371 ]
Viraj Jasani commented on HDFS-16935:
-------------------------------------
If we run this test in debug mode, it can be reproduced locally too.
> TestFsDatasetImpl.testReportBadBlocks brittle
> ---------------------------------------------
>
> Key: HDFS-16935
> URL: https://issues.apache.org/jira/browse/HDFS-16935
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: test
> Affects Versions: 3.4.0, 3.3.5, 3.3.9
> Reporter: Steve Loughran
> Priority: Minor
>
> Jenkins failure caused by the sleep() time not being long enough
> {code}
> Failing for the past 1 build (Since #4 )
> Took 7.4 sec.
> Error Message
> expected:<1> but was:<0>
> Stacktrace
> java.lang.AssertionError: expected:<1> but was:<0>
> at org.junit.Assert.fail(Assert.java:89)
> at org.junit.Assert.failNotEquals(Assert.java:835)
> at org.junit.Assert.assertEquals(Assert.java:647)
> at org.junit.Assert.assertEquals(Assert.java:633)
> {code}
> The assert runs after a 3s sleep waiting for the reports to come in.
> {code}
> dataNode.reportBadBlocks(block, dataNode.getFSDataset()
>     .getFsVolumeReferences().get(0));
> Thread.sleep(3000); // 3s sleep
> BlockManagerTestUtil.updateState(cluster.getNamesystem()
>     .getBlockManager());
> // Verify the bad block has been reported to namenode
> Assert.assertEquals(1, cluster.getNamesystem().getCorruptReplicaBlocks()); // here
> {code}
> LambdaTestUtils.eventually() should be used around this assert, maybe with an even shorter initial delay so that the test completes faster on faster systems.
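For illustration, here is a minimal standalone sketch of the eventually() pattern the description suggests: poll a probe at a short interval until it stops throwing, and rethrow the last failure once the timeout expires. This is a self-contained illustration of the retry idea, not Hadoop's actual org.apache.hadoop.test.LambdaTestUtils; the simulated delay in main() merely stands in for the namenode taking time to register the corrupt replica.

```java
/**
 * Minimal sketch of the eventually() retry pattern: repeatedly run a
 * probe until it passes, or surface its last failure after a timeout.
 */
public class EventuallySketch {
    /** A check that may throw until the awaited condition holds. */
    interface Probe { void check() throws Exception; }

    static void eventually(long timeoutMs, long intervalMs, Probe probe)
            throws Exception {
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (true) {
            try {
                probe.check();
                return;                   // probe passed; condition reached
            } catch (Exception | AssertionError e) {
                if (System.currentTimeMillis() >= deadline) {
                    throw e;              // timed out; rethrow last failure
                }
                Thread.sleep(intervalMs); // back off briefly and retry
            }
        }
    }

    public static void main(String[] args) throws Exception {
        final long start = System.currentTimeMillis();
        // Simulated condition that only becomes true after ~200ms,
        // standing in for the corrupt-replica count reaching 1.
        eventually(5000, 50, () -> {
            if (System.currentTimeMillis() - start < 200) {
                throw new AssertionError("expected:<1> but was:<0>");
            }
        });
        System.out.println("condition reached after "
            + (System.currentTimeMillis() - start) + " ms");
    }
}
```

Compared with a fixed Thread.sleep(3000), this passes as soon as the condition holds on fast machines and keeps retrying up to the timeout on slow ones, which is exactly the brittleness trade-off the issue describes.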
--
This message was sent by Atlassian Jira
(v8.20.10#820010)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-issues-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-issues-help@hadoop.apache.org