Posted to hdfs-dev@hadoop.apache.org by "Ahmed Hussein (Jira)" <ji...@apache.org> on 2020/03/05 14:34:00 UTC
[jira] [Resolved] (HDFS-10498) Intermittent test failure org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength
[ https://issues.apache.org/jira/browse/HDFS-10498?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ahmed Hussein resolved HDFS-10498.
----------------------------------
Resolution: Cannot Reproduce
On trunk and branch-2.10, I could not reproduce the failure.
My intuition is that it happens due to a slowdown on the build server caused by other unit tests running in parallel.
[~kihwal], [~xiaochen] I am going to close this Jira for now, if you are okay with that.
> Intermittent test failure org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength
> -----------------------------------------------------------------------------------------------------------------------
>
> Key: HDFS-10498
> URL: https://issues.apache.org/jira/browse/HDFS-10498
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs, snapshots
> Affects Versions: 3.0.0-alpha1
> Reporter: Hanisha Koneru
> Assignee: Ahmed Hussein
> Priority: Major
> Attachments: test_failure.txt
>
>
> Per https://builds.apache.org/job/PreCommit-HDFS-Build/15646/testReport/, we had the following failure. A local rerun was successful.
> Error Details:
> {panel}
> Fail to get block MD5 for LocatedBlock{BP-145245805-172.17.0.3-1464981728847:blk_1073741826_1002; getBlockSize()=1; corrupt=false; offset=1024; locs=[DatanodeInfoWithStorage[127.0.0.1:55764,DS-a33d7c97-9d4a-4694-a47e-a3187a33ed5a,DISK]]}
> {panel}
> Stack Trace:
> {panel}
> java.io.IOException: Fail to get block MD5 for LocatedBlock{BP-145245805-172.17.0.3-1464981728847:blk_1073741826_1002; getBlockSize()=1; corrupt=false; offset=1024; locs=[DatanodeInfoWithStorage[127.0.0.1:55764,DS-a33d7c97-9d4a-4694-a47e-a3187a33ed5a,DISK]]}
> at org.apache.hadoop.hdfs.FileChecksumHelper$ReplicatedFileChecksumComputer.checksumBlocks(FileChecksumHelper.java:289)
> at org.apache.hadoop.hdfs.FileChecksumHelper$FileChecksumComputer.compute(FileChecksumHelper.java:206)
> at org.apache.hadoop.hdfs.DFSClient.getFileChecksum(DFSClient.java:1731)
> at org.apache.hadoop.hdfs.DistributedFileSystem$31.doCall(DistributedFileSystem.java:1482)
> at org.apache.hadoop.hdfs.DistributedFileSystem$31.doCall(DistributedFileSystem.java:1479)
> at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
> at org.apache.hadoop.hdfs.DistributedFileSystem.getFileChecksum(DistributedFileSystem.java:1490)
> at org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength(TestSnapshotFileLength.java:137)
> Standard Output: 7 sec
> {panel}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org