Posted to hdfs-dev@hadoop.apache.org by "Ahmed Hussein (Jira)" <ji...@apache.org> on 2020/03/05 14:37:00 UTC
[jira] [Resolved] (HDFS-10961) Flaky test TestSnapshotFileLength.testSnapshotfileLength
[ https://issues.apache.org/jira/browse/HDFS-10961?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Ahmed Hussein resolved HDFS-10961.
----------------------------------
Resolution: Cannot Reproduce
Similar to HDFS-10498, I could not reproduce the failure on trunk or branch-2.10.
My intuition is that it happens due to a slowdown on the server, as a side effect of other unit tests running in parallel.
I am going to close this Jira for now.
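For context on why a slow server makes this test flaky: the close fails transiently because the last block has not yet reached its replication target when the stream is closed. A common way test code tolerates this kind of transient IOException is a bounded retry with a short sleep. The sketch below is a generic, self-contained illustration of that pattern; RetryOnClose, its parameters, and the simulated failure are hypothetical and are not Hadoop code (the actual HDFS client has its own internal retry logic for this case).

```java
import java.io.IOException;
import java.util.concurrent.Callable;

public class RetryOnClose {
    /**
     * Retries an action that may fail transiently with an IOException,
     * sleeping between attempts. Rethrows the last IOException if all
     * attempts fail. Assumes attempts >= 1.
     */
    static <T> T retry(Callable<T> action, int attempts, long sleepMs) throws Exception {
        IOException last = null;
        for (int i = 0; i < attempts; i++) {
            try {
                return action.call();
            } catch (IOException e) {
                last = e;          // remember the failure and back off briefly
                Thread.sleep(sleepMs);
            }
        }
        throw last;
    }

    public static void main(String[] args) throws Exception {
        final int[] calls = {0};
        // Simulated flaky close: fails twice (replicas "not ready"), then succeeds.
        String result = retry(() -> {
            if (++calls[0] < 3) {
                throw new IOException("last block does not have enough replicas yet");
            }
            return "closed";
        }, 5, 10L);
        System.out.println(result + " after " + calls[0] + " attempts");
        // prints "closed after 3 attempts"
    }
}
```

In the real client, the number of such retries is configurable rather than hard-coded, which is why a sufficiently slow test host can still exhaust them and surface the error above.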
> Flaky test TestSnapshotFileLength.testSnapshotfileLength
> --------------------------------------------------------
>
> Key: HDFS-10961
> URL: https://issues.apache.org/jira/browse/HDFS-10961
> Project: Hadoop HDFS
> Issue Type: Bug
> Components: hdfs, hdfs-client
> Affects Versions: 3.0.0-alpha1
> Reporter: Yongjun Zhang
> Assignee: Ahmed Hussein
> Priority: Major
> Labels: flaky-test
>
> Flaky test TestSnapshotFileLength.testSnapshotfileLength
> {code}
> Error Message
> Unable to close file because the last block does not have enough number of replicas.
> Stack Trace
> java.io.IOException: Unable to close file because the last block does not have enough number of replicas.
> at org.apache.hadoop.hdfs.DFSOutputStream.completeFile(DFSOutputStream.java:2630)
> at org.apache.hadoop.hdfs.DFSOutputStream.closeImpl(DFSOutputStream.java:2592)
> at org.apache.hadoop.hdfs.DFSOutputStream.close(DFSOutputStream.java:2546)
> at org.apache.hadoop.fs.FSDataOutputStream$PositionCache.close(FSDataOutputStream.java:72)
> at org.apache.hadoop.fs.FSDataOutputStream.close(FSDataOutputStream.java:106)
> at org.apache.hadoop.hdfs.server.namenode.snapshot.TestSnapshotFileLength.testSnapshotfileLength(TestSnapshotFileLength.java:130)
> {code}
--
This message was sent by Atlassian Jira
(v8.3.4#803005)