Posted to dev@hbase.apache.org by N Keywal <nk...@gmail.com> on 2012/05/26 01:30:05 UTC
hdfs & flaky testFSUtils
Hello all,
Does anyone have an idea why the code below fails? It's a
simplification of a currently flaky test in TestFSUtils.
It fails 100% of the time without the sleep, and never when the sleep is enabled.
On paper, once the close of an HDFS file returns, we are guaranteed
that the blocks are written...
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.hbase.HBaseTestingUtility;
import org.apache.hadoop.hbase.HDFSBlocksDistribution;
import org.apache.hadoop.hbase.util.FSUtils;
import org.junit.Test;
import static org.junit.Assert.assertEquals;

@Test
public void testFSUtils() throws Exception {
  final String[] hosts = {"host1", "host2", "host3", "host4"};
  Path testFile = new Path("/test1.txt");
  HBaseTestingUtility htu = new HBaseTestingUtility();
  try {
    htu.startMiniDFSCluster(hosts).waitActive();
    FileSystem fs = htu.getDFSCluster().getFileSystem();
    for (int i = 0; i < 100; ++i) {
      FSDataOutputStream out = fs.create(testFile);
      byte[] data = new byte[1];
      out.write(data, 0, 1);
      out.close();
      // Uncommenting this sleep makes the test pass.
      // Thread.sleep(2000);
      FileStatus status = fs.getFileStatus(testFile);
      HDFSBlocksDistribution blocksDistribution =
          FSUtils.computeHDFSBlocksDistribution(fs, status, 0, status.getLen());
      assertEquals("Wrong number of hosts distributing blocks at iteration " + i,
          3, blocksDistribution.getTopHosts().size());
      fs.delete(testFile, true);
    }
  } finally {
    htu.shutdownMiniDFSCluster();
  }
}
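(For what it's worth, rather than a fixed Thread.sleep(2000), a check like
this could poll its condition with a timeout and pass as soon as the block
locations catch up. Here is a minimal, self-contained sketch of that
poll-until-true pattern; Waiter and waitFor are hypothetical names for
illustration, not a Hadoop or HBase API.)

```java
import java.util.concurrent.Callable;

/** Hypothetical helper: polls a condition until it holds or a timeout expires. */
public final class Waiter {

  /**
   * Evaluates {@code condition} every {@code intervalMs} milliseconds,
   * returning true as soon as it holds, or false once {@code timeoutMs}
   * has elapsed without success.
   */
  public static boolean waitFor(Callable<Boolean> condition,
                                long timeoutMs, long intervalMs) throws Exception {
    long deadline = System.currentTimeMillis() + timeoutMs;
    while (System.currentTimeMillis() < deadline) {
      if (condition.call()) {
        return true;  // condition met before the deadline
      }
      Thread.sleep(intervalMs);
    }
    return condition.call();  // one final check at the deadline
  }

  public static void main(String[] args) throws Exception {
    // Toy condition: becomes true roughly 200 ms after start.
    long start = System.currentTimeMillis();
    boolean ok = waitFor(() -> System.currentTimeMillis() - start > 200, 2000, 50);
    System.out.println(ok);  // prints "true"
  }
}
```

In the test above, the size check could then be wrapped in such a loop
(re-computing the blocks distribution on each attempt) instead of sleeping
a fixed two seconds on every iteration.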
Thank you in advance for your help,
N.
Fwd: hdfs & flaky testFSUtils
Posted by N Keywal <nk...@gmail.com>.
Hi,
I was not too successful with my question below. It was sent on a Friday
evening; maybe I will have more luck in the middle of the week
;-).
Cheers,
Nicolas