Posted to issues@hbase.apache.org by "ChiaPing Tsai (JIRA)" <ji...@apache.org> on 2016/07/15 06:42:20 UTC

[jira] [Created] (HBASE-16235) TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles

ChiaPing Tsai created HBASE-16235:
-------------------------------------

             Summary: TestSnapshotFromMaster#testSnapshotHFileArchiving will fail if there are too many hfiles
                 Key: HBASE-16235
                 URL: https://issues.apache.org/jira/browse/HBASE-16235
             Project: HBase
          Issue Type: Bug
            Reporter: ChiaPing Tsai
            Priority: Trivial


TestSnapshotFromMaster#testSnapshotHFileArchiving assumes that every hfile will be compacted and moved to the "archive" folder after cleaning. However, when a table has a large number of hfiles, not all of them get compacted, so some remain under the normal data path and the assertion fails.
This can happen when the default configuration is changed, for example by using a smaller write buffer (hbase.client.write.buffer) or by enabling ExponentialClientBackoffPolicy.
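For illustration only, the sketch below shows one way such a non-default configuration could be applied in the test setup. The concrete 8 KB value, and applying it through the test's HBaseTestingUtility instance (assumed here to be the UTIL field), are assumptions for this report, not part of the original test.
{code:title=Illustrative configuration change (sketch only)|borderStyle=solid}
// Sketch: shrink the client-side write buffer so puts are sent to the servers
// much more often, which, per this report, can lead to far more hfiles.
// The 8 KB value and the UTIL field are assumptions made for illustration.
Configuration conf = UTIL.getConfiguration();
conf.setLong("hbase.client.write.buffer", 8 * 1024);
{code}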

{code:title=TestSnapshotFromMaster.java|borderStyle=solid}
// it should also check the hfiles in the normal path (/hbase/data/default/...)
public void testSnapshotHFileArchiving() throws Exception {
  // ...
  // get the archived files for the table
  Collection<String> files = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);

  // and make sure that there is a proper subset
  for (String fileName : snapshotHFiles) {
    assertTrue("Archived hfiles " + files + " is missing snapshot file:" + fileName,
      files.contains(fileName));
  }
  // ...
}
{code}
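
As the comment at the top of the snippet suggests, one possible way to make the assertion robust is to accept a snapshot hfile if it is either archived or still sitting under the table's data directory (because it was never compacted). The sketch below is illustrative only and is not the actual patch: collectHFileNames is a hypothetical helper, and it assumes TABLE_NAME is a TableName and that the test's fs/rootDir/archiveDir fields and the existing getArchivedHFiles helper are in scope.
{code:title=Possible robust check (sketch, not the actual patch)|borderStyle=solid}
// Sketch only: accept a snapshot hfile if it is in the archive OR still under
// the live table directory. collectHFileNames is a hypothetical helper.
Collection<String> archived = getArchivedHFiles(archiveDir, rootDir, fs, TABLE_NAME);
Collection<String> live = collectHFileNames(fs, FSUtils.getTableDir(rootDir, TABLE_NAME));

for (String fileName : snapshotHFiles) {
  assertTrue("Snapshot file " + fileName + " is neither archived nor in the table dir",
    archived.contains(fileName) || live.contains(fileName));
}

// Hypothetical helper: recursively collect the names of all regular files under dir.
private static Collection<String> collectHFileNames(FileSystem fs, Path dir) throws IOException {
  Set<String> names = new HashSet<>();
  for (FileStatus status : fs.listStatus(dir)) {
    if (status.isDirectory()) {
      names.addAll(collectHFileNames(fs, status.getPath()));
    } else {
      names.add(status.getPath().getName());
    }
  }
  return names;
}
{code}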



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)