Posted to issues@hbase.apache.org by "Chia-Ping Tsai (JIRA)" <ji...@apache.org> on 2017/09/03 11:12:00 UTC

[jira] [Assigned] (HBASE-18743) HFiles that are in use by a table which has the same name and namespace as a default table cloned from snapshot may be deleted when that snapshot and default table are deleted

     [ https://issues.apache.org/jira/browse/HBASE-18743?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Chia-Ping Tsai reassigned HBASE-18743:
--------------------------------------

    Assignee: wenbang

> HFiles that are in use by a table which has the same name and namespace as a default table cloned from snapshot may be deleted when that snapshot and default table are deleted
> ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
>
>                 Key: HBASE-18743
>                 URL: https://issues.apache.org/jira/browse/HBASE-18743
>             Project: HBase
>          Issue Type: Bug
>          Components: hbase
>    Affects Versions: 1.1.12
>            Reporter: wenbang
>            Assignee: wenbang
>            Priority: Critical
>         Attachments: HBASE_18743.patch, HBASE_18743_v1.patch, HBASE_18743_v2.patch
>
>
> We recently had a critical production issue in which HFiles that were still in use by a table were deleted.
> This appears to have been caused by a situation in which a table cloned from a snapshot has a namespace and a name that are both the same as the name of a table in the default namespace. When the snapshot and the default-namespace table are deleted, HFiles that are still in use by the clone may be deleted.
> For example:
> The table in the default namespace is "t1".
> A new table is created by cloning a snapshot of "t1" into a namespace whose name is the same as that table's name, giving "t1:t1".
> When the snapshot and the default-namespace table are deleted, the HFiles still in use by the new table "t1:t1" are deleted as well.
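> The sequence above can be sketched with the HBase client Admin API. This is only an illustration of the scenario, assuming "t1" already exists and contains flushed HFiles; the class name and snapshot name are made up and are not taken from the attached patches:
> {code:java}
> import java.io.IOException;
> import org.apache.hadoop.hbase.HBaseConfiguration;
> import org.apache.hadoop.hbase.NamespaceDescriptor;
> import org.apache.hadoop.hbase.TableName;
> import org.apache.hadoop.hbase.client.Admin;
> import org.apache.hadoop.hbase.client.Connection;
> import org.apache.hadoop.hbase.client.ConnectionFactory;
>
> public class CloneSnapshotRepro {
>   public static void main(String[] args) throws IOException {
>     try (Connection conn = ConnectionFactory.createConnection(HBaseConfiguration.create());
>          Admin admin = conn.getAdmin()) {
>       TableName defaultTable = TableName.valueOf("t1");                 // table in the default namespace
>       admin.snapshot("snap_t1", defaultTable);                          // snapshot of "t1"
>       admin.createNamespace(NamespaceDescriptor.create("t1").build());  // namespace named after the table
>       admin.cloneSnapshot("snap_t1", TableName.valueOf("t1", "t1"));    // clone into "t1:t1"
>       admin.deleteSnapshot("snap_t1");                                  // delete the snapshot
>       admin.disableTable(defaultTable);
>       admin.deleteTable(defaultTable);                                  // delete the default-namespace table
>       // once the archive cleaner runs, HFiles still referenced by "t1:t1" may be removed
>     }
>   }
> }
> {code}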
> This happens because the table name used when the back-reference file is created is not derived correctly, so the reference file cannot be found later and the HFile cleaner deletes HFiles that are still in use, as long as the cloned table has not yet been major compacted:
> {code:java}
>   public static boolean create(final Configuration conf, final FileSystem fs,
>       final Path dstFamilyPath, final TableName linkedTable, final String linkedRegion,
>       final String hfileName, final boolean createBackRef) throws IOException {
>     String familyName = dstFamilyPath.getName();
>     String regionName = dstFamilyPath.getParent().getName();
>     String tableName = FSUtils.getTableName(dstFamilyPath.getParent().getParent())
>         .getNameAsString();
> {code}
> {code:java}
>   public static TableName getTableName(Path tablePath) {
>     return TableName.valueOf(tablePath.getParent().getName(), tablePath.getName());
>   }
> {code}
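> For the example above, the family paths involved would decompose roughly as follows. This is an illustration rather than the patch's analysis; Path is org.apache.hadoop.fs.Path, and the region and family names are hypothetical, only showing which path component each string comes from:
> {code:java}
> Path cloneFamilyDir   = new Path("/hbase/data/t1/t1/1234567890abcdef/cf");      // clone table "t1:t1"
> Path defaultFamilyDir = new Path("/hbase/data/default/t1/1234567890abcdef/cf"); // default table "t1"
> // For the clone, FSUtils.getTableName(cloneFamilyDir.getParent().getParent())
> // is TableName.valueOf("t1", "t1"), whose getNameAsString() is "t1:t1";
> // for the default table the same call yields just "t1".
> // The back-reference file name built from this string has to resolve back to the
> // existing link file; when it does not, the HFile cleaner treats the archived
> // HFile that "t1:t1" still links to as unreferenced and deletes it.
> {code}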



--
This message was sent by Atlassian JIRA
(v6.4.14#64029)