Posted to issues@hbase.apache.org by "Jarryd Lee (Jira)" <ji...@apache.org> on 2022/12/21 17:20:00 UTC

[jira] [Comment Edited] (HBASE-27542) Remove unneeded distcp log cleanup after incremental backups

    [ https://issues.apache.org/jira/browse/HBASE-27542?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17650942#comment-17650942 ] 

Jarryd Lee edited comment on HBASE-27542 at 12/21/22 5:19 PM:
--------------------------------------------------------------

The metafolder for DistCp is created in the staging directory of the [current filesystem|https://github.com/apache/hadoop/blob/eec8ccd11915958d1bab9141f08f759266a236b0/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java#L412]. When running in an external YARN cluster, that is a different filesystem from the one used by the HBase cluster. This becomes an issue in the cleanup method, where we [fetch the file statuses|https://github.com/apache/hbase/blob/2c3abae18aa35e2693b64b143316817d4569d0c3/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/TableBackupClient.java#L345] at the root path: the FileSystem object and the path refer to different filesystems, which causes the checkPath() [call here|https://github.com/apache/hadoop/blob/5187bd37ae9c38dc55bb1e0451064a8f191cfca0/hadoop-hdfs-project/hadoop-hdfs-client/src/main/java/org/apache/hadoop/hdfs/DistributedFileSystem.java#L253] to fail.
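
To make the mismatch concrete, here is a minimal sketch (with hypothetical cluster URIs, not the actual backup code) of how a FileSystem bound to one cluster rejects a path on another:
{code:java}
import java.net.URI;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WrongFsDemo {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();

    // FileSystem bound to the external YARN cluster, where DistCp stages its metafolder.
    FileSystem yarnFs = FileSystem.get(new URI("hdfs://yarn-cluster:8020"), conf);

    // Root path on the HBase cluster's filesystem.
    Path hbaseRoot = new Path("hdfs://hbase-cluster:8020/backup");

    // listStatus() ends up in checkPath(), which compares the path's scheme and
    // authority against the FileSystem's own URI and throws
    // IllegalArgumentException: "Wrong FS: ..." because they differ.
    yarnFs.listStatus(hbaseRoot);
  }
}
{code}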

My original patch was to get the filesystem of the rootPath, rather than using the current filesystem, and to issue deletes to that filesystem:
{code:java}
FileSystem destFs = FileSystem.get(rootPath.toUri(), conf); {code}
This would ensure that we don't run into any "Wrong FS.." exceptions. It wouldn't, however, handle the case where the DistCp process ends ungracefully: the cleanup would never attempt to remove the metafolder in the staging directory created by the DistCp job, since the two are on different filesystems.
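
For illustration, a rough sketch of what such a cleanup could look like (class, method, and prefix names here are illustrative, not the actual TableBackupClient code):
{code:java}
import java.io.IOException;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class BackupCleanupSketch {
  // Bind the FileSystem to rootPath's URI so that list/delete calls always
  // target the filesystem the path actually lives on, avoiding "Wrong FS".
  static void cleanupDistCpLogs(Path rootPath, Configuration conf) throws IOException {
    FileSystem destFs = FileSystem.get(rootPath.toUri(), conf);
    if (!destFs.exists(rootPath)) {
      return;
    }
    for (FileStatus file : destFs.listStatus(rootPath)) {
      // Prefix is a placeholder for however the DistCp log dirs are named.
      if (file.getPath().getName().startsWith("_distcp_logs")) {
        destFs.delete(file.getPath(), true); // recursive delete
      }
    }
  }
}
{code}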


> Remove unneeded distcp log cleanup after incremental backups
> ------------------------------------------------------------
>
>                 Key: HBASE-27542
>                 URL: https://issues.apache.org/jira/browse/HBASE-27542
>             Project: HBase
>          Issue Type: Improvement
>          Components: backup&restore
>    Affects Versions: 3.0.0-alpha-3
>            Reporter: Jarryd Lee
>            Priority: Minor
>
> During the completion step of incremental backups, the [TableBackupClient|https://github.com/apache/hbase/blob/2c3abae18aa35e2693b64b143316817d4569d0c3/hbase-backup/src/main/java/org/apache/hadoop/hbase/backup/impl/TableBackupClient.java#L343-L355] ensures distcp logs are cleaned up. However, [DistCp|https://github.com/apache/hadoop/blob/b87c0ea7ebde3edc312dcc8938809610a914df7f/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java#L465-L476] already ensures that the metafolder, where the distcp logs are stored, is cleaned up via a [shutdown hook|https://github.com/apache/hadoop/blob/b87c0ea7ebde3edc312dcc8938809610a914df7f/hadoop-tools/hadoop-distcp/src/main/java/org/apache/hadoop/tools/DistCp.java#L439-L442].
> It should therefore be safe to remove the TableBackupClient cleanup method.
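
For reference, the shutdown hook the description points at is registered through Hadoop's ShutdownHookManager. A minimal sketch of that pattern (the folder location and priority below are made up for illustration, not DistCp's actual values):
{code:java}
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.util.ShutdownHookManager;

public class MetaFolderCleanupSketch {
  public static void main(String[] args) throws Exception {
    Configuration conf = new Configuration();
    Path metaFolder = new Path("/tmp/distcp-metafolder-demo"); // illustrative location

    // Register a hook that runs on JVM shutdown, so the metafolder is removed
    // even when the job exits abnormally -- the same idea as DistCp's Cleanup hook.
    ShutdownHookManager.get().addShutdownHook(() -> {
      try {
        FileSystem.get(conf).delete(metaFolder, true);
      } catch (Exception e) {
        // best-effort cleanup; nothing more to do during shutdown
      }
    }, 30 /* illustrative priority */);
  }
}
{code}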



--
This message was sent by Atlassian Jira
(v8.20.10#820010)