Posted to issues@hbase.apache.org by "Guanghao Zhang (Jira)" <ji...@apache.org> on 2020/08/26 05:51:01 UTC

[jira] [Updated] (HBASE-20226) Performance Improvement Taking Large Snapshots In Remote Filesystems

     [ https://issues.apache.org/jira/browse/HBASE-20226?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Guanghao Zhang updated HBASE-20226:
-----------------------------------
    Fix Version/s:     (was: 2.2.7)
                   2.2.6

> Performance Improvement Taking Large Snapshots In Remote Filesystems
> --------------------------------------------------------------------
>
>                 Key: HBASE-20226
>                 URL: https://issues.apache.org/jira/browse/HBASE-20226
>             Project: HBase
>          Issue Type: Improvement
>          Components: snapshots
>    Affects Versions: 3.0.0-alpha-1, 2.3.0, 1.7.0
>         Environment: HBase 1.4.0 running on an AWS EMR cluster with the hbase.rootdir set to point to a folder in S3 
>            Reporter: Saad Mufti
>            Assignee: Bharath Vissapragada
>            Priority: Minor
>              Labels: performance
>             Fix For: 3.0.0-alpha-1, 2.3.1, 1.7.0, 2.4.0, 2.2.6
>
>         Attachments: HBASE-20226..01.patch
>
>
> When taking a snapshot of any table, one of the last steps is to delete the per-region manifests, which have already been rolled up into a larger overall manifest and therefore contain only redundant information.
> This proposal is to perform the deletion in a thread pool bounded by hbase.snapshot.thread.pool.max (see the sketch below). For large tables with many regions, the current single-threaded deletion takes longer than all of the other snapshot tasks when the HBase data and the snapshot folder are both in a remote filesystem such as S3.
> I have a patch for this proposal almost ready and will submit it tomorrow for feedback, although I haven't had a chance to write any tests yet.
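
A minimal sketch of the parallel deletion idea described above, assuming an ExecutorService sized by hbase.snapshot.thread.pool.max. The class name ParallelManifestCleanup, the method deleteRegionManifests, and the default pool size of 8 are illustrative only and are not taken from the actual patch:

    import java.io.IOException;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.Future;

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class ParallelManifestCleanup {

      /**
       * Deletes per-region manifest files concurrently instead of one at a time.
       * The pool size is read from hbase.snapshot.thread.pool.max; the fallback
       * of 8 is illustrative, not the default HBase uses.
       */
      public static void deleteRegionManifests(Configuration conf, FileSystem fs,
          List<Path> regionManifests) throws IOException {
        int poolSize = conf.getInt("hbase.snapshot.thread.pool.max", 8);
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        List<Future<Boolean>> results = new ArrayList<>();
        try {
          for (Path manifest : regionManifests) {
            // Each delete is an independent remote call (e.g. against S3), so
            // issuing them in parallel hides the per-request latency that
            // dominates when a table has many regions.
            results.add(pool.submit(() -> fs.delete(manifest, false)));
          }
          for (Future<Boolean> result : results) {
            result.get(); // surface any failure from the worker threads
          }
        } catch (Exception e) {
          throw new IOException("Failed to delete region manifests", e);
        } finally {
          pool.shutdown();
        }
      }
    }

The point of the sketch is only the shape of the change: the deletes are independent, so a bounded pool turns a sequence of high-latency remote calls into a batch of concurrent ones.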



--
This message was sent by Atlassian Jira
(v8.3.4#803005)