Posted to issues@sentry.apache.org by "kalyan kumar kalvagadda (JIRA)" <ji...@apache.org> on 2018/10/08 20:58:00 UTC

[jira] [Updated] (SENTRY-2305) Optimize time taken for persistence of HMS snapshot

     [ https://issues.apache.org/jira/browse/SENTRY-2305?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

kalyan kumar kalvagadda updated SENTRY-2305:
--------------------------------------------
    Attachment: SENTRY-2425.001.patch
        Status: Patch Available  (was: Open)

> Optimize time taken for persistence of HMS snapshot
> -------------------------------------------------
>
>                 Key: SENTRY-2305
>                 URL: https://issues.apache.org/jira/browse/SENTRY-2305
>             Project: Sentry
>          Issue Type: Sub-task
>          Components: Sentry
>    Affects Versions: 2.1.0
>            Reporter: kalyan kumar kalvagadda
>            Assignee: kalyan kumar kalvagadda
>            Priority: Major
>         Attachments: SENTRY-2425.001.patch
>
>
> There are a couple of options:
> # Break the total snapshot into batches and persist all of them in parallel in different transactions. As Sentry uses the repeatable_read isolation level, we should be able to have parallel writes on the same table. This raises an issue if persisting any of the batches fails, so this approach needs additional logic to clean up a partially persisted snapshot. I’m evaluating this option (a rough sketch follows this list).
> ** *Result:* Initial results are promising. Time to persist the snapshot came down by 60%.
> # Try disabling L1 Cache for persisting the snapshot.
> # Try persisting the snapshot entries sequentially in separate transactions. Transactions that commit a huge amount of data might take longer, since they spend a lot of CPU cycles keeping the rollback log up to date.
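>
> A minimal sketch of option 1 in Java, assuming a hypothetical BatchPersister helper whose persistBatch and deleteBatch calls each run in their own transaction. The interface and names here are placeholders for illustration, not Sentry's actual persistence API.
>
> import java.util.ArrayList;
> import java.util.List;
> import java.util.concurrent.ExecutionException;
> import java.util.concurrent.ExecutorService;
> import java.util.concurrent.Executors;
> import java.util.concurrent.Future;
>
> public class ParallelSnapshotPersister {
>
>   /** Hypothetical transactional helper; each call is assumed to commit in its own transaction. */
>   public interface BatchPersister {
>     void persistBatch(int batchId, List<String> batch) throws Exception;
>     void deleteBatch(int batchId) throws Exception;
>   }
>
>   private final int batchSize;
>   private final ExecutorService pool;
>
>   public ParallelSnapshotPersister(int batchSize, int threads) {
>     this.batchSize = batchSize;
>     this.pool = Executors.newFixedThreadPool(threads);
>   }
>
>   /** Persists all batches in parallel; if any batch fails, deletes the batches that committed. */
>   public void persist(List<String> snapshotEntries, BatchPersister persister) throws Exception {
>     // Submit one task per batch; each task persists its batch in a separate transaction.
>     List<Future<Integer>> futures = new ArrayList<>();
>     int nextId = 0;
>     for (int start = 0; start < snapshotEntries.size(); start += batchSize) {
>       final int batchId = nextId++;
>       final List<String> batch = new ArrayList<>(
>           snapshotEntries.subList(start, Math.min(start + batchSize, snapshotEntries.size())));
>       futures.add(pool.submit(() -> {
>         persister.persistBatch(batchId, batch);
>         return batchId;
>       }));
>     }
>
>     // Collect results, remembering which batches committed so a failure can be cleaned up.
>     List<Integer> committed = new ArrayList<>();
>     Exception failure = null;
>     for (Future<Integer> f : futures) {
>       try {
>         committed.add(f.get());
>       } catch (ExecutionException e) {
>         failure = e;
>       }
>     }
>
>     if (failure != null) {
>       // A batch failed: delete the batches that did commit so no partial snapshot is left behind.
>       for (int batchId : committed) {
>         persister.deleteBatch(batchId);
>       }
>       throw failure;
>     }
>   }
>
>   public void shutdown() {
>     pool.shutdown();
>   }
> }
>
> With repeatable_read isolation the parallel batch transactions should not block each other; batch size and thread-pool size would still need tuning against the backing database.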



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)