Posted to issues@sentry.apache.org by "kalyan kumar kalvagadda (JIRA)" <ji...@apache.org> on 2018/07/10 13:32:00 UTC

[jira] [Created] (SENTRY-2305) Optimize time taken for persistence of HMS snapshot

kalyan kumar kalvagadda created SENTRY-2305:
-----------------------------------------------

             Summary: Optimize time taken for persistence of HMS snapshot
                 Key: SENTRY-2305
                 URL: https://issues.apache.org/jira/browse/SENTRY-2305
             Project: Sentry
          Issue Type: Sub-task
          Components: Sentry
    Affects Versions: 2.1.0
            Reporter: kalyan kumar kalvagadda


There are a couple of options:

# Break the total snapshot into batches and persist all of them in parallel in different transactions. Since Sentry uses the repeatable_read isolation level, we should be able to have parallel writes on the same table. The catch is a failure while persisting any of the batches: this approach needs additional logic to clean up the partially persisted snapshot. I'm evaluating this option (see the first sketch after this list).
** *Result:* Initial results are promising. Time to persist the snapshot came down by 60%.
# Try disabling the L1 cache while persisting the snapshot (see the second sketch after this list).
# Try persisting the snapshot entries sequentially in separate transactions. Transactions that commit a huge amount of data can take longer because they burn a lot of CPU cycles keeping the rollback log up to date (see the third sketch after this list).
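
For option 1, here is a minimal sketch of the split-and-persist-in-parallel idea. The class name and the persistBatch callback are illustrative, not existing Sentry APIs; in Sentry the callback would wrap the actual batch insert in its own transaction (e.g. via the transaction manager).

{code:java}
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.function.Consumer;

public class ParallelSnapshotPersister<T> {

  private final int batchSize;
  private final ExecutorService pool;

  public ParallelSnapshotPersister(int batchSize, int threads) {
    this.batchSize = batchSize;
    this.pool = Executors.newFixedThreadPool(threads);
  }

  /**
   * Splits the snapshot entries into batches and persists each batch on the
   * pool. persistBatch is expected to open and commit its own transaction.
   */
  public void persist(List<T> snapshotEntries, Consumer<List<T>> persistBatch)
      throws InterruptedException, ExecutionException {
    List<Future<?>> futures = new ArrayList<>();
    for (int i = 0; i < snapshotEntries.size(); i += batchSize) {
      List<T> batch = new ArrayList<>(
          snapshotEntries.subList(i, Math.min(i + batchSize, snapshotEntries.size())));
      futures.add(pool.submit(() -> persistBatch.accept(batch)));
    }
    try {
      for (Future<?> f : futures) {
        f.get(); // surfaces the first batch failure as an ExecutionException
      }
    } finally {
      // Batches that already committed stay in the database; that partially
      // persisted snapshot is what the extra cleanup logic has to remove
      // before a retry.
      pool.shutdown();
    }
  }
}
{code}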
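
For option 2, a sketch of what turning off the L1 cache could look like at the JDO/DataNucleus level. The datanucleus.cache.level1.type property comes from the DataNucleus documentation and the "none" value should be verified against the DataNucleus version Sentry ships; the flushAndEvict helper is an illustrative alternative that keeps the default cache but stops it from growing during a large snapshot write.

{code:java}
import java.util.Properties;
import javax.jdo.JDOHelper;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;

public class SnapshotPersistenceCacheConfig {

  /** Builds a PMF with the DataNucleus level-1 cache turned off. */
  public static PersistenceManagerFactory buildPmf(Properties datastoreProps) {
    Properties props = new Properties();
    props.putAll(datastoreProps); // connection URL, driver, user, password, ...
    // "soft" is the DataNucleus default; "none" disables the L1 cache so a
    // large snapshot write does not pile up cached instances in the
    // PersistenceManager.
    props.setProperty("datanucleus.cache.level1.type", "none");
    return JDOHelper.getPersistenceManagerFactory(props);
  }

  /**
   * Alternative with the same intent: keep the default cache but bound its
   * growth by flushing pending inserts and evicting persistent instances
   * every few thousand objects inside the persisting loop.
   */
  static void flushAndEvict(PersistenceManager pm) {
    pm.flush();    // push pending inserts to the datastore
    pm.evictAll(); // drop now-persistent instances from the L1 cache
  }
}
{code}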
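
For option 3, a sketch of persisting the snapshot entries sequentially, committing each batch in its own short transaction so the database never has to maintain a rollback log for one giant commit. The class name and batch size are made up for illustration; only standard javax.jdo calls are used.

{code:java}
import java.util.List;
import javax.jdo.PersistenceManager;
import javax.jdo.PersistenceManagerFactory;
import javax.jdo.Transaction;

public class SequentialBatchPersister {

  /** Persists the entries in fixed-size batches, one transaction per batch. */
  public static <T> void persistSequentially(PersistenceManagerFactory pmf,
                                             List<T> entries, int batchSize) {
    for (int i = 0; i < entries.size(); i += batchSize) {
      List<T> batch = entries.subList(i, Math.min(i + batchSize, entries.size()));
      PersistenceManager pm = pmf.getPersistenceManager();
      Transaction tx = pm.currentTransaction();
      try {
        tx.begin();
        pm.makePersistentAll(batch); // only this batch is covered by the rollback log
        tx.commit();
      } finally {
        if (tx.isActive()) {
          tx.rollback(); // only the current batch is rolled back on failure
        }
        pm.close();
      }
    }
  }
}
{code}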



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)