Posted to issues@kudu.apache.org by "Grant Henke (Jira)" <ji...@apache.org> on 2020/06/02 17:46:00 UTC

[jira] [Updated] (KUDU-1954) Improve maintenance manager behavior in heavy write workload

     [ https://issues.apache.org/jira/browse/KUDU-1954?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Grant Henke updated KUDU-1954:
------------------------------
    Labels: roadmap-candidate scalability  (was: )

> Improve maintenance manager behavior in heavy write workload
> ------------------------------------------------------------
>
>                 Key: KUDU-1954
>                 URL: https://issues.apache.org/jira/browse/KUDU-1954
>             Project: Kudu
>          Issue Type: Improvement
>          Components: perf, tserver
>    Affects Versions: 1.3.0
>            Reporter: Todd Lipcon
>            Priority: Major
>              Labels: roadmap-candidate, scalability
>         Attachments: mm-trace.png
>
>
> During the investigation in [this doc|https://docs.google.com/document/d/1U1IXS1XD2erZyq8_qG81A1gZaCeHcq2i0unea_eEf5c/edit] I found a few maintenance-manager-related issues during heavy writes:
> - we don't schedule flushes until we are already in the "backpressure" realm, so we spend most of our time applying backpressure
> - even if we configure N maintenance threads, we typically use only ~50% of those threads due to the scheduling granularity
> - when we do hit the "memory-pressure flush" threshold, all threads quickly switch to flushing, which then brings us far beneath the threshold
> - long-running compactions can temporarily starve flushes
> - a high volume of writes can starve compactions
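
The first and third issues above suggest scheduling flushes at a soft memory threshold below the hard backpressure threshold, rather than only reacting once backpressure kicks in. The following toy sketch (not Kudu's actual maintenance manager code; all names and thresholds are hypothetical) illustrates the idea of a scheduler that flushes proactively under moderate pressure but still gives compactions a share of threads under low pressure:

```python
# Hypothetical sketch of a maintenance-op picker. Thresholds, names,
# and scoring are illustrative only and do not reflect Kudu internals.

SOFT_FLUSH_THRESHOLD = 0.60    # start flushing proactively here
BACKPRESSURE_THRESHOLD = 0.80  # above this, incoming writes get throttled

def pick_op(memory_fraction, flush_candidates, compaction_candidates):
    """Pick the next maintenance op given current memory pressure.

    memory_fraction: fraction of the memory limit currently in use.
    flush_candidates / compaction_candidates: lists of (op_name, score),
    where a higher score means the op is estimated to be more useful
    (e.g. more memory freed, or more compaction benefit).
    """
    def best(candidates):
        return max(candidates, key=lambda op: op[1])[0]

    if memory_fraction >= SOFT_FLUSH_THRESHOLD and flush_candidates:
        # Moderate-to-high pressure: prefer the flush that helps most,
        # so we rarely reach the backpressure threshold at all.
        return best(flush_candidates)
    if compaction_candidates:
        # Low pressure: spend threads on compactions so a sustained
        # write workload does not starve them indefinitely.
        return best(compaction_candidates)
    if flush_candidates:
        return best(flush_candidates)
    return None
```

A real implementation would also need to cap how many threads switch to flushing at once, so that crossing the threshold does not cause every worker to flush simultaneously and overshoot far below it, which is exactly the oscillation described above.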



--
This message was sent by Atlassian Jira
(v8.3.4#803005)