Posted to issues@kudu.apache.org by "Alexey Serbin (Jira)" <ji...@apache.org> on 2022/10/24 17:49:00 UTC

[jira] [Commented] (KUDU-3407) MM: Give a chance to do other OP while server is under memory pressure

    [ https://issues.apache.org/jira/browse/KUDU-3407?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17623307#comment-17623307 ] 

Alexey Serbin commented on KUDU-3407:
-------------------------------------

Thank you for the suggestion!

Yes, it would be great if somebody could create a comprehensive test bench with various workloads and use it to evaluate different policies/strategies for picking the most preferable maintenance operation to run (the current one as well as variations like the one you mentioned).

Please feel free to pick up this task -- contributions are very welcome!

> MM: Give a chance to do other OP while server is under memory pressure
> ----------------------------------------------------------------------
>
>                 Key: KUDU-3407
>                 URL: https://issues.apache.org/jira/browse/KUDU-3407
>             Project: Kudu
>          Issue Type: Improvement
>          Components: compaction
>    Affects Versions: 1.14.0
>            Reporter: Song Jiacheng
>            Priority: Major
>
> For now, if the server is under memory pressure (60% of the memory limit by default), the MaintenanceManager always picks a memory-flush operation to run. So if all the tservers stay under pressure for a long time, many other operations remain pending, which leads to problems such as REDO deltas not being compacted, UNDO deltas not being deleted, etc.
> It might be better to add a parameter that gives a chance to run other ops, with a formula like this (a sketch follows below):
> P(run other op) = (1 - (memory_now - pressure_threshold) / (soft_limit - pressure_threshold)) * kChance
> kChance should be configurable.
> This gives a probability of running other maintenance operations even while the server is under memory pressure.
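
A minimal C++ sketch of the heuristic described above, just to make the formula concrete. This is not actual Kudu code: names such as memory_now, pressure_threshold, soft_limit, and k_chance are illustrative assumptions, and the real integration point in the MaintenanceManager would look different.

// Sketch only: illustrates the proposed probability formula, not real Kudu code.
#include <algorithm>
#include <random>

// Probability of letting the scheduler consider a non-flush operation
// while the server is under memory pressure.
double OtherOpProbability(double memory_now,
                          double pressure_threshold,
                          double soft_limit,
                          double k_chance) {
  if (memory_now <= pressure_threshold) return 1.0;  // not under pressure
  if (memory_now >= soft_limit) return 0.0;          // at/over the soft limit: flush only
  const double frac = (memory_now - pressure_threshold) /
                      (soft_limit - pressure_threshold);
  return std::clamp((1.0 - frac) * k_chance, 0.0, 1.0);
}

// Coin flip: true means a non-flush operation may be scheduled this round.
bool ShouldConsiderOtherOps(double p) {
  static thread_local std::mt19937 gen{std::random_device{}()};
  return std::bernoulli_distribution(p)(gen);
}

For example, a scheduler could call ShouldConsiderOtherOps(OtherOpProbability(mem, threshold, limit, chance)) before restricting its candidate set to flush operations, where chance would come from a new (hypothetical) gflag; with k_chance = 0 the behavior degenerates to the current flush-only policy.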



--
This message was sent by Atlassian Jira
(v8.20.10#820010)