Posted to issues@flink.apache.org by "Yu Li (Jira)" <ji...@apache.org> on 2020/01/13 07:40:00 UTC

[jira] [Closed] (FLINK-15512) Refactor the mechanism of how to construct the cache and write buffer manager shared across RocksDB instances

     [ https://issues.apache.org/jira/browse/FLINK-15512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yu Li closed FLINK-15512.
-------------------------
    Resolution: Implemented

Merged in
master via: 346e2e02af385d7482376c25d2c3de09b89c1111
release-1.10 via: 1cd7cee8fc02061945220dcd8d83abf0f04cdaf6

> Refactor the mechanism of how to construct the cache and write buffer manager shared across RocksDB instances
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-15512
>                 URL: https://issues.apache.org/jira/browse/FLINK-15512
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Runtime / State Backends
>            Reporter: Yun Tang
>            Assignee: Yun Tang
>            Priority: Blocker
>              Labels: pull-request-available
>             Fix For: 1.10.0
>
>          Time Spent: 20m
>  Remaining Estimate: 0h
>
> FLINK-14484 introduced an {{LRUCache}} shared among RocksDB instances so that the memory usage of RocksDB could be kept under control. However, due to the implementation and some bugs in RocksDB ([issue-6247|https://github.com/facebook/rocksdb/issues/6247]), we cannot limit the memory strictly.
> The way to work around this issue is to account for the buffer that memtables could overuse (half of the write buffer manager size). With that overuse reserved, the actual cache size available for sharing is no longer the same as the managed off-heap memory or the user-configured memory.
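
A minimal sketch of the sizing arithmetic implied by the description above, assuming the stated 1/2-overuse bound. The class and method names here are illustrative, not the merged Flink API:

{code:java}
// Sketch of the cache-capacity accounting described in this issue.
// Assumption (from the description): memtables may overuse up to half of the
// write buffer manager (WBM) capacity, so the shared cache must be created
// smaller than the user-configured total to keep actual usage within budget.
public final class SharedCacheSizing {

    /**
     * With write buffer ratio r and total budget T, memtable memory should be
     * capped at r * T even with the 1/2 overuse, i.e. 1.5 * W = r * T,
     * hence W = 2 * r * T / 3.
     */
    static long writeBufferManagerCapacity(long totalMemorySize, double writeBufferRatio) {
        return (long) (2 * totalMemorySize * writeBufferRatio / 3);
    }

    /**
     * The cache capacity reserves the potential overuse (W / 2):
     * C = T - W / 2 = (3 - r) * T / 3, so that C + W / 2 = T.
     */
    static long actualCacheCapacity(long totalMemorySize, double writeBufferRatio) {
        return (long) ((3 - writeBufferRatio) * totalMemorySize / 3);
    }

    public static void main(String[] args) {
        long total = 512L << 20; // 512 MiB user-configured budget
        double ratio = 0.5;      // example write buffer ratio
        System.out.println("WBM capacity:   " + writeBufferManagerCapacity(total, ratio));
        System.out.println("Cache capacity: " + actualCacheCapacity(total, ratio));
    }
}
{code}

With ratio 0.5 this yields a cache of 5/6 of the budget and a WBM of 1/3 of the budget, so even at maximum overuse (cache + WBM/2) the total stays exactly at the configured limit.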



--
This message was sent by Atlassian Jira
(v8.3.4#803005)