Posted to issues@flink.apache.org by "Yu Li (Jira)" <ji...@apache.org> on 2020/01/09 04:12:00 UTC

[jira] [Assigned] (FLINK-15512) Refactor the mechanism of how to construct the cache and write buffer manager shared across RocksDB instances

     [ https://issues.apache.org/jira/browse/FLINK-15512?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Yu Li reassigned FLINK-15512:
-----------------------------

    Assignee: Yun Tang
    Priority: Blocker  (was: Critical)

Upgrading the priority to Blocker.

> Refactor the mechanism of how to construct the cache and write buffer manager shared across RocksDB instances
> --------------------------------------------------------------------------------------------------------------
>
>                 Key: FLINK-15512
>                 URL: https://issues.apache.org/jira/browse/FLINK-15512
>             Project: Flink
>          Issue Type: Sub-task
>          Components: Runtime / State Backends
>            Reporter: Yun Tang
>            Assignee: Yun Tang
>            Priority: Blocker
>             Fix For: 1.10.0
>
>
> FLINK-14484 introduced an {{LRUCache}} shared among RocksDB instances so that the memory usage of RocksDB can be kept under control. However, due to the implementation and some bugs in RocksDB ([issue-6247|https://github.com/facebook/rocksdb/issues/6247]), we cannot limit the memory strictly.
> The way to work around this issue is to account for the buffer that the memtables may overuse (1/2 of the write buffer manager size); see the sketch below. As a consequence, the actual cache size available for users to share is not the same as the managed off-heap memory or the user-configured memory.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)