Posted to dev@flink.apache.org by "Truong Duc Kien (JIRA)" <ji...@apache.org> on 2018/03/25 07:23:00 UTC

[jira] [Created] (FLINK-9070) Improve performance of RocksDBMapState.clear()

Truong Duc Kien created FLINK-9070:
--------------------------------------

             Summary: Improve performance of RocksDBMapState.clear()
                 Key: FLINK-9070
                 URL: https://issues.apache.org/jira/browse/FLINK-9070
             Project: Flink
          Issue Type: Improvement
          Components: State Backends, Checkpointing
    Affects Versions: 1.6.0
            Reporter: Truong Duc Kien


Currently, RocksDBMapState.clear() is implemented by iterating over all the keys and dropping them one by one (see the sketch after the list below). This iteration can be quite slow with:
 * Large maps
 * High-churn maps with a lot of tombstones
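
For illustration, here is a minimal sketch of the per-key deletion pattern described above, written against the RocksDB Java API (org.rocksdb). The method name, the prefix argument and the startsWith helper are illustrative, not the actual Flink code:

{code:java}
import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.ReadOptions;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.RocksIterator;
import org.rocksdb.WriteOptions;

public class PerKeyClearSketch {

    // Deletes every entry under the map state's key prefix one at a time,
    // producing one tombstone per entry, which is the pattern clear() currently follows.
    static void clearByIteration(RocksDB db, ColumnFamilyHandle cf, byte[] prefix,
                                 WriteOptions writeOptions) throws RocksDBException {
        try (ReadOptions readOptions = new ReadOptions();
             RocksIterator iter = db.newIterator(cf, readOptions)) {
            for (iter.seek(prefix); iter.isValid() && startsWith(iter.key(), prefix); iter.next()) {
                db.delete(cf, writeOptions, iter.key());
            }
        }
    }

    // Illustrative helper: true if 'key' begins with 'prefix'.
    static boolean startsWith(byte[] key, byte[] prefix) {
        if (key.length < prefix.length) {
            return false;
        }
        for (int i = 0; i < prefix.length; i++) {
            if (key[i] != prefix[i]) {
                return false;
            }
        }
        return true;
    }
}
{code}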

There are a few methods to speed up deletion of a range of keys, each with its own caveats (a sketch of the DeleteRange variant follows the list):
 * DeleteRange: still experimental and likely buggy
 * DeleteFilesInRange + CompactRange: only effective for large ranges
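
A minimal sketch of the DeleteRange variant, using the RocksDB Java API's deleteRange call. It assumes the map's entries occupy a contiguous key range under a common prefix; the prefixSuccessor helper is illustrative:

{code:java}
import java.util.Arrays;

import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteOptions;

public class DeleteRangeSketch {

    // Drops the whole [prefix, successor(prefix)) key range with a single
    // range tombstone instead of one tombstone per entry.
    static void clearByDeleteRange(RocksDB db, ColumnFamilyHandle cf, byte[] prefix,
                                   WriteOptions writeOptions) throws RocksDBException {
        db.deleteRange(cf, writeOptions, prefix, prefixSuccessor(prefix));
    }

    // Illustrative helper: smallest key strictly greater than every key
    // starting with 'prefix'.
    static byte[] prefixSuccessor(byte[] prefix) {
        byte[] end = prefix.clone();
        for (int i = end.length - 1; i >= 0; i--) {
            if (end[i] != (byte) 0xFF) {
                end[i]++;
                return Arrays.copyOf(end, i + 1);
            }
        }
        throw new IllegalArgumentException("prefix has no upper bound");
    }
}
{code}

A single range tombstone replaces the per-entry tombstones, at the cost of the DeleteRange caveats noted above.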

Flink could also keep a list of the inserted keys in memory and delete them directly, without having to iterate over the RocksDB database again (a sketch of this approach follows).
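
A sketch of this idea, assuming a hypothetical insertedKeys set that is maintained on every write to the map state; the deletes are applied through a single WriteBatch:

{code:java}
import java.util.Set;

import org.rocksdb.ColumnFamilyHandle;
import org.rocksdb.RocksDB;
import org.rocksdb.RocksDBException;
import org.rocksdb.WriteBatch;
import org.rocksdb.WriteOptions;

public class TrackedKeysClearSketch {

    // 'insertedKeys' is a hypothetical in-memory index of the raw RocksDB keys
    // written for this map state, maintained on every put()/remove().
    static void clearByTrackedKeys(RocksDB db, ColumnFamilyHandle cf, Set<byte[]> insertedKeys,
                                   WriteOptions writeOptions) throws RocksDBException {
        try (WriteBatch batch = new WriteBatch()) {
            for (byte[] key : insertedKeys) {
                batch.delete(cf, key); // no RocksDB iteration needed
            }
            db.write(writeOptions, batch);
            insertedKeys.clear();
        }
    }
}
{code}

In a real implementation the byte[] keys would need a wrapper with proper equals/hashCode (or the serialized user keys could be tracked instead); the Set here is purely illustrative.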

Reference:
 * [RocksDB article about range deletion|https://github.com/facebook/rocksdb/wiki/Delete-A-Range-Of-Keys]
 * [Bug in DeleteRange|https://pingcap.com/blog/2017-09-08-rocksdbbug]
