Posted to issues@flink.apache.org by "Sihua Zhou (JIRA)" <ji...@apache.org> on 2018/02/26 05:06:00 UTC

[jira] [Updated] (FLINK-8602) Improve recovery performance for rocksdb backend

     [ https://issues.apache.org/jira/browse/FLINK-8602?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Sihua Zhou updated FLINK-8602:
------------------------------
    Summary: Improve recovery performance for rocksdb backend  (was: Accelerate recover from failover when use incremental checkpoint)

> Improve recovery performance for rocksdb backend
> ------------------------------------------------
>
>                 Key: FLINK-8602
>                 URL: https://issues.apache.org/jira/browse/FLINK-8602
>             Project: Flink
>          Issue Type: Improvement
>          Components: State Backends, Checkpointing
>    Affects Versions: 1.5.0
>            Reporter: Sihua Zhou
>            Assignee: Sihua Zhou
>            Priority: Major
>
> Currently, when incremental checkpointing is enabled, a change of parallelism may cause `hasExtraKeys` to be `true`. If this occurs, Flink will loop over all restored RocksDB instances and iterate over all of their data to fetch the entries that fall into the current `KeyGroupRange`. This can be improved as follows:
> - 1. For multiple RocksDB instances, we don't need to iterate over their entries and insert them into another instance one by one; we can use the `ingestExternalFile()` API to merge them (see the first sketch below).
> - 2. For key groups that do not belong to the target `KeyGroupRange`, we can delete them lazily by setting a `CompactionFilter` for the `ColumnFamily` (see the second sketch below).
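> For item 1, a minimal sketch of the idea, assuming the SST files of the restored instances have already been collected (the helper name `ingestInto` and the way the file list is obtained are illustrative; `ingestExternalFile()` and `IngestExternalFileOptions` are part of the RocksDB Java API):
> ```java
> import java.util.List;
>
> import org.rocksdb.ColumnFamilyHandle;
> import org.rocksdb.IngestExternalFileOptions;
> import org.rocksdb.RocksDB;
> import org.rocksdb.RocksDBException;
>
> public final class IngestSketch {
>
>     /**
>      * Bulk-loads SST files taken from other RocksDB instances into the
>      * target instance, instead of iterating over every entry and
>      * re-inserting it one by one.
>      */
>     public static void ingestInto(
>             RocksDB targetDb,
>             ColumnFamilyHandle columnFamily,
>             List<String> sstFilePaths) throws RocksDBException {
>
>         try (IngestExternalFileOptions options = new IngestExternalFileOptions()) {
>             // Move the files instead of copying them, to avoid extra I/O.
>             options.setMoveFiles(true);
>             targetDb.ingestExternalFile(columnFamily, sstFilePaths, options);
>         }
>     }
> }
> ```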
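> For item 2, note that custom compaction filters cannot be implemented in pure Java through the RocksDB Java API, so the filter itself would have to live on the native (C++) side. The sketch below therefore only shows the filtering decision such a filter would apply, assuming Flink's key layout where every key is prefixed with its key group in big-endian order (the class and method names are illustrative):
> ```java
> /**
>  * The drop/keep decision a key-group-range compaction filter would make.
>  * Entries whose key-group prefix falls outside the target range are
>  * dropped lazily during compaction instead of being deleted eagerly.
>  */
> public final class KeyGroupRangeFilter {
>
>     private final int startKeyGroup; // inclusive
>     private final int endKeyGroup;   // inclusive
>     private final int prefixBytes;   // number of key-group prefix bytes (1 or 2 in Flink)
>
>     public KeyGroupRangeFilter(int startKeyGroup, int endKeyGroup, int prefixBytes) {
>         this.startKeyGroup = startKeyGroup;
>         this.endKeyGroup = endKeyGroup;
>         this.prefixBytes = prefixBytes;
>     }
>
>     /** Returns true if the entry should be dropped during compaction. */
>     public boolean shouldDrop(byte[] serializedKey) {
>         // Decode the big-endian key-group prefix from the serialized key.
>         int keyGroup = 0;
>         for (int i = 0; i < prefixBytes; i++) {
>             keyGroup = (keyGroup << 8) | (serializedKey[i] & 0xFF);
>         }
>         return keyGroup < startKeyGroup || keyGroup > endKeyGroup;
>     }
> }
> ```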
> Any advice would be highly appreciated!
>  



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)