Posted to issues@flink.apache.org by "Hangxiang Yu (Jira)" <ji...@apache.org> on 2022/07/29 06:17:00 UTC

[jira] [Commented] (FLINK-26590) Triggered checkpoints can be delayed by discarding shared state

    [ https://issues.apache.org/jira/browse/FLINK-26590?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17572762#comment-17572762 ] 

Hangxiang Yu commented on FLINK-26590:
--------------------------------------

Do we have any plans for this ticket?
I think it may be important for users to be able to use the changelog state backend out of the box.

WDYT? [~roman] 

> Triggered checkpoints can be delayed by discarding shared state
> ---------------------------------------------------------------
>
>                 Key: FLINK-26590
>                 URL: https://issues.apache.org/jira/browse/FLINK-26590
>             Project: Flink
>          Issue Type: Improvement
>          Components: Runtime / Checkpointing
>    Affects Versions: 1.14.3, 1.15.0
>            Reporter: Roman Khachatryan
>            Assignee: Roman Khachatryan
>            Priority: Major
>              Labels: pull-request-available, stale-assigned
>             Fix For: 1.16.0
>
>
> Quick note: CheckpointCleaner is not involved here.
> When a checkpoint is subsumed, SharedStateRegistry schedules its unused shared state for async deletion. It uses the common IO pool for this and adds one Runnable per state handle (see SharedStateRegistryImpl.scheduleAsyncDelete).
> When a checkpoint is started, CheckpointCoordinator uses the same thread pool to initialize the checkpoint storage location (see CheckpointCoordinator.initializeCheckpoint).
> The thread pool has a fixed size ([jobmanager.io-pool.size|https://nightlies.apache.org/flink/flink-docs-master/docs/deployment/config/#jobmanager-io-pool-size]; by default, the number of CPU cores) and uses a FIFO queue for tasks.
> When there is a spike in state deletion, the next checkpoint is delayed while waiting for an available IO thread (see the sketch below).
> Back-pressure seems reasonable here (similar to CheckpointCleaner); however, this shared state deletion could be spread across multiple subsequent checkpoints, not necessarily only the next one.
> ---- 
> I believe the issue is a pre-existing one, but it particularly affects the changelog state backend, because 1) such deletion spikes are likely there; and 2) its workloads are latency-sensitive.
> In the tests, checkpoint duration grows from seconds to minutes immediately after the materialization.
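
Below is a minimal, self-contained sketch (plain Java, not Flink code) of the contention described in the quoted description: a fixed-size executor with a FIFO queue is flooded with simulated shared-state deletion tasks, and a later "initialize checkpoint location" task has to wait behind the whole backlog. The class name, the pool size of 4, the 1,000 deletions, and the 10 ms per-delete latency are illustrative assumptions only.

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class IoPoolContentionSketch {
    public static void main(String[] args) throws Exception {
        // Stand-in for the JobManager IO pool (jobmanager.io-pool.size);
        // here a hypothetical size of 4, as on a 4-core machine.
        ExecutorService ioPool = Executors.newFixedThreadPool(4);

        long start = System.nanoTime();

        // Spike of shared-state deletions, analogous to
        // SharedStateRegistryImpl.scheduleAsyncDelete adding one Runnable per handle.
        for (int i = 0; i < 1_000; i++) {
            ioPool.execute(() -> sleepMillis(10)); // simulated slow DFS delete
        }

        // The next checkpoint's location initialization (analogous to
        // CheckpointCoordinator.initializeCheckpoint) lands at the tail of the
        // same FIFO queue and only runs once the deletion backlog has drained.
        ioPool.execute(() ->
                System.out.printf("checkpoint location initialized after %d ms%n",
                        (System.nanoTime() - start) / 1_000_000));

        ioPool.shutdown();
        ioPool.awaitTermination(1, TimeUnit.MINUTES);
    }

    private static void sleepMillis(long ms) {
        try {
            Thread.sleep(ms);
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

With these toy numbers, the final task runs only after roughly 1,000 x 10 ms / 4 ~ 2.5 s of queued deletions; the same FIFO effect is what delays checkpoint triggering when a deletion spike hits the shared JobManager IO pool.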


