Posted to issues@flink.apache.org by "Aljoscha Krettek (Jira)" <ji...@apache.org> on 2020/05/04 07:45:00 UTC

[jira] [Updated] (FLINK-17487) Do not delete old checkpoints when stopping the job.

     [ https://issues.apache.org/jira/browse/FLINK-17487?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Aljoscha Krettek updated FLINK-17487:
-------------------------------------
    Component/s: Client / Job Submission

> Do not delete old checkpoints when stopping the job.
> ----------------------------------------------------
>
>                 Key: FLINK-17487
>                 URL: https://issues.apache.org/jira/browse/FLINK-17487
>             Project: Flink
>          Issue Type: Improvement
>          Components: Client / Job Submission, Runtime / Checkpointing
>            Reporter: nobleyd
>            Priority: Major
>
> When stopping a Flink job using 'flink stop jobId', the checkpoint data is deleted.
> When the stop action does not succeed, or fails because of some unknown error, sometimes the job resumes from the latest checkpoint, while sometimes it simply fails and the checkpoint data is gone.
> You may ask why I need these checkpoints at all, since stopping the job generates a savepoint. For example, my job uses a Kafka source, Kafka missed some data, and I want to stop the job and resume it from an older checkpoint. In any case, sometimes the stop action fails and the checkpoint data is deleted anyway, which is not good.
> This behavior differs from 'flink cancel jobId' and 'flink savepoint jobId', which do not delete the checkpoint data (the retention setting involved is sketched after this description).
>  
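
For reference, the 'cancel' behavior mentioned in the description is governed by the externalized-checkpoint retention setting in CheckpointConfig. Below is a minimal sketch, assuming the Flink 1.10-era Java API (enableExternalizedCheckpoints and the nested ExternalizedCheckpointCleanup enum); the class name is illustrative only. It shows the setting that keeps checkpoint data when a job is cancelled, while the report above is about 'flink stop' removing that data.

    import org.apache.flink.streaming.api.environment.CheckpointConfig;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class RetainedCheckpointsSketch {
        public static void main(String[] args) {
            StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

            // Trigger a checkpoint every 60 seconds.
            env.enableCheckpointing(60_000);

            // Keep externalized checkpoint data when the job is cancelled, so the
            // job can later be resumed from a retained checkpoint. This covers the
            // 'flink cancel' case mentioned in the description; the report above is
            // that 'flink stop' still cleans up the checkpoint data.
            env.getCheckpointConfig().enableExternalizedCheckpoints(
                    CheckpointConfig.ExternalizedCheckpointCleanup.RETAIN_ON_CANCELLATION);

            // The job graph (sources, operators, sinks) and env.execute() would follow here.
        }
    }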



--
This message was sent by Atlassian Jira
(v8.3.4#803005)