Posted to issues@flink.apache.org by "Teng Fei Liao (Jira)" <ji...@apache.org> on 2020/05/11 18:36:00 UTC

[jira] [Commented] (FLINK-9043) Introduce a friendly way to resume the job from externalized checkpoints automatically

    [ https://issues.apache.org/jira/browse/FLINK-9043?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17104750#comment-17104750 ] 

Teng Fei Liao commented on FLINK-9043:
--------------------------------------

Hey, any more thoughts here? Another case where this would improve the user experience is checkpoint failures. Suppose the last checkpoint attempt failed: some files for that checkpoint may still have been created. The proper recovery logic in an application is:
 # Scan for the most recent checkpoint for the job ID.
 # Inspect its contents for a properly written _metadata file.
 # If step 2 fails, repeat the process with the next most recent checkpoint.

Steps 2 and 3 are implementation details of Flink that seem error-prone for users to have to know. The existence of the _metadata file, or the criteria for a "properly written checkpoint", could change in the future, and application authors would always need to track the latest implementation details to avoid regressions.
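For concreteness, here is a minimal sketch (in Java) of the scan an application ends up implementing today. It assumes the usual filesystem layout <checkpoint-dir>/<job-id>/chk-<n>/_metadata, which is itself one of the implementation details that could change, and the class and method names are made up for illustration:

    import java.io.IOException;
    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.Comparator;
    import java.util.Optional;
    import java.util.stream.Stream;

    public class LatestCheckpointFinder {

        // Returns the newest chk-<n> directory under <checkpointDir>/<jobId>
        // that contains a _metadata file, or empty if none qualifies.
        public static Optional<Path> findLatestCompletedCheckpoint(
                Path checkpointDir, String jobId) throws IOException {
            Path jobDir = checkpointDir.resolve(jobId);
            if (!Files.isDirectory(jobDir)) {
                return Optional.empty();
            }
            try (Stream<Path> candidates = Files.list(jobDir)) {
                return candidates
                        .filter(p -> p.getFileName().toString().startsWith("chk-"))
                        // Step 1: order candidates newest checkpoint first.
                        .sorted(Comparator.comparingLong(
                                LatestCheckpointFinder::checkpointId).reversed())
                        // Steps 2 and 3: fall back until a _metadata file is found.
                        // Existence alone does not prove the file was fully written;
                        // that is exactly the fragile criterion described above.
                        .filter(p -> Files.exists(p.resolve("_metadata")))
                        .findFirst();
            }
        }

        private static long checkpointId(Path chkDir) {
            return Long.parseLong(
                    chkDir.getFileName().toString().substring("chk-".length()));
        }
    }

The returned path is what would then be passed to flink run -s to resume, so every application carrying this logic stays coupled to the layout above.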

> Introduce a friendly way to resume the job from externalized checkpoints automatically
> --------------------------------------------------------------------------------------
>
>                 Key: FLINK-9043
>                 URL: https://issues.apache.org/jira/browse/FLINK-9043
>             Project: Flink
>          Issue Type: New Feature
>          Components: Runtime / Checkpointing
>            Reporter: godfrey johnson
>            Assignee: Sihua Zhou
>            Priority: Major
>              Labels: pull-request-available
>          Time Spent: 10m
>  Remaining Estimate: 0h
>
> I know a Flink job can recover from a checkpoint with a restart strategy, but it cannot recover the way Spark Streaming jobs do when the job is starting.
> Every submitted Flink job is regarded as a new job, whereas a Spark Streaming job first detects the checkpoint directory and then recovers from the latest successful checkpoint. Flink can only recover after the job has failed first, retrying according to the restart strategy.
>  
> So, would Flink support recovering from the checkpoint directly when starting a new job?
> h2. New description by [~sihuazhou]
> Currently, it is not very friendly for users to recover a job from an externalized checkpoint: the user needs to find the dedicated directory for the job, which is not easy when there are many jobs. This ticket intends to introduce a more friendly way to let users use externalized checkpoints for recovery.
> The implementation steps are copied from the comments of [~StephanEwen]; an illustrative invocation follows the list:
>  - We could make this an option where you pass a flag (-r) to automatically look for the latest checkpoint in a given directory.
>  - If more than one job has checkpointed there before, this operation would fail.
>  - We might also need a way to have jobs not create the UUID subdirectory, otherwise the scanning for the latest checkpoint would not easily work.
>   
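> For illustration only: the -r flag above is only proposed and does not exist in current Flink releases, but resuming could then look something like
>
>     bin/flink run -r hdfs:///flink/checkpoints/ ./my-job.jar
>
> where hdfs:///flink/checkpoints/ is a hypothetical checkpoint root directory to scan for the latest checkpoint.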



--
This message was sent by Atlassian Jira
(v8.3.4#803005)