Posted to issues@flink.apache.org by "Gyula Fora (Jira)" <ji...@apache.org> on 2023/03/01 07:49:00 UTC

[jira] [Comment Edited] (FLINK-30444) State recovery error not handled correctly and always causes JM failure

    [ https://issues.apache.org/jira/browse/FLINK-30444?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17694907#comment-17694907 ] 

Gyula Fora edited comment on FLINK-30444 at 3/1/23 7:48 AM:
------------------------------------------------------------

[~dmvk] I only tested this (the main method error) on 1.16, but it appears there.
I wanted to note it here.


was (Author: gyfora):
[~dmvk] I only tested this for 1.16 but it appears there

> State recovery error not handled correctly and always causes JM failure
> -----------------------------------------------------------------------
>
>                 Key: FLINK-30444
>                 URL: https://issues.apache.org/jira/browse/FLINK-30444
>             Project: Flink
>          Issue Type: Bug
>          Components: Client / Job Submission
>    Affects Versions: 1.16.0, 1.14.6, 1.15.3
>            Reporter: Gyula Fora
>            Assignee: David Morávek
>            Priority: Critical
>
> When you submit a job in Application mode and try to restore from an incompatible savepoint, the behaviour is very unexpected.
> Even with the following config:
> {noformat}
> execution.shutdown-on-application-finish: false
> execution.submit-failed-job-on-application-error: true{noformat}
> The job goes into a FAILED state, and the JobManager fails. In a Kubernetes environment (when using the native Kubernetes integration) this means that the JobManager is restarted automatically.
> As a result, if you have the job result store enabled, you will end up with an empty application cluster after the JM comes back.
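> For reference, a minimal sketch of a setup where this bites, assuming native Kubernetes HA (the storage path below is illustrative; the option keys themselves exist since 1.15):
> {noformat}
> high-availability: kubernetes
> # keep committed job results instead of deleting them on commit
> job-result-store.delete-on-commit: false
> # illustrative path; any shared filesystem location works
> job-result-store.storage-path: file:///flink-data/job-result-store{noformat}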
> I think the correct behaviour would be, depending on the above-mentioned config:
> 1. If there is a job recovery error and execution.submit-failed-job-on-application-error is set, then the job should show up as FAILED and the JM should not exit (if execution.shutdown-on-application-finish is false)
> 2. If execution.shutdown-on-application-finish is true, then the JobManager should exit cleanly, as on a normal job terminal state, and thus stop the deployment in Kubernetes, preventing a JM restart cycle (see the sketch below)
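> A rough Java sketch of that decision logic follows. This is not the actual ApplicationDispatcherBootstrap code; the handler and helper names are hypothetical, although the two DeploymentOptions constants exist in flink-core:
> {code:java}
> import org.apache.flink.api.common.JobID;
> import org.apache.flink.configuration.Configuration;
> import org.apache.flink.configuration.DeploymentOptions;
>
> class RecoveryErrorHandlingSketch {
>
>     // Hypothetical entry point, called when restoring job state fails.
>     void handleRecoveryError(JobID jobId, Throwable recoveryError, Configuration conf) {
>         boolean submitFailedJob =
>                 conf.get(DeploymentOptions.SUBMIT_FAILED_JOB_ON_APPLICATION_ERROR);
>         boolean shutdownOnFinish =
>                 conf.get(DeploymentOptions.SHUTDOWN_ON_APPLICATION_FINISH);
>
>         if (submitFailedJob) {
>             // (1) Surface the recovery error as a FAILED job instead of
>             // letting the whole JobManager process die.
>             submitFailedJob(jobId, recoveryError);
>         }
>         if (shutdownOnFinish) {
>             // (2) Exit cleanly, as on a normal job terminal state, so the
>             // Kubernetes deployment is stopped rather than restart-looped.
>             shutDownCleanly();
>         } else {
>             // Keep the JM alive so the failed job can be inspected.
>         }
>     }
>
>     // Placeholders for illustration only.
>     void submitFailedJob(JobID jobId, Throwable cause) {}
>
>     void shutDownCleanly() {}
> }
> {code}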



--
This message was sent by Atlassian Jira
(v8.20.10#820010)