Posted to issues@flink.apache.org by "Wencong Liu (Jira)" <ji...@apache.org> on 2023/03/14 09:03:00 UTC
[jira] [Commented] (FLINK-30077) k8s jobmanager pod repeated restart
[ https://issues.apache.org/jira/browse/FLINK-30077?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17700031#comment-17700031 ]
Wencong Liu commented on FLINK-30077:
-------------------------------------
Hello [~baibaiwuchang]. Thanks for your proposal! Actually, I can't completely understand your statement. Could you please explain this sentence in detail?
{code:java}
Flink's Kubernetes module watches TaskManager pods. In my particular situation, we always watch the JobManager pod status and cancel the Flink deployment. {code}
> k8s jobmanager pod repeated restart
> -----------------------------------
>
> Key: FLINK-30077
> URL: https://issues.apache.org/jira/browse/FLINK-30077
> Project: Flink
> Issue Type: Improvement
> Components: Deployment / Kubernetes
> Reporter: hanjie
> Priority: Major
>
> We use Flink on Kubernetes. When a task has a bug, the jobmanager pod restarts repeatedly.
> For example:
> xxxx-88b95598d-rlzxg 0/1 CrashLoopBackOff 215 19h
>
> Then I learned that a k8s Deployment can only set "restartPolicy: Always".
> ([https://github.com/kubernetes/kubernetes/issues/24725])
> I don't think it is reasonable for the jobmanager to keep restarting forever.
> Flink's Kubernetes module watches TaskManager pods. In my particular situation, we always watch the JobManager pod status and cancel the Flink deployment.
>
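The workaround described above (watch the JobManager pod's status yourself and cancel the deployment once it has restarted too often) can be sketched as a small shell script. This is only an illustrative sketch, not Flink's own mechanism: the pod name, deployment name, and restart threshold are assumptions, and the actual `kubectl delete` is left commented out.

```shell
#!/bin/sh
# Hypothetical external watcher: parse the restart count from a
# `kubectl get pod` status line and decide whether to cancel the
# Flink deployment. Threshold and names are illustrative only.
threshold=10

# Extract the RESTARTS column (4th field) from a status line such as:
#   xxxx-88b95598d-rlzxg 0/1 CrashLoopBackOff 215 19h
restarts_from_line() {
  echo "$1" | awk '{print $4}'
}

# In a real cluster this line would come from:
#   kubectl get pod <jobmanager-pod> --no-headers
line="xxxx-88b95598d-rlzxg 0/1 CrashLoopBackOff 215 19h"

restarts=$(restarts_from_line "$line")
if [ "$restarts" -gt "$threshold" ]; then
  echo "would cancel deployment (restarts=$restarts)"
  # kubectl delete deployment <flink-deployment>  # uncomment in a real cluster
fi
```

A periodic loop (or a `kubectl get pod --watch` pipe) around this check would give the behavior the reporter asks for, without needing a `restartPolicy` that Deployments do not support.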
--
This message was sent by Atlassian Jira
(v8.20.10#820010)