Posted to issues@flink.apache.org by "Binh-Nguyen Tran (Jira)" <ji...@apache.org> on 2022/12/28 15:53:00 UTC
[jira] [Updated] (FLINK-30518) [flink-operator] Kubernetes HA Service not working with standalone mode
[ https://issues.apache.org/jira/browse/FLINK-30518?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Binh-Nguyen Tran updated FLINK-30518:
-------------------------------------
Summary: [flink-operator] Kubernetes HA Service not working with standalone mode (was: [flink-operator] Kubernetes HA not working due to wrong jobmanager.rpc.address)
> [flink-operator] Kubernetes HA Service not working with standalone mode
> -----------------------------------------------------------------------
>
> Key: FLINK-30518
> URL: https://issues.apache.org/jira/browse/FLINK-30518
> Project: Flink
> Issue Type: Bug
> Components: Kubernetes Operator
> Affects Versions: kubernetes-operator-1.3.0
> Reporter: Binh-Nguyen Tran
> Priority: Major
> Attachments: flink-configmap.png, screenshot-1.png
>
>
> Since flink-conf.yaml is mounted as a read-only ConfigMap, the /docker-entrypoint.sh script cannot inject the correct pod IP into `jobmanager.rpc.address`. As a result, the same address (e.g. flink.ns-ext) is set for all JobManager pods. This causes:
> (1) flink-cluster-config-map always contains the wrong address for all 3 component leaders (see screenshot; it should be the pod IP instead of the ClusterIP service name)
> (2) Accessing the Web UI when jobmanager.replicas > 1 is not possible; it fails with the error
> {code:json}
> {"errors":["Service temporarily unavailable due to an ongoing leader election. Please refresh."]} {code}
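A minimal sketch of the failure mode described above. It assumes the entrypoint rewrites the config with a sed-style in-place substitution; the config path, pod IP, and the exact sed command are hypothetical stand-ins, not the actual operator code. On a writable file the rewrite works; when flink-conf.yaml sits on a read-only ConfigMap mount, the same rewrite fails and every JobManager pod keeps the shared service name.

```shell
#!/usr/bin/env bash
# Hypothetical stand-in for /opt/flink/conf/flink-conf.yaml (writable here,
# so the substitution that the entrypoint attempts can be demonstrated).
CONF=$(mktemp)
echo "jobmanager.rpc.address: flink.ns-ext" > "$CONF"

POD_IP="10.0.0.5"  # hypothetical pod IP, normally taken from the pod's status

# Roughly what an entrypoint-style rewrite does on a writable config file:
sed -i "s/^jobmanager\.rpc\.address:.*/jobmanager.rpc.address: ${POD_IP}/" "$CONF"

cat "$CONF"  # -> jobmanager.rpc.address: 10.0.0.5

# On a read-only ConfigMap mount this rewrite cannot replace the file, so
# every JobManager pod is left advertising the same ClusterIP service name,
# which is what the HA ConfigMap then records for all three leaders.
```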
>
> ~ flinkdeployment.yaml ~
> {code:yaml}
> spec:
>   mode: standalone
>   flinkConfiguration:
>     high-availability: kubernetes
>     high-availability.storageDir: "file:///opt/flink/storage"
>     ...
>   jobManager:
>     replicas: 3
>   ... {code}
--
This message was sent by Atlassian Jira
(v8.20.10#820010)