Posted to notifications@apisix.apache.org by GitBox <gi...@apache.org> on 2021/10/12 13:51:18 UTC
[GitHub] [apisix-helm-chart] youngwookim commented on issue #156: etcd keeps crashing when upgrading chart to 0.6.0
youngwookim commented on issue #156:
URL: https://github.com/apache/apisix-helm-chart/issues/156#issuecomment-941033102
Thanks for the comment, @tokers.
The following is the command I ran to upgrade the chart from version 0.4.0 to 0.6.0:
```
$ helm upgrade --install apisix apisix/apisix --namespace apisix --version 0.6.0 \
    --set apisix.replicaCount=1 \
    --set gateway.type=LoadBalancer \
    --set gateway.loadBalancerIP="......" \
    --set gateway.tls.enabled=true \
    --set dashboard.enabled=true \
    --set ingress-controller.enabled=true \
    --set allow.ipList=""
```
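(For reference, and not part of the original report: the chart revision and values that actually got applied can be double-checked with standard Helm commands against the same release and namespace as above.)
```
# Confirm the deployed chart version and revision for the release
$ helm list -n apisix

# Show the user-supplied values for the release (add --all for computed values)
$ helm get values apisix -n apisix
```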
After upgrading, a pod of the 'apisix-etcd' StatefulSet keeps crash-looping:
```
$ kubectl describe -n apisix pod/apisix-etcd-2
Name:         apisix-etcd-2
Namespace:    apisix
Priority:     0
Node:         aks-defaultpool-17674265-vmss000002/10.240.0.16
Start Time:   Tue, 12 Oct 2021 12:31:52 +0900
Labels:       app.kubernetes.io/instance=apisix
              app.kubernetes.io/managed-by=Helm
              app.kubernetes.io/name=etcd
              controller-revision-hash=apisix-etcd-6579c5cbc8
              helm.sh/chart=etcd-6.2.6
              statefulset.kubernetes.io/pod-name=apisix-etcd-2
Annotations:  cni.projectcalico.org/containerID: 25fa15ee0266d8c17d213f20bc16053477deab49f0e6e9ea82ed4867a14cbf03
              cni.projectcalico.org/podIP: 10.244.2.133/32
              cni.projectcalico.org/podIPs: 10.244.2.133/32
Status:       Running
IP:           10.244.2.133
IPs:
  IP:  10.244.2.133
Controlled By:  StatefulSet/apisix-etcd
Containers:
  etcd:
    Container ID:   containerd://e5cd7024f5a7158474800be5012ab149e6237086699e4512cc13af3326c1cb12
    Image:          docker.io/bitnami/etcd:3.4.16-debian-10-r14
    Image ID:       docker.io/bitnami/etcd@sha256:ef2d499749c634588f7d281dd70cc1fb2514d57f6d42308c0fb0f2c8ca55bea4
    Ports:          2379/TCP, 2380/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    128
      Started:      Tue, 12 Oct 2021 22:41:29 +0900
      Finished:     Tue, 12 Oct 2021 22:41:44 +0900
    Ready:          False
    Restart Count:  118
    Liveness:       exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=30s #success=1 #failure=5
    Readiness:      exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
      BITNAMI_DEBUG:                     false
      MY_POD_IP:                         (v1:status.podIP)
      MY_POD_NAME:                       apisix-etcd-2 (v1:metadata.name)
      ETCDCTL_API:                       3
      ETCD_ON_K8S:                       yes
      ETCD_START_FROM_SNAPSHOT:          no
      ETCD_DISASTER_RECOVERY:            no
      ETCD_NAME:                         $(MY_POD_NAME)
      ETCD_DATA_DIR:                     /bitnami/etcd/data
      ETCD_LOG_LEVEL:                    info
      ALLOW_NONE_AUTHENTICATION:         yes
      ETCD_ADVERTISE_CLIENT_URLS:        http://$(MY_POD_NAME).apisix-etcd-headless.apisix.svc.cluster.local:2379
      ETCD_LISTEN_CLIENT_URLS:           http://0.0.0.0:2379
      ETCD_INITIAL_ADVERTISE_PEER_URLS:  http://$(MY_POD_NAME).apisix-etcd-headless.apisix.svc.cluster.local:2380
      ETCD_LISTEN_PEER_URLS:             http://0.0.0.0:2380
      ETCD_INITIAL_CLUSTER_TOKEN:        etcd-cluster-k8s
      ETCD_INITIAL_CLUSTER_STATE:        existing
      ETCD_INITIAL_CLUSTER:              apisix-etcd-0=http://apisix-etcd-0.apisix-etcd-headless.apisix.svc.cluster.local:2380,apisix-etcd-1=http://apisix-etcd-1.apisix-etcd-headless.apisix.svc.cluster.local:2380,apisix-etcd-2=http://apisix-etcd-2.apisix-etcd-headless.apisix.svc.cluster.local:2380
      ETCD_CLUSTER_DOMAIN:               apisix-etcd-headless.apisix.svc.cluster.local
    Mounts:
      /bitnami/etcd from data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-rlxf5 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-apisix-etcd-2
    ReadOnly:   false
  kube-api-access-rlxf5:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:       BestEffort
Node-Selectors:  <none>
Tolerations:     node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                 node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason   Age                     From     Message
  ----     ------   ----                    ----     -------
  Warning  BackOff  3m45s (x2732 over 10h)  kubelet  Back-off restarting failed container
```
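The describe output only shows exit code 128, not etcd's own error message. For anyone reproducing this, the crashing container's logs can be pulled with standard kubectl commands (same namespace and pod as above); the `--previous` flag is usually where the actual failure reason shows up:
```
# Logs from the currently restarting container
$ kubectl logs -n apisix apisix-etcd-2

# Logs from the previous, terminated run (the crashed attempt)
$ kubectl logs -n apisix apisix-etcd-2 --previous
```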