Posted to notifications@apisix.apache.org by GitBox <gi...@apache.org> on 2020/12/31 03:21:43 UTC
[GitHub] [apisix-helm-chart] Caelebs commented on issue #16: The Charts deployment apisix with etcd error
Caelebs commented on issue #16:
URL: https://github.com/apache/apisix-helm-chart/issues/16#issuecomment-752829149
The apisix pod is stuck in the Init state, and it seems that no etcd pod was ever created:
```
[root@master01 mnt]# kubectl get pods apisix-rax59c-bd7c7f8d6-2lclh -n helm-test
NAME                            READY   STATUS     RESTARTS   AGE
apisix-rax59c-bd7c7f8d6-2lclh   0/1     Init:0/1   0          18h
```
```
[root@master01 mnt]# kubectl get svc -n helm-test
NAME                    TYPE       CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
apisix-rax59c-gateway   NodePort   10.106.74.60   <none>        80:31820/TCP   18h
```
```
[root@master01 mnt]# kubectl describe apisix-rax59c-bd7c7f8d6-2lclh -n helm-test
error: the server doesn't have a resource type "apisix-rax59c-bd7c7f8d6-2lclh"
[root@master01 mnt]# kubectl describe pod apisix-rax59c-bd7c7f8d6-2lclh -n helm-test
Name: apisix-rax59c-bd7c7f8d6-2lclh
Namespace: helm-test
Priority: 0
Node: node04/10.10.16.242
Start Time: Wed, 30 Dec 2020 16:29:12 +0800
Labels: app.kubernetes.io/instance=apisix-rax59c
app.kubernetes.io/name=apisix
pod-template-hash=bd7c7f8d6
Annotations: kubernetes.io/limit-ranger:
LimitRanger plugin set: memory request for container apisix; memory limit for container apisix; cpu, memory request for init container wai...
Status: Pending
IP: 10.20.4.107
IPs:
IP: 10.20.4.107
Controlled By: ReplicaSet/apisix-rax59c-bd7c7f8d6
Init Containers:
wait-etcd:
Container ID: docker://da4dfc52a9dad093d903a7793a1bc532ff549d0bb4f644858a10c49d7c6728a4
Image: busybox:1.28
Image ID: docker-pullable://busybox@sha256:141c253bc4c3fd0a201d32dc1f493bcf3fff003b6df416dea4f41046e0f37d47
Port: <none>
Host Port: <none>
Command:
sh
-c
until nc -z apisix-rax59c-etcd.helm-test.svc.cluster.local 2379; do echo waiting for etcd `date`; sleep 2; done;
State: Running
Started: Wed, 30 Dec 2020 16:29:14 +0800
Ready: False
Restart Count: 0
Limits:
cpu: 500m
memory: 500Mi
Requests:
cpu: 10m
memory: 10Mi
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mfwp4 (ro)
Containers:
apisix:
Container ID:
Image: caelebs/apisix:2.1-1218
Image ID:
Ports: 9080/TCP, 9443/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: PodInitializing
Ready: False
Restart Count: 0
Limits:
cpu: 2
memory: 500Mi
Requests:
cpu: 50m
memory: 10Mi
Readiness: tcp-socket :9080 delay=10s timeout=1s period=10s #success=1 #failure=6
Environment: <none>
Mounts:
/usr/local/apisix/conf/config.yaml from apisix-config (rw,path="config.yaml")
/var/run/secrets/kubernetes.io/serviceaccount from default-token-mfwp4 (ro)
Conditions:
Type Status
Initialized False
Ready False
ContainersReady False
PodScheduled True
Volumes:
apisix-config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: apisix-rax59c
Optional: false
default-token-mfwp4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-mfwp4
Optional: false
QoS Class: Burstable
Node-Selectors: node=node04
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
```
The `wait-etcd` init container is running on node04, but its log keeps printing `bad address`:
```
[root@node04 ~]# docker ps | grep wait-etcd
da4dfc52a9da 8c811b4aec35 "sh -c 'until nc -z …" 19 hours ago Up 19 hours k8s_wait-etcd_apisix-rax59c-bd7c7f8d6-2lclh_helm-test_ae45fd11-d508-4f6f-8dcd-6e93f45a9af4_0
```
```
nc: bad address 'apisix-rax59c-etcd.helm-test.svc.cluster.local'
waiting for etcd Wed Dec 30 20:53:07 UTC 2020
nc: bad address 'apisix-rax59c-etcd.helm-test.svc.cluster.local'
waiting for etcd Wed Dec 30 20:53:09 UTC 2020
```
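For what it's worth, `nc: bad address` means the hostname fails to resolve at all, as opposed to "connection refused", which would mean the name resolves but nothing is listening yet. So the etcd Service name `apisix-rax59c-etcd.helm-test.svc.cluster.local` apparently never resolves, which matches the `kubectl get svc` output above showing no etcd Service. A small standalone sketch of the distinction (hostnames here are illustrative, not from the cluster):

```python
import socket

def probe(host: str, port: int) -> str:
    """Mimic `nc -z host port` and classify the failure mode.

    Returns "open", "refused" (name resolved but nothing listening),
    or "bad address" (DNS failure -- what the wait-etcd log shows).
    """
    try:
        with socket.create_connection((host, port), timeout=1):
            return "open"
    except socket.gaierror:
        # getaddrinfo failed: the name does not resolve
        return "bad address"
    except OSError:
        # name resolved, but the connection was refused / timed out
        return "refused"

# A name that can never resolve (.invalid is reserved), like the
# etcd Service name appears to be in this cluster:
print(probe("etcd.example.invalid", 2379))  # -> bad address
```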
Are there any other logs I need to provide?
@tokers