Posted to notifications@couchdb.apache.org by "lolszowy (via GitHub)" <gi...@apache.org> on 2023/06/15 10:01:31 UTC
[GitHub] [couchdb-helm] lolszowy opened a new issue, #123: CrashLoopBackoff when PersistentVolume=true
lolszowy opened a new issue, #123:
URL: https://github.com/apache/couchdb-helm/issues/123
**Describe the bug**
I am getting a CrashLoopBackOff on a CouchDB pod when deploying it with persistentVolume enabled.
Values for the Helm chart:
```yaml
clusterSize: '1'
persistentVolume:
  enabled: 'true'
  storageClass: 'efs-sc-couchdb'
couchdbConfig:
  couchdb:
    uuid: <some-uuid>
```
```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc-couchdb
  namespace: <namespace-name>
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap
  fileSystemId: fs-<fsidhere>
  directoryPerms: "760"
```
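As an aside, `StorageClass` is a cluster-scoped resource, so the `namespace` field in the manifest above is ignored by Kubernetes. An equivalent manifest (a sketch reusing the names and placeholders from this report) would be:

```yaml
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-sc-couchdb          # cluster-scoped: no namespace needed
provisioner: efs.csi.aws.com
parameters:
  provisioningMode: efs-ap      # dynamic provisioning via EFS access points
  fileSystemId: fs-<fsidhere>
  directoryPerms: "760"
```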
**Version of Helm and Kubernetes**:
Chart version: couchdb-4.4.1
App version: 3.3.2
EKS K8s Cluster version: 1.25
**What happened**:
```
✗ kubectl get pod couchdb-beta-couchdb-0
NAME                     READY   STATUS             RESTARTS        AGE
couchdb-beta-couchdb-0   0/1     CrashLoopBackOff   8 (3m48s ago)   19m
```
**What you expected to happen**:
```
✗ kubectl get pod couchdb-beta-couchdb-0
NAME                     READY   STATUS    RESTARTS   AGE
couchdb-beta-couchdb-0   1/1     Running
```
**How to reproduce it** (as minimally and precisely as possible):
```
helm upgrade --install couchdb-beta couchdb/couchdb \
  --set clusterSize=1 \
  --set persistentVolume.enabled=true \
  --set persistentVolume.storageClass=efs-sc-couchdb \
  --set persistentVolume.accessModes=ReadWriteOnce \
  --set couchdbConfig.couchdb.uuid=$(curl https://www.uuidgenerator.net/api/version4 2>/dev/null | tr -d -)
```
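The repro command above fetches a UUID from uuidgenerator.net at install time. If that service is unreachable, a dash-free UUID can be generated locally instead (a sketch; on Linux, `/proc/sys/kernel/random/uuid` is always available, and `uuidgen` is a common alternative):

```shell
#!/bin/sh
# Generate a version-4 UUID locally and strip the dashes, producing the
# 32-character hex string expected by couchdbConfig.couchdb.uuid.
UUID=$(cat /proc/sys/kernel/random/uuid | tr -d -)
echo "$UUID"
# Then pass it to helm:  --set couchdbConfig.couchdb.uuid="$UUID"
```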
```
✗ kubectl logs couchdb-beta-couchdb-0 -c couchdb
✗ kubectl logs couchdb-beta-couchdb-0 -c init-copy
total 8
-rw-r--r--    1 root     root           101 Jun 15 09:36 seedlist.ini
-rw-r--r--    1 root     root           106 Jun 15 09:36 chart.ini
✗ kubectl get pvc database-storage-couchdb-beta-couchdb-0
NAME                                      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     AGE
database-storage-couchdb-beta-couchdb-0   Bound    pvc-5fb45da0-1df7-4a5b-ae9z-6f797b210aex   10Gi       RWO            efs-sc-couchdb   19h
```
**Anything else we need to know**:
All infrastructure runs on AWS services (EKS, EFS, etc.).
--
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
To unsubscribe, e-mail: notifications-unsubscribe@couchdb.apache.org.
For queries about this service, please contact Infrastructure at:
users@infra.apache.org
[GitHub] [couchdb-helm] lolszowy commented on issue #123: CrashLoopBackoff when PersistentVolume=true
Posted by "lolszowy (via GitHub)" <gi...@apache.org>.
lolszowy commented on issue #123:
URL: https://github.com/apache/couchdb-helm/issues/123#issuecomment-1628632194
Sorry for the late reply; I missed the notification.
```
✗ k describe pod couchdb-backup-couchdb-0
Name:             couchdb-backup-couchdb-0
Namespace:        tpos-sync
Priority:         0
Service Account:  couchdb-backup-couchdb
Node:             ip-10-3-3-185.eu-west-1.compute.internal/10.3.3.185
Start Time:       Mon, 10 Jul 2023 11:21:41 +0200
Labels:           app=couchdb
                  controller-revision-hash=couchdb-backup-couchdb-845c6bb8f
                  release=couchdb-backup
                  statefulset.kubernetes.io/pod-name=couchdb-backup-couchdb-0
Annotations:      <none>
Status:           Running
IP:               10.3.3.211
IPs:
  IP:  10.3.3.211
Controlled By:  StatefulSet/couchdb-backup-couchdb
Init Containers:
  init-copy:
    Container ID:  containerd://8cd79c2a5667d7ea7156458417e02ddbae1835b43692ece5ae8ad8ee67a14429
    Image:         busybox:latest
    Image ID:      docker.io/library/busybox@sha256:2376a0c12759aa1214ba83e771ff252c7b1663216b192fbe5e0fb364e952f85c
    Port:          <none>
    Host Port:     <none>
    Command:
      sh
      -c
      cp /tmp/chart.ini /default.d; cp /tmp/seedlist.ini /default.d; ls -lrt /default.d;
    State:          Terminated
      Reason:       Completed
      Exit Code:    0
      Started:      Mon, 10 Jul 2023 11:21:43 +0200
      Finished:     Mon, 10 Jul 2023 11:21:43 +0200
    Ready:          True
    Restart Count:  0
    Environment:    <none>
    Mounts:
      /default.d from config-storage (rw)
      /tmp/ from config (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvxv8 (ro)
Containers:
  couchdb:
    Container ID:  containerd://64c36aa72f51b433e131b3b0f64d42a975e166edb3f0660bce9c11481b3e22ed
    Image:         couchdb:2.3.1
    Image ID:      docker.io/library/couchdb@sha256:74652e868a3138638ed68eba103a92ec866aa5f1bf40103c654895f7fb802ca8
    Ports:         5984/TCP, 4369/TCP, 9100/TCP
    Host Ports:    0/TCP, 0/TCP, 0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Mon, 10 Jul 2023 11:58:09 +0200
      Finished:     Mon, 10 Jul 2023 11:58:09 +0200
    Ready:           False
    Restart Count:   12
    Liveness:        http-get http://:5984/_up delay=0s timeout=1s period=10s #success=1 #failure=3
    Readiness:       http-get http://:5984/_up delay=0s timeout=1s period=10s #success=1 #failure=3
    Environment:
      COUCHDB_USER:      <set to the key 'adminUsername' in secret 'couchdb-backup-couchdb'>    Optional: false
      COUCHDB_PASSWORD:  <set to the key 'adminPassword' in secret 'couchdb-backup-couchdb'>    Optional: false
      COUCHDB_SECRET:    <set to the key 'cookieAuthSecret' in secret 'couchdb-backup-couchdb'>  Optional: false
      ERL_FLAGS:          -name couchdb  -setcookie monster
    Mounts:
      /opt/couchdb/data from database-storage (rw)
      /opt/couchdb/etc/default.d from config-storage (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-nvxv8 (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  database-storage:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  database-storage-couchdb-backup-couchdb-0
    ReadOnly:   false
  config-storage:
    Type:       EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
    SizeLimit:  <unset>
  config:
    Type:      ConfigMap (a volume populated by a ConfigMap)
    Name:      couchdb-backup-couchdb
    Optional:  false
  kube-api-access-nvxv8:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age                    From               Message
  ----     ------     ----                   ----               -------
  Normal   Scheduled  38m                    default-scheduler  Successfully assigned tpos-sync/couchdb-backup-couchdb-0 to ip-10-3-3-185.eu-west-1.compute.internal
  Normal   Pulling    38m                    kubelet            Pulling image "busybox:latest"
  Normal   Pulled     38m                    kubelet            Successfully pulled image "busybox:latest" in 558.731924ms (558.747224ms including waiting)
  Normal   Created    38m                    kubelet            Created container init-copy
  Normal   Started    38m                    kubelet            Started container init-copy
  Warning  Unhealthy  38m                    kubelet            Readiness probe failed: Get "http://10.3.3.211:5984/_up": dial tcp 10.3.3.211:5984: connect: connection refused
  Normal   Pulled     37m (x4 over 38m)      kubelet            Container image "couchdb:2.3.1" already present on machine
  Normal   Created    37m (x4 over 38m)      kubelet            Created container couchdb
  Normal   Started    37m (x4 over 38m)      kubelet            Started container couchdb
  Warning  BackOff    3m38s (x171 over 38m)  kubelet            Back-off restarting failed container couchdb in pod couchdb-backup-couchdb-0_tpos-sync(d2f7ac43-2ac1-4d78-827a-aa2e27b18886)
```
Logs from the previous container are empty as well.
[GitHub] [couchdb-helm] willholley commented on issue #123: CrashLoopBackoff when PersistentVolume=true
Posted by "willholley (via GitHub)" <gi...@apache.org>.
willholley commented on issue #123:
URL: https://github.com/apache/couchdb-helm/issues/123#issuecomment-1593233195
This seems like an environment-specific / general Kubernetes problem rather than an issue with the Helm chart.
If you can report the output of `kubectl describe pod couchdb-beta-couchdb-0` and `kubectl logs couchdb-beta-couchdb-0 --previous`, those may provide a clue as to why the pod is crash-looping.