Posted to issues@openwhisk.apache.org by GitBox <gi...@apache.org> on 2018/11/06 21:03:33 UTC

[GitHub] PLoecker commented on issue #339: Controller and invoker stuck after reboot the cluster

URL: https://github.com/apache/incubator-openwhisk-deploy-kube/issues/339#issuecomment-436407609
 
 
   Thanks for your answer.
   
   I thought that could be the problem. I tried to set up a persistent volume, but I did not get it running.
   Because the server does not run on a cloud service, I think it cannot use dynamic volume provisioning.
   So I tried local storage and created a StorageClass and a PersistentVolume. But now CouchDB, ZooKeeper, Kafka, and Redis are stuck in the Pending state.
   
   I believe my master is not able to share its storage with the worker nodes. Can you help me fix this, or is there an easier way to set up a persistent volume?
   
   ### Volume definitions
   ```
   kind: StorageClass
   apiVersion: storage.k8s.io/v1
   metadata:
     name: local-storage
     namespace: openwhisk
   provisioner: kubernetes.io/no-provisioner
   volumeBindingMode: WaitForFirstConsumer
   ---
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: local-pv
     namespace: openwhisk
   spec:
     capacity:
       storage: 20Gi
     volumeMode: Filesystem
     accessModes:
     - ReadWriteOnce
     persistentVolumeReclaimPolicy: Delete
     storageClassName: local-storage
     local:
       path: /mnt/disks/vol1
     nodeAffinity:
       required:
         nodeSelectorTerms:
         - matchExpressions:
           - key: kubernetes.io/hostname
             operator: In
             values:
             - linux-k8-master
   
   ```
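   If I understand local volumes correctly, each PersistentVolumeClaim binds to exactly one PersistentVolume, so my single 20Gi PV can satisfy at most one of the four claims. Below is an untested sketch of what I think one PV per service would look like; the PV name, mount path, and worker hostname are my own guesses, and since PersistentVolumes are cluster-scoped, the `namespace` field from my definition above is ignored anyway. Also, a local PV pins its consuming pod to the listed node, so pointing it at the tainted master would presumably keep the pods Pending.
   ```
   # Untested sketch: one local PV per claim; repeat for zookeeper, kafka, redis
   # with distinct names and paths. Hostname below is a hypothetical worker node.
   apiVersion: v1
   kind: PersistentVolume
   metadata:
     name: local-pv-couchdb
   spec:
     capacity:
       storage: 20Gi
     volumeMode: Filesystem
     accessModes:
     - ReadWriteOnce
     persistentVolumeReclaimPolicy: Retain   # Delete is not supported for local volumes
     storageClassName: local-storage
     local:
       path: /mnt/disks/couchdb
     nodeAffinity:
       required:
         nodeSelectorTerms:
         - matchExpressions:
           - key: kubernetes.io/hostname
             operator: In
             values:
             - linux-k8-node1   # guess: a worker node, not the tainted master
   ```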
   ### mycluster file
   ```
   whisk:
     ingress:
       type: NodePort
       apiHostName: 192.168.178.100
       apiHostPort: 30010
     runtimes: "runtimes-custom.json"
   
   nginx:
     httpsNodePort: 30010
     
   zookeeper:
     persistence:
       enabled: true
       storageClass: local-storage
   
   kafka:
     persistence:
       enabled: true
       storageClass: local-storage
   
   db:
     persistence:
       enabled: true
       storageClass: local-storage
   
   redis:
     persistence:
       enabled: true
       storageClass: local-storage
   ```
   
   ### CouchDB description
   ```
   Name:               couchdb-6c85d5457-2qf2p
   Namespace:          openwhisk
   Priority:           0
   PriorityClassName:  <none>
   Node:               <none>
   Labels:             name=couchdb
                       pod-template-hash=6c85d5457
   Annotations:        <none>
   Status:             Pending
   IP:                 
   Controlled By:      ReplicaSet/couchdb-6c85d5457
   Containers:
     couchdb:
       Image:      apache/couchdb:2.1
       Port:       5984/TCP
       Host Port:  0/TCP
       Environment:
         COUCHDB_USER:      <set to the key 'db_username' in secret 'db.auth'>  Optional: false
         COUCHDB_PASSWORD:  <set to the key 'db_password' in secret 'db.auth'>  Optional: false
         NODENAME:          couchdb0
       Mounts:
         /opt/couchdb/data from database-storage (rw)
         /var/run/secrets/kubernetes.io/serviceaccount from default-token-lnb7r (ro)
   Conditions:
     Type           Status
     PodScheduled   False 
   Volumes:
     database-storage:
       Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
       ClaimName:  couchdb-pvc
       ReadOnly:   false
     default-token-lnb7r:
       Type:        Secret (a volume populated by a Secret)
       SecretName:  default-token-lnb7r
       Optional:    false
   QoS Class:       BestEffort
   Node-Selectors:  <none>
   Tolerations:     node.kubernetes.io/not-ready:NoExecute for 300s
                    node.kubernetes.io/unreachable:NoExecute for 300s
   Events:
     Type     Reason            Age                  From               Message
     ----     ------            ----                 ----               -------
     Warning  FailedScheduling  88s (x845 over 36m)  default-scheduler  0/4 nodes are available: 1 node(s) had taints that the pod didn't tolerate, 3 node(s) didn't find available persistent volumes to bind.
   ```
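   Regarding the "1 node(s) had taints that the pod didn't tolerate" part of the event: if the master still carries the default master taint, its node spec presumably contains something like the fragment below (my assumption, not taken from my cluster). Pods without a matching toleration cannot schedule there, which would also rule out any local PV pinned to the master.
   ```
   # Assumed default control-plane taint on linux-k8-master (not verified)
   spec:
     taints:
     - key: node-role.kubernetes.io/master
       effect: NoSchedule
   ```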
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Service