Posted to commits@cloudstack.apache.org by GitBox <gi...@apache.org> on 2021/12/02 09:21:31 UTC

[GitHub] [cloudstack] xuanyuanaosheng opened a new issue #5741: Can not create ceph primary storage?

xuanyuanaosheng opened a new issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741


   ##### ISSUE TYPE
    * Bug Report
   
   
   ##### CLOUDSTACK VERSION
   The CloudStack management server version: 4.15.2.0
   The CloudStack agent version: cloudstack-agent-4.15.2.0-1.el8.x86_64
   
   ![image](https://user-images.githubusercontent.com/4197714/144393521-2832a689-579f-4a9f-869a-07d3b127de8b.png)
   
   
   





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   1.   The label.rados.monitor is 10.29.44.1:6789, and the KVM node can access the Ceph node:
   ```
   # telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   ```
   2.   The RBD pool exists: cloudstack
   
   3.   The user cloudstack has access to the RBD pool: cloudstack
   
   4.   The secret is 'AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==', which contains no / (slash).
   
   5.   The KVM node has the Ceph config files (see the check sketched after this list):
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
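
   As a cross-check for items 4 and 5, a minimal sketch (assuming the ceph CLI and the admin keyring shown above are usable on this node) to confirm that the key Ceph holds for client.cloudstack matches both the secret pasted into the CloudStack UI and the keyring file on disk:

   ```
   # Key the cluster has for client.cloudstack -- should be byte-for-byte the
   # secret entered in the "Add Primary Storage" form
   ceph auth get-key client.cloudstack; echo

   # The keyring file on the KVM node should contain the same key
   cat /etc/ceph/ceph.client.cloudstack.keyring
   ```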
   
   
   





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-988474007


   @weizhouapache   @wido 
   
   I get the error below when creating a storage pool in virt-manager.
   
   
   ![image](https://user-images.githubusercontent.com/4197714/145145103-d8e5e96f-9d2a-4018-83ac-022c3ec32e76.png)
   
   ```
   Error creating pool: Could not start storage pool: failed to connect to the RADOS monitor on: 10.29.44.1,: Success
   
   Traceback (most recent call last):
     File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
       callback(asyncjob, *args, **kwargs)
     File "/usr/share/virt-manager/virtManager/createpool.py", line 378, in _async_pool_create
       poolobj = pool.install(create=True, meter=meter, build=build)
     File "/usr/share/virt-manager/virtinst/storage.py", line 415, in install
       raise RuntimeError(errmsg)
   RuntimeError: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   ```
   
   ---
   I used the methods you advised, but the error message gives very little information and I don't know how to fix it. I googled a lot, but without success.
   
   **# virsh pool-create --file pool.xml** 
   error: Failed to create pool from pool.xml
   error: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
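
   For reference, when testing by hand like this, libvirt needs a cephx secret defined before an RBD pool that uses `<auth>` will start; a minimal sketch (the file path and UUID below are placeholders, and pool.xml must reference the same UUID in its `<auth>` element):

   ```
   # Contents of /tmp/ceph-secret.xml (placeholder path; pick any UUID and reuse
   # the same UUID in pool.xml):
   #
   #   <secret ephemeral='no' private='no'>
   #     <uuid>b612d1b4-23a4-4531-924e-123456789abc</uuid>
   #     <usage type='ceph'>
   #       <name>client.cloudstack secret</name>
   #     </usage>
   #   </secret>

   # Define the secret and attach the actual cephx key for client.cloudstack
   virsh secret-define --file /tmp/ceph-secret.xml
   virsh secret-set-value --secret b612d1b4-23a4-4531-924e-123456789abc \
       --base64 "$(ceph auth get-key client.cloudstack)"

   # With the secret in place, try starting the RBD pool definition again
   virsh pool-create --file pool.xml
   ```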





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   
   I have checked all the configuration as you advised.
   
   1.   The label.rados.monitor is 10.29.44.1:6789, and the KVM node, the v-xxx-VM, and the s-xxx-VM can all access the Ceph node:
   ```
   kvm001:~# telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   
   root@s-252-VM:~#  telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
    
   
   ```
   2.   The RBD pool exists: cloudstack
   
   ![d3c6cea2-8e68-41f4-84dd-06d74869de06](https://user-images.githubusercontent.com/4197714/144822456-eac3322b-befb-4626-ac60-ecdd4513d2e2.png)
   
   
   3.   The user cloudstack has access to the RBD pool: cloudstack
   
   ![72b81e3f-5633-4d8d-9dc9-9f52cf0f93da](https://user-images.githubusercontent.com/4197714/144823225-f4e18cf0-a992-4c27-ba8c-4aedc942ea81.png)
   
   
   4.   The secret is '_AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==_', which contains no / (slash).
   
   5.   The KVM node has the Ceph config files, including ceph.client.admin.keyring:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  Related libvirt info; the libvirt secrets list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But I cannot find the storage pool **_d8dabcb0-1a57-4e13-8a82-339b2052dec1_** in the CloudStack UI.
   
   The error is still:   **_`org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory`_**
   
   ```
   2021-12-06 17:42:02,899 DEBUG [kvm.resource.LibvirtConnection] (Thread-4654:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:42:02,900 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Found NFS storage pool c4355ed4-8833-381f-b3f7-2981782ee3fa in libvirt, continuing
   2021-12-06 17:42:02,900 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.29.44.1 -p /cephfs/cloudstack -m /mnt/c4355ed4-8833-381f-b3f7-2981782ee3fa -h 10.26.246.6 
   2021-12-06 17:42:02,901 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Executing while with timeout : 60000
   2021-12-06 17:42:02,920 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Execution is successful.
   2021-12-06 17:42:04,533 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) (logid:21625d39) Processing command: com.cloud.agent.api.GetHostStatsCommand
   2021-12-06 17:42:29,342 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:a04d2914) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
   2021-12-06 17:42:29,342 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:a04d2914) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-06 17:42:29,343 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:a04d2914) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-06 17:42:29,344 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:a04d2914) Seq 15-6687845446645196317:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   ```
   





[GitHub] [cloudstack] xuanyuanaosheng commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-985954792


   @weizhouapache  I ran this command on the KVM node:
   ```
   # rbd map -p cloudstack cloudstack-primary --user cloudstack
   rbd: warning: image already mapped as /dev/rbd0
   /dev/rbd5
   
   # ls /dev/rbd
   rbd/  rbd0  rbd1  rbd2  rbd3  rbd4  rbd5
   
   ```
   
   I think the pool `cloudstack` exists.
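
   For what it is worth, `rbd map` goes through the kernel RBD driver, while libvirt uses librados/librbd in userspace, so one can work while the other fails. A small sketch that exercises the same userspace path libvirt uses (assuming qemu-img and the ceph CLI are installed on the KVM node):

   ```
   # Pools visible on the cluster (run with the admin keyring)
   ceph osd pool ls

   # Caps granted to client.cloudstack -- it needs access to the 'cloudstack' pool
   ceph auth get client.cloudstack

   # Same librados/librbd code path libvirt uses, authenticated as client.cloudstack
   qemu-img info rbd:cloudstack/cloudstack-primary:id=cloudstack
   ```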





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-985954792


   @weizhouapache  I ran this command on the KVM node:
   ```
   # rbd map -p cloudstack cloudstack-primary --user cloudstack
   rbd: warning: image already mapped as /dev/rbd0
   /dev/rbd5
   
   # ls /dev/rbd
   rbd/  rbd0  rbd1  rbd2  rbd3  rbd4  rbd5
   
   ```
   
   I think the pool `cloudstack` exists.
   
   Could you please give some advice?





[GitHub] [cloudstack] wido commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
wido commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986552812


   > @weizhouapache @wido Could you please give some advice? The new info, I found in cloudstack UI:
   > 
   > ![image](https://user-images.githubusercontent.com/4197714/144799583-0383ed99-3729-4a18-ae43-d666f6c21d6f.png)
   
   This seems like something outside CloudStack; double-check the following (see the sketch after the list):
   
   - All Hosts can reach the Ceph cluster nodes
   - RBD pool exists
   - User has access to the RBD pool
   - There is no / (slash) in the secret
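
   A compact way to run through these checks from each KVM host could look like this (a sketch; it assumes the ceph and rbd CLIs plus the client.cloudstack keyring in /etc/ceph are available):

   ```
   MON=10.29.44.1

   # 1. Monitor reachable on the msgr1 port
   nc -zv "$MON" 6789

   # 2 + 3. Pool exists and client.cloudstack can list it
   rbd ls -p cloudstack --id cloudstack -m "$MON"

   # 4. The key should not contain a '/' character
   ceph auth get-key client.cloudstack | grep -q / && echo "key contains a slash" || echo "key has no slash"
   ```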





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   1.   The label.rados.monitor is 10.29.44.1:6789, and the KVM node can access the Ceph node:
   ```
   # telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   ```
   2.   The RBD pool exists: cloudstack
   
   3.   The user cloudstack has access to the RBD pool: cloudstack
   
   4.   The secret is 'AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==', which contains no / (slash).
   
   5.   The KVM node has the Ceph config files:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  Related libvirt info; the libvirt secrets list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   





[GitHub] [cloudstack] wido commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
wido commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-990248554


   This seems like a client <> server issue with Ceph and does not have anything to do with CloudStack.
   
   <pre>attempt to reclaim global_id 392850 without presenting ticket</pre>
   
   That suggests something is misconfigured in Ceph; there have been recent changes around cephx global_id handling. Please refer to the Ceph release notes.
   
   Once that is fixed it should also work in CloudStack.
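
   For context, this message typically comes from the cephx global_id reclaim hardening added in 2021 Ceph point releases: older librados/librbd clients reconnect without presenting their old ticket and get flagged or rejected. A sketch of how that can be inspected on the cluster (assuming admin access with the ceph CLI):

   ```
   # Shows AUTH_INSECURE_GLOBAL_ID_RECLAIM* warnings if old clients are involved
   ceph health detail

   # Current monitor policy for insecure global_id reclaim
   ceph config get mon auth_allow_insecure_global_id_reclaim

   # Temporary workaround while the client side (librados/librbd on the KVM host)
   # is upgraded; re-tighten it afterwards
   ceph config set mon auth_allow_insecure_global_id_reclaim true
   ```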
   





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-988474007


   @weizhouapache   @wido 
   
   I get the error below when creating a storage pool in virt-manager.
   
   
   ![image](https://user-images.githubusercontent.com/4197714/145145103-d8e5e96f-9d2a-4018-83ac-022c3ec32e76.png)
   
   ```
   Error creating pool: Could not start storage pool: failed to connect to the RADOS monitor on: 10.29.44.1,: Success
   
   Traceback (most recent call last):
     File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
       callback(asyncjob, *args, **kwargs)
     File "/usr/share/virt-manager/virtManager/createpool.py", line 378, in _async_pool_create
       poolobj = pool.install(create=True, meter=meter, build=build)
     File "/usr/share/virt-manager/virtinst/storage.py", line 415, in install
       raise RuntimeError(errmsg)
   RuntimeError: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   ```
   
   ---
   





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   1.   The label.rados.monitor is 10.29.44.1:6789, and the KVM node can access the Ceph node:
   ```
   # telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   ```
   2.   The RBD pool exists: cloudstack
   
   3.   The user cloudstack has access to the RBD pool: cloudstack
   
   4.   The secret is 'AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==', which contains no / (slash).
   
   5.   The KVM node has the Ceph config files:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  The libvirt secrets list is empty:
   ```
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   
   I have checked all the configuration as you advised.
   
   1.   The label.rados.monitor is 10.29.44.1:6789, and the KVM node, the v-xxx-VM, and the s-xxx-VM can all access the Ceph node:
   ```
   kvm001:~# telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   
   root@s-252-VM:~#  telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
    
   
   ```
   2.   The RBD pool exists: cloudstack
   
   ![d3c6cea2-8e68-41f4-84dd-06d74869de06](https://user-images.githubusercontent.com/4197714/144822456-eac3322b-befb-4626-ac60-ecdd4513d2e2.png)
   
   
   3.   The user cloudstack has access to the RBD pool: cloudstack
   
   ![72b81e3f-5633-4d8d-9dc9-9f52cf0f93da](https://user-images.githubusercontent.com/4197714/144823225-f4e18cf0-a992-4c27-ba8c-4aedc942ea81.png)
   
   
   4.   The secret is '_AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==_', which contains no / (slash).
   
   5.   The KVM node has the Ceph config files, including ceph.client.admin.keyring:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  Related libvirt info; the libvirt secrets list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But I cannot find the storage pool **_d8dabcb0-1a57-4e13-8a82-339b2052dec1_** in the CloudStack UI, and the pool UUID changes every time I click the Add Primary Storage button again.
   
   After checking all the configuration, I restarted the management-server and cloudstack-agent services. The error is still the same:   **_`org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory`_**
   
   ```
   2021-12-06 17:54:17,921 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) <pool type='rbd'>
   <name>b90eae9d-973c-362c-8afc-af88f0743892</name>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <source>
   <host name='10.29.44.1' port='6789'/>
   <name>cloudstack</name>
   <auth username='cloudstack' type='ceph'>
   <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
   </auth>
   </source>
   </pool>
   
   2021-12-06 17:54:39,461 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms 
   2021-12-06 17:54:39,463 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing while with timeout : 1800000
   2021-12-06 17:54:39,534 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Execution is successful.
   2021-12-06 17:54:39,535 DEBUG [kvm.resource.LibvirtConnection] (UgentTask-5:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:54:39,551 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) Sending ping: Seq 15-6:  { Cmd , MgmtId: -1, via: 15, Ver: v1, Flags: 11, [{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"v-255-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"v-249-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"s-250-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"r-254-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:39,620 DEBUG [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Received response: Seq 15-6:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"},"result":"true","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:47,958 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
   2021-12-06 17:54:47,959 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-06 17:54:47,961 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-06 17:54:47,966 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Seq 15-6627046851675684885:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:52,709 DEBUG [kvm.resource.LibvirtConnection] (Thread-6:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Found NFS storage pool c8e9ca6a-c004-3851-a074-19f4948b28ff in libvirt, continuing
   2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.26.246.6 -p /kvm-data -m /mnt/c8e9ca6a-c004-3851-a074-19f4948b28ff -h 10.26.246.6 
   2021-12-06 17:54:52,726 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing while with timeout : 60000
   2021-12-06 17:54:52,737 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Execution is successful.
   ```
   
   
   Any ideas? Can you give me a test script to verify that the Ceph storage is OK from the KVM node?
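
   As a rough test script (a sketch; it assumes the ceph/rbd CLIs and the client.cloudstack keyring in /etc/ceph on the KVM node), something like this exercises the userspace path that libvirt uses:

   ```
   #!/bin/bash
   set -x
   MON=10.29.44.1
   POOL=cloudstack
   ID=cloudstack

   # 1. Cluster reachable and cephx authentication works for client.cloudstack
   ceph -s -m "$MON" --id "$ID"

   # 2. The pool is visible and listable with this user
   rbd ls -p "$POOL" --id "$ID" -m "$MON"

   # 3. Create and remove a throwaway image to verify write access
   rbd create "$POOL/acs-test-img" --size 64M --id "$ID" -m "$MON"
   rbd rm "$POOL/acs-test-img" --id "$ID" -m "$MON"
   ```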





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   
   I have checked all the configuration as you advised.
   
   1.   The label.rados.monitor is 10.29.44.1:6789, and the KVM node and the v-route node can access the Ceph node:
   ```
   kvm001:~# telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   
   root@s-252-VM:~#  telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
    
   
   ```
   2.   The RBD pool exists: cloudstack
   
   ![d3c6cea2-8e68-41f4-84dd-06d74869de06](https://user-images.githubusercontent.com/4197714/144822456-eac3322b-befb-4626-ac60-ecdd4513d2e2.png)
   
   
   3.   The user cloudstack has access to the RBD pool: cloudstack
   
   4.   The secret is '_AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==_', which contains no / (slash).
   
   5.   The KVM node has the Ceph config files, including ceph.client.admin.keyring:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  Related libvirt info; the libvirt secrets list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But I cannot find the storage pool **_d8dabcb0-1a57-4e13-8a82-339b2052dec1_** in the CloudStack UI.
   
   The error is still: 
   
   ```
   2021-12-06 17:42:02,899 DEBUG [kvm.resource.LibvirtConnection] (Thread-4654:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:42:02,900 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Found NFS storage pool c4355ed4-8833-381f-b3f7-2981782ee3fa in libvirt, continuing
   2021-12-06 17:42:02,900 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.29.44.1 -p /cephfs/cloudstack -m /mnt/c4355ed4-8833-381f-b3f7-2981782ee3fa -h 10.26.246.6 
   2021-12-06 17:42:02,901 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Executing while with timeout : 60000
   2021-12-06 17:42:02,920 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Execution is successful.
   2021-12-06 17:42:04,533 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) (logid:21625d39) Processing command: com.cloud.agent.api.GetHostStatsCommand
   2021-12-06 17:42:29,342 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:a04d2914) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
   2021-12-06 17:42:29,342 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:a04d2914) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-06 17:42:29,343 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:a04d2914) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-06 17:42:29,344 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:a04d2914) Seq 15-6687845446645196317:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   ```
   





[GitHub] [cloudstack] xuanyuanaosheng commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986461003


   @weizhouapache  @wido  Could you please give some advice?





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-988474007


   @weizhouapache   @wido 
   
   I get the error below when creating a storage pool in virt-manager.
   
   
   ![image](https://user-images.githubusercontent.com/4197714/145145103-d8e5e96f-9d2a-4018-83ac-022c3ec32e76.png)
   
   ```
   Error creating pool: Could not start storage pool: failed to connect to the RADOS monitor on: 10.29.44.1,: Success
   
   Traceback (most recent call last):
     File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
       callback(asyncjob, *args, **kwargs)
     File "/usr/share/virt-manager/virtManager/createpool.py", line 378, in _async_pool_create
       poolobj = pool.install(create=True, meter=meter, build=build)
     File "/usr/share/virt-manager/virtinst/storage.py", line 415, in install
       raise RuntimeError(errmsg)
   RuntimeError: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   ```
   
   ---
   I used the methods you advised, but the error message gives very little information and I don't know how to fix it. I googled a lot, but without success.
   
   **# virsh pool-create --file pool.xml** 
   _error: Failed to create pool from pool.xml
   error: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory_
   
   
   The Ceph error is:
   ```
   debug 2021-12-09T08:31:10.091+0000 7f2d4b620700  0 cephx server client.cloudstack:  attempt to reclaim global_id 392850 without presenting ticket
   debug 2021-12-09T08:31:10.091+0000 7f2d4b620700  0 cephx server client.cloudstack:  could not verify old ticket
   ```
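
   Since this points at the client side, a quick way to see which Ceph client libraries the KVM host (and therefore libvirt/QEMU) is actually using (a sketch; package names assume the EL8 host implied by the agent RPM):

   ```
   # Ceph CLI and client library versions on the KVM host
   ceph --version
   rpm -q librados2 librbd1 qemu-kvm libvirt-daemon-driver-storage-rbd

   # Which librbd the running libvirtd has loaded
   lsof -p "$(pidof libvirtd)" 2>/dev/null | grep librbd
   ```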





[GitHub] [cloudstack] xuanyuanaosheng commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   1.   The label.rados.monitor is 10.29.44.1:6789
   
   





[GitHub] [cloudstack] xuanyuanaosheng commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-985399164


   @wido  I will run some tests using port 3300, thanks.
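
   One way to see which protocols the monitor actually exposes before testing (a sketch; msgr2 listens on 3300 and the legacy msgr1 protocol on 6789):

   ```
   # The mon map lists v2 (3300) and/or v1 (6789) addresses per monitor
   ceph mon dump

   # Plain TCP reachability of both ports from the KVM host
   nc -zv 10.29.44.1 3300
   nc -zv 10.29.44.1 6789
   ```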





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   
   I have checked all the configuration as you advised.
   
   1.   The label.rados.monitor is 10.29.44.1:6789, and the KVM node, the v-xxx-VM, and the s-xxx-VM can all access the Ceph node:
   ```
   kvm001:~# telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   
   root@s-252-VM:~#  telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
    
   
   ```
   2.   The RBD pool exists: cloudstack
   
   ![d3c6cea2-8e68-41f4-84dd-06d74869de06](https://user-images.githubusercontent.com/4197714/144822456-eac3322b-befb-4626-ac60-ecdd4513d2e2.png)
   
   
   3.   The user cloudstack has access to the RBD pool: cloudstack
   
   ![72b81e3f-5633-4d8d-9dc9-9f52cf0f93da](https://user-images.githubusercontent.com/4197714/144823225-f4e18cf0-a992-4c27-ba8c-4aedc942ea81.png)
   
   
   4.   The secret is '_AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==_', which contains no / (slash).
   
   5.   The KVM node has the Ceph config files, including ceph.client.admin.keyring:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  Related libvirt info; the libvirt secrets list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But I cannot find the storage pool **_d8dabcb0-1a57-4e13-8a82-339b2052dec1_** in the CloudStack UI.
   
   The error is still:   **_`org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory`_**
   
   ```
   2021-12-06 17:54:17,921 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) <pool type='rbd'>
   <name>b90eae9d-973c-362c-8afc-af88f0743892</name>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <source>
   <host name='10.29.44.1' port='6789'/>
   <name>cloudstack</name>
   <auth username='cloudstack' type='ceph'>
   <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
   </auth>
   </source>
   </pool>
   
   2021-12-06 17:54:39,461 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms 
   2021-12-06 17:54:39,463 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing while with timeout : 1800000
   2021-12-06 17:54:39,534 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Execution is successful.
   2021-12-06 17:54:39,535 DEBUG [kvm.resource.LibvirtConnection] (UgentTask-5:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:54:39,551 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) Sending ping: Seq 15-6:  { Cmd , MgmtId: -1, via: 15, Ver: v1, Flags: 11, [{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"v-255-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"v-249-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"s-250-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"r-254-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:39,620 DEBUG [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Received response: Seq 15-6:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"},"result":"true","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:47,958 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
   2021-12-06 17:54:47,959 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-06 17:54:47,961 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-06 17:54:47,966 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Seq 15-6627046851675684885:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:52,709 DEBUG [kvm.resource.LibvirtConnection] (Thread-6:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Found NFS storage pool c8e9ca6a-c004-3851-a074-19f4948b28ff in libvirt, continuing
   2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.26.246.6 -p /kvm-data -m /mnt/c8e9ca6a-c004-3851-a074-19f4948b28ff -h 10.26.246.6 
   2021-12-06 17:54:52,726 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing while with timeout : 60000
   2021-12-06 17:54:52,737 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Execution is successful.
   ```
   
   
   Any ideas?





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   1.   The label.rados.monitor is 10.29.44.1:6789, and the KVM node can access the Ceph node:
   ```
   # telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   ```
   2.   The RBD pool exists: cloudstack
   
   3.   The user cloudstack has access to the RBD pool: cloudstack
   
   4.   The secret is 'AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==', which contains no / (slash).
   
   5.   The KVM node has the Ceph config files:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  Related libvirt info; the libvirt secrets list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-985408456


   @wido   I have tested port 3300.
   
   **The CloudStack agent error message is: Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to connect to the RADOS monitor on: 10.29.44.1:3300,: No such file or directory**
   
   and the details is :
   ```
   
   2021-12-03 18:38:26,318 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Attempting to create storage pool b90eae9d-973c-362c-8afc-af88f0743892
   2021-12-03 18:38:26,318 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) <secret ephemeral='no' private='no'>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <usage type='ceph'>
   <name>cloudstack@10.29.44.1:3300/cloudstack</name>
   </usage>
   </secret>
   
   2021-12-03 18:38:26,320 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) <pool type='rbd'>
   <name>b90eae9d-973c-362c-8afc-af88f0743892</name>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <source>
   <host name='10.29.44.1' port='3300'/>
   <name>cloudstack</name>
   <auth username='cloudstack' type='ceph'>
   <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
   </auth>
   </source>
   </pool>
   
   2021-12-03 18:38:32,197 DEBUG [cloud.agent.Agent] (agentRequest-Handler-5:null) (logid:2f5b05d6) Processing command: com.cloud.agent.api.GetHostStatsCommand
   2021-12-03 18:38:56,354 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to connect to the RADOS monitor on: 10.29.44.1:3300,: No such file or directory
   2021-12-03 18:38:56,354 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-03 18:38:56,355 WARN  [cloud.agent.Agent] (agentRequest-Handler-1:null) (logid:53efb58d) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-03 18:38:56,357 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) (logid:53efb58d) Seq 15-9029435777901133923:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] weizhouapache commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
weizhouapache commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986546267


   > @weizhouapache @wido Could you please give some advice? The new info, I found in cloudstack UI:
   > 
   > ![image](https://user-images.githubusercontent.com/4197714/144799583-0383ed99-3729-4a18-ae43-d666f6c21d6f.png)
   
   @xuanyuanaosheng 
   have you seen @leolleeooleo 's comment (https://github.com/apache/cloudstack/issues/3523#issuecomment-823766493) ?
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986461003


   @weizhouapache  @wido  Could you please give some advice? 
   
   ![image](https://user-images.githubusercontent.com/4197714/144799583-0383ed99-3729-4a18-ae43-d666f6c21d6f.png)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-988474007


   I get the error below when creating the storage pool in virt-manager.
   
   
   ![image](https://user-images.githubusercontent.com/4197714/145145103-d8e5e96f-9d2a-4018-83ac-022c3ec32e76.png)
   
   ```
   Error creating pool: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   
   Traceback (most recent call last):
     File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
       callback(asyncjob, *args, **kwargs)
     File "/usr/share/virt-manager/virtManager/createpool.py", line 378, in _async_pool_create
       poolobj = pool.install(create=True, meter=meter, build=build)
     File "/usr/share/virt-manager/virtinst/storage.py", line 415, in install
       raise RuntimeError(errmsg)
   RuntimeError: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   
   I have checked all the config per your advice.
   
   1.   The RADOS monitor (label.rados.monitor) is 10.29.44.1:6789, and the KVM node, the v-xxx-VM and the s-xxx-VM can all reach the Ceph node:
   ```
   kvm001:~# telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   
   root@s-252-VM:~#  telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
    
   
   ```
   2.   The RBD pool exists: cloudstack
   
   ![d3c6cea2-8e68-41f4-84dd-06d74869de06](https://user-images.githubusercontent.com/4197714/144822456-eac3322b-befb-4626-ac60-ecdd4513d2e2.png)
   
   
   3.   The user cloudstack has access to the RBD pool cloudstack
   
   ![72b81e3f-5633-4d8d-9dc9-9f52cf0f93da](https://user-images.githubusercontent.com/4197714/144823225-f4e18cf0-a992-4c27-ba8c-4aedc942ea81.png)
   
   
   4.   The secret is '_AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==_', which contains no / (slash).
   
   5.   The KVM node has the Ceph config in place, including ceph.client.admin.keyring:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  The related libvirt info; the libvirt secret list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But I cannot find the storage pool **_d8dabcb0-1a57-4e13-8a82-339b2052dec1_** in the CloudStack UI.
   
   The error is still:   **_`org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory`_**
   
   ```
   2021-12-06 17:42:02,899 DEBUG [kvm.resource.LibvirtConnection] (Thread-4654:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:42:02,900 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Found NFS storage pool c4355ed4-8833-381f-b3f7-2981782ee3fa in libvirt, continuing
   2021-12-06 17:42:02,900 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.29.44.1 -p /cephfs/cloudstack -m /mnt/c4355ed4-8833-381f-b3f7-2981782ee3fa -h 10.26.246.6 
   2021-12-06 17:42:02,901 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Executing while with timeout : 60000
   2021-12-06 17:42:02,920 DEBUG [kvm.resource.KVMHAMonitor] (Thread-4654:null) (logid:) Execution is successful.
   2021-12-06 17:42:04,533 DEBUG [cloud.agent.Agent] (agentRequest-Handler-2:null) (logid:21625d39) Processing command: com.cloud.agent.api.GetHostStatsCommand
   2021-12-06 17:42:29,342 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:a04d2914) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
   2021-12-06 17:42:29,342 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:a04d2914) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-06 17:42:29,343 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:a04d2914) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-06 17:42:29,344 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:a04d2914) Seq 15-6687845446645196317:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   ```
   
   
   Any ideas?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   
   I have checked all the config per your advice.
   
   1.   The RADOS monitor (label.rados.monitor) is 10.29.44.1:6789, and the KVM node, the v-xxx-VM and the s-xxx-VM can all reach the Ceph node:
   ```
   kvm001:~# telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   
   root@s-252-VM:~#  telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
    
   
   ```
   2.   The RBD pool exists: cloudstack
   
   ![d3c6cea2-8e68-41f4-84dd-06d74869de06](https://user-images.githubusercontent.com/4197714/144822456-eac3322b-befb-4626-ac60-ecdd4513d2e2.png)
   
   
   3.   The user cloudstack has access to the RBD pool cloudstack
   
   ![72b81e3f-5633-4d8d-9dc9-9f52cf0f93da](https://user-images.githubusercontent.com/4197714/144823225-f4e18cf0-a992-4c27-ba8c-4aedc942ea81.png)
   
   
   4.   The secret is '_AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==_', which contains no / (slash).
   
   5.   The KVM node has the Ceph config in place, including ceph.client.admin.keyring:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  The related libvirt info; the libvirt secret list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But I cannot find the storage pool **_d8dabcb0-1a57-4e13-8a82-339b2052dec1_** in the CloudStack UI.
   
   After checking all the config, I restarted the management-server and cloudstack-agent services. The error is still the same: **_`org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory`_**
   
   ```
   2021-12-06 17:54:17,921 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) <pool type='rbd'>
   <name>b90eae9d-973c-362c-8afc-af88f0743892</name>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <source>
   <host name='10.29.44.1' port='6789'/>
   <name>cloudstack</name>
   <auth username='cloudstack' type='ceph'>
   <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
   </auth>
   </source>
   </pool>
   
   2021-12-06 17:54:39,461 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms 
   2021-12-06 17:54:39,463 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing while with timeout : 1800000
   2021-12-06 17:54:39,534 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Execution is successful.
   2021-12-06 17:54:39,535 DEBUG [kvm.resource.LibvirtConnection] (UgentTask-5:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:54:39,551 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) Sending ping: Seq 15-6:  { Cmd , MgmtId: -1, via: 15, Ver: v1, Flags: 11, [{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"v-255-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"v-249-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"s-250-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"r-254-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:39,620 DEBUG [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Received response: Seq 15-6:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"},"result":"true","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:47,958 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
   2021-12-06 17:54:47,959 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-06 17:54:47,961 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-06 17:54:47,966 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Seq 15-6627046851675684885:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:52,709 DEBUG [kvm.resource.LibvirtConnection] (Thread-6:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Found NFS storage pool c8e9ca6a-c004-3851-a074-19f4948b28ff in libvirt, continuing
   2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.26.246.6 -p /kvm-data -m /mnt/c8e9ca6a-c004-3851-a074-19f4948b28ff -h 10.26.246.6 
   2021-12-06 17:54:52,726 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing while with timeout : 60000
   2021-12-06 17:54:52,737 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Execution is successful.
   ```
   
   
   Any ideas?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] weizhouapache commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
weizhouapache commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-985453473


   @xuanyuanaosheng 
   the error message asks `Does the pool 'cloudstack' exist?`; can you check that?
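   
   For instance, something along these lines (illustrative only, using the pool and user names from this thread):
   
   ```
   # on a ceph node: is there really an RBD pool named 'cloudstack'?
   ceph osd pool ls | grep -w cloudstack
   
   # from the KVM host: can the client.cloudstack key reach that pool?
   rbd -m 10.29.44.1 --id cloudstack \
       --keyring /etc/ceph/ceph.client.cloudstack.keyring -p cloudstack ls
   ```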


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] wido commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
wido commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-985388228


   6789 as port number, do the MONs run on that port?
   
   New monitors might only bind on 3300
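   
   A rough way to check which ports the MONs actually listen on (run on a monitor host; illustrative only):
   
   ```
   ss -tlnp | grep ceph-mon   # expect 3300 (msgr v2) and/or 6789 (msgr v1)
   ceph mon dump              # mon addresses show up as e.g. [v2:<ip>:3300,v1:<ip>:6789]
   ```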


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-985408456


   @wido   I have tested port 3300.
   
   **The ERROR message is  Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to connect to the RADOS monitor on: 10.29.44.1:3300,: No such file or directory**
   
   and the details is :
   ```
   
   2021-12-03 18:38:26,318 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Attempting to create storage pool b90eae9d-973c-362c-8afc-af88f0743892
   2021-12-03 18:38:26,318 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) <secret ephemeral='no' private='no'>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <usage type='ceph'>
   <name>cloudstack@10.29.44.1:3300/cloudstack</name>
   </usage>
   </secret>
   
   2021-12-03 18:38:26,320 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) <pool type='rbd'>
   <name>b90eae9d-973c-362c-8afc-af88f0743892</name>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <source>
   <host name='10.29.44.1' port='3300'/>
   <name>cloudstack</name>
   <auth username='cloudstack' type='ceph'>
   <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
   </auth>
   </source>
   </pool>
   
   2021-12-03 18:38:32,197 DEBUG [cloud.agent.Agent] (agentRequest-Handler-5:null) (logid:2f5b05d6) Processing command: com.cloud.agent.api.GetHostStatsCommand
   2021-12-03 18:38:56,354 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to connect to the RADOS monitor on: 10.29.44.1:3300,: No such file or directory
   2021-12-03 18:38:56,354 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-03 18:38:56,355 WARN  [cloud.agent.Agent] (agentRequest-Handler-1:null) (logid:53efb58d) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-03 18:38:56,357 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) (logid:53efb58d) Seq 15-9029435777901133923:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] weizhouapache commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
weizhouapache commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-987674301


   @xuanyuanaosheng 
   
   in my opinion, it is not a cloudstack issue. 
   
   Could you please try to add the Ceph storage pool via libvirt directly on the KVM nodes?
   
   1. virsh secret-define
   2. virsh secret-set-value
   3. virsh pool-create
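   
   For example, a rough sketch of that manual test, reusing the monitor, pool name, key and secret UUID that appear earlier in this thread (adjust to your environment; this is illustrative and not the exact CloudStack agent flow):
   
   ```
   # 1) define a libvirt secret for the ceph user
   cat > secret.xml <<'EOF'
   <secret ephemeral='no' private='no'>
     <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
     <usage type='ceph'>
       <name>client.cloudstack secret</name>
     </usage>
   </secret>
   EOF
   virsh secret-define --file secret.xml
   
   # 2) store the base64 key of client.cloudstack in that secret
   virsh secret-set-value --secret b90eae9d-973c-362c-8afc-af88f0743892 \
       --base64 'AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ=='
   
   # 3) create an RBD pool that uses the secret
   #    ('ceph-test' is just an arbitrary libvirt pool name for this test)
   cat > pool.xml <<'EOF'
   <pool type='rbd'>
     <name>ceph-test</name>
     <source>
       <host name='10.29.44.1' port='6789'/>
       <name>cloudstack</name>
       <auth username='cloudstack' type='ceph'>
         <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
       </auth>
     </source>
   </pool>
   EOF
   virsh pool-create --file pool.xml
   virsh pool-info ceph-test
   ```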


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-988474007


   @weizhouapache   @wido 
   
   I get the error below when creating the storage pool in virt-manager.
   
   
   ![image](https://user-images.githubusercontent.com/4197714/145145103-d8e5e96f-9d2a-4018-83ac-022c3ec32e76.png)
   
   ```
   Error creating pool: Could not start storage pool: failed to connect to the RADOS monitor on: 10.29.44.1,: Success
   
   Traceback (most recent call last):
     File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
       callback(asyncjob, *args, **kwargs)
     File "/usr/share/virt-manager/virtManager/createpool.py", line 378, in _async_pool_create
       poolobj = pool.install(create=True, meter=meter, build=build)
     File "/usr/share/virt-manager/virtinst/storage.py", line 415, in install
       raise RuntimeError(errmsg)
   RuntimeError: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   ```
   
   ---
   I tried the method you advised, but the error output gives too little information for me to know how to fix it. I googled a lot, without success.
   
   **# virsh pool-create --file pool.xml** 
   _error: Failed to create pool from pool.xml
   error: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory_
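   
   (A rough sketch of further checks that could narrow this down, using the names above:)
   
   ```
   virsh secret-list                         # the secret must exist before pool-create
   virsh secret-get-value <secret-uuid>      # should print the base64 key, not an error
   ceph auth get client.cloudstack           # on a ceph node: caps must cover pool 'cloudstack'
   ceph osd pool ls                          # confirm the exact (case-sensitive) pool name
   ```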


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng commented on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng commented on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-988474007


   I get the error below when creating the storage pool in virt-manager.
   ![image](https://user-images.githubusercontent.com/4197714/145145103-d8e5e96f-9d2a-4018-83ac-022c3ec32e76.png)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-988474007


   @weizhouapache   @wido 
   
   I get the error below when creating the storage pool in virt-manager.
   
   
   ![image](https://user-images.githubusercontent.com/4197714/145145103-d8e5e96f-9d2a-4018-83ac-022c3ec32e76.png)
   
   ```
   Error creating pool: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   
   Traceback (most recent call last):
     File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
       callback(asyncjob, *args, **kwargs)
     File "/usr/share/virt-manager/virtManager/createpool.py", line 378, in _async_pool_create
       poolobj = pool.install(create=True, meter=meter, build=build)
     File "/usr/share/virt-manager/virtinst/storage.py", line 415, in install
       raise RuntimeError(errmsg)
   RuntimeError: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-988474007


   @weizhouapache   @wido 
   
   I get the error below when creating the storage pool in virt-manager.
   
   
   ![image](https://user-images.githubusercontent.com/4197714/145145103-d8e5e96f-9d2a-4018-83ac-022c3ec32e76.png)
   
   ```
   Error creating pool: Could not start storage pool: failed to connect to the RADOS monitor on: 10.29.44.1,: Success
   
   Traceback (most recent call last):
     File "/usr/share/virt-manager/virtManager/asyncjob.py", line 75, in cb_wrapper
       callback(asyncjob, *args, **kwargs)
     File "/usr/share/virt-manager/virtManager/createpool.py", line 378, in _async_pool_create
       poolobj = pool.install(create=True, meter=meter, build=build)
     File "/usr/share/virt-manager/virtinst/storage.py", line 415, in install
       raise RuntimeError(errmsg)
   RuntimeError: Could not start storage pool: failed to connect to the RADOS monitor on: whdrcceph001.cn.prod,: Success
   ```
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   1.   The RADOS monitor (label.rados.monitor) is 10.29.44.1:6789, and the KVM node can reach the Ceph node:
   ```
   # telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   ```
   2.   The RBD pool exists: cloudstack
   
   3.   The user cloudstack has access to the RBD pool cloudstack
   
   4.   The secret is 'AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==', which contains no / (slash)
   
   5.   The KVM node has the Ceph config in place:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  The related libvirt info; the libvirt secret list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   1.   The RADOS monitor (label.rados.monitor) is 10.29.44.1:6789, and the KVM node can reach the Ceph node:
   ```
   # telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   ```
   2.   The RBD pool exists: cloudstack
   
   3.   The user cloudstack has access to the RBD pool cloudstack
   
   4.   The secret is 'AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==', which contains no / (slash)
   
   5.   The KVM node has the Ceph config in place:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  The related libvirt info; the libvirt secret list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But the storage pool d8dabcb0-1a57-4e13-8a82-339b2052dec1 shows no data in the CloudStack UI; is this the problem?
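   
   (A rough way to see what that leftover libvirt pool actually points at, for reference:)
   
   ```
   virsh pool-dumpxml d8dabcb0-1a57-4e13-8a82-339b2052dec1
   virsh pool-info    d8dabcb0-1a57-4e13-8a82-339b2052dec1
   ```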


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   
   I have checked all the config per your advice.
   
   1.   The RADOS monitor (label.rados.monitor) is 10.29.44.1:6789, and the KVM node and the v-route node can reach the Ceph node:
   ```
   kvm001:~# telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   
   root@s-252-VM:~#  telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
    
   
   ```
   2.   The RBD pool exists: cloudstack
   
   3.   The user cloudstack has access to the RBD pool cloudstack
   
   4.   The secret is '_AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==_', which contains no / (slash)
   
   5.   The KVM node has the Ceph config in place, including ceph.client.admin.keyring:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  The related libvirt info; the libvirt secret list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But the storage pool d8dabcb0-1a57-4e13-8a82-339b2052dec1 shows no data in the CloudStack UI; is this the problem?


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: commits-unsubscribe@cloudstack.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986555736


   @wido @weizhouapache 
   
   I have checked all the config per your advice.
   
   1.   The RADOS monitor (label.rados.monitor) is 10.29.44.1:6789, and the KVM node, the v-xxx-VM and the s-xxx-VM can all reach the Ceph node:
   ```
   kvm001:~# telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
   񗲨ell
   
   
   root@s-252-VM:~#  telnet 10.29.44.1 6789
   Trying 10.29.44.1...
   Connected to 10.29.44.1.
   Escape character is '^]'.
   ceph v027 
   , 
    
   
   ```
   2.   The RBD pool exists: cloudstack
   
   ![d3c6cea2-8e68-41f4-84dd-06d74869de06](https://user-images.githubusercontent.com/4197714/144822456-eac3322b-befb-4626-ac60-ecdd4513d2e2.png)
   
   
   3.   The user cloudstack has access to the RBD pool cloudstack
   
   ![72b81e3f-5633-4d8d-9dc9-9f52cf0f93da](https://user-images.githubusercontent.com/4197714/144823225-f4e18cf0-a992-4c27-ba8c-4aedc942ea81.png)
   
   
   4.   The secret is '_AQDLxqlhIdOLJRAABPqps8O6eSGbFnyR7aSJwQ==_', which contains no / (slash).
   
   5.   The KVM node has the Ceph config in place, including ceph.client.admin.keyring:
   
   ```
   # ls /etc/ceph/
   ceph.client.admin.keyring  ceph.client.cloudstack.keyring  ceph.conf  rbdmap
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144814652-b2e6a598-ffa7-4866-8440-bca53c6e4d34.png)
   
   6.  The related libvirt info; the libvirt secret list is empty:
   ```
   # virsh pool-list
    Name                                   State    Autostart
   ------------------------------------------------------------
    60b59087-7c53-3058-a50c-f50737e556bc   active   no  
    c4355ed4-8833-381f-b3f7-2981782ee3fa   active   no
    c8e9ca6a-c004-3851-a074-19f4948b28ff   active   no
    d8dabcb0-1a57-4e13-8a82-339b2052dec1   active   no
   
   # virsh secret-list
    UUID   Usage
   ---------------
   
   # ls -a /etc/libvirt/secrets/
   .  ..
   
   ```
   
   ![image](https://user-images.githubusercontent.com/4197714/144817204-69f11163-732e-4237-a675-af2c8deeabe5.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817352-1a5bb99c-24e0-4a00-a145-0af6baa83e93.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817429-dec66d11-195c-450d-a653-219c0470497f.png)
   
   
   ![image](https://user-images.githubusercontent.com/4197714/144817491-51a9d333-e007-4427-88cc-a3998cf7e034.png)
   
   
   But I cannot find the storage pool **_d8dabcb0-1a57-4e13-8a82-339b2052dec1_** in the CloudStack UI, and that pool UUID changes each time I click the add primary storage button again.
   
   After checking all the config, I restarted the management-server and cloudstack-agent services. The error is still the same: **_`org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory`_**
   
   ```
   2021-12-06 17:54:17,921 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) <pool type='rbd'>
   <name>b90eae9d-973c-362c-8afc-af88f0743892</name>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <source>
   <host name='10.29.44.1' port='6789'/>
   <name>cloudstack</name>
   <auth username='cloudstack' type='ceph'>
   <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
   </auth>
   </source>
   </pool>
   
   2021-12-06 17:54:39,461 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/network/security_group.py get_rule_logs_for_vms 
   2021-12-06 17:54:39,463 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Executing while with timeout : 1800000
   2021-12-06 17:54:39,534 DEBUG [kvm.resource.LibvirtComputingResource] (UgentTask-5:null) (logid:) Execution is successful.
   2021-12-06 17:54:39,535 DEBUG [kvm.resource.LibvirtConnection] (UgentTask-5:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:54:39,551 DEBUG [cloud.agent.Agent] (UgentTask-5:null) (logid:) Sending ping: Seq 15-6:  { Cmd , MgmtId: -1, via: 15, Ver: v1, Flags: 11, [{"com.cloud.agent.api.PingRoutingWithNwGroupsCommand":{"newGroupStates":{},"_hostVmStateReport":{"v-255-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"v-249-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"s-250-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"},"r-254-VM":{"state":"PowerOn","host":"whdckvm002.cn.prod"}},"_gatewayAccessible":"true","_vnetAccessible":"true","hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:39,620 DEBUG [cloud.agent.Agent] (Agent-Handler-2:null) (logid:) Received response: Seq 15-6:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 100010, [{"com.cloud.agent.api.PingAnswer":{"_command":{"hostType":"Routing","hostId":"15","wait":"0","bypassHostMaintenance":"false"},"result":"true","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:47,958 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to create the RBD IoCTX. Does the pool 'cloudstack' exist?: No such file or directory
   2021-12-06 17:54:47,959 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-4:null) (logid:96eddfd2) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-06 17:54:47,961 WARN  [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-06 17:54:47,966 DEBUG [cloud.agent.Agent] (agentRequest-Handler-4:null) (logid:96eddfd2) Seq 15-6627046851675684885:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   2021-12-06 17:54:52,709 DEBUG [kvm.resource.LibvirtConnection] (Thread-6:null) (logid:) Looking for libvirtd connection at: qemu:///system
   2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Found NFS storage pool c8e9ca6a-c004-3851-a074-19f4948b28ff in libvirt, continuing
   2021-12-06 17:54:52,725 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing: /usr/share/cloudstack-common/scripts/vm/hypervisor/kvm/kvmheartbeat.sh -i 10.26.246.6 -p /kvm-data -m /mnt/c8e9ca6a-c004-3851-a074-19f4948b28ff -h 10.26.246.6 
   2021-12-06 17:54:52,726 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Executing while with timeout : 60000
   2021-12-06 17:54:52,737 DEBUG [kvm.resource.KVMHAMonitor] (Thread-6:null) (logid:) Execution is successful.
   ```
   
   
   Any ideas?
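
   For what it's worth, one way to double-check from the KVM host that `client.cloudstack` can really open the `cloudstack` pool is to use the ceph/rbd CLIs directly. This is only a minimal sketch; the pool name, client user and keyring path below are assumptions based on this thread, so adjust them if your setup differs:

   ```
   # Run on the KVM host. Pool (cloudstack), user (client.cloudstack) and keyring
   # path are assumptions taken from this thread; adjust if yours differ.

   # 1. Can this client user reach the monitors at all?
   ceph -s --id cloudstack --keyring /etc/ceph/ceph.client.cloudstack.keyring

   # 2. Does the 'cloudstack' pool actually exist?
   ceph osd lspools --id cloudstack --keyring /etc/ceph/ceph.client.cloudstack.keyring

   # 3. Can this user open the pool? (roughly what the failing IoCTX call does)
   rbd ls -p cloudstack --id cloudstack --keyring /etc/ceph/ceph.client.cloudstack.keyring

   # 4. Inspect the caps granted to the user (needs the admin keyring):
   ceph auth get client.cloudstack
   ```

   If step 2 or 3 fails with a similar "No such file or directory", the problem is likely on the Ceph side (missing pool or insufficient caps) rather than in CloudStack itself.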





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-986461003


   @weizhouapache  @wido  Could you please give some advice?
   Here is the new info I found in the CloudStack UI:
   
   ![image](https://user-images.githubusercontent.com/4197714/144799583-0383ed99-3729-4a18-ae43-d666f6c21d6f.png)
   





[GitHub] [cloudstack] xuanyuanaosheng edited a comment on issue #5741: Can not create ceph primary storage?

Posted by GitBox <gi...@apache.org>.
xuanyuanaosheng edited a comment on issue #5741:
URL: https://github.com/apache/cloudstack/issues/5741#issuecomment-985408456


   @wido   I have tested with port 3300, but it still does not work.
   
   **The error message on the CloudStack agent is: `Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to connect to the RADOS monitor on: 10.29.44.1:3300,: No such file or directory`**
   
   and the details are:
   ```
   
   2021-12-03 18:38:26,318 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Attempting to create storage pool b90eae9d-973c-362c-8afc-af88f0743892
   2021-12-03 18:38:26,318 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) <secret ephemeral='no' private='no'>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <usage type='ceph'>
   <name>cloudstack@10.29.44.1:3300/cloudstack</name>
   </usage>
   </secret>
   
   2021-12-03 18:38:26,320 DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) <pool type='rbd'>
   <name>b90eae9d-973c-362c-8afc-af88f0743892</name>
   <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   <source>
   <host name='10.29.44.1' port='3300'/>
   <name>cloudstack</name>
   <auth username='cloudstack' type='ceph'>
   <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
   </auth>
   </source>
   </pool>
   
   2021-12-03 18:38:32,197 DEBUG [cloud.agent.Agent] (agentRequest-Handler-5:null) (logid:2f5b05d6) Processing command: com.cloud.agent.api.GetHostStatsCommand
   2021-12-03 18:38:56,354 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Failed to create RBD storage pool: org.libvirt.LibvirtException: failed to connect to the RADOS monitor on: 10.29.44.1:3300,: No such file or directory
   2021-12-03 18:38:56,354 ERROR [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-1:null) (logid:53efb58d) Failed to create the RBD storage pool, cleaning up the libvirt secret
   2021-12-03 18:38:56,355 WARN  [cloud.agent.Agent] (agentRequest-Handler-1:null) (logid:53efb58d) Caught: 
   com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   2021-12-03 18:38:56,357 DEBUG [cloud.agent.Agent] (agentRequest-Handler-1:null) (logid:53efb58d) Seq 15-9029435777901133923:  { Ans: , MgmtId: 345052215515, via: 15, Ver: v1, Flags: 10, [{"com.cloud.agent.api.Answer":{"result":"false","details":"com.cloud.utils.exception.CloudRuntimeException: Failed to create storage pool: b90eae9d-973c-362c-8afc-af88f0743892
   	at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:645)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:329)
   	at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:323)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:42)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtModifyStoragePoolCommandWrapper.execute(LibvirtModifyStoragePoolCommandWrapper.java:35)
   	at com.cloud.hypervisor.kvm.resource.wrapper.LibvirtRequestWrapper.execute(LibvirtRequestWrapper.java:78)
   	at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1648)
   	at com.cloud.agent.Agent.processRequest(Agent.java:661)
   	at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:1079)
   	at com.cloud.utils.nio.Task.call(Task.java:83)
   	at com.cloud.utils.nio.Task.call(Task.java:29)
   	at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
   	at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1128)
   	at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:628)
   	at java.base/java.lang.Thread.run(Thread.java:829)
   ","wait":"0","bypassHostMaintenance":"false"}}] }
   
   ```
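
   One way to narrow this down may be to take CloudStack out of the picture and drive libvirt directly with the same secret and pool definition the agent generates (see the XML in the log above). This is only a rough sketch: the UUID, monitor address, pool and user mirror the agent log, while the file paths, the pool name `ceph-test` and the `CEPH_CLOUDSTACK_KEY` variable are placeholders. Also note that 3300 is the Ceph msgr2 port; if the librados on the KVM host only speaks the v1 protocol it still needs port 6789, so forcing 3300 will not necessarily help.

   ```
   # Hypothetical sketch: reproduce the agent's libvirt calls by hand on the KVM host.
   # UUID, monitor, pool and user mirror the agent log; file paths, the pool name
   # ceph-test and CEPH_CLOUDSTACK_KEY (the base64 key of client.cloudstack) are
   # placeholders for illustration.

   # /tmp/ceph-secret.xml:
   #   <secret ephemeral='no' private='no'>
   #     <uuid>b90eae9d-973c-362c-8afc-af88f0743892</uuid>
   #     <usage type='ceph'>
   #       <name>client.cloudstack secret</name>
   #     </usage>
   #   </secret>
   virsh secret-define /tmp/ceph-secret.xml
   virsh secret-set-value b90eae9d-973c-362c-8afc-af88f0743892 "$CEPH_CLOUDSTACK_KEY"

   # /tmp/ceph-pool.xml (same shape as the agent-generated pool XML, but on port 6789):
   #   <pool type='rbd'>
   #     <name>ceph-test</name>
   #     <source>
   #       <host name='10.29.44.1' port='6789'/>
   #       <name>cloudstack</name>
   #       <auth username='cloudstack' type='ceph'>
   #         <secret uuid='b90eae9d-973c-362c-8afc-af88f0743892'/>
   #       </auth>
   #     </source>
   #   </pool>
   virsh pool-define /tmp/ceph-pool.xml
   virsh pool-start ceph-test      # exercises the same librados path as the agent
   virsh pool-info ceph-test
   ```

   If `virsh pool-start` fails here too, the issue is between libvirt/librados on the host and the Ceph cluster, not in the CloudStack agent.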
   

