Posted to users@cloudstack.apache.org by Kurt K <ku...@ihnetworks.com> on 2017/03/22 12:12:14 UTC

Re: Getting errors while adding ceph storage to cloudstack

Hi,

We have successfully added the Ceph storage to CloudStack, but we still 
have a question about creating VMs on the Ceph primary storage.

Does anyone know where a VM's disks are actually stored in the Ceph primary?
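
(For context: with RBD primary storage, CloudStack creates each root and 
data volume as an RBD image inside the pool named in the primary storage 
path, "cloudstack" in the configuration quoted further down. A quick way 
to look, assuming an admin keyring is available on the machine this is 
run from; the image name is a placeholder:)

=====
# List the images CloudStack has created in the RBD pool
# (the pool name "cloudstack" is taken from the ModifyStoragePoolCommand below).
rbd ls -p cloudstack
# Inspect a single image (size, format, features); <image-name> is a placeholder.
rbd info cloudstack/<image-name>
=====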

/-Kurt/

On 02/23/2017 11:20 AM, Kurt K wrote:
> Hi Simon,
>
> Thanks for the reply.
>
> >> Also run ceph health and make sure your agent can talk to your ceph
> >> monitors.
>
> ceph health reports HEALTH_OK, and we can reach the OSDs from the monitor 
> server. Snippets are pasted below.
>
> ===
> [root@mon1 ~]# ceph -s
>     cluster ebac75fc-e631-4c9f-a310-880cbcdd1d25
>      health HEALTH_OK
>      monmap e1: 1 mons at {mon1=10.10.48.7:6789/0}
>             election epoch 3, quorum 0 mon1
>      osdmap e12: 2 osds: 2 up, 2 in
>             flags sortbitwise,require_jewel_osds
>       pgmap v3376: 192 pgs, 2 pools, 0 bytes data, 0 objects
>             73108 kB used, 1852 GB / 1852 GB avail
>                  192 active+clean
> ==
> [root@mon1 ~]# ceph osd tree
> ID WEIGHT  TYPE NAME         UP/DOWN REWEIGHT PRIMARY-AFFINITY
> -1 1.80878 root default
> -2 0.90439     host osd4
>  0 0.90439         osd.0          up  1.00000          1.00000
> -3 0.90439     host osdceph3
>  1 0.90439         osd.1          up  1.00000          1.00000
> ====
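>
> (Side note: the checks above were run on the monitor itself, while the 
> question was whether the hypervisor's agent can reach the monitors. A 
> minimal reachability check from the KVM host, using the monitor address 
> and port from the monmap above, could be:)
>
> =====
> # Run on the KVM host, not the monitor; 10.10.48.7:6789 comes from the monmap above.
> timeout 3 bash -c 'cat < /dev/null > /dev/tcp/10.10.48.7/6789' && echo "monitor reachable"
> =====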
>
> >> Which OS are you running on your hosts?
>
> Our CloudStack servers are on CentOS 6, and the Ceph admin/mon/OSD 
> servers are running CentOS 7.
>
> After raising the CloudStack agent log level to DEBUG on the hypervisor, 
> we see the entries below while adding the Ceph primary storage.
>
> ==
> 2017-02-22 21:01:00,444 DEBUG [kvm.storage.LibvirtStorageAdaptor] 
> (agentRequest-Handler-4:null) <pool type='rbd'>
> ==
>
> A quick search suggested that the hypervisor had no kernel module loaded 
> for RBD, so we upgraded the kernel from ELRepo and loaded the module with 
> modprobe. Rebuilding the rbd module against the existing libvirtd 
> configuration did not work. We have also custom-compiled libvirtd with 
> RBD support, but we have no idea how to connect the custom libvirtd to 
> the qemu image utility.
>
> =====
> [root@hyperkvm3 ~]# libvirtd --version (default)
> libvirtd (libvirt) 0.10.2
> =
> [root@hyperkvm3 ~]# lsmod | grep rbd
> rbd                    56743  0
> libceph               148605  1 rbd
> =
> [root@hyperkvm3 ~]# qemu-img -h  | grep "Supported formats"
> Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat 
> qcow2 qed vhdx parallels nbd blkdebug null host_cdrom  host_floppy 
> host_device file gluster
> == // no rbd support
> [root@hyperkvm3 ~]# /usr/bin/sbin/libvirtd  --version  (custom)
> /usr/bin/sbin/libvirtd (libvirt) 1.3.5
> =====
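>
> (Worth noting: libvirt and qemu access RBD through the userspace librbd 
> library, not through the rbd kernel module, so modprobe alone does not 
> help here. What the output above shows missing is a qemu build linked 
> against librbd. A rough check, assuming qemu is dynamically linked:)
>
> =====
> # 'rbd' should appear in the supported formats list once qemu has librbd support.
> qemu-img -h | grep "Supported formats" | grep -o rbd
> # The binary should also link against librbd (assumes dynamic linking).
> ldd $(which qemu-img) | grep librbd
> =====
>
> (As far as I know, the stock CentOS 6 qemu-kvm is not built with RBD 
> support, so the usual fix is a qemu rebuilt against librbd, or moving the 
> hypervisors to CentOS 7 with qemu-kvm-ev.)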
>
> Do you have any ideas/suggestions?
>
> -Kurt
>
> On 02/22/2017 08:20 PM, Simon Weller wrote:
>> I agree, the agent logs would be good to look at.
>>
>>
>> You can enable kvm agent debugging by running this: sed -i 
>> 's/INFO/DEBUG/g' /etc/cloudstack/agent/log4j-cloud.xml
>>
>> Restart the agent and then tail -f /var/log/cloudstack/agent/agent.log
>>
>>
>> Also run ceph health and make sure your agent can talk to your ceph 
>> monitors.
>>
>> Which OS are you running on your hosts?
>>
>>
>> - Si
>>
>> ________________________________
>> From: Abhinandan Prateek <ab...@shapeblue.com>
>> Sent: Wednesday, February 22, 2017 12:45 AM
>> To: users@cloudstack.apache.org
>> Subject: Re: Getting errors while adding ceph storage to cloudstack
>>
>> Take a look at the agent logs on the KVM host; there will be more clues.
>>
>>
>>
>>
>> On 22/02/17, 8:10 AM, "Kurt K" <ku...@ihnetworks.com> wrote:
>>
>>> Hello,
>>>
>>> I have created a Ceph cluster with one admin server, one monitor, and 
>>> two OSDs. The setup is complete, but when trying to add Ceph as 
>>> primary storage in CloudStack, I am getting the error below in the logs.
>>>
>>> Am I missing something? Please help.
>>>
>>> ================
>>> 2017-02-20 21:03:02,842 DEBUG
>>> [o.a.c.s.d.l.CloudStackPrimaryDataStoreLifeCycleImpl]
>>> (catalina-exec-6:ctx-f293a10c ctx-093b4faf) In createPool Adding the
>>> pool to each of the hosts
>>> 2017-02-20 21:03:02,843 DEBUG [c.c.s.StorageManagerImpl]
>>> (catalina-exec-6:ctx-f293a10c ctx-093b4faf) Adding pool null to host 1
>>> 2017-02-20 21:03:02,845 DEBUG [c.c.a.t.Request]
>>> (catalina-exec-6:ctx-f293a10c ctx-093b4faf) Seq 1-653584895922143294:
>>> Sending  { Cmd , MgmtId: 207381009036, via: 1(hyperkvm.xxxxx.com), Ver:
>>> v1, Flags: 100011,
>>> [{"com.cloud.agent.api.ModifyStoragePoolCommand":{"add":true,"pool":{"id":14,"uuid":"9c51d737-3a6f-3bb3-8f28-109954fc2ef0","host":"mon1.xxxx.com","path":"cloudstack","userInfo":"cloudstack:AQDagqZYgSSpOBAATFvSt4tz3cOUWhNtR-NaoQ==","port":6789,"type":"RBD"},"localPath":"/mnt//ac5436a6-5889-30eb-b079-ac1e05a30526","wait":0}}] 
>>>
>>> }
>>> 2017-02-20 21:03:02,944 DEBUG [c.c.a.t.Request]
>>> (AgentManager-Handler-15:null) Seq 1-653584895922143294: Processing:  {
>>> Ans: , MgmtId: 207381009036, via: 1, Ver: v1, Flags: 10,
>>> [{"com.cloud.agent.api.Answer":{"result":false,"details":"com.cloud.utils.exception.CloudRuntimeException: 
>>>
>>> Failed to create storage pool:
>>> 9c51d737-3a6f-3bb3-8f28-109954fc2ef0\n\tat
>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:524)\n\tat 
>>>
>>> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:277)\n\tat 
>>>
>>> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:271)\n\tat 
>>>
>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2823)\n\tat 
>>>
>>> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1325)\n\tat 
>>>
>>> com.cloud.agent.Agent.processRequest(Agent.java:501)\n\tat
>>> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:808)\n\tat
>>> com.cloud.utils.nio.Task.run(Task.java:84)\n\tat
>>> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)\n\tat 
>>>
>>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat 
>>>
>>> java.lang.Thread.run(Thread.java:745)\n","wait":0}}] }
>>> 2017-02-20 21:03:02,944 DEBUG [c.c.a.t.Request]
>>> (catalina-exec-6:ctx-f293a10c ctx-093b4faf) Seq 1-653584895922143294:
>>> Received:  { Ans: , MgmtId: 207381009036, via: 1, Ver: v1, Flags: 10, {
>>> Answer } }
>>> 2017-02-20 21:03:02,944 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (catalina-exec-6:ctx-f293a10c ctx-093b4faf) Details from executing 
>>> class
>>> com.cloud.agent.api.ModifyStoragePoolCommand:
>>> com.cloud.utils.exception.CloudRuntimeException: Failed to create
>>> storage pool: 9c51d737-3a6f-3bb3-8f28-109954fc2ef0
>>>      at
>>> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:524) 
>>>
>>>      at
>>> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:277) 
>>>
>>>      at
>>> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:271) 
>>>
>>> ================
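>>>
>>> (To isolate whether this is a libvirt problem or a CloudStack problem, 
>>> the same RBD pool can be defined by hand with virsh to see libvirt's 
>>> own error message. A sketch, using the pool name, monitor, and UUID 
>>> from the command above; the secret UUID is a placeholder and assumes 
>>> the Ceph key has already been stored with virsh secret-define / 
>>> secret-set-value:)
>>>
>>> =====
>>> cat > /tmp/rbd-pool.xml <<'EOF'
>>> <pool type='rbd'>
>>>   <name>9c51d737-3a6f-3bb3-8f28-109954fc2ef0</name>
>>>   <source>
>>>     <name>cloudstack</name>
>>>     <host name='mon1.xxxx.com' port='6789'/>
>>>     <auth username='cloudstack' type='ceph'>
>>>       <secret uuid='REPLACE-WITH-LIBVIRT-SECRET-UUID'/>
>>>     </auth>
>>>   </source>
>>> </pool>
>>> EOF
>>> virsh pool-define /tmp/rbd-pool.xml
>>> virsh pool-start 9c51d737-3a6f-3bb3-8f28-109954fc2ef0
>>> =====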
>>>
>>> Any suggestions?
>>>
>>> -Kurt
>>
>
>


-- 
~Kurt

Re: Getting errors while adding ceph storage to cloudstack

Posted by Kurt K <ku...@ihnetworks.com>.
Hi,

We are facing a problem while creating a new instance. The logs show the 
errors below; the failure appears to be related to the domain router VM.


=====
com.cloud.exception.AgentUnavailableException: Resource [Host:1] is 
unreachable: Host 1: Unable to start instance due to Unable to start 
VM[DomainRouter|r-9-VM] due to error in finalizeStart, not retrying

Resource [Host:1] is unreachable: Host 1: Unable to start instance due 
to Unable to start VM[DomainRouter|r-14-VM] due to error in 
finalizeStart, not retrying

com.cloud.exception.InsufficientServerCapacityException: Unable to 
create a deployment for VM[User|i-2-8-VM]Scope=interface 
com.cloud.dc.DataCenter; id=1

=====
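
(A note on "error in finalizeStart": on a domain router this generally 
means the VM was started but the host could not finish configuring it over 
its link-local control channel, or the systemvm template does not match 
the CloudStack version. A rough check from the KVM host, assuming the 
usual CloudStack defaults for the key path and port; the link-local 
address is a placeholder taken from virsh dumpxml r-14-VM or the agent 
log:)

=====
# Confirm the router VM actually reached a running state on host 1.
virsh list --all | grep r-14-VM
# Try the control channel the agent uses to configure system VMs
# (default key and port; replace 169.254.x.x with the router's link-local IP).
ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x
=====

(The InsufficientServerCapacityException for i-2-8-VM may simply follow 
from the router failure, or indicate the deployment planner could not find 
a suitable host or primary storage.)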

Please let us know your thoughts.

-kurt


-- 
~Kurt