Posted to dev@cloudstack.apache.org by Guangjian Liu <gu...@gmail.com> on 2013/04/17 01:44:19 UTC

Create rbd primary storage fail in CS 4.0.1

Creating RBD primary storage fails in CS 4.0.1.
Can anybody help with this?

Environment:
1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
3. Server C: KVM/Qemu OS: Ubuntu 12.04
    compiled libvirt and Qemu as documented
root@ubuntu:/usr/local/lib# virsh version
Compiled against library: libvirt 0.10.2
Using library: libvirt 0.10.2
Using API: QEMU 0.10.2
Running hypervisor: QEMU 1.0.0

Problem:
Creating primary storage with an RBD device fails.

Fail log:
2013-04-16 16:27:14,224 DEBUG [cloud.storage.StorageManagerImpl]
(catalina-exec-9:null) createPool Params @ scheme - rbd storageHost -
10.0.0.41 hostPath - /cloudstack port - -1
2013-04-16 16:27:14,270 DEBUG [cloud.storage.StorageManagerImpl]
(catalina-exec-9:null) In createPool Setting poolId - 218 uuid -
5924a2df-d658-3119-8aba-f90307683206 zoneId - 4 podId - 4 poolName - ceph
2013-04-16 16:27:14,318 DEBUG [cloud.storage.StorageManagerImpl]
(catalina-exec-9:null) creating pool ceph on  host 18
2013-04-16 16:27:14,320 DEBUG [agent.transport.Request]
(catalina-exec-9:null) Seq 18-1625162275: Sending  { Cmd , MgmtId:
37528005876872, via: 18, Ver: v1, Flags: 100011,
[{"CreateStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}]
}
2013-04-16 16:27:14,323 DEBUG [agent.transport.Request]
(AgentManager-Handler-2:null) Seq 18-1625162275: Processing:  { Ans: ,
MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10,
[{"Answer":{"result":true,"details":"success","wait":0}}] }
2013-04-16 16:27:14,323 DEBUG [agent.transport.Request]
(catalina-exec-9:null) Seq 18-1625162275: Received:  { Ans: , MgmtId:
37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
2013-04-16 16:27:14,323 DEBUG [agent.manager.AgentManagerImpl]
(catalina-exec-9:null) Details from executing class
com.cloud.agent.api.CreateStoragePoolCommand: success
2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl]
(catalina-exec-9:null) In createPool Adding the pool to each of the hosts
2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl]
(catalina-exec-9:null) Adding pool ceph to  host 18
2013-04-16 16:27:14,326 DEBUG [agent.transport.Request]
(catalina-exec-9:null) Seq 18-1625162276: Sending  { Cmd , MgmtId:
37528005876872, via: 18, Ver: v1, Flags: 100011,
[{"ModifyStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}]
}
2013-04-16 16:27:14,411 DEBUG [agent.transport.Request]
(AgentManager-Handler-6:null) Seq 18-1625162276: Processing:  { Ans: ,
MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10,
[{"Answer":{"result":false,"details":"java.lang.NullPointerException\n\tat
com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)\n\tat
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)\n\tat
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)\n\tat
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)\n\tat
com.cloud.agent.Agent.processRequest(Agent.java:518)\n\tat
com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)\n\tat
com.cloud.utils.nio.Task.run(Task.java:83)\n\tat
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)\n\tat
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
java.lang.Thread.run(Thread.java:679)\n","wait":0}}] }
2013-04-16 16:27:14,412 DEBUG [agent.transport.Request]
(catalina-exec-9:null) Seq 18-1625162276: Received:  { Ans: , MgmtId:
37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
2013-04-16 16:27:14,412 DEBUG [agent.manager.AgentManagerImpl]
(catalina-exec-9:null) Details from executing class
com.cloud.agent.api.ModifyStoragePoolCommand: java.lang.NullPointerException
        at
com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
        at
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
        at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
        at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
        at com.cloud.agent.Agent.processRequest(Agent.java:518)
        at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
        at com.cloud.utils.nio.Task.run(Task.java:83)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)

2013-04-16 16:27:14,451 WARN  [cloud.storage.StorageManagerImpl]
(catalina-exec-9:null) Unable to establish a connection between
Host[-18-Routing] and Pool[218|RBD]
com.cloud.exception.StorageUnavailableException: Resource [StoragePool:218]
is unreachable: Unable establish connection from storage head to storage
pool 218 due to java.lang.NullPointerException
        at
com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
        at
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
        at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
        at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
        at com.cloud.agent.Agent.processRequest(Agent.java:518)
        at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
        at com.cloud.utils.nio.Task.run(Task.java:83)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)

        at
com.cloud.storage.StorageManagerImpl.connectHostToSharedPool(StorageManagerImpl.java:1685)
        at
com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:1450)
        at
com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:215)
        at
com.cloud.api.commands.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:120)
        at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:138)
        at com.cloud.api.ApiServer.queueCommand(ApiServer.java:543)
        at com.cloud.api.ApiServer.handleRequest(ApiServer.java:422)
        at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
        at com.cloud.api.ApiServlet.doGet(ApiServlet.java:63)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
        at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
        at
org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
        at
org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
        at
org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
        at
org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
        at
org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
        at
org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
        at
org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
        at
org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
        at
org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
        at
org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
        at
org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
        at
org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2260)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
        at java.lang.Thread.run(Thread.java:679)
2013-04-16 16:27:14,452 WARN  [cloud.storage.StorageManagerImpl]
(catalina-exec-9:null) No host can access storage pool Pool[218|RBD] on
cluster 5
2013-04-16 16:27:14,504 WARN  [cloud.api.ApiDispatcher]
(catalina-exec-9:null) class com.cloud.api.ServerApiException : Failed to
add storage pool
2013-04-16 16:27:15,293 DEBUG [agent.manager.AgentManagerImpl]
(AgentManager-Handler-12:null) Ping from 18
^C
[root@RDR02S02 management]#

-- 
Guangjian

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Guangjian Liu <gu...@gmail.com>.
Hi Wido, RBD in CS 4.0.1 works now. The earlier failure was probably caused by my environment.

Environment:
1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
3. Server C: KVM/Qemu OS: Ubuntu 12.04

What I did was upgrade Ceph to 0.56 on Server C (Ubuntu 12.04); the original
version was 0.4x. So I think KVM connects through the librbd that is installed
on the same server, is that right?

I am a little confused that KVM talks to the Ceph client libraries on the same
server rather than talking to the Ceph server directly (which is installed on
Server B).

I'll try installing Ceph and KVM all-in-one on Ubuntu 12.04 (or CentOS 6.3) and
validate the same solution.
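For what it is worth, that is how the pieces fit together: Qemu on the KVM host is
linked against the librbd/librados libraries installed locally, and those libraries
talk directly to the monitors and OSDs on Server B; no Ceph daemon runs on the
hypervisor itself. That is why the version of the Ceph client libraries on Server C
is what matters. A minimal sketch of the kind of disk definition libvirt ends up
generating for an RBD-backed volume (the image name is made up; the pool name and
monitor address are the ones used in this thread):

  <disk type='network' device='disk'>
    <driver name='qemu' type='raw'/>
    <source protocol='rbd' name='cloudstack/example-image'>
      <host name='10.0.0.41' port='6789'/>
    </source>
    <target dev='vda' bus='virtio'/>
  </disk>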


On Sat, Apr 27, 2013 at 1:03 AM, Wido den Hollander <wi...@widodh.nl> wrote:

>
>
> On 04/22/2013 03:57 AM, Guangjian Liu wrote:
>
>> Wido,
>>
>> Please see information below
>>
>> How big is the Ceph cluster? (ceph -s).
>> [root@RDR02S01 ~]# ceph -s
>>     health HEALTH_OK
>>     monmap e1: 1 mons at {a=10.0.0.41:6789/0}, election epoch 1, quorum
>> 0 a
>>     osdmap e35: 2 osds: 2 up, 2 in
>>      pgmap v17450: 714 pgs: 714 active+clean; 2522 MB data, 10213 MB used,
>> 126 GB / 136 GB avail
>>     mdsmap e557: 1/1/1 up {0=a=up:active}
>>
>> And what does libvirt say?
>>
>> 1. Log on to a hypervisor
>> 2. virsh pool-list
>> root@ubuntu:~# virsh pool-list
>> Name                 State      Autostart
>> -----------------------------------------
>> 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c active     no
>> ac2949d9-16fb-4c54-a551-1cdbeb501f05 active     no
>> d9474b9d-afa1-3737-a13b-df333dae295f active     no
>> f81769c1-a31b-4202-a3aa-bdf616eef019 active     no
>>
>> 3. virsh pool-info <uuid of ceph pool>
>> root@ubuntu:~# virsh pool-info 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
>> Name:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
>> UUID:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
>> State:          running
>> Persistent:     yes
>> Autostart:      no
>> Capacity:       136.60 GiB
>> *Allocation:     30723.02 TiB*
>> Available:      126.63 GiB
>>
>>
> That is really odd. Could it be that you have created a lot of RBD images
> in that pool? Still, 30723TB is still a big number.
>
> I have no clue what that could be right now.
>
> Wido
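One way to check that directly on the Ceph side is to list the images in the pool
together with their provisioned sizes (a sketch; "cloudstack" is the pool used in
this thread, and the long listing assumes a reasonably recent rbd CLI):

  rbd -p cloudstack ls -l     # image names and provisioned (virtual) sizes
  rados df                    # space actually consumed per pool, as RADOS sees it

If the provisioned sizes come nowhere near 30723 TiB, the Allocation figure libvirt
reports is not simply the sum of the images in the pool.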
>
>
>> On Fri, Apr 19, 2013 at 4:43 PM, Wido den Hollander <wi...@widodh.nl>
>> wrote:
>>
>>  Hi,
>>>
>>>
>>> On 04/19/2013 08:48 AM, Guangjian Liu wrote:
>>>
>>>  Ceph file system can added as Primary storage by RBD, Thanks.
>>>>
>>>>
>>>>  Great!
>>>
>>>
>>>   Now I meet the problem in creating VM to Ceph.
>>>
>>>>
>>>> First I create computer offer and storage offer with tag "ceph", and
>>>> define Ceph Primary storage with tag "ceph".
>>>> Then I create VM use "ceph" computer offer, It display exception in Log.
>>>> It seems the storage size out off usage.
>>>>
>>>>
>>>>  That is indeed odd. How big is the Ceph cluster? (ceph -s).
>>>
>>> And what does libvirt say?
>>>
>>> 1. Log on to a hypervisor
>>> 2. virsh pool-list
>>> 3. virsh pool-info <uuid of ceph pool>
>>>
>>> I'm curious about what libvirt reports as pool size.
>>>
>>> Wido
>>>
>>>   Create VM fail
>>>
>>>> 2013-04-18 10:23:27,559 DEBUG
>>>> [storage.allocator.AbstractStoragePoolAllocator]
>>>> (Job-Executor-1:job-24)
>>>> Is storage pool shared? true
>>>> 2013-04-18 10:23:27,561 DEBUG [cloud.storage.StorageManagerImpl]
>>>> (Job-Executor-1:job-24) Checking pool 209 for storage, totalSize:
>>>> 146673582080, *usedBytes: 33780317214998658*, usedPct:
>>>> 230309.48542985672, disable threshold: 0.85
>>>> 2013-04-18 10:23:27,561 DEBUG [cloud.storage.StorageManagerImpl]
>>>> (Job-Executor-1:job-24) Insufficient space on pool: 209 since its usage
>>>> percentage: 230309.48542985672 has crossed the
>>>> pool.storage.capacity.disablethreshold: 0.85
>>>> 2013-04-18 10:23:27,561 DEBUG
>>>> [storage.allocator.FirstFitStoragePoolAllocator]
>>>> (Job-Executor-1:job-24)
>>>> FirstFitStoragePoolAllocator returning 0 suitable storage pools
>>>> 2013-04-18 10:23:27,561 DEBUG [cloud.deploy.FirstFitPlanner]
>>>> (Job-Executor-1:job-24) No suitable pools found for volume:
>>>> Vol[8|vm=8|ROOT] under cluster: 1
>>>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
>>>> (Job-Executor-1:job-24) No suitable pools found
>>>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
>>>> (Job-Executor-1:job-24) No suitable storagePools found under this
>>>> Cluster: 1
>>>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
>>>> (Job-Executor-1:job-24) Could not find suitable Deployment Destination
>>>> for this VM under any clusters, returning.
>>>> 2013-04-18 10:23:27,640 DEBUG [cloud.capacity.CapacityManagerImpl]
>>>> (Job-Executor-1:job-24) VM state transitted from :Starting to Stopped
>>>> with event: OperationFailedvm's original host id: null new host id: null
>>>> host id before state transition: null
>>>> 2013-04-18 10:23:27,645 DEBUG [cloud.vm.UserVmManagerImpl]
>>>> (Job-Executor-1:job-24) Destroying vm VM[User|ceph-1] as it failed to
>>>> create
>>>> management-server.7z 2013-04-18 10:02
>>>>
>>>> I check CS Web GUI and Database, the storage Allocated 30723TB, but
>>>> actually the total size is 138GB.
>>>> Inline image 1
>>>>
>>>>
>>>> mysql> select * from storage_pool;
>>>> +-----+---------+--------------------------------------+----
>>>> ---------------+------+----------------+--------+-----------
>>>> -+-------------------+----------------+--------------+------
>>>> -----+-----------------+---------------------+---------+----
>>>> ---------+--------+
>>>> | id  | name    | uuid                                 | pool_type
>>>>     | port | data_center_id | pod_id | cluster_id | available_bytes   |
>>>> capacity_bytes | host_address | user_info | path            | created
>>>>             | removed | update_time | status |
>>>> +-----+---------+--------------------------------------+----
>>>> ---------------+------+----------------+--------+-----------
>>>> -+-------------------+----------------+--------------+------
>>>> -----+-----------------+---------------------+---------+----
>>>> ---------+--------+
>>>> | 200 | primary | d9474b9d-afa1-3737-a13b-df333dae295f |
>>>> NetworkFilesystem | 2049 |              1 |      1 |          1 |
>>>> 10301210624 |    61927849984 | 10.0.0.42    | NULL      |
>>>> /export/primary | 2013-04-17 06:25:47 | NULL    | NULL        | Up     |
>>>> | 209 | ceph    | 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c | RBD
>>>>     | 6789 |              1 |      1 |          1 | 33780317214998658 |
>>>> 146673582080 | 10.0.0.41    | :         | rbd             | 2013-04-18
>>>> 02:02:00 | NULL    | NULL        | Up     |
>>>> +-----+---------+--------------------------------------+----
>>>> ---------------+------+----------------+--------+-----------
>>>> -+-------------------+----------------+--------------+------
>>>> -----+-----------------+---------------------+---------+----
>>>> ---------+--------+
>>>> 2 rows in set (0.00 sec)
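For reference, these figures are consistent with the pool-info output shown earlier;
the allocator is simply dividing used bytes by total bytes for pool 209:

  33780317214998658 B / 1099511627776 B per TiB  ~=  30723.02 TiB  (the Allocation virsh pool-info reports)
  33780317214998658 / 146673582080               ~=  230309.49     (the usedPct in the log, versus the 0.85 disable threshold)

So the used-bytes value CloudStack stores for the RBD pool appears to be taken
straight from libvirt's Allocation figure, and that is what makes
FirstFitStoragePoolAllocator reject the pool.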
>>>>
>>>>
>>>>
>>>>
>>>>
>>>> On Thu, Apr 18, 2013 at 3:54 AM, Wido den Hollander <wido@widodh.nl> wrote:
>>>>
>>>>      Hi,
>>>>
>>>>
>>>>      On 04/17/2013 03:01 PM, Guangjian Liu wrote:
>>>>
>>>>          I still meet the same result.
>>>>
>>>>          In ubuntu 12.04,
>>>>          1. I install libvirt-dev as below,
>>>>               apt-get install libvirt-dev
>>>>          2. rebuild libvirt, see detail build log in attach.
>>>>          root@ubuntu:~/install/libvirt-0.10.2# ./autogen.sh
>>>>
>>>>
>>>>          running CONFIG_SHELL=/bin/bash /bin/bash ./configure
>>>> --enable-rbd
>>>>          --no-create --no-recursion
>>>>          configure: WARNING: unrecognized options: --enable-rbd
>>>>
>>>>
>>>>      The correct option is "--with-storage-rbd"
>>>>
>>>>      But check the output of configure, it should tell you whether RBD
>>>>      was enabled or not.
>>>>
>>>>      Then verify again if you can create a RBD storage pool manually via
>>>>      libvirt.
>>>>
>>>>      Wido
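A minimal sketch of that rebuild on the Ubuntu 12.04 host, assuming the librbd
development packages are available as librbd-dev/librados-dev (from the Ceph
repositories if the distribution's own packages are too old) and using the source
directory mentioned above:

  apt-get install librbd-dev librados-dev
  cd ~/install/libvirt-0.10.2
  ./autogen.sh --with-storage-rbd
  make && sudo make install

The configure summary should then show RBD storage pool support as enabled, as Wido
describes.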
>>>>
>>>>          .....
>>>>          make
>>>>          make install
>>>>
>>>>
>>>>
>>>>          On Wed, Apr 17, 2013 at 6:25 PM, Wido den Hollander
>>>>          <wido@widodh.nl> wrote:
>>>>
>>>>               Hi,
>>>>
>>>>
>>>>               On 04/17/2013 11:37 AM, Guangjian Liu wrote:
>>>>
>>>>                   Thanks for your mail; you suggest compiling libvirt with
>>>>                   RBD enabled.
>>>>                   I already built libvirt-0.10.2.tar.gz as documented at
>>>>                   http://ceph.com/docs/master/rbd/libvirt/ on my SERVER C
>>>>                   (Ubuntu 12.04).
>>>>                   Shall I build libvirt-0.10.2.tar.gz with RBD enabled? Use
>>>>                   ./configure --enable-rbd instead of autogen.sh?
>>>>
>>>>
>>>>               Well, you don't have to add --enable-rbd to configure nor
>>>>               autogen.sh, but you have to make sure the development
>>>>          libraries for
>>>>               librbd are installed.
>>>>
>>>>               On CentOS do this:
>>>>
>>>>               yum install librbd-devel
>>>>
>>>>               And retry autogen.sh for libvirt, it should tell you RBD
>>>> is
>>>>          enabled.
>>>>
>>>>               Wido
>>>>
>>>>                   cd libvirt
>>>>                   ./autogen.sh
>>>>                   make
>>>>                   sudo make install
>>>>
>>>>
>>>>
>>>>                   On Wed, Apr 17, 2013 at 4:37 PM, Wido den Hollander
>>>>                   <wido@widodh.nl> wrote:
>>>>
>>>>                       Hi,
>>>>
>>>>
>>>>                       On 04/17/2013 01:44 AM, Guangjian Liu wrote:
>>>>
>>>>                           Create rbd primary storage fail in CS 4.0.1
>>>>                           Anybody can help about it!
>>>>
>>>>                           Environment:
>>>>                           1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
>>>>                           2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
>>>>                           3. Server C: KVM/Qemu OS: Ubuntu 12.04
>>>>                                  compile libvirt and Qemu as document
>>>>                           root@ubuntu:/usr/local/lib# virsh version
>>>>                           Compiled against library: libvirt 0.10.2
>>>>                           Using library: libvirt 0.10.2
>>>>                           Using API: QEMU 0.10.2
>>>>                           Running hypervisor: QEMU 1.0.0
>>>>
>>>>
>>>>                       Are you sure both libvirt and Qemu are compiled
>>>>          with RBD
>>>>                       enabled?
>>>>
>>>>                       On your CentOS system you should make sure
>>>>          librbd-dev is
>>>>                       installed during
>>>>                       compilation of libvirt and Qemu.
>>>>
>>>>                       The most important part is the RBD storage pool
>>>>          support in
>>>>                       libvirt, that
>>>>                       should be enabled.
>>>>
>>>>                       In the e-mail you send me directly I saw this:
>>>>
>>>>                       root@ubuntu:~/scripts# virsh pool-define
>>>>          rbd-pool.xml error:
>>>>                       Failed to
>>>>                       define pool from rbd-pool.xml error: internal
>>>> error
>>>>          missing
>>>>                       backend for
>>>>                       pool type 8
>>>>
>>>>                       That suggest RBD storage pool support is not
>>>>          enabled in libvirt.
>>>>
>>>>                       Wido
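The rbd-pool.xml referred to here is not shown in the thread, but a minimal
definition for repeating that test would look something like the following (the
libvirt pool name is arbitrary; the source name, monitor address and port are the
ones from this thread, with no cephx auth to match the empty userInfo in the log):

  <pool type='rbd'>
    <name>cloudstack-rbd</name>
    <source>
      <name>cloudstack</name>
      <host name='10.0.0.41' port='6789'/>
    </source>
  </pool>

  virsh pool-define rbd-pool.xml
  virsh pool-start cloudstack-rbd
  virsh pool-info cloudstack-rbd

Once libvirt has been rebuilt with RBD storage pool support, pool-define should
succeed instead of failing with "missing backend for pool type 8".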
>>>>
>>>>
>>>>                          [snip: original problem report and failure log,
>>>>                          quoted in full at the top of this thread]
>>>>
>>>>          --
>>>>          Guangjian
>>>>
>>>>
>>>>
>>>>
>>>> --
>>>> Guangjian
>>>>
>>>>
>>>
>>
>>


-- 
Guangjian

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Guangjian Liu <gu...@gmail.com>.
Here is the information for the other three pools:
root@ubuntu:~# virsh pool-info ac2949d9-16fb-4c54-a551-1cdbeb501f05
Name:           ac2949d9-16fb-4c54-a551-1cdbeb501f05
UUID:           ac2949d9-16fb-4c54-a551-1cdbeb501f05
State:          running
Persistent:     yes
Autostart:      no
Capacity:       811.17 GiB
Allocation:     13.11 GiB
Available:      798.05 GiB

root@ubuntu:~# virsh pool-info d9474b9d-afa1-3737-a13b-df333dae295f
Name:           d9474b9d-afa1-3737-a13b-df333dae295f
UUID:           d9474b9d-afa1-3737-a13b-df333dae295f
State:          running
Persistent:     yes
Autostart:      no
Capacity:       57.67 GiB
Allocation:     9.86 GiB
Available:      47.81 GiB

root@ubuntu:~# virsh pool-info f81769c1-a31b-4202-a3aa-bdf616eef019
Name:           f81769c1-a31b-4202-a3aa-bdf616eef019
UUID:           f81769c1-a31b-4202-a3aa-bdf616eef019
State:          running
Persistent:     yes
Autostart:      no
Capacity:       811.17 GiB
Allocation:     13.11 GiB
Available:      798.05 GiB


On Mon, Apr 22, 2013 at 9:57 AM, Guangjian Liu <gu...@gmail.com> wrote:

> Wido,
>
> Please see information below
>
> How big is the Ceph cluster? (ceph -s).
> [root@RDR02S01 ~]# ceph -s
>    health HEALTH_OK
>    monmap e1: 1 mons at {a=10.0.0.41:6789/0}, election epoch 1, quorum 0 a
>    osdmap e35: 2 osds: 2 up, 2 in
>     pgmap v17450: 714 pgs: 714 active+clean; 2522 MB data, 10213 MB used,
> 126 GB / 136 GB avail
>    mdsmap e557: 1/1/1 up {0=a=up:active}
>
> And what does libvirt say?
>
> 1. Log on to a hypervisor
> 2. virsh pool-list
> root@ubuntu:~# virsh pool-list
> Name                 State      Autostart
> -----------------------------------------
> 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c active     no
> ac2949d9-16fb-4c54-a551-1cdbeb501f05 active     no
> d9474b9d-afa1-3737-a13b-df333dae295f active     no
> f81769c1-a31b-4202-a3aa-bdf616eef019 active     no
>
> 3. virsh pool-info <uuid of ceph pool>
> root@ubuntu:~# virsh pool-info 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
> Name:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
> UUID:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
> State:          running
> Persistent:     yes
> Autostart:      no
> Capacity:       136.60 GiB
> *Allocation:     30723.02 TiB*
> Available:      126.63 GiB
>
>
> On Fri, Apr 19, 2013 at 4:43 PM, Wido den Hollander <wi...@widodh.nl> wrote:
>
>> [snip: Wido's reply of 04/19/2013 and the earlier messages it quotes; this
>> quoted thread is identical to the one quoted earlier in this message chain]
>>>         com.cloud.utils.nio.Task.run(***____*Task.java:83)
>>>                                     at
>>>
>>>         java.util.concurrent.**____**ThreadPoolExecutor.runWorker(***
>>> ____*
>>>                          ThreadPoolExecutor.java:1146)
>>>                                     at
>>>
>>>         java.util.concurrent.**____**ThreadPoolExecutor$Worker.run(**
>>> ____**
>>>                          ThreadPoolExecutor.java:615)
>>>                                     at
>>>         java.lang.Thread.run(Thread.****____java:679)
>>>
>>>
>>>
>>>                          2013-04-16 16:27:14,451 WARN
>>>                            [cloud.storage.**____**StorageManagerImpl]
>>>
>>>
>>>
>>>                          (catalina-exec-9:null) Unable to establish a
>>>         connection
>>>                          between
>>>                          Host[-18-Routing] and Pool[218|RBD]
>>>
>>>         com.cloud.exception.**____**StorageUnavailableException:
>>>
>>>
>>>                          Resource
>>>
>>>                          [StoragePool:218]
>>>                          is unreachable: Unable establish connection
>>>         from storage
>>>                          head to storage
>>>                          pool 218 due to java.lang.NullPointerException
>>>                                     at
>>>
>>>         com.cloud.hypervisor.kvm.**___**_storage.**
>>> LibvirtStorageAdaptor.____**
>>>
>>>         createStoragePool(**____**LibvirtStorageAdaptor.java:**_**
>>> ___462)
>>>                                     at
>>>
>>>         com.cloud.hypervisor.kvm.**___**_storage.**
>>> KVMStoragePoolManager.____**
>>>
>>>         createStoragePool(**____**KVMStoragePoolManager.java:57)
>>>                                     at
>>>
>>>         com.cloud.hypervisor.kvm.**___**_resource.**____**
>>> LibvirtComputingResource.**___**_execute(
>>>                          **LibvirtComputingResource.___**_java:**2087)
>>>                                     at
>>>
>>>         com.cloud.hypervisor.kvm.**___**_resource.**____**
>>> LibvirtComputingResource.**
>>>
>>>         executeRequest(**____**LibvirtComputingResource.java:**
>>> ____**1053)
>>>                                     at
>>>
>>>         com.cloud.agent.Agent.**____**processRequest(Agent.java:518)
>>>                                     at
>>>
>>>         com.cloud.agent.Agent$**____**AgentRequestHandler.doTask(**
>>>                          Agent.java:831)
>>>                                     at
>>>         com.cloud.utils.nio.Task.run(***____*Task.java:83)
>>>                                     at
>>>
>>>         java.util.concurrent.**____**ThreadPoolExecutor.runWorker(***
>>> ____*
>>>                          ThreadPoolExecutor.java:1146)
>>>                                     at
>>>
>>>         java.util.concurrent.**____**ThreadPoolExecutor$Worker.run(**
>>> ____**
>>>                          ThreadPoolExecutor.java:615)
>>>                                     at
>>>         java.lang.Thread.run(Thread.****____java:679)
>>>
>>>                                     at
>>>
>>>         com.cloud.storage.**____**StorageManagerImpl.**____**
>>> connectHostToSharedPool(**
>>>                          StorageManagerImpl.java:1685)
>>>                                     at
>>>
>>>         com.cloud.storage.**____**StorageManagerImpl.createPool(**____**
>>>                          StorageManagerImpl.java:1450)
>>>                                     at
>>>
>>>         com.cloud.storage.**____**StorageManagerImpl.createPool(**____**
>>>                          StorageManagerImpl.java:215)
>>>                                     at
>>>
>>>         com.cloud.api.commands.**____**CreateStoragePoolCmd.execute(***
>>> ____*
>>>                          CreateStoragePoolCmd.java:120)
>>>                                     at
>>>
>>>         com.cloud.api.ApiDispatcher.****____dispatch(ApiDispatcher.**
>>> java:__**
>>>                          138)
>>>                                     at
>>>
>>>         com.cloud.api.ApiServer.**____**queueCommand(ApiServer.java:****
>>> ____543)
>>>                                     at
>>>
>>>         com.cloud.api.ApiServer.**____**handleRequest(ApiServer.java:***
>>> ____*422)
>>>                                     at
>>>
>>>         com.cloud.api.ApiServlet.**___**_processRequest(ApiServlet.**
>>>                          java:304)
>>>                                     at
>>>
>>>         com.cloud.api.ApiServlet.**___**_doGet(ApiServlet.java:63)
>>>                                     at
>>>         javax.servlet.http.**____**HttpServlet.service(**
>>>                          HttpServlet.java:617)
>>>                                     at
>>>         javax.servlet.http.**____**HttpServlet.service(**
>>>                          HttpServlet.java:717)
>>>                                     at
>>>
>>>         org.apache.catalina.core.**___**_ApplicationFilterChain.**____**
>>> internalDoFilter(**
>>>                          ApplicationFilterChain.java:****____290)
>>>                                     at
>>>
>>>         org.apache.catalina.core.**___**_ApplicationFilterChain.**____**
>>> doFilter(**
>>>                          ApplicationFilterChain.java:****____206)
>>>                                     at
>>>
>>>         org.apache.catalina.core.**___**_StandardWrapperValve.invoke(***
>>> *
>>>                          StandardWrapperValve.java:233)
>>>                                     at
>>>
>>>         org.apache.catalina.core.**___**_StandardContextValve.invoke(***
>>> *
>>>                          StandardContextValve.java:191)
>>>                                     at
>>>
>>>         org.apache.catalina.core.**___**_StandardHostValve.invoke(**
>>>                          StandardHostValve.java:127)
>>>                                     at
>>>
>>>         org.apache.catalina.valves.**_**___ErrorReportValve.invoke(**
>>>                          ErrorReportValve.java:102)
>>>                                     at
>>>
>>>         org.apache.catalina.valves.**_**___AccessLogValve.invoke(**
>>>                          AccessLogValve.java:555)
>>>                                     at
>>>
>>>         org.apache.catalina.core.**___**_StandardEngineValve.invoke(**
>>>                          StandardEngineValve.java:109)
>>>                                     at
>>>
>>>         org.apache.catalina.connector.**____**CoyoteAdapter.service(**
>>>                          CoyoteAdapter.java:298)
>>>                                     at
>>>
>>>         org.apache.coyote.http11.**___**_Http11NioProcessor.process(**
>>>                          Http11NioProcessor.java:889)
>>>                                     at
>>>
>>>         org.apache.coyote.http11.**___**_Http11NioProtocol$**____**
>>> Http11ConnectionHandler.**
>>>                          process(Http11NioProtocol.**__**__java:721)
>>>                                     at
>>>
>>>         org.apache.tomcat.util.net.**_**___NioEndpoint$**
>>> SocketProcessor.*__*
>>>                          run(NioEndpoint.java:2260)
>>>                                     at
>>>
>>>         java.util.concurrent.**____**ThreadPoolExecutor.runWorker(***
>>> ____*
>>>                          ThreadPoolExecutor.java:1110)
>>>                                     at
>>>
>>>         java.util.concurrent.**____**ThreadPoolExecutor$Worker.run(**
>>> ____**
>>>                          ThreadPoolExecutor.java:603)
>>>                                     at
>>>         java.lang.Thread.run(Thread.****____java:679)
>>>
>>>
>>>                          2013-04-16 16:27:14,452 WARN
>>>                            [cloud.storage.**____**StorageManagerImpl]
>>>
>>>
>>>
>>>                          (catalina-exec-9:null) No host can access
>>>         storage pool
>>>                          Pool[218|RBD] on
>>>                          cluster 5
>>>                          2013-04-16 16:27:14,504 WARN
>>>           [cloud.api.ApiDispatcher]
>>>                          (catalina-exec-9:null) class
>>>                          com.cloud.api.**____**ServerApiException :
>>> Failed
>>>
>>>
>>>                          to
>>>                          add storage pool
>>>                          2013-04-16 16:27:15,293 DEBUG
>>>                          [agent.manager.**____**AgentManagerImpl]
>>>
>>>
>>>
>>>                          (AgentManager-Handler-12:null) Ping from 18
>>>                          ^C
>>>                          [root@RDR02S02 management]#
>>>
>>>
>>>
>>>
>>>
>>>
>>>



-- 
Guangjian
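
The NullPointerException quoted above is raised in LibvirtStorageAdaptor.createStoragePool() while the agent handles ModifyStoragePoolCommand, that is, while it asks libvirt to define and start the RBD pool. One way to surface the underlying libvirt error instead of the NPE is to issue a comparable define call directly through the libvirt Java bindings. The sketch below is illustrative only (it is not the XML or code CloudStack itself generates); it assumes the org.libvirt bindings are available and reuses the monitor address, port and pool path from the CreateStoragePoolCommand JSON above, with no cephx authentication (userInfo ":"):

    import org.libvirt.Connect;
    import org.libvirt.LibvirtException;
    import org.libvirt.StoragePool;

    public class RbdPoolProbe {
        public static void main(String[] args) {
            // Pool definition comparable to what the agent attempts:
            // Ceph monitor 10.0.0.41:6789, Ceph pool "cloudstack", no cephx auth.
            String xml =
                "<pool type='rbd'>" +
                "  <name>5924a2df-d658-3119-8aba-f90307683206</name>" +
                "  <source>" +
                "    <host name='10.0.0.41' port='6789'/>" +
                "    <name>cloudstack</name>" +
                "  </source>" +
                "</pool>";
            try {
                Connect conn = new Connect("qemu:///system");
                StoragePool pool = conn.storagePoolDefineXML(xml, 0);
                pool.create(0); // start the pool
                System.out.println("RBD pool defined and started: " + pool.getName());
            } catch (LibvirtException e) {
                // A libvirt built without RBD support fails here with an error
                // along the lines of "missing backend for pool type 8".
                System.err.println("libvirt error: " + e.getMessage());
            }
        }
    }

If this fails with the "missing backend" error, the agent-side NPE is just a symptom of the cause Wido identifies later in the thread: libvirt without RBD storage pool support.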

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Wido den Hollander <wi...@widodh.nl>.

On 04/22/2013 03:57 AM, Guangjian Liu wrote:
> Wido,
>
> Please see information below
>
> How big is the Ceph cluster? (ceph -s).
> [root@RDR02S01 ~]# ceph -s
>     health HEALTH_OK
>     monmap e1: 1 mons at {a=10.0.0.41:6789/0}, election epoch 1, quorum 0 a
>     osdmap e35: 2 osds: 2 up, 2 in
>      pgmap v17450: 714 pgs: 714 active+clean; 2522 MB data, 10213 MB used,
> 126 GB / 136 GB avail
>     mdsmap e557: 1/1/1 up {0=a=up:active}
>
> And what does libvirt say?
>
> 1. Log on to a hypervisor
> 2. virsh pool-list
> root@ubuntu:~# virsh pool-list
> Name                 State      Autostart
> -----------------------------------------
> 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c active     no
> ac2949d9-16fb-4c54-a551-1cdbeb501f05 active     no
> d9474b9d-afa1-3737-a13b-df333dae295f active     no
> f81769c1-a31b-4202-a3aa-bdf616eef019 active     no
>
> 3. virsh pool-info <uuid of ceph pool>
> root@ubuntu:~# virsh pool-info 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
> Name:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
> UUID:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
> State:          running
> Persistent:     yes
> Autostart:      no
> Capacity:       136.60 GiB
> *Allocation:     30723.02 TiB*
> Available:      126.63 GiB
>

That is really odd. Could it be that you have created a lot of RBD
images in that pool? Even so, 30723 TB is a very big number.

I have no clue what that could be right now.

Wido
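
The Capacity/Allocation/Available figures that virsh prints here are byte counts coming from libvirt's storage pool info call, and the KVM agent presumably reads the same values when it reports pool statistics back to the management server, which would explain how a bogus allocation in libvirt ends up as the used-bytes figure CloudStack stores. A minimal check with the libvirt Java bindings (a sketch, assuming the org.libvirt bindings are on the classpath; the UUID is the one from the virsh output above):

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;
    import org.libvirt.StoragePoolInfo;

    public class PoolInfoCheck {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            StoragePool pool =
                conn.storagePoolLookupByUUIDString("18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c");
            StoragePoolInfo info = pool.getInfo();
            // All three values are raw byte counts reported by libvirt.
            System.out.println("capacity   = " + info.capacity);
            System.out.println("allocation = " + info.allocation);
            System.out.println("available  = " + info.available);
        }
    }

An allocation value around 3.4e16 bytes here would correspond to the 30723.02 TiB that virsh shows.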


Re: Create rbd primary storage fail in CS 4.0.1

Posted by Guangjian Liu <gu...@gmail.com>.
Wido,

Please see information below

How big is the Ceph cluster? (ceph -s).
[root@RDR02S01 ~]# ceph -s
   health HEALTH_OK
   monmap e1: 1 mons at {a=10.0.0.41:6789/0}, election epoch 1, quorum 0 a
   osdmap e35: 2 osds: 2 up, 2 in
    pgmap v17450: 714 pgs: 714 active+clean; 2522 MB data, 10213 MB used,
126 GB / 136 GB avail
   mdsmap e557: 1/1/1 up {0=a=up:active}

And what does libvirt say?

1. Log on to a hypervisor
2. virsh pool-list
root@ubuntu:~# virsh pool-list
Name                 State      Autostart
-----------------------------------------
18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c active     no
ac2949d9-16fb-4c54-a551-1cdbeb501f05 active     no
d9474b9d-afa1-3737-a13b-df333dae295f active     no
f81769c1-a31b-4202-a3aa-bdf616eef019 active     no

3. virsh pool-info <uuid of ceph pool>
root@ubuntu:~# virsh pool-info 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
Name:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
UUID:           18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
State:          running
Persistent:     yes
Autostart:      no
Capacity:       136.60 GiB
*Allocation:     30723.02 TiB*
Available:      126.63 GiB
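
The Allocation line is the suspicious one: 30723.02 TiB on a 136 GiB pool. It is the same quantity, just in different units, as the 33780317214998658-byte figure that shows up in the management-server log and the storage_pool table quoted further down; a quick conversion, shown only to tie the two numbers together:

    public class TibCheck {
        public static void main(String[] args) {
            long bytes = 33780317214998658L;        // value from the CloudStack log/DB below
            double tib = bytes / Math.pow(2, 40);   // 1 TiB = 2^40 bytes
            System.out.printf("%.2f TiB%n", tib);   // prints 30723.02 TiB
        }
    }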


On Fri, Apr 19, 2013 at 4:43 PM, Wido den Hollander <wi...@widodh.nl> wrote:

> Hi,
>
>
> On 04/19/2013 08:48 AM, Guangjian Liu wrote:
>
>> The Ceph file system can now be added as primary storage via RBD, thanks.
>>
>>
> Great!
>
>
>  Now I meet the problem in creating VM to Ceph.
>>
>> First I created a compute offering and a storage offering with tag "ceph", and
>> defined the Ceph primary storage with tag "ceph".
>> Then I created a VM using the "ceph" compute offering, and it throws an exception in the log.
>> It seems the reported storage usage is out of range.
>>
>>
> That is indeed odd. How big is the Ceph cluster? (ceph -s).
>
> And what does libvirt say?
>
> 1. Log on to a hypervisor
> 2. virsh pool-list
> 3. virsh pool-info <uuid of ceph pool>
>
> I'm curious about what libvirt reports as pool size.
>
> Wido
>
>  Create VM fail
>> 2013-04-18 10:23:27,559 DEBUG [storage.allocator.AbstractStoragePoolAllocator]
>> (Job-Executor-1:job-24) Is storage pool shared? true
>> 2013-04-18 10:23:27,561 DEBUG [cloud.storage.StorageManagerImpl]
>> (Job-Executor-1:job-24) Checking pool 209 for storage, totalSize:
>> 146673582080, *usedBytes: 33780317214998658*, usedPct:
>> 230309.48542985672, disable threshold: 0.85
>> 2013-04-18 10:23:27,561 DEBUG [cloud.storage.StorageManagerImpl]
>> (Job-Executor-1:job-24) Insufficient space on pool: 209 since its usage
>> percentage: 230309.48542985672 has crossed the
>> pool.storage.capacity.disablethreshold: 0.85
>> 2013-04-18 10:23:27,561 DEBUG [storage.allocator.FirstFitStoragePoolAllocator]
>> (Job-Executor-1:job-24) FirstFitStoragePoolAllocator returning 0 suitable storage pools
>> 2013-04-18 10:23:27,561 DEBUG [cloud.deploy.FirstFitPlanner]
>> (Job-Executor-1:job-24) No suitable pools found for volume:
>> Vol[8|vm=8|ROOT] under cluster: 1
>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
>> (Job-Executor-1:job-24) No suitable pools found
>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
>> (Job-Executor-1:job-24) No suitable storagePools found under this Cluster: 1
>> 2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
>> (Job-Executor-1:job-24) Could not find suitable Deployment Destination
>> for this VM under any clusters, returning.
>> 2013-04-18 10:23:27,640 DEBUG [cloud.capacity.CapacityManagerImpl]
>> (Job-Executor-1:job-24) VM state transitted from :Starting to Stopped
>> with event: OperationFailedvm's original host id: null new host id: null
>> host id before state transition: null
>> 2013-04-18 10:23:27,645 DEBUG [cloud.vm.UserVmManagerImpl]
>> (Job-Executor-1:job-24) Destroying vm VM[User|ceph-1] as it failed to create
>> management-server.7z 2013-04-18 10:02
>>
>> I checked the CS web GUI and the database: the storage shows 30723 TB
>> allocated, but the actual total size is 138 GB.
>>
>>
>> mysql> select * from storage_pool;
>> | id  | name    | uuid                                 | pool_type         | port | data_center_id | pod_id | cluster_id | available_bytes   | capacity_bytes | host_address | user_info | path            | created             | removed | update_time | status |
>> | 200 | primary | d9474b9d-afa1-3737-a13b-df333dae295f | NetworkFilesystem | 2049 | 1              | 1      | 1          | 10301210624       | 61927849984    | 10.0.0.42    | NULL      | /export/primary | 2013-04-17 06:25:47 | NULL    | NULL        | Up     |
>> | 209 | ceph    | 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c | RBD               | 6789 | 1              | 1      | 1          | 33780317214998658 | 146673582080   | 10.0.0.41    | :         | rbd             | 2013-04-18 02:02:00 | NULL    | NULL        | Up     |
>> 2 rows in set (0.00 sec)
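
The allocator's refusal in the "Create VM fail" log above follows directly from these rows: CloudStack divides the used bytes by capacity_bytes and skips the pool once the ratio crosses pool.storage.capacity.disablethreshold (0.85). With pool 209's numbers the ratio is about 230309, the same usage percentage the log prints. A small worked check (plain arithmetic, not CloudStack's actual code):

    public class ThresholdCheck {
        public static void main(String[] args) {
            long capacityBytes = 146673582080L;      // capacity_bytes for pool 209
            long usedBytes = 33780317214998658L;     // the bogus figure stored for pool 209
            double usedPct = (double) usedBytes / capacityBytes;
            double disableThreshold = 0.85;          // pool.storage.capacity.disablethreshold
            System.out.println("usedPct = " + usedPct);                          // ~230309.4854
            System.out.println("pool skipped? " + (usedPct > disableThreshold)); // true
        }
    }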
>>
>>
>>
>>
>>
>> On Thu, Apr 18, 2013 at 3:54 AM, Wido den Hollander <wido@widodh.nl
>> <ma...@widodh.nl>> wrote:
>>
>>     Hi,
>>
>>
>>     On 04/17/2013 03:01 PM, Guangjian Liu wrote:
>>
>>         I still meet the same result.
>>
>>         In ubuntu 12.04,
>>         1. I install libvirt-dev as below,
>>              apt-get install libvirt-dev
>>         2. rebuild libvirt, see detail build log in attach.
>>         root@ubuntu:~/install/libvirt-0.10.2# ./autogen.sh
>>
>>         running CONFIG_SHELL=/bin/bash /bin/bash ./configure --enable-rbd
>>         --no-create --no-recursion
>>         configure: WARNING: unrecognized options: --enable-rbd
>>
>>
>>     The correct option is "--with-storage-rbd"
>>
>>     But check the output of configure, it should tell you whether RBD
>>     was enabled or not.
>>
>>     Then verify again if you can create a RBD storage pool manually via
>>     libvirt.
>>
>>     Wido
>>
>>         .....
>>         make
>>         make install
>>
>>
>>
>>         On Wed, Apr 17, 2013 at 6:25 PM, Wido den Hollander
>>         <wido@widodh.nl <ma...@widodh.nl>
>>         <mailto:wido@widodh.nl <ma...@widodh.nl>>> wrote:
>>
>>              Hi,
>>
>>
>>              On 04/17/2013 11:37 AM, Guangjian Liu wrote:
>>
>>                  Thanks for your mail; you suggest compiling libvirt with
>>                  RBD enabled.
>>                  I already built libvirt-0.10.2.tar.gz following the document
>>                  http://ceph.com/docs/master/rbd/libvirt/ on my Server C (Ubuntu 12.04).
>>                     Shall I build libvirt-0.10.2.tar.gz with RBD enabled, i.e.
>>                  use ./configure --enable-rbd instead of autogen.sh?
>>
>>
>>              Well, you don't have to add --enable-rbd to configure nor
>>              autogen.sh, but you have to make sure the development
>>         libraries for
>>              librbd are installed.
>>
>>              On CentOS do this:
>>
>>              yum install librbd-devel
>>
>>              And retry autogen.sh for libvirt, it should tell you RBD is
>>         enabled.
>>
>>              Wido
>>
>>                  cd libvirt
>>                  ./autogen.sh
>>                  make
>>                  sudo make install
>>
>>
>>
>>                  On Wed, Apr 17, 2013 at 4:37 PM, Wido den Hollander
>>                  <wido@widodh.nl <ma...@widodh.nl>
>>         <mailto:wido@widodh.nl <ma...@widodh.nl>>> wrote:
>>
>>                      Hi,
>>
>>
>>                      On 04/17/2013 01:44 AM, Guangjian Liu wrote:
>>
>>                          Create rbd primary storage fail in CS 4.0.1
>>                          Anybody can help about it!
>>
>>                          Environment:
>>                          1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
>>                          2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
>>                          3. Server C: KVM/Qemu OS: Ubuntu 12.04
>>                                 compile libvirt and Qemu as document
>>                          root@ubuntu:/usr/local/lib# virsh version
>>                          Compiled against library: libvirt 0.10.2
>>                          Using library: libvirt 0.10.2
>>                          Using API: QEMU 0.10.2
>>                          Running hypervisor: QEMU 1.0.0
>>
>>
>>                      Are you sure both libvirt and Qemu are compiled
>>         with RBD
>>                      enabled?
>>
>>                      On your CentOS system you should make sure
>>         librbd-dev is
>>                      installed during
>>                      compilation of libvirt and Qemu.
>>
>>                      The most important part is the RBD storage pool
>>         support in
>>                      libvirt, that
>>                      should be enabled.
>>
>>                      In the e-mail you send me directly I saw this:
>>
>>                      root@ubuntu:~/scripts# virsh pool-define
>>         rbd-pool.xml error:
>>                      Failed to
>>                      define pool from rbd-pool.xml error: internal error
>>         missing
>>                      backend for
>>                      pool type 8
>>
>>                      That suggest RBD storage pool support is not
>>         enabled in libvirt.
>>
>>                      Wido
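
"Pool type 8" corresponds to libvirt's RBD storage pool type, so the "missing backend" error means this libvirt build simply has no driver for it. Once a build with RBD support is in place, a defined pool's backend is visible in its XML. A small diagnostic sketch with the libvirt Java bindings, roughly equivalent to running virsh pool-dumpxml for every active pool (assumes the org.libvirt bindings on the classpath):

    import org.libvirt.Connect;
    import org.libvirt.StoragePool;

    public class PoolTypeDump {
        public static void main(String[] args) throws Exception {
            Connect conn = new Connect("qemu:///system");
            // Active pools only; listDefinedStoragePools() covers inactive ones.
            for (String name : conn.listStoragePools()) {
                StoragePool pool = conn.storagePoolLookupByName(name);
                String xml = pool.getXMLDesc(0);
                // The opening element names the backend, e.g. <pool type='rbd'>.
                System.out.println(name + ": " + xml.substring(0, xml.indexOf('>') + 1));
            }
        }
    }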
>>
>>
>>                         Problem:
>>
>>                          create primary storage fail with rbd device.
>>
>>                          Fail log:
>> 2013-04-16 16:27:14,224 DEBUG [cloud.storage.StorageManagerImpl]
>> (catalina-exec-9:null) createPool Params @ scheme - rbd storageHost - 10.0.0.41 hostPath - /cloudstack port - -1
>> 2013-04-16 16:27:14,270 DEBUG [cloud.storage.StorageManagerImpl]
>> (catalina-exec-9:null) In createPool Setting poolId - 218 uuid - 5924a2df-d658-3119-8aba-f90307683206 zoneId - 4 podId - 4 poolName - ceph
>> 2013-04-16 16:27:14,318 DEBUG [cloud.storage.StorageManagerImpl]
>> (catalina-exec-9:null) creating pool ceph on  host 18
>> 2013-04-16 16:27:14,320 DEBUG [agent.transport.Request]
>> (catalina-exec-9:null) Seq 18-1625162275: Sending  { Cmd , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 100011,
>> [{"CreateStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}] }
>> 2013-04-16 16:27:14,323 DEBUG [agent.transport.Request]
>> (AgentManager-Handler-2:null) Seq 18-1625162275: Processing:  { Ans: , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10,
>> [{"Answer":{"result":true,"details":"success","wait":0}}] }
>> 2013-04-16 16:27:14,323 DEBUG [agent.transport.Request]
>> (catalina-exec-9:null) Seq 18-1625162275: Received:  { Ans: , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
>> 2013-04-16 16:27:14,323 DEBUG [agent.manager.AgentManagerImpl]
>> (catalina-exec-9:null) Details from executing class com.cloud.agent.api.CreateStoragePoolCommand: success
>> 2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl]
>> (catalina-exec-9:null) In createPool Adding the pool to each of the hosts
>> 2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl]
>> (catalina-exec-9:null) Adding pool ceph to  host 18
>> 2013-04-16 16:27:14,326 DEBUG [agent.transport.Request]
>> (catalina-exec-9:null) Seq 18-1625162276: Sending  { Cmd , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 100011,
>> [{"ModifyStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}] }
>> 2013-04-16 16:27:14,411 DEBUG [agent.transport.Request]
>> (AgentManager-Handler-6:null) Seq 18-1625162276: Processing:  { Ans: , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10,
>> [{"Answer":{"result":false,"details":"java.lang.NullPointerException\n\tat com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)\n\tat com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)\n\tat com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)\n\tat com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)\n\tat com.cloud.agent.Agent.processRequest(Agent.java:518)\n\tat com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)\n\tat com.cloud.utils.nio.Task.run(Task.java:83)\n\tat java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)\n\tat java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat java.lang.Thread.run(Thread.java:679)\n","wait":0}}] }
>> 2013-04-16 16:27:14,412 DEBUG [agent.transport.Request]
>> (catalina-exec-9:null) Seq 18-1625162276: Received:  { Ans: , MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
>> 2013-04-16 16:27:14,412 DEBUG [agent.manager.AgentManagerImpl]
>> (catalina-exec-9:null) Details from executing class com.cloud.agent.api.ModifyStoragePoolCommand: java.lang.NullPointerException
>>         at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
>>         at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
>>         at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
>>         at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
>>         at com.cloud.agent.Agent.processRequest(Agent.java:518)
>>         at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
>>         at com.cloud.utils.nio.Task.run(Task.java:83)
>>         at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>>         at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>>         at java.lang.Thread.run(Thread.java:679)
>> 2013-04-16 16:27:14,451 WARN  [cloud.storage.StorageManagerImpl]
>> (catalina-exec-9:null) Unable to establish a connection between Host[-18-Routing] and Pool[218|RBD]
>> com.cloud.exception.StorageUnavailableException: Resource [StoragePool:218]
>> is unreachable: Unable establish connection from storage head to storage
>> pool 218 due to java.lang.NullPointerException
>>                                     at
>>
>>         com.cloud.hypervisor.kvm.**___**_storage.**
>> LibvirtStorageAdaptor.____**
>>
>>         createStoragePool(**____**LibvirtStorageAdaptor.java:**_**___462)
>>                                     at
>>
>>         com.cloud.hypervisor.kvm.**___**_storage.**
>> KVMStoragePoolManager.____**
>>
>>         createStoragePool(**____**KVMStoragePoolManager.java:57)
>>                                     at
>>
>>         com.cloud.hypervisor.kvm.**___**_resource.**____**
>> LibvirtComputingResource.**___**_execute(
>>                          **LibvirtComputingResource.___**_java:**2087)
>>                                     at
>>
>>         com.cloud.hypervisor.kvm.**___**_resource.**____**
>> LibvirtComputingResource.**
>>
>>         executeRequest(**____**LibvirtComputingResource.java:**
>> ____**1053)
>>                                     at
>>
>>         com.cloud.agent.Agent.**____**processRequest(Agent.java:518)
>>                                     at
>>
>>         com.cloud.agent.Agent$**____**AgentRequestHandler.doTask(**
>>                          Agent.java:831)
>>                                     at
>>         com.cloud.utils.nio.Task.run(***____*Task.java:83)
>>                                     at
>>
>>         java.util.concurrent.**____**ThreadPoolExecutor.runWorker(***
>> ____*
>>                          ThreadPoolExecutor.java:1146)
>>                                     at
>>
>>         java.util.concurrent.**____**ThreadPoolExecutor$Worker.run(**
>> ____**
>>                          ThreadPoolExecutor.java:615)
>>                                     at
>>         java.lang.Thread.run(Thread.****____java:679)
>>
>>                                     at
>>
>>         com.cloud.storage.**____**StorageManagerImpl.**____**
>> connectHostToSharedPool(**
>>                          StorageManagerImpl.java:1685)
>>                                     at
>>
>>         com.cloud.storage.**____**StorageManagerImpl.createPool(**____**
>>                          StorageManagerImpl.java:1450)
>>                                     at
>>
>>         com.cloud.storage.**____**StorageManagerImpl.createPool(**____**
>>                          StorageManagerImpl.java:215)
>>                                     at
>>
>>         com.cloud.api.commands.**____**CreateStoragePoolCmd.execute(***
>> ____*
>>                          CreateStoragePoolCmd.java:120)
>>                                     at
>>
>>         com.cloud.api.ApiDispatcher.****____dispatch(ApiDispatcher.**
>> java:__**
>>                          138)
>>                                     at
>>
>>         com.cloud.api.ApiServer.**____**queueCommand(ApiServer.java:****
>> ____543)
>>                                     at
>>
>>         com.cloud.api.ApiServer.**____**handleRequest(ApiServer.java:***
>> ____*422)
>>                                     at
>>
>>         com.cloud.api.ApiServlet.**___**_processRequest(ApiServlet.**
>>                          java:304)
>>                                     at
>>
>>         com.cloud.api.ApiServlet.**___**_doGet(ApiServlet.java:63)
>>                                     at
>>         javax.servlet.http.**____**HttpServlet.service(**
>>                          HttpServlet.java:617)
>>                                     at
>>         javax.servlet.http.**____**HttpServlet.service(**
>>                          HttpServlet.java:717)
>>                                     at
>>
>>         org.apache.catalina.core.**___**_ApplicationFilterChain.**____**
>> internalDoFilter(**
>>                          ApplicationFilterChain.java:****____290)
>>                                     at
>>
>>         org.apache.catalina.core.**___**_ApplicationFilterChain.**____**
>> doFilter(**
>>                          ApplicationFilterChain.java:****____206)
>>                                     at
>>
>>         org.apache.catalina.core.**___**_StandardWrapperValve.invoke(****
>>                          StandardWrapperValve.java:233)
>>                                     at
>>
>>         org.apache.catalina.core.**___**_StandardContextValve.invoke(****
>>                          StandardContextValve.java:191)
>>                                     at
>>
>>         org.apache.catalina.core.**___**_StandardHostValve.invoke(**
>>                          StandardHostValve.java:127)
>>                                     at
>>
>>         org.apache.catalina.valves.**_**___ErrorReportValve.invoke(**
>>                          ErrorReportValve.java:102)
>>                                     at
>>
>>         org.apache.catalina.valves.**_**___AccessLogValve.invoke(**
>>                          AccessLogValve.java:555)
>>                                     at
>>
>>         org.apache.catalina.core.**___**_StandardEngineValve.invoke(**
>>                          StandardEngineValve.java:109)
>>                                     at
>>
>>         org.apache.catalina.connector.**____**CoyoteAdapter.service(**
>>                          CoyoteAdapter.java:298)
>>                                     at
>>
>>         org.apache.coyote.http11.**___**_Http11NioProcessor.process(**
>>                          Http11NioProcessor.java:889)
>>                                     at
>>
>>         org.apache.coyote.http11.**___**_Http11NioProtocol$**____**
>> Http11ConnectionHandler.**
>>                          process(Http11NioProtocol.**__**__java:721)
>>                                     at
>>
>>         org.apache.tomcat.util.net.**_**___NioEndpoint$**
>> SocketProcessor.*__*
>>                          run(NioEndpoint.java:2260)
>>                                     at
>>
>>         java.util.concurrent.**____**ThreadPoolExecutor.runWorker(***
>> ____*
>>                          ThreadPoolExecutor.java:1110)
>>                                     at
>>
>>         java.util.concurrent.**____**ThreadPoolExecutor$Worker.run(**
>> ____**
>>                          ThreadPoolExecutor.java:603)
>>                                     at
>>         java.lang.Thread.run(Thread.****____java:679)
>>
>>
>>                          2013-04-16 16:27:14,452 WARN
>>                            [cloud.storage.**____**StorageManagerImpl]
>>
>>
>>
>>                          (catalina-exec-9:null) No host can access
>>         storage pool
>>                          Pool[218|RBD] on
>>                          cluster 5
>>                          2013-04-16 16:27:14,504 WARN
>>           [cloud.api.ApiDispatcher]
>>                          (catalina-exec-9:null) class
>>                          com.cloud.api.**____**ServerApiException :
>> Failed
>>
>>
>>                          to
>>                          add storage pool
>>                          2013-04-16 16:27:15,293 DEBUG
>>                          [agent.manager.**____**AgentManagerImpl]
>>
>>
>>
>>                          (AgentManager-Handler-12:null) Ping from 18
>>                          ^C
>>                          [root@RDR02S02 management]#
>>
>>
>>
>>
>>
>>
>>
>>         --
>>         Guangjian
>>
>>
>>
>>
>> --
>> Guangjian
>>
>


-- 
Guangjian

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Wido den Hollander <wi...@widodh.nl>.
Hi,

On 04/19/2013 08:48 AM, Guangjian Liu wrote:
> Ceph can now be added as primary storage via RBD, thanks.
>

Great!

> Now I am hitting a problem when creating a VM on Ceph.
>
> First I created a compute offering and a disk offering with the tag "ceph",
> and defined the Ceph primary storage with the tag "ceph".
> Then I created a VM using the "ceph" compute offering, and it throws an
> exception in the log. It seems the reported storage usage is far beyond the
> pool's actual size.
>

That is indeed odd. How big is the Ceph cluster? (ceph -s).

And what does libvirt say?

1. Log on to a hypervisor
2. virsh pool-list
3. virsh pool-info <uuid of ceph pool>

I'm curious about what libvirt reports as pool size.
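
A minimal sketch of that check, assuming the UUID that CloudStack recorded for
the Ceph pool (18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c in the storage_pool row
later in this thread); virsh pool-info also accepts the pool name:

virsh pool-list --all
virsh pool-info 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c
# pool-info should print Name, UUID, State, Capacity, Allocation and Available;
# compare Capacity/Allocation here with what CloudStack stores in storage_pool.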

Wido


Re: Create rbd primary storage fail in CS 4.0.1

Posted by Guangjian Liu <gu...@gmail.com>.
Ceph can now be added as primary storage via RBD, thanks.

Now I am hitting a problem when creating a VM on Ceph.

First I created a compute offering and a disk offering with the tag "ceph",
and defined the Ceph primary storage with the tag "ceph".
Then I created a VM using the "ceph" compute offering, and it throws an
exception in the log. It seems the reported storage usage is far beyond the
pool's actual size.

Create VM fail
2013-04-18 10:23:27,559 DEBUG
[storage.allocator.AbstractStoragePoolAllocator] (Job-Executor-1:job-24) Is
storage pool shared? true
2013-04-18 10:23:27,561 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-1:job-24) Checking pool 209 for storage, totalSize:
146673582080, *usedBytes: 33780317214998658*, usedPct: 230309.48542985672,
disable threshold: 0.85
2013-04-18 10:23:27,561 DEBUG [cloud.storage.StorageManagerImpl]
(Job-Executor-1:job-24) Insufficient space on pool: 209 since its usage
percentage: 230309.48542985672 has crossed the
pool.storage.capacity.disablethreshold: 0.85
2013-04-18 10:23:27,561 DEBUG
[storage.allocator.FirstFitStoragePoolAllocator] (Job-Executor-1:job-24)
FirstFitStoragePoolAllocator returning 0 suitable storage pools
2013-04-18 10:23:27,561 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-1:job-24) No suitable pools found for volume:
Vol[8|vm=8|ROOT] under cluster: 1
2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-1:job-24) No suitable pools found
2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-1:job-24) No suitable storagePools found under this Cluster: 1
2013-04-18 10:23:27,562 DEBUG [cloud.deploy.FirstFitPlanner]
(Job-Executor-1:job-24) Could not find suitable Deployment Destination for
this VM under any clusters, returning.
2013-04-18 10:23:27,640 DEBUG [cloud.capacity.CapacityManagerImpl]
(Job-Executor-1:job-24) VM state transitted from :Starting to Stopped with
event: OperationFailedvm's original host id: null new host id: null host id
before state transition: null
2013-04-18 10:23:27,645 DEBUG [cloud.vm.UserVmManagerImpl]
(Job-Executor-1:job-24) Destroying vm VM[User|ceph-1] as it failed to create
(attachment: management-server.7z, 2013-04-18 10:02)

I checked the CS web GUI and the database: the storage shows 30723 TB
allocated, but the actual total size is only 138 GB.


mysql> select * from storage_pool;
+-----+---------+--------------------------------------+-------------------+------+----------------+--------+------------+-------------------+----------------+--------------+-----------+-----------------+---------------------+---------+-------------+--------+
| id  | name    | uuid                                 | pool_type         | port | data_center_id | pod_id | cluster_id | available_bytes   | capacity_bytes | host_address | user_info | path            | created             | removed | update_time | status |
+-----+---------+--------------------------------------+-------------------+------+----------------+--------+------------+-------------------+----------------+--------------+-----------+-----------------+---------------------+---------+-------------+--------+
| 200 | primary | d9474b9d-afa1-3737-a13b-df333dae295f | NetworkFilesystem | 2049 |              1 |      1 |          1 |       10301210624 |    61927849984 | 10.0.0.42    | NULL      | /export/primary | 2013-04-17 06:25:47 | NULL    | NULL        | Up     |
| 209 | ceph    | 18559bbf-cd1a-3fcc-a4d2-e95cd4f2d78c | RBD               | 6789 |              1 |      1 |          1 | 33780317214998658 |   146673582080 | 10.0.0.41    | :         | rbd             | 2013-04-18 02:02:00 | NULL    | NULL        | Up     |
+-----+---------+--------------------------------------+-------------------+------+----------------+--------+------------+-------------------+----------------+--------------+-----------+-----------------+---------------------+---------+-------------+--------+
2 rows in set (0.00 sec)
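
A quick sanity check (a sketch only; the numbers are copied verbatim from the
log and the storage_pool row above) shows that both the 30723 TB figure in the
GUI and the usedPct in the allocator log come straight from that implausible
available_bytes value:

# usedPct logged by the allocator = usedBytes / totalSize
awk 'BEGIN { printf "%.8f\n", 33780317214998658 / 146673582080 }'   # ~230309.48542986
# the same value in TiB (2^40 bytes), which appears to be what the GUI reports
awk 'BEGIN { printf "%.0f\n", 33780317214998658 / 1024^4 }'         # ~30723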







-- 
Guangjian

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Wido den Hollander <wi...@widodh.nl>.
Hi,

On 04/17/2013 03:01 PM, Guangjian Liu wrote:
> I still meet the same result.
>
> In ubuntu 12.04,
> 1. I install libvirt-dev as below,
>     apt-get install libvirt-dev
> 2. rebuild libvirt, see detail build log in attach.
> root@ubuntu:~/install/libvirt-0.10.2# ./autogen.sh
> running CONFIG_SHELL=/bin/bash /bin/bash ./configure --enable-rbd
> --no-create --no-recursion
> configure: WARNING: unrecognized options: --enable-rbd

The correct option is "--with-storage-rbd"

But check the output of configure, it should tell you whether RBD was 
enabled or not.

Then verify again whether you can create an RBD storage pool manually via libvirt.
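
For reference, re-running the build with that flag on the Ubuntu 12.04 KVM host could look roughly like this (a sketch only; the exact wording of the configure summary is an assumption, so check your own output):

   cd ~/install/libvirt-0.10.2
   ./autogen.sh --with-storage-rbd        # autogen.sh hands the flag through to configure
   grep -i rbd config.log | head          # librbd/librados should be detected, not reported missing
   make && sudo make install
   # then restart the libvirtd you actually run (the /usr/local build, or service libvirt-bin)

If the configure output still reports the RBD storage backend as disabled, the Ceph development headers were not found (see the librbd notes further down the thread).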

Wido

> .....
> make
> make install
>
>
>
> On Wed, Apr 17, 2013 at 6:25 PM, Wido den Hollander <wido@widodh.nl
> <ma...@widodh.nl>> wrote:
>
>     Hi,
>
>
>     On 04/17/2013 11:37 AM, Guangjian Liu wrote:
>
>         Thanks for your mail, you suggest compile libvirt with RBD enable.
>         I already build libvirt-0.10.2.tar.gz as document
>         http://ceph.com/docs/master/rbd/libvirt/ in my SERVER C(Ubuntu
>         12.04),
>            Shall I build libvirt-0.10.2.tar.gz with RBD enable?  use
>         ./configure
>         --enable-rbd instead autogen.sh?
>
>
>     Well, you don't have to add --enable-rbd to configure nor
>     autogen.sh, but you have to make sure the development libraries for
>     librbd are installed.
>
>     On CentOS do this:
>
>     yum install librbd-devel
>
>     And retry autogen.sh for libvirt, it should tell you RBD is enabled.
>
>     Wido
>
>         cd libvirt
>         ./autogen.sh
>         make
>         sudo make install
>
>
>
>         On Wed, Apr 17, 2013 at 4:37 PM, Wido den Hollander
>         <wido@widodh.nl <ma...@widodh.nl>> wrote:
>
>             Hi,
>
>
>             On 04/17/2013 01:44 AM, Guangjian Liu wrote:
>
>                 Create rbd primary storage fail in CS 4.0.1
>                 Anybody can help about it!
>
>                 Environment:
>                 1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
>                 2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
>                 3. Server C: KVM/Qemu OS: Ubuntu 12.04
>                        compile libvirt and Qemu as document
>                 root@ubuntu:/usr/local/lib# virsh version
>                 Compiled against library: libvirt 0.10.2
>                 Using library: libvirt 0.10.2
>                 Using API: QEMU 0.10.2
>                 Running hypervisor: QEMU 1.0.0
>
>
>             Are you sure both libvirt and Qemu are compiled with RBD
>             enabled?
>
>             On your CentOS system you should make sure librbd-dev is
>             installed during
>             compilation of libvirt and Qemu.
>
>             The most important part is the RBD storage pool support in
>             libvirt, that
>             should be enabled.
>
>             In the e-mail you send me directly I saw this:
>
>             root@ubuntu:~/scripts# virsh pool-define rbd-pool.xml error:
>             Failed to
>             define pool from rbd-pool.xml error: internal error missing
>             backend for
>             pool type 8
>
>             That suggest RBD storage pool support is not enabled in libvirt.
>
>             Wido
>
>
>                Problem:
>
>             create primary storage fail with rbd device.
>
>             Fail log:
>             2013-04-16 16:27:14,224 DEBUG [cloud.storage.StorageManagerImpl]
>             (catalina-exec-9:null) createPool Params @ scheme - rbd storageHost -
>             10.0.0.41 hostPath - /cloudstack port - -1
>             2013-04-16 16:27:14,270 DEBUG [cloud.storage.StorageManagerImpl]
>             (catalina-exec-9:null) In createPool Setting poolId - 218 uuid -
>             5924a2df-d658-3119-8aba-f90307683206 zoneId - 4 podId - 4 poolName - ceph
>             2013-04-16 16:27:14,318 DEBUG [cloud.storage.StorageManagerImpl]
>             (catalina-exec-9:null) creating pool ceph on  host 18
>             2013-04-16 16:27:14,320 DEBUG [agent.transport.Request]
>             (catalina-exec-9:null) Seq 18-1625162275: Sending  { Cmd , MgmtId:
>             37528005876872, via: 18, Ver: v1, Flags: 100011,
>             [{"CreateStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}]
>             }
>             2013-04-16 16:27:14,323 DEBUG [agent.transport.Request]
>             (AgentManager-Handler-2:null) Seq 18-1625162275: Processing:  { Ans: ,
>             MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10,
>             [{"Answer":{"result":true,"details":"success","wait":0}}] }
>             2013-04-16 16:27:14,323 DEBUG [agent.transport.Request]
>             (catalina-exec-9:null) Seq 18-1625162275: Received:  { Ans: , MgmtId:
>             37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
>             2013-04-16 16:27:14,323 DEBUG [agent.manager.AgentManagerImpl]
>             (catalina-exec-9:null) Details from executing class
>             com.cloud.agent.api.CreateStoragePoolCommand: success
>             2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl]
>             (catalina-exec-9:null) In createPool Adding the pool to each of the hosts
>             2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl]
>             (catalina-exec-9:null) Adding pool ceph to  host 18
>             2013-04-16 16:27:14,326 DEBUG [agent.transport.Request]
>             (catalina-exec-9:null) Seq 18-1625162276: Sending  { Cmd , MgmtId:
>             37528005876872, via: 18, Ver: v1, Flags: 100011,
>             [{"ModifyStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}]
>             }
>             2013-04-16 16:27:14,411 DEBUG [agent.transport.Request]
>             (AgentManager-Handler-6:null) Seq 18-1625162276: Processing:  { Ans: ,
>             MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10,
>             [{"Answer":{"result":false,"details":"java.lang.NullPointerException\n\tat
>             com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)\n\tat
>             com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)\n\tat
>             com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)\n\tat
>             com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)\n\tat
>             com.cloud.agent.Agent.processRequest(Agent.java:518)\n\tat
>             com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)\n\tat
>             com.cloud.utils.nio.Task.run(Task.java:83)\n\tat
>             java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)\n\tat
>             java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
>             java.lang.Thread.run(Thread.java:679)\n","wait":0}}] }
>             2013-04-16 16:27:14,412 DEBUG [agent.transport.Request]
>             (catalina-exec-9:null) Seq 18-1625162276: Received:  { Ans: , MgmtId:
>             37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
>             2013-04-16 16:27:14,412 DEBUG [agent.manager.AgentManagerImpl]
>             (catalina-exec-9:null) Details from executing class
>             com.cloud.agent.api.ModifyStoragePoolCommand: java.lang.NullPointerException
>                       at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
>                       at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
>                       at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
>                       at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
>                       at com.cloud.agent.Agent.processRequest(Agent.java:518)
>                       at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
>                       at com.cloud.utils.nio.Task.run(Task.java:83)
>                       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>                       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>                       at java.lang.Thread.run(Thread.java:679)
>
>             2013-04-16 16:27:14,451 WARN  [cloud.storage.StorageManagerImpl]
>             (catalina-exec-9:null) Unable to establish a connection between
>             Host[-18-Routing] and Pool[218|RBD]
>             com.cloud.exception.StorageUnavailableException: Resource [StoragePool:218]
>             is unreachable: Unable establish connection from storage head to storage
>             pool 218 due to java.lang.NullPointerException
>                       at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
>                       at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
>                       at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
>                       at com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
>                       at com.cloud.agent.Agent.processRequest(Agent.java:518)
>                       at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
>                       at com.cloud.utils.nio.Task.run(Task.java:83)
>                       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>                       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>                       at java.lang.Thread.run(Thread.java:679)
>
>                       at com.cloud.storage.StorageManagerImpl.connectHostToSharedPool(StorageManagerImpl.java:1685)
>                       at com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:1450)
>                       at com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:215)
>                       at com.cloud.api.commands.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:120)
>                       at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:138)
>                       at com.cloud.api.ApiServer.queueCommand(ApiServer.java:543)
>                       at com.cloud.api.ApiServer.handleRequest(ApiServer.java:422)
>                       at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>                       at com.cloud.api.ApiServlet.doGet(ApiServlet.java:63)
>                       at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>                       at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
>                       at org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
>                       at org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>                       at org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>                       at org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>                       at org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>                       at org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>                       at org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
>                       at org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>                       at org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>                       at org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
>                       at org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
>                       at org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2260)
>                       at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>                       at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>                       at java.lang.Thread.run(Thread.java:679)
>             2013-04-16 16:27:14,452 WARN  [cloud.storage.StorageManagerImpl]
>             (catalina-exec-9:null) No host can access storage pool Pool[218|RBD] on
>             cluster 5
>             2013-04-16 16:27:14,504 WARN  [cloud.api.ApiDispatcher]
>             (catalina-exec-9:null) class com.cloud.api.ServerApiException : Failed
>             to add storage pool
>             2013-04-16 16:27:15,293 DEBUG [agent.manager.AgentManagerImpl]
>             (AgentManager-Handler-12:null) Ping from 18
>             ^C
>             [root@RDR02S02 management]#
>
>
>
>
>
>
>
> --
> Guangjian

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Guangjian Liu <gu...@gmail.com>.
I still get the same result.

In Ubuntu 12.04:
1. I installed libvirt-dev as below:
   apt-get install libvirt-dev
2. I rebuilt libvirt; see the detailed build log attached.
root@ubuntu:~/install/libvirt-0.10.2# ./autogen.sh
running CONFIG_SHELL=/bin/bash /bin/bash ./configure --enable-rbd
--no-create --no-recursion
configure: WARNING: unrecognized options: --enable-rbd
.....
make
make install



On Wed, Apr 17, 2013 at 6:25 PM, Wido den Hollander <wi...@widodh.nl> wrote:

> Hi,
>
>
> On 04/17/2013 11:37 AM, Guangjian Liu wrote:
>
>> Thanks for your mail, you suggest compile libvirt with RBD enable.
>> I already build libvirt-0.10.2.tar.gz as document
>> http://ceph.com/docs/master/rbd/libvirt/ in my SERVER C(Ubuntu 12.04),
>>   Shall I build libvirt-0.10.2.tar.gz with RBD enable?  use ./configure
>> --enable-rbd instead autogen.sh?
>>
>>
> Well, you don't have to add --enable-rbd to configure nor autogen.sh, but
> you have to make sure the development libraries for librbd are installed.
>
> On CentOS do this:
>
> yum install librbd-devel
>
> And retry autogen.sh for libvirt, it should tell you RBD is enabled.
>
> Wido
>
>  cd libvirt
>> ./autogen.sh
>> make
>> sudo make install
>>
>>
>>
>> On Wed, Apr 17, 2013 at 4:37 PM, Wido den Hollander <wi...@widodh.nl>
>> wrote:
>>
>>  Hi,
>>>
>>>
>>> On 04/17/2013 01:44 AM, Guangjian Liu wrote:
>>>
>>>  Create rbd primary storage fail in CS 4.0.1
>>>> Anybody can help about it!
>>>>
>>>> Environment:
>>>> 1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
>>>> 2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
>>>> 3. Server C: KVM/Qemu OS: Ubuntu 12.04
>>>>       compile libvirt and Qemu as document
>>>> root@ubuntu:/usr/local/lib# virsh version
>>>> Compiled against library: libvirt 0.10.2
>>>> Using library: libvirt 0.10.2
>>>> Using API: QEMU 0.10.2
>>>> Running hypervisor: QEMU 1.0.0
>>>>
>>>>
>>>>  Are you sure both libvirt and Qemu are compiled with RBD enabled?
>>>
>>> On your CentOS system you should make sure librbd-dev is installed during
>>> compilation of libvirt and Qemu.
>>>
>>> The most important part is the RBD storage pool support in libvirt, that
>>> should be enabled.
>>>
>>> In the e-mail you send me directly I saw this:
>>>
>>> root@ubuntu:~/scripts# virsh pool-define rbd-pool.xml error: Failed to
>>> define pool from rbd-pool.xml error: internal error missing backend for
>>> pool type 8
>>>
>>> That suggest RBD storage pool support is not enabled in libvirt.
>>>
>>> Wido
>>>
>>>
>>>   Problem:
>>>
>>>> create primary storage fail with rbd device.
>>>>
>>>> Fail log:
>>>> [full management server log snipped; it is identical to the log quoted earlier in this thread]
>>>>
>>>>
>>>>
>>
>>


-- 
Guangjian

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Wido den Hollander <wi...@widodh.nl>.
Hi,

On 04/17/2013 11:37 AM, Guangjian Liu wrote:
> Thanks for your mail, you suggest compile libvirt with RBD enable.
> I already build libvirt-0.10.2.tar.gz as document
> http://ceph.com/docs/master/rbd/libvirt/ in my SERVER C(Ubuntu 12.04),
>   Shall I build libvirt-0.10.2.tar.gz with RBD enable?  use ./configure
> --enable-rbd instead autogen.sh?
>

Well, you don't have to add --enable-rbd to configure nor autogen.sh, 
but you have to make sure the development libraries for librbd are 
installed.

On CentOS do this:

yum install librbd-devel

And retry autogen.sh for libvirt, it should tell you RBD is enabled.
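
On the Ubuntu 12.04 KVM host in this setup, the equivalent step would presumably be the following (the package names are an assumption; note that libvirt-dev only provides libvirt's own headers, not the Ceph client headers that configure looks for):

   sudo apt-get install librbd-dev librados-dev
   cd ~/install/libvirt-0.10.2 && ./autogen.sh     # re-run and watch the output for RBD support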

Wido

> cd libvirt
> ./autogen.sh
> make
> sudo make install
>
>
>
> On Wed, Apr 17, 2013 at 4:37 PM, Wido den Hollander <wi...@widodh.nl> wrote:
>
>> Hi,
>>
>>
>> On 04/17/2013 01:44 AM, Guangjian Liu wrote:
>>
>>> Create rbd primary storage fail in CS 4.0.1
>>> Anybody can help about it!
>>>
>>> Environment:
>>> 1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
>>> 2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
>>> 3. Server C: KVM/Qemu OS: Ubuntu 12.04
>>>       compile libvirt and Qemu as document
>>> root@ubuntu:/usr/local/lib# virsh version
>>> Compiled against library: libvirt 0.10.2
>>> Using library: libvirt 0.10.2
>>> Using API: QEMU 0.10.2
>>> Running hypervisor: QEMU 1.0.0
>>>
>>>
>> Are you sure both libvirt and Qemu are compiled with RBD enabled?
>>
>> On your CentOS system you should make sure librbd-dev is installed during
>> compilation of libvirt and Qemu.
>>
>> The most important part is the RBD storage pool support in libvirt, that
>> should be enabled.
>>
>> In the e-mail you send me directly I saw this:
>>
>> root@ubuntu:~/scripts# virsh pool-define rbd-pool.xml error: Failed to
>> define pool from rbd-pool.xml error: internal error missing backend for
>> pool type 8
>>
>> That suggest RBD storage pool support is not enabled in libvirt.
>>
>> Wido
>>
>>
>>   Problem:
>>> create primary storage fail with rbd device.
>>>
>>> Fail log:
>>> [full management server log snipped; it is identical to the log quoted earlier in this thread]
>>>
>>>
>
>

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Guangjian Liu <gu...@gmail.com>.
Thanks for your mail; you suggest compiling libvirt with RBD enabled.
I already built libvirt-0.10.2.tar.gz following the document
http://ceph.com/docs/master/rbd/libvirt/ on my Server C (Ubuntu 12.04).
Shall I build libvirt-0.10.2.tar.gz with RBD enabled, i.e. use ./configure
--enable-rbd instead of autogen.sh?

cd libvirt
./autogen.sh
make
sudo make install



On Wed, Apr 17, 2013 at 4:37 PM, Wido den Hollander <wi...@widodh.nl> wrote:

> Hi,
>
>
> On 04/17/2013 01:44 AM, Guangjian Liu wrote:
>
>> Create rbd primary storage fail in CS 4.0.1
>> Anybody can help about it!
>>
>> Environment:
>> 1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
>> 2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
>> 3. Server C: KVM/Qemu OS: Ubuntu 12.04
>>      compile libvirt and Qemu as document
>> root@ubuntu:/usr/local/lib# virsh version
>> Compiled against library: libvirt 0.10.2
>> Using library: libvirt 0.10.2
>> Using API: QEMU 0.10.2
>> Running hypervisor: QEMU 1.0.0
>>
>>
> Are you sure both libvirt and Qemu are compiled with RBD enabled?
>
> On your CentOS system you should make sure librbd-dev is installed during
> compilation of libvirt and Qemu.
>
> The most important part is the RBD storage pool support in libvirt, that
> should be enabled.
>
> In the e-mail you send me directly I saw this:
>
> root@ubuntu:~/scripts# virsh pool-define rbd-pool.xml error: Failed to
> define pool from rbd-pool.xml error: internal error missing backend for
> pool type 8
>
> That suggest RBD storage pool support is not enabled in libvirt.
>
> Wido
>
>
>  Problem:
>> create primary storage fail with rbd device.
>>
>> Fail log:
>> [full management server log snipped; it is identical to the log quoted earlier in this thread]
>>
>>


-- 
Guangjian

Re: Create rbd primary storage fail in CS 4.0.1

Posted by Wido den Hollander <wi...@widodh.nl>.
Hi,

On 04/17/2013 01:44 AM, Guangjian Liu wrote:
> Create rbd primary storage fail in CS 4.0.1
> Anybody can help about it!
>
> Environment:
> 1. Server A: CS 4.0.1 OS: RHEL 6.2 x86-64
> 2. Server B: Ceph 0.56.4  OS: RHEL 6.2 x86-64
> 3. Server C: KVM/Qemu OS: Ubuntu 12.04
>      compile libvirt and Qemu as document
> root@ubuntu:/usr/local/lib# virsh version
> Compiled against library: libvirt 0.10.2
> Using library: libvirt 0.10.2
> Using API: QEMU 0.10.2
> Running hypervisor: QEMU 1.0.0
>

Are you sure both libvirt and Qemu are compiled with RBD enabled?

On your CentOS system you should make sure librbd-dev is installed 
during compilation of libvirt and Qemu.
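
Two quick ways to check the QEMU side (a sketch; the binary path and the exact help output depend on how QEMU 1.0 was built and installed):

   qemu-img --help | grep -i rbd                    # 'rbd' should appear among the supported formats
   ldd $(which qemu-system-x86_64) | grep librbd    # a QEMU built with RBD support is linked against librbd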

The most important part is the RBD storage pool support in libvirt, that 
should be enabled.

In the e-mail you sent me directly I saw this:

root@ubuntu:~/scripts# virsh pool-define rbd-pool.xml error: Failed to 
define pool from rbd-pool.xml error: internal error missing backend for 
pool type 8

That suggests RBD storage pool support is not enabled in libvirt.
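
A minimal manual test, reusing the Ceph pool name and monitor address from the failing createPool request above (a sketch; it assumes cephx authentication is disabled — with auth enabled the pool definition also needs an <auth> element and a libvirt secret):

cat > rbd-pool.xml <<EOF
<pool type='rbd'>
  <name>cloudstack</name>
  <source>
    <name>cloudstack</name>
    <host name='10.0.0.41' port='6789'/>
  </source>
</pool>
EOF
virsh pool-define rbd-pool.xml
virsh pool-start cloudstack
virsh pool-info cloudstack

If pool-define still fails with "missing backend for pool type 8", the libvirtd that is running was built without the RBD storage backend.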

Wido

> Problem:
> create primary storage fail with rbd device.
>
> Fail log:
> 2013-04-16 16:27:14,224 DEBUG [cloud.storage.StorageManagerImpl]
> (catalina-exec-9:null) createPool Params @ scheme - rbd storageHost -
> 10.0.0.41 hostPath - /cloudstack port - -1
> 2013-04-16 16:27:14,270 DEBUG [cloud.storage.StorageManagerImpl]
> (catalina-exec-9:null) In createPool Setting poolId - 218 uuid -
> 5924a2df-d658-3119-8aba-f90307683206 zoneId - 4 podId - 4 poolName - ceph
> 2013-04-16 16:27:14,318 DEBUG [cloud.storage.StorageManagerImpl]
> (catalina-exec-9:null) creating pool ceph on  host 18
> 2013-04-16 16:27:14,320 DEBUG [agent.transport.Request]
> (catalina-exec-9:null) Seq 18-1625162275: Sending  { Cmd , MgmtId:
> 37528005876872, via: 18, Ver: v1, Flags: 100011,
> [{"CreateStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}]
> }
> 2013-04-16 16:27:14,323 DEBUG [agent.transport.Request]
> (AgentManager-Handler-2:null) Seq 18-1625162275: Processing:  { Ans: ,
> MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10,
> [{"Answer":{"result":true,"details":"success","wait":0}}] }
> 2013-04-16 16:27:14,323 DEBUG [agent.transport.Request]
> (catalina-exec-9:null) Seq 18-1625162275: Received:  { Ans: , MgmtId:
> 37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
> 2013-04-16 16:27:14,323 DEBUG [agent.manager.AgentManagerImpl]
> (catalina-exec-9:null) Details from executing class
> com.cloud.agent.api.CreateStoragePoolCommand: success
> 2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl]
> (catalina-exec-9:null) In createPool Adding the pool to each of the hosts
> 2013-04-16 16:27:14,323 DEBUG [cloud.storage.StorageManagerImpl]
> (catalina-exec-9:null) Adding pool ceph to  host 18
> 2013-04-16 16:27:14,326 DEBUG [agent.transport.Request]
> (catalina-exec-9:null) Seq 18-1625162276: Sending  { Cmd , MgmtId:
> 37528005876872, via: 18, Ver: v1, Flags: 100011,
> [{"ModifyStoragePoolCommand":{"add":true,"pool":{"id":218,"uuid":"5924a2df-d658-3119-8aba-f90307683206","host":"10.0.0.41","path":"cloudstack","userInfo":":","port":6789,"type":"RBD"},"localPath":"/mnt//3cf4f0e8-781d-39d8-b81c-9896da212335","wait":0}}]
> }
> 2013-04-16 16:27:14,411 DEBUG [agent.transport.Request]
> (AgentManager-Handler-6:null) Seq 18-1625162276: Processing:  { Ans: ,
> MgmtId: 37528005876872, via: 18, Ver: v1, Flags: 10,
> [{"Answer":{"result":false,"details":"java.lang.NullPointerException\n\tat
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)\n\tat
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)\n\tat
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)\n\tat
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)\n\tat
> com.cloud.agent.Agent.processRequest(Agent.java:518)\n\tat
> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)\n\tat
> com.cloud.utils.nio.Task.run(Task.java:83)\n\tat
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)\n\tat
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)\n\tat
> java.lang.Thread.run(Thread.java:679)\n","wait":0}}] }
> 2013-04-16 16:27:14,412 DEBUG [agent.transport.Request]
> (catalina-exec-9:null) Seq 18-1625162276: Received:  { Ans: , MgmtId:
> 37528005876872, via: 18, Ver: v1, Flags: 10, { Answer } }
> 2013-04-16 16:27:14,412 DEBUG [agent.manager.AgentManagerImpl]
> (catalina-exec-9:null) Details from executing class
> com.cloud.agent.api.ModifyStoragePoolCommand: java.lang.NullPointerException
>          at
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
>          at
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
>          at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
>          at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
>          at com.cloud.agent.Agent.processRequest(Agent.java:518)
>          at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
>          at com.cloud.utils.nio.Task.run(Task.java:83)
>          at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>          at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at java.lang.Thread.run(Thread.java:679)
>
> 2013-04-16 16:27:14,451 WARN  [cloud.storage.StorageManagerImpl]
> (catalina-exec-9:null) Unable to establish a connection between
> Host[-18-Routing] and Pool[218|RBD]
> com.cloud.exception.StorageUnavailableException: Resource [StoragePool:218]
> is unreachable: Unable establish connection from storage head to storage
> pool 218 due to java.lang.NullPointerException
>          at
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:462)
>          at
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:57)
>          at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2087)
>          at
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1053)
>          at com.cloud.agent.Agent.processRequest(Agent.java:518)
>          at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:831)
>          at com.cloud.utils.nio.Task.run(Task.java:83)
>          at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>          at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>          at java.lang.Thread.run(Thread.java:679)
>
>          at
> com.cloud.storage.StorageManagerImpl.connectHostToSharedPool(StorageManagerImpl.java:1685)
>          at
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:1450)
>          at
> com.cloud.storage.StorageManagerImpl.createPool(StorageManagerImpl.java:215)
>          at
> com.cloud.api.commands.CreateStoragePoolCmd.execute(CreateStoragePoolCmd.java:120)
>          at com.cloud.api.ApiDispatcher.dispatch(ApiDispatcher.java:138)
>          at com.cloud.api.ApiServer.queueCommand(ApiServer.java:543)
>          at com.cloud.api.ApiServer.handleRequest(ApiServer.java:422)
>          at com.cloud.api.ApiServlet.processRequest(ApiServlet.java:304)
>          at com.cloud.api.ApiServlet.doGet(ApiServlet.java:63)
>          at javax.servlet.http.HttpServlet.service(HttpServlet.java:617)
>          at javax.servlet.http.HttpServlet.service(HttpServlet.java:717)
>          at
> org.apache.catalina.core.ApplicationFilterChain.internalDoFilter(ApplicationFilterChain.java:290)
>          at
> org.apache.catalina.core.ApplicationFilterChain.doFilter(ApplicationFilterChain.java:206)
>          at
> org.apache.catalina.core.StandardWrapperValve.invoke(StandardWrapperValve.java:233)
>          at
> org.apache.catalina.core.StandardContextValve.invoke(StandardContextValve.java:191)
>          at
> org.apache.catalina.core.StandardHostValve.invoke(StandardHostValve.java:127)
>          at
> org.apache.catalina.valves.ErrorReportValve.invoke(ErrorReportValve.java:102)
>          at
> org.apache.catalina.valves.AccessLogValve.invoke(AccessLogValve.java:555)
>          at
> org.apache.catalina.core.StandardEngineValve.invoke(StandardEngineValve.java:109)
>          at
> org.apache.catalina.connector.CoyoteAdapter.service(CoyoteAdapter.java:298)
>          at
> org.apache.coyote.http11.Http11NioProcessor.process(Http11NioProcessor.java:889)
>          at
> org.apache.coyote.http11.Http11NioProtocol$Http11ConnectionHandler.process(Http11NioProtocol.java:721)
>          at
> org.apache.tomcat.util.net.NioEndpoint$SocketProcessor.run(NioEndpoint.java:2260)
>          at
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1110)
>          at
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:603)
>          at java.lang.Thread.run(Thread.java:679)
> 2013-04-16 16:27:14,452 WARN  [cloud.storage.StorageManagerImpl]
> (catalina-exec-9:null) No host can access storage pool Pool[218|RBD] on
> cluster 5
> 2013-04-16 16:27:14,504 WARN  [cloud.api.ApiDispatcher]
> (catalina-exec-9:null) class com.cloud.api.ServerApiException : Failed to
> add storage pool
> 2013-04-16 16:27:15,293 DEBUG [agent.manager.AgentManagerImpl]
> (AgentManager-Handler-12:null) Ping from 18
> ^C
> [root@RDR02S02 management]#
>