Posted to users@cloudstack.apache.org by raj kumar <ra...@gmail.com> on 2013/10/08 12:06:35 UTC

could not add ceph primary storage.

Hi, I'm using Ubuntu 12.04 and compiled libvirtd 1.1.0 with RBD support, but I'm
getting the error below.

Please note that libvirt-bin 0.9.8 is also installed as a cloudstack-agent dependency.

/var/log/cloudstack/agent/agent.log shows:
2013-10-08 05:11:33,280 DEBUG [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-3:null) org.libvirt.LibvirtException: internal error
missing backend for pool type 8
2013-10-08 05:11:33,282 WARN  [cloud.agent.Agent]
(agentRequest-Handler-3:null) Caught:
java.lang.NullPointerException
        at
com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:540)
        at
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:111)
        at
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:104)
        at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2304)
        at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1094)
        at com.cloud.agent.Agent.processRequest(Agent.java:525)
        at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
        at com.cloud.utils.nio.Task.run(Task.java:83)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)

Please let me know how to resolve it. Thanks.
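
Since both the packaged libvirt-bin 0.9.8 and the compiled 1.1.0 are present, it may be worth confirming which libvirtd the agent is actually talking to; for example (standard virsh/libvirtd commands, shown only as a sketch):

# show which libvirtd binary is on the PATH and its version
which libvirtd
libvirtd --version
# show the library and daemon versions virsh is actually using
virsh version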

Re: could not add ceph primary storage.

Posted by raj kumar <ra...@gmail.com>.
Upgraded Ubuntu to 13.04 and I can add it now. Thank you.


On Thu, Oct 10, 2013 at 9:44 PM, Suresh Sadhu <Su...@citrix.com> wrote:

>  On the host, run this command and provide the output:
>
> #virsh pool-define rbd-pool.xml
>
> If you get the same error message (missing backend for pool type 8), then RBD storage pool support is probably not enabled in libvirt.
>
> Also refer to this link, which may help: http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/
>
> I tried with Ubuntu 13.04, didn't face any problem, and was able to add primary storage successfully.
>
> Regards
>
> Sadhu
>
> *From:* raj kumar [mailto:rajkumar600003@gmail.com]
> *Sent:* 09 October 2013 22:50
> *To:* Suresh Sadhu
>
> *Subject:* Re: could not add ceph primary storage.
>
> Tried client.admin instead of admin, but it still fails.
>
> I'm doing exactly the same. I can access the ceph storage from the kvm host.
> For example:
>
> #ceph osd lspools
>
> 0 data,1 metadata,2 rbd,4 cloudstack,
>
> In the agent.log I see:
>
> DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-2:null)
> org.libvirt.LibvirtException: internal error missing backend for pool type
> 8.
>
> Is this what is causing the issue?
>
>
> On Wed, Oct 9, 2013 at 6:22 PM, Suresh Sadhu <Su...@citrix.com>
> wrote:
>
>
> Raj,
>
>
> 1.set the  rados.user value  as  client.admin instead of admin  and try
> adding ceph primary storage.
>
> 2. check your   kvm host is  able to access the  ceph storage or not [if
> not I will recommend to install ceph on host and copy client.admin's key in
> /etc/ceph folder and try adding the primarystorage ]
>
> Thanks
>
> Sadhu
>
> -----Original Message-----
> From: raj kumar [mailto:rajkumar600003@gmail.com]
>
> Sent: 09 October 2013 11:59
> To: users@cloudstack.apache.org
> Subject: Re: could not add ceph primary storage.
>
> Added new pool in ceph - cloudstack
>
>
>
> *Client log from firebug:*
>
>
> "NetworkError: 530  -
>
> http://192.168.210.35:8080/client/api?command=createStoragePool&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&podId=6bbd3b48-d03a-4169-8d9d-8283cddc537f&clusterid=c1c9a508-76c0-4dea-8b12-8962663316f7&name=rbd-primary&url=rbd%3A%2F%2Fadmin%3AAQDZJB9SaI8vOBAAY00Iaoz7dTHcpUHXFxS8eQ%3D%3D%40192.168.210.40%2Fcloudstack&tags=rbd&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299847389
> "
>
>
>
>  *Server apilog.log:*
>
>
> 2013-10-09 11:53:09,096 INFO  [cloud.api.ApiServer] (catalina-exec-18:null)
> (userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
> 192.168.208.119 -- GET
>
> command=listStoragePools&id=eb8dbc7b-08a0-33df-bb48-5157de0f5b61&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299809242
> 200 { "liststoragepoolsresponse" : { "count":1 ,"storagepool" : [
>
> {"id":"eb8dbc7b-08a0-33df-bb48-5157de0f5b61","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","podid":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","podname":"pod-rack1","name":"primary-nfs","ipaddress":"192.168.211.4","path":"/export/primary","created":"2013-10-03T16:50:56+0530","type":"NetworkFilesystem","clusterid":"c1c9a508-76c0-4dea-8b12-8962663316f7","clustername":"cluster-kvm1","disksizetotal":76754714624,"disksizeallocated":0,"state":"Up"}
> ] } }
>
> 2013-10-09 11:53:10,811 INFO  [cloud.api.ApiServer] (catalina-exec-11:null)
> (userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
> 192.168.208.119 -- GET
>
> command=listZones&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&listAll=true&_=1381299810896
> 200 { "listzonesresponse" : { "count":1 ,"zone" : [
>
> {"id":"9fe2780a-0bf8-466d-a712-60f456501597","name":"zone-bangalore","dns1":"192.168.210.15","internaldns1":"192.168.210.15","networktype":"Basic","securitygroupsenabled":true,"allocationstate":"Disabled","zonetoken":"5070ff41-97a9-375b-9412-970ae512110f","dhcpprovider":"VirtualRouter","localstorageenabled":false}
> ] } }
>
> 2013-10-09 11:53:10,843 INFO  [cloud.api.ApiServer] (catalina-exec-13:null)
> (userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
> 192.168.208.119 -- GET
>
> command=listPods&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299811017
> 200 { "listpodsresponse" : { "count":1 ,"pod" : [
>
> {"id":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","name":"pod-rack1","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","gateway":"192.168.210.5","netmask":"255.255.254.0","startip":"192.168.211.182","endip":"192.168.211.186","allocationstate":"Enabled"}
> ] } }
>
> 2013-10-09 11:53:10,873 INFO  [cloud.api.ApiServer] (catalina-exec-9:null)
> (userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
> 192.168.208.119 -- GET
>
> command=listClusters&podid=6bbd3b48-d03a-4169-8d9d-8283cddc537f&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299811049
> 200 { "listclustersresponse" : { "count":1 ,"cluster" : [
>
> {"id":"c1c9a508-76c0-4dea-8b12-8962663316f7","name":"cluster-kvm1","podid":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","podname":"pod-rack1","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","hypervisortype":"KVM","clustertype":"CloudManaged","allocationstate":"Enabled","managedstate":"Managed"}
> ] } }
>
> 2013-10-09 11:53:47,452 INFO  [cloud.api.ApiServer] (catalina-exec-23:null)
> (userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
> 192.168.208.119 -- GET
>
> command=createStoragePool&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&podId=6bbd3b48-d03a-4169-8d9d-8283cddc537f&clusterid=c1c9a508-76c0-4dea-8b12-8962663316f7&name=rbd-primary&url=rbd%3A%2F%2Fadmin%3AAQDZJB9SaI8vOBAAY00Iaoz7dTHcpUHXFxS8eQ%3D%3D%40192.168.210.40%2Fcloudstack&tags=rbd&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299847389
> 530 Failed to add storage pool
>
>
> On Wed, Oct 9, 2013 at 12:09 AM, Suresh Sadhu <Suresh.Sadhu@citrix.com
> >wrote:
>
> > You have to manually create the pool first in ceph and then add to
> > cloudstack.
> >
> > On server side api calls are logged in in
> > /var/log/cloudstack/management/apilog.log file  and client side you can
> > capture the api calls by  enabling the firbug on firefox browser.
> >
> > Regards
> > sadhu
> >
> >
> > folder
> > -----Original Message-----
> > From: raj kumar [mailto:rajkumar600003@gmail.com]
> > Sent: 08 October 2013 21:57
> > To: users@cloudstack.apache.org
> > Subject: Re: could not add ceph primary storage.
> >
> > Name: rbd-primary
> > protocol: RBD
> > Rad mon: 192.168.210.35  (one of the monitor ip)
> > rados pool: cloudstack   (this pool don't exists in ceph)
> > rados user: admin
> > rados secret:  <client.admin's key here> storage tags: rbd
> >
> > can you pls let me know how can i get the api call. Is it part of logs in
> > /var/log/cloudstack.
> >
> >
> >
> >
> >
> > On Tue, Oct 8, 2013 at 5:45 PM, Suresh Sadhu <Suresh.Sadhu@citrix.com
> > >wrote:
> >
> > > What are the parameters you passed while adding the primary storage.
> > >
> > > Can you copy and paste api call
> > >
> > > Regards
> > > Sadhu
> > >
> > >
> > > -----Original Message-----
> > > From: raj kumar [mailto:rajkumar600003@gmail.com]
> > > Sent: 08 October 2013 15:37
> > > To: users@cloudstack.apache.org
> > > Subject: could not add ceph primary storage.
> > >
> > > Hi, using ubuntu 12.04, compiled libvirtd 1.1.0 with rbd support.  But
> > > I'm getting below error.
> > >
> > > pls note libvirt-bin 0.9.8 also exists for cloudstack-agent dependency.
> > >
> > > /var/log/cloudstack/agent/agent.log shows:
> > > 2013-10-08 05:11:33,280 DEBUG [kvm.storage.LibvirtStorageAdaptor]
> > > (agentRequest-Handler-3:null) org.libvirt.LibvirtException: internal
> > > error missing backend for pool type 8
> > > 2013-10-08 05:11:33,282 WARN  [cloud.agent.Agent]
> > > (agentRequest-Handler-3:null) Caught:
> > > java.lang.NullPointerException
> > >         at
> > >
> > >
> >
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:540)
> > >         at
> > >
> > >
> >
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:111)
> > >         at
> > >
> > >
> >
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:104)
> > >         at
> > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2304)
> > >         at
> > >
> > >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1094)
> > >         at com.cloud.agent.Agent.processRequest(Agent.java:525)
> > >         at
> > com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
> > >         at com.cloud.utils.nio.Task.run(Task.java:83)
> > >         at
> > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> > >         at
> > >
> > >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> > >         at java.lang.Thread.run(Thread.java:679)
> > >
> > > pls let me know how to resolve it. Thanks.
> > >
> >****
>
>

RE: could not add ceph primary storage.

Posted by Suresh Sadhu <Su...@citrix.com>.
On the host, run this command and provide the output:
#virsh pool-define rbd-pool.xml


If you get the same error message (missing backend for pool type 8), then RBD storage pool support is probably not enabled in libvirt.
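
For reference, a minimal rbd-pool.xml could look like the sketch below. The pool name and monitor address are taken from this thread, and the secret UUID is a placeholder: the cephx key has to be registered with libvirt separately (virsh secret-define / virsh secret-set-value) before the pool will start.

<pool type="rbd">
  <name>cloudstack</name>
  <source>
    <!-- the RBD pool inside Ceph -->
    <name>cloudstack</name>
    <!-- one of the Ceph monitors -->
    <host name="192.168.210.40" port="6789"/>
    <!-- cephx auth; the secret must be registered with libvirt beforehand -->
    <auth username="admin" type="ceph">
      <secret uuid="00000000-0000-0000-0000-000000000000"/>
    </auth>
  </source>
</pool>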



Also refer to this link, which may help: http://blog.widodh.nl/2013/06/a-quick-note-on-running-cloudstack-with-rbd-on-ubuntu-12-04/
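
If libvirt does need to be rebuilt on 12.04, the rough sequence is something like the sketch below (the package names and configure flag are the usual ones for building libvirt 1.x from source; treat it as a sketch and cross-check with the blog post above):

# the RBD/RADOS development headers are needed for the RBD backend
apt-get install librbd-dev librados-dev
# in the libvirt source tree, enable the RBD storage backend explicitly
./configure --with-storage-rbd
make && make install
# restart the daemon so the new backend is loaded
service libvirt-bin restart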







I tried with Ubuntu 13.04, didn't face any problem, and was able to add primary storage successfully.





Regards

Sadhu










From: raj kumar [mailto:rajkumar600003@gmail.com]
Sent: 09 October 2013 22:50
To: Suresh Sadhu
Subject: Re: could not add ceph primary storage.

Tried client.admin instead of admin, but it still fails.

I'm doing exactly the same. I can access the ceph storage from the kvm host. For example:

#ceph osd lspools
0 data,1 metadata,2 rbd,4 cloudstack,

In the agent.log I see:
DEBUG [kvm.storage.LibvirtStorageAdaptor] (agentRequest-Handler-2:null) org.libvirt.LibvirtException: internal error missing backend for pool type 8.

Is this what is causing the issue?


On Wed, Oct 9, 2013 at 6:22 PM, Suresh Sadhu <Su...@citrix.com> wrote:

Raj,


1. Set the rados.user value to client.admin instead of admin and try adding the ceph primary storage.

2. Check whether your kvm host is able to access the ceph storage [if not, I recommend installing ceph on the host, copying client.admin's key into the /etc/ceph folder, and then trying to add the primary storage again].
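
A quick way to verify that access from the kvm host, assuming the ceph CLI plus /etc/ceph/ceph.conf and the keyring are in place (standard ceph client commands, nothing CloudStack-specific):

# confirm the host can reach the monitors and authenticate
ceph -s
# print the client.admin key that goes into the rados secret field
ceph auth get-key client.admin
# confirm the target pool exists
ceph osd lspools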

Thanks
Sadhu

-----Original Message-----
From: raj kumar [mailto:rajkumar600003@gmail.com]
Sent: 09 October 2013 11:59
To: users@cloudstack.apache.org
Subject: Re: could not add ceph primary storage.
Added new pool in ceph - cloudstack


*Client log from firebug:*

"NetworkError: 530  -
http://192.168.210.35:8080/client/api?command=createStoragePool&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&podId=6bbd3b48-d03a-4169-8d9d-8283cddc537f&clusterid=c1c9a508-76c0-4dea-8b12-8962663316f7&name=rbd-primary&url=rbd%3A%2F%2Fadmin%3AAQDZJB9SaI8vOBAAY00Iaoz7dTHcpUHXFxS8eQ%3D%3D%40192.168.210.40%2Fcloudstack&tags=rbd&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299847389"


 *Server apilog.log:*

2013-10-09 11:53:09,096 INFO  [cloud.api.ApiServer] (catalina-exec-18:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=listStoragePools&id=eb8dbc7b-08a0-33df-bb48-5157de0f5b61&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299809242
200 { "liststoragepoolsresponse" : { "count":1 ,"storagepool" : [
{"id":"eb8dbc7b-08a0-33df-bb48-5157de0f5b61","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","podid":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","podname":"pod-rack1","name":"primary-nfs","ipaddress":"192.168.211.4","path":"/export/primary","created":"2013-10-03T16:50:56+0530","type":"NetworkFilesystem","clusterid":"c1c9a508-76c0-4dea-8b12-8962663316f7","clustername":"cluster-kvm1","disksizetotal":76754714624,"disksizeallocated":0,"state":"Up"}
] } }

2013-10-09 11:53:10,811 INFO  [cloud.api.ApiServer] (catalina-exec-11:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=listZones&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&listAll=true&_=1381299810896
200 { "listzonesresponse" : { "count":1 ,"zone" : [
{"id":"9fe2780a-0bf8-466d-a712-60f456501597","name":"zone-bangalore","dns1":"192.168.210.15","internaldns1":"192.168.210.15","networktype":"Basic","securitygroupsenabled":true,"allocationstate":"Disabled","zonetoken":"5070ff41-97a9-375b-9412-970ae512110f","dhcpprovider":"VirtualRouter","localstorageenabled":false}
] } }

2013-10-09 11:53:10,843 INFO  [cloud.api.ApiServer] (catalina-exec-13:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=listPods&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299811017
200 { "listpodsresponse" : { "count":1 ,"pod" : [
{"id":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","name":"pod-rack1","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","gateway":"192.168.210.5","netmask":"255.255.254.0","startip":"192.168.211.182","endip":"192.168.211.186","allocationstate":"Enabled"}
] } }

2013-10-09 11:53:10,873 INFO  [cloud.api.ApiServer] (catalina-exec-9:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=listClusters&podid=6bbd3b48-d03a-4169-8d9d-8283cddc537f&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299811049
200 { "listclustersresponse" : { "count":1 ,"cluster" : [
{"id":"c1c9a508-76c0-4dea-8b12-8962663316f7","name":"cluster-kvm1","podid":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","podname":"pod-rack1","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","hypervisortype":"KVM","clustertype":"CloudManaged","allocationstate":"Enabled","managedstate":"Managed"}
] } }

2013-10-09 11:53:47,452 INFO  [cloud.api.ApiServer] (catalina-exec-23:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=createStoragePool&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&podId=6bbd3b48-d03a-4169-8d9d-8283cddc537f&clusterid=c1c9a508-76c0-4dea-8b12-8962663316f7&name=rbd-primary&url=rbd%3A%2F%2Fadmin%3AAQDZJB9SaI8vOBAAY00Iaoz7dTHcpUHXFxS8eQ%3D%3D%40192.168.210.40%2Fcloudstack&tags=rbd&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299847389
530 Failed to add storage pool


On Wed, Oct 9, 2013 at 12:09 AM, Suresh Sadhu <Su...@citrix.com> wrote:

> You have to manually create the pool first in ceph and then add to
> cloudstack.
>
> On server side api calls are logged in in
> /var/log/cloudstack/management/apilog.log file  and client side you can
> capture the api calls by  enabling the firbug on firefox browser.
>
> Regards
> sadhu
>
>
> folder
> -----Original Message-----
> From: raj kumar [mailto:rajkumar600003@gmail.com]
> Sent: 08 October 2013 21:57
> To: users@cloudstack.apache.org
> Subject: Re: could not add ceph primary storage.
>
> Name: rbd-primary
> protocol: RBD
> Rad mon: 192.168.210.35  (one of the monitor ip)
> rados pool: cloudstack   (this pool don't exists in ceph)
> rados user: admin
> rados secret:  <client.admin's key here> storage tags: rbd
>
> can you pls let me know how can i get the api call. Is it part of logs in
> /var/log/cloudstack.
>
>
>
>
>
> On Tue, Oct 8, 2013 at 5:45 PM, Suresh Sadhu <Su...@citrix.com>
> >wrote:
>
> > What are the parameters you passed while adding the primary storage.
> >
> > Can you copy and paste api call
> >
> > Regards
> > Sadhu
> >
> >
> > -----Original Message-----
> > From: raj kumar [mailto:rajkumar600003@gmail.com]
> > Sent: 08 October 2013 15:37
> > To: users@cloudstack.apache.org
> > Subject: could not add ceph primary storage.
> >
> > Hi, using ubuntu 12.04, compiled libvirtd 1.1.0 with rbd support.  But
> > I'm getting below error.
> >
> > pls note libvirt-bin 0.9.8 also exists for cloudstack-agent dependency.
> >
> > /var/log/cloudstack/agent/agent.log shows:
> > 2013-10-08 05:11:33,280 DEBUG [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-3:null) org.libvirt.LibvirtException: internal
> > error missing backend for pool type 8
> > 2013-10-08 05:11:33,282 WARN  [cloud.agent.Agent]
> > (agentRequest-Handler-3:null) Caught:
> > java.lang.NullPointerException
> >         at
> >
> >
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:540)
> >         at
> >
> >
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:111)
> >         at
> >
> >
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:104)
> >         at
> >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2304)
> >         at
> >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1094)
> >         at com.cloud.agent.Agent.processRequest(Agent.java:525)
> >         at
> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
> >         at com.cloud.utils.nio.Task.run(Task.java:83)
> >         at
> >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> >         at
> >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:679)
> >
> > pls let me know how to resolve it. Thanks.
> >
>


Re: could not add ceph primary storage.

Posted by raj kumar <ra...@gmail.com>.
Added a new pool in ceph: cloudstack.



*Client log from firebug:*

"NetworkError: 530  -
http://192.168.210.35:8080/client/api?command=createStoragePool&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&podId=6bbd3b48-d03a-4169-8d9d-8283cddc537f&clusterid=c1c9a508-76c0-4dea-8b12-8962663316f7&name=rbd-primary&url=rbd%3A%2F%2Fadmin%3AAQDZJB9SaI8vOBAAY00Iaoz7dTHcpUHXFxS8eQ%3D%3D%40192.168.210.40%2Fcloudstack&tags=rbd&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299847389"



 *Server apilog.log:*

2013-10-09 11:53:09,096 INFO  [cloud.api.ApiServer] (catalina-exec-18:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=listStoragePools&id=eb8dbc7b-08a0-33df-bb48-5157de0f5b61&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299809242
200 { "liststoragepoolsresponse" : { "count":1 ,"storagepool" : [
{"id":"eb8dbc7b-08a0-33df-bb48-5157de0f5b61","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","podid":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","podname":"pod-rack1","name":"primary-nfs","ipaddress":"192.168.211.4","path":"/export/primary","created":"2013-10-03T16:50:56+0530","type":"NetworkFilesystem","clusterid":"c1c9a508-76c0-4dea-8b12-8962663316f7","clustername":"cluster-kvm1","disksizetotal":76754714624,"disksizeallocated":0,"state":"Up"}
] } }

2013-10-09 11:53:10,811 INFO  [cloud.api.ApiServer] (catalina-exec-11:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=listZones&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&listAll=true&_=1381299810896
200 { "listzonesresponse" : { "count":1 ,"zone" : [
{"id":"9fe2780a-0bf8-466d-a712-60f456501597","name":"zone-bangalore","dns1":"192.168.210.15","internaldns1":"192.168.210.15","networktype":"Basic","securitygroupsenabled":true,"allocationstate":"Disabled","zonetoken":"5070ff41-97a9-375b-9412-970ae512110f","dhcpprovider":"VirtualRouter","localstorageenabled":false}
] } }

2013-10-09 11:53:10,843 INFO  [cloud.api.ApiServer] (catalina-exec-13:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=listPods&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299811017
200 { "listpodsresponse" : { "count":1 ,"pod" : [
{"id":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","name":"pod-rack1","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","gateway":"192.168.210.5","netmask":"255.255.254.0","startip":"192.168.211.182","endip":"192.168.211.186","allocationstate":"Enabled"}
] } }

2013-10-09 11:53:10,873 INFO  [cloud.api.ApiServer] (catalina-exec-9:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=listClusters&podid=6bbd3b48-d03a-4169-8d9d-8283cddc537f&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299811049
200 { "listclustersresponse" : { "count":1 ,"cluster" : [
{"id":"c1c9a508-76c0-4dea-8b12-8962663316f7","name":"cluster-kvm1","podid":"6bbd3b48-d03a-4169-8d9d-8283cddc537f","podname":"pod-rack1","zoneid":"9fe2780a-0bf8-466d-a712-60f456501597","zonename":"zone-bangalore","hypervisortype":"KVM","clustertype":"CloudManaged","allocationstate":"Enabled","managedstate":"Managed"}
] } }

2013-10-09 11:53:47,452 INFO  [cloud.api.ApiServer] (catalina-exec-23:null)
(userId=2 accountId=2 sessionId=5E5DE14C96E830CE8DEC70FD9060A126)
192.168.208.119 -- GET
command=createStoragePool&zoneid=9fe2780a-0bf8-466d-a712-60f456501597&podId=6bbd3b48-d03a-4169-8d9d-8283cddc537f&clusterid=c1c9a508-76c0-4dea-8b12-8962663316f7&name=rbd-primary&url=rbd%3A%2F%2Fadmin%3AAQDZJB9SaI8vOBAAY00Iaoz7dTHcpUHXFxS8eQ%3D%3D%40192.168.210.40%2Fcloudstack&tags=rbd&response=json&sessionkey=Vpza7MLGCa136227d7bBir94yAs%3D&_=1381299847389
530 Failed to add storage pool


On Wed, Oct 9, 2013 at 12:09 AM, Suresh Sadhu <Su...@citrix.com> wrote:

> You have to manually create the pool first in ceph and then add to
> cloudstack.
>
> On server side api calls are logged in in
> /var/log/cloudstack/management/apilog.log file  and client side you can
> capture the api calls by  enabling the firbug on firefox browser.
>
> Regards
> sadhu
>
>
> folder
> -----Original Message-----
> From: raj kumar [mailto:rajkumar600003@gmail.com]
> Sent: 08 October 2013 21:57
> To: users@cloudstack.apache.org
> Subject: Re: could not add ceph primary storage.
>
> Name: rbd-primary
> protocol: RBD
> Rad mon: 192.168.210.35  (one of the monitor ip)
> rados pool: cloudstack   (this pool don't exists in ceph)
> rados user: admin
> rados secret:  <client.admin's key here> storage tags: rbd
>
> can you pls let me know how can i get the api call. Is it part of logs in
> /var/log/cloudstack.
>
>
>
>
>
> On Tue, Oct 8, 2013 at 5:45 PM, Suresh Sadhu <Suresh.Sadhu@citrix.com
> >wrote:
>
> > What are the parameters you passed while adding the primary storage.
> >
> > Can you copy and paste api call
> >
> > Regards
> > Sadhu
> >
> >
> > -----Original Message-----
> > From: raj kumar [mailto:rajkumar600003@gmail.com]
> > Sent: 08 October 2013 15:37
> > To: users@cloudstack.apache.org
> > Subject: could not add ceph primary storage.
> >
> > Hi, using ubuntu 12.04, compiled libvirtd 1.1.0 with rbd support.  But
> > I'm getting below error.
> >
> > pls note libvirt-bin 0.9.8 also exists for cloudstack-agent dependency.
> >
> > /var/log/cloudstack/agent/agent.log shows:
> > 2013-10-08 05:11:33,280 DEBUG [kvm.storage.LibvirtStorageAdaptor]
> > (agentRequest-Handler-3:null) org.libvirt.LibvirtException: internal
> > error missing backend for pool type 8
> > 2013-10-08 05:11:33,282 WARN  [cloud.agent.Agent]
> > (agentRequest-Handler-3:null) Caught:
> > java.lang.NullPointerException
> >         at
> >
> >
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:540)
> >         at
> >
> >
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:111)
> >         at
> >
> >
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:104)
> >         at
> >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2304)
> >         at
> >
> >
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1094)
> >         at com.cloud.agent.Agent.processRequest(Agent.java:525)
> >         at
> com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
> >         at com.cloud.utils.nio.Task.run(Task.java:83)
> >         at
> >
> >
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
> >         at
> >
> >
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
> >         at java.lang.Thread.run(Thread.java:679)
> >
> > pls let me know how to resolve it. Thanks.
> >
>

RE: could not add ceph primary storage.

Posted by Suresh Sadhu <Su...@citrix.com>.
You have to manually create the pool first in ceph and then add it to cloudstack.
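
A minimal sketch of creating that pool on the ceph side; the pool name matches the one used in this thread, and the placement-group count of 128 is only an assumption to adjust for your cluster:

# create the RBD pool that CloudStack will use as primary storage
ceph osd pool create cloudstack 128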

On the server side, api calls are logged in the /var/log/cloudstack/management/apilog.log file, and on the client side you can capture the api calls by enabling Firebug in the Firefox browser.
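
For example, to watch just the storage-pool call on the management server (the grep pattern is simply the api command name):

tail -f /var/log/cloudstack/management/apilog.log | grep createStoragePool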

Regards
sadhu


-----Original Message-----
From: raj kumar [mailto:rajkumar600003@gmail.com] 
Sent: 08 October 2013 21:57
To: users@cloudstack.apache.org
Subject: Re: could not add ceph primary storage.

Name: rbd-primary
protocol: RBD
Rad mon: 192.168.210.35  (one of the monitor ip)
rados pool: cloudstack   (this pool don't exists in ceph)
rados user: admin
rados secret:  <client.admin's key here> storage tags: rbd

can you pls let me know how can i get the api call. Is it part of logs in /var/log/cloudstack.





On Tue, Oct 8, 2013 at 5:45 PM, Suresh Sadhu <Su...@citrix.com> wrote:

> What are the parameters you passed while adding the primary storage.
>
> Can you copy and paste api call
>
> Regards
> Sadhu
>
>
> -----Original Message-----
> From: raj kumar [mailto:rajkumar600003@gmail.com]
> Sent: 08 October 2013 15:37
> To: users@cloudstack.apache.org
> Subject: could not add ceph primary storage.
>
> Hi, using ubuntu 12.04, compiled libvirtd 1.1.0 with rbd support.  But 
> I'm getting below error.
>
> pls note libvirt-bin 0.9.8 also exists for cloudstack-agent dependency.
>
> /var/log/cloudstack/agent/agent.log shows:
> 2013-10-08 05:11:33,280 DEBUG [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-3:null) org.libvirt.LibvirtException: internal 
> error missing backend for pool type 8
> 2013-10-08 05:11:33,282 WARN  [cloud.agent.Agent]
> (agentRequest-Handler-3:null) Caught:
> java.lang.NullPointerException
>         at
>
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:540)
>         at
>
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:111)
>         at
>
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:104)
>         at
>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2304)
>         at
>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1094)
>         at com.cloud.agent.Agent.processRequest(Agent.java:525)
>         at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
>         at com.cloud.utils.nio.Task.run(Task.java:83)
>         at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>         at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:679)
>
> pls let me know how to resolve it. Thanks.
>

Re: could not add ceph primary storage.

Posted by raj kumar <ra...@gmail.com>.
Name: rbd-primary
protocol: RBD
rados mon: 192.168.210.35  (one of the monitor IPs)
rados pool: cloudstack   (this pool doesn't exist yet in ceph)
rados user: admin
rados secret:  <client.admin's key here>
storage tags: rbd

Can you please let me know how I can get the api call? Is it part of the logs in
/var/log/cloudstack?





On Tue, Oct 8, 2013 at 5:45 PM, Suresh Sadhu <Su...@citrix.com> wrote:

> What are the parameters you passed while adding the primary storage.
>
> Can you copy and paste api call
>
> Regards
> Sadhu
>
>
> -----Original Message-----
> From: raj kumar [mailto:rajkumar600003@gmail.com]
> Sent: 08 October 2013 15:37
> To: users@cloudstack.apache.org
> Subject: could not add ceph primary storage.
>
> Hi, using ubuntu 12.04, compiled libvirtd 1.1.0 with rbd support.  But I'm
> getting below error.
>
> pls note libvirt-bin 0.9.8 also exists for cloudstack-agent dependency.
>
> /var/log/cloudstack/agent/agent.log shows:
> 2013-10-08 05:11:33,280 DEBUG [kvm.storage.LibvirtStorageAdaptor]
> (agentRequest-Handler-3:null) org.libvirt.LibvirtException: internal error
> missing backend for pool type 8
> 2013-10-08 05:11:33,282 WARN  [cloud.agent.Agent]
> (agentRequest-Handler-3:null) Caught:
> java.lang.NullPointerException
>         at
>
> com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:540)
>         at
>
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:111)
>         at
>
> com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:104)
>         at
>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2304)
>         at
>
> com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1094)
>         at com.cloud.agent.Agent.processRequest(Agent.java:525)
>         at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
>         at com.cloud.utils.nio.Task.run(Task.java:83)
>         at
>
> java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
>         at
>
> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
>         at java.lang.Thread.run(Thread.java:679)
>
> pls let me know how to resolve it. Thanks.
>

RE: could not add ceph primary storage.

Posted by Suresh Sadhu <Su...@citrix.com>.
What are the parameters you passed while adding the primary storage?

Can you copy and paste the api call?

Regards
Sadhu


-----Original Message-----
From: raj kumar [mailto:rajkumar600003@gmail.com] 
Sent: 08 October 2013 15:37
To: users@cloudstack.apache.org
Subject: could not add ceph primary storage.

Hi, using ubuntu 12.04, compiled libvirtd 1.1.0 with rbd support.  But I'm getting below error.

pls note libvirt-bin 0.9.8 also exists for cloudstack-agent dependency.

/var/log/cloudstack/agent/agent.log shows:
2013-10-08 05:11:33,280 DEBUG [kvm.storage.LibvirtStorageAdaptor]
(agentRequest-Handler-3:null) org.libvirt.LibvirtException: internal error missing backend for pool type 8
2013-10-08 05:11:33,282 WARN  [cloud.agent.Agent]
(agentRequest-Handler-3:null) Caught:
java.lang.NullPointerException
        at
com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:540)
        at
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:111)
        at
com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:104)
        at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.execute(LibvirtComputingResource.java:2304)
        at
com.cloud.hypervisor.kvm.resource.LibvirtComputingResource.executeRequest(LibvirtComputingResource.java:1094)
        at com.cloud.agent.Agent.processRequest(Agent.java:525)
        at com.cloud.agent.Agent$AgentRequestHandler.doTask(Agent.java:852)
        at com.cloud.utils.nio.Task.run(Task.java:83)
        at
java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1146)
        at
java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
        at java.lang.Thread.run(Thread.java:679)

pls let me know how to resolve it. Thanks.