Posted to users@cloudstack.apache.org by Praveen Buravilli <Pr...@citrix.com> on 2014/06/20 04:38:57 UTC

Adding ceph RBD storage failed in CentOS 6.5 KVM

Hi,

I am facing an issue adding Ceph RBD storage to CloudStack. It is failing with a "Failed to add datasource" error.
I have followed all the available instructions related to CloudStack, KVM, and Ceph storage integration.

CentOS 6.5 KVM is used as the KVM node here. I have read in a blog that we need to compile libvirt on CentOS KVM nodes to make Ceph storage work with CloudStack.
Hence I have cloned the libvirt package from its git source and upgraded the libvirt and QEMU versions.
(Commands used --> git clone #####, ./autogen.sh, make, make install).

It seems RBD (driver) support needs to be enabled on CentOS 6.5 KVM, and that this has to be specified as a parameter while compiling libvirt.
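
For what it is worth, my understanding of the missing build step is roughly the following - a sketch only, since the devel package names and install prefix are assumptions on my part, and --with-storage-rbd is simply what libvirt's configure help lists for the 1.2.x series:

# Ceph headers must be present first, otherwise the RBD backend is skipped
# (package names are assumptions - whatever provides the librbd/librados headers)
yum install ceph-devel librbd1-devel librados2-devel

# in the libvirt source tree, after ./autogen.sh has generated configure:
./configure --prefix=/usr --localstatedir=/var --sysconfdir=/etc --with-storage-rbd=yes
make && make install

With --with-storage-rbd=yes, configure should fail loudly if the headers are missing, rather than quietly building a libvirtd without the backend.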

Can anyone offer some pointers on how to rectify this problem?

Management Server Exception:
2014-06-20 09:58:03,757 DEBUG [agent.transport.Request] (catalina-exec-6:null) Seq 1-1602164611: Received:  { Ans: , MgmtId: 52234925782, via: 1, Ver: v1, Flags: 10, { Answer } }
2014-06-20 09:58:03,757 DEBUG [agent.manager.AgentManagerImpl] (catalina-exec-6:null) Details from executing class com.cloud.agent.api.ModifyStoragePoolCommand: java.lang.NullPointerException
                at com.cloud.hypervisor.kvm.storage.LibvirtStorageAdaptor.createStoragePool(LibvirtStorageAdaptor.java:531)
                at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:185)
                at com.cloud.hypervisor.kvm.storage.KVMStoragePoolManager.createStoragePool(KVMStoragePoolManager.java:177)

I even tried defining a pool with the virsh command, but that also fails with "internal error missing backend for pool type 8".
This indicates my KVM libvirt build does not support RBD.

Virsh exception on manual pool definition
<pool type='rbd'>
<name>c574980a-19fc-37e9-b6e3-788a7439575d</name>
<uuid>c574980a-19fc-37e9-b6e3-788a7439575d</uuid>
<source>
<host name='192.168.153.25' port='6789'/>
<name>cloudstack</name>
<auth username='cloudstack' type='ceph'>
<secret uuid='c574980a-19fc-37e9-b6e3-788a7439575d'/>
</auth>
</source>
</pool>

[root@kvm-ovs-002 agent]# virsh pool-define /tmp/rbd.xml
error: Failed to define pool from /tmp/rbd.xml
error: internal error missing backend for pool type 8
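
One check worth doing here (the paths below are assumptions - RPM builds normally ship /usr/sbin/libvirtd, while a default source build lands under /usr/local): virsh only reports what the running libvirtd supports, and the RBD backend lives in the daemon, so if the init script still starts the old 0.10.2 daemon from the RPMs, the freshly compiled 1.2.6 backend never comes into play.

# which libvirtd is actually running, and from where?
ps -ef | grep [l]ibvirtd
which -a libvirtd virsh

# does the daemon binary link against librbd at all?
# (if libvirt was built with loadable driver modules, check the storage driver .so
#  under the install prefix instead)
ldd /usr/local/sbin/libvirtd | grep -i rbd    # source build - should list librbd
ldd /usr/sbin/libvirtd | grep -i rbd          # stock RPM - expected to show nothing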

The Ceph storage itself appears to be working, as confirmed by the following output.

Ceph output
[root@kvm-ovs-002 ~]# ceph auth list
installed auth entries:

osd.0
        key: AQCwTKFTSOudGhAAsWAMRFuCqHjvTQKEV0zjvw==
        caps: [mon] allow profile osd
        caps: [osd] allow *
client.admin
        key: AQBRQqFTWOjBKhAA2s7KnL1z3h7PuKeqXMd7SA==
        caps: [mds] allow
        caps: [mon] allow *
        caps: [osd] allow *
client.bootstrap-mds
        key: AQBSQqFTYKm6CRAAjjZotpN68yJaOjS2QTKzKg==
        caps: [mon] allow profile bootstrap-mds
client.bootstrap-osd
        key: AQBRQqFT6GzXNxAA4ZTmVX6LIu0k4Sk7bh2Ifg==
        caps: [mon] allow profile bootstrap-osd
client.cloudstack
        key: AQBNTaFTeCuwFRAA0NE7CCm9rwuq3ngLcGEysQ==
        caps: [mon] allow r
        caps: [osd] allow rwx pool=cloudstack

[root@ceph ~]# ceph status
    cluster 9c1be0b6-f600-45d7-ae0f-df7bcd3a82cd
     health HEALTH_WARN 292 pgs degraded; 292 pgs stale; 292 pgs stuck stale; 292 pgs stuck unclean; 1/1 in osds are down; clock skew detected on mon.kvm-ovs-002
     monmap e1: 2 mons at {ceph=192.168.153.25:6789/0,kvm-ovs-002=192.168.160.3:6789/0}, election epoch 10, quorum 0,1 ceph,kvm-ovs-002
     osdmap e8: 1 osds: 0 up, 1 in
      pgmap v577: 292 pgs, 4 pools, 0 bytes data, 0 objects
            26036 MB used, 824 GB / 895 GB avail
                 292 stale+active+degraded

[root@kvm-ovs-002 agent]# cat /etc/redhat-release
CentOS release 6.5 (Final)

The compiled libvirt shows the upgraded version in virsh, but I still find the old RPM packages on the KVM host.
Can anyone give me a hint on whether I should clean up these old RPMs?

Virsh version
[root@kvm-ovs-002 agent]# virsh version
Compiled against library: libvirt 1.2.6
Using library: libvirt 1.2.6
Using API: QEMU 1.2.6
Running hypervisor: QEMU 0.12.1

[root@kvm-ovs-002 agent]# rpm -qa | grep qemu
qemu-kvm-tools-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
gpxe-roms-qemu-0.9.7-6.10.el6.noarch
qemu-kvm-0.12.1.2-2.415.el6_5.10.x86_64
qemu-img-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-img-0.12.1.2-2.415.el6_5.10.x86_64
qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-guest-agent-0.12.1.2-2.415.el6_5.10.x86_64
[root@kvm-ovs-002 agent]# rpm -qa | grep libvirt
libvirt-python-0.10.2-29.el6_5.9.x86_64
libvirt-java-0.4.9-1.el6.noarch
libvirt-cim-0.6.1-9.el6_5.1.x86_64
libvirt-client-0.10.2-29.el6_5.9.x86_64
libvirt-devel-0.10.2-29.el6_5.9.x86_64
fence-virtd-libvirt-0.2.3-15.el6.x86_64
libvirt-0.10.2-29.el6_5.9.x86_64
libvirt-snmp-0.0.2-4.el6.x86_64
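
On the old-RPM question, a cautious approach (just a sketch - paths and service names are the stock CentOS 6 defaults) would be to first confirm which install actually wins, and only then decide what to remove:

# compare the two installs side by side
/usr/sbin/libvirtd --version         # RPM daemon, if still installed
/usr/local/sbin/libvirtd --version   # source build under the default prefix

# if the init script still launches the RPM daemon at boot, stop and disable it
# before starting the compiled one
service libvirtd stop
chkconfig libvirtd off

Removing the libvirt and qemu RPMs outright may drag other packages along with them (the cloudstack-agent, for one, typically depends on libvirt), so it is worth reading the yum remove transaction summary before confirming anything.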

Attached are all the log files from the management and KVM servers.

Thanks,
Praveen Kumar Buravilli


RE: Adding ceph RBD storage failed in CentOS 6.5 KVM

Posted by Praveen Buravilli <Pr...@citrix.com>.
Thanks Andrija for your detailed instructions.

Here is a question: can I execute all the steps mentioned at http://pastebin.com/HwCZEASR on the CentOS KVM node that already has libvirt compiled from its git source?

Praveen Kumar Buravilli
Cloud Platform Implementation Engineer, APAC Cloud Services
M +91-9885456905
praveen.buravilli@citrix.com




RE: Adding ceph RBD storage failed in CentOS 6.5 KVM

Posted by Andrija Panic <an...@gmail.com>.
It seems you did not remove the existing qemu-* packages?

Also, if your qemu-img shows rbd support there is no need to replace it - what
version of qemu-img are you using, and how did you install it? The stock CentOS
version did not support rbd (at least until 1-2 months ago); maybe it does now,
after Red Hat acquired Inktank/Ceph...?

Also, post the output of ./configure so we can see whether libvirt was compiled
with rbd support...

Andrija

Sent from Google Nexus 4

RE: Adding ceph RBD storage failed in CentOS 6.5 KVM

Posted by Praveen Buravilli <Pr...@citrix.com>.
Hi Andrija,

Thanks for providing detailed instructions. 
I have executed all of the steps given in http://pastebin.com/HwCZEASR and also in http://admintweets.com/centos-kvm-and-ceph-client-side-setup/.
But I am still facing the same issue. Any other ideas?

Also, when I tried to install the qemu-* RPMs, it reported a lot of dependency issues (output file attached).

Please note that "qemu-img" was reporting rbd support earlier as well. A Ceph OSD was also running on the same KVM node, which probably explains why "rbd" shows up in its list of supported formats.

Error:
====
[root@kvm-ovs-002 src-DONT-TOUCH]# virsh pool-define /tmp/rbd.xml
error: Failed to define pool from /tmp/rbd.xml
error: internal error: missing backend for pool type 8 (rbd)

[root@kvm-ovs-002 src-DONT-TOUCH]# qemu-img | grep rbd
Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2 qed vhdx parallels nbd blkdebug host_cdrom host_floppy host_device file gluster gluster gluster gluster rbd

Thanks,
Praveen Kumar Buravilli



Re: Adding ceph RBD storage failed in CentOS 6.5 KVM

Posted by Andrija Panic <an...@gmail.com>.
Been there, done that....:

This libvirt error "error: internal error missing backend for pool type 8"
means that libvirt was not compiled with RBD backend support.

Here are my steps to compile libvirt 1.2.3 from a few months ago - change the
configure options if you want; I tried to enable as many options as possible.
http://pastebin.com/HwCZEASR (note that "ceph-devel" must be installed, as in the
instructions, in order to be able to compile with RBD support)

Also note that if you are using CentOS 6.5, the "-s" flag was removed from the
qemu-img packages, meaning you will not be able to use snapshot functionality in
CloudStack (not related to libvirt) - ANY snapshotting will be broken. There is a
workaround below :)


Also, besides making sure libvirt can talk to RBD/Ceph, you MUST be sure your
qemu-img and qemu-kvm were compiled with RBD support. Check like this:
qemu-img | grep "Supported formats"
You should get something like this - note the "rbd" at the end of the output:

Supported formats: raw cow qcow vdi vmdk cloop dmg bochs vpc vvfat qcow2
qed parallels nbd blkdebug host_cdrom host_floppy host_device file rbd
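
A related check, since it is easy to end up with both the stock and the Ceph-patched qemu packages installed side by side (the /usr/libexec path is the usual EL6 location - adjust if yours differs): confirm which package owns the qemu-img on your PATH and the emulator binary libvirt actually launches.

rpm -qf $(which qemu-img)
rpm -qf /usr/libexec/qemu-kvm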


If both qemu and libvirt are fine, you will be able to add CEPH to ACS 4.2
or newer.

If you have the stock (CentOS) versions of qemu-img and qemu-kvm, they are NOT
compiled with RBD support, so you will not be able to use Ceph.
You need to install Inktank's versions of these RPM packages, which are based on
the official Red Hat stock code but are patched for RBD/Ceph support.

Refer to http://admintweets.com/centos-kvm-and-ceph-client-side-setup/ in order
to download Inktank's RPMs - note that the latest RPMs you will find are probably
also based on the RHEL 6.5 version, which is missing the "-s" flag, so you will
still NOT be able to use disk snapshotting in ACS...

I solved this by installing slightly older RPMs from Inktank (qemu-img and
qemu-kvm based on RHEL 6.2, which still has that famous "-s" flag present).
Please let me know if you need these, since they are NOT present on the Inktank
download page at the moment. The exact versions of the packages I installed,
which are working fine for me:
qemu-kvm-tools-0.12.1.2-2.415.el6.3ceph.x86_64
qemu-img-0.12.1.2-2.355.el6.2.cuttlefish.x86_64
qemu-kvm-0.12.1.2-2.415.el6.3ceph.x86_64
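
If you end up on those older builds, a quick sanity check for the two things that matter here - rbd support and the "-s" snapshot flag - could look like this (a rough check only; the help text wording may differ between builds):

qemu-img | grep 'Supported formats'   # "rbd" should appear in the list
qemu-img | grep snapshot_name         # the convert usage line should still show [-s snapshot_name]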

This whole setup is working fine for me...

Hope that helps - I went through the same pain you are going through now... :)

Best,
Andrija Panic







-- 

Andrija Panić
--------------------------------------
  http://admintweets.com
--------------------------------------