Posted to users@cloudstack.apache.org by Andrija Panic <an...@gmail.com> on 2018/02/01 20:30:43 UTC

Re: kvm live volume migration

Actually, we have this feature (we call it internally
online-storage-migration) to migrate a volume from CEPH/NFS to SolidFire
(thanks to Mike Tutkowski).

There is a libvirt mechanism where basically you start another PAUSED VM on
another host (same name and same XML file, except that the storage volumes
point to the new storage, different paths, etc., and maybe the VNC listening
address needs to be changed) and then, on the original host/VM, you issue
the live migrate command with a few parameters... libvirt will
transparently handle copying the data from the source to the new volumes, and
after the migration the VM will be alive (with the new XML, since it has new
volumes) on the new host, while the original VM on the original host is destroyed....

(I can send you the manual for this; it is related to SF, but the idea is the
same and you can exercise this on e.g. 2 NFS volumes on 2 different
storages.)
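
As a rough illustration of that flow on e.g. two NFS primary storages (a
minimal sketch only - the VM name, paths, size and destination host below
are made up, and the destination volume has to be created up front):

    # dump the current definition and repoint disks / VNC address in a copy
    virsh dumpxml i-2-14-VM > /root/i-2-14-VM-new.xml
    #   edit the copy: change every <source file='...'/> to the path on the
    #   new primary storage and, if needed, the <graphics> listen address

    # pre-create an empty qcow2 of the same virtual size on the new storage
    qemu-img create -f qcow2 /mnt/NEW-POOL/<volume-uuid> 8G

    # let libvirt copy the blocks and switch the VM over to the other host
    virsh migrate --live --persistent --copy-storage-all \
          --xml /root/i-2-14-VM-new.xml i-2-14-VM \
          qemu+tcp://<destination-host>/system --verbose --abort-on-error

These are the same flags Dag uses in his walkthrough further down the thread.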

This mechanism doesn't exist in ACS in general (AFAIK), except for when
migrating to SolidFire.

Perhaps the community/DEV can help extend Mike's code to do the same work on
different storage types...

Cheers

On 19 January 2018 at 18:45, Eric Green <er...@gmail.com> wrote:

> KVM is able to live migrate entire virtual machines complete with local
> volumes (see 'man virsh') but does require nbd (Network Block Device) to be
> installed on the destination host to do so. It may need installation of
> later libvirt / qemu packages from OpenStack repositories on Centos 6, I'm
> not sure, but just works on Centos 7. In any event, I have used this
> functionality to move virtual machines between virtualization hosts on my
> home network. It works.
>
> What is missing is the ability to live-migrate a disk from one shared
> storage to another. The functionality built into virsh live-migrates the
> volume ***to the exact same location on the new host***, so obviously is
> useless for migrating the disk to a new location on shared storage. I
> looked everywhere for the ability of KVM to live migrate a disk from point
> A to point B all by itself, and found no such thing. libvirt/qemu has the
> raw capabilities needed to do this, but it is not currently exposed as a
> single API via the qemu console or virsh. It can be emulated via scripting
> however:
>
> 1. Pause virtual machine
> 2. Do qcow2 snapshot.
> 3. Detach base disk, attach qcow2 snapshot
> 4. unpause virtual machine
> 5. copy qcow2 base file to new location
> 6. pause virtual machine
> 7. detach snapshot
> 8. unsnapshot qcow2 snapshot at its new location.
> 9. attach new base at new location.
> 10. unpause virtual machine.
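
A hedged sketch of roughly this idea using stock virsh block jobs (not the
exact steps above - the domain name, disk target and paths are made up, and
it needs a qemu/libvirt build with block jobs enabled, i.e. the newer builds
discussed later in this thread):

    virsh suspend i-2-14-VM
    # external snapshot: the new overlay lives on the NEW storage, backed by the old image
    virsh snapshot-create-as i-2-14-VM tmpmove --disk-only --atomic --no-metadata \
          --diskspec vda,file=/mnt/NEW-POOL/i-2-14-VM-root.qcow2
    virsh resume i-2-14-VM
    # pull the old backing image into the overlay so it becomes standalone on the new storage
    virsh blockpull i-2-14-VM vda --wait --verbose
    # once the pull completes, the files on the old storage are no longer referenced

The suspend/resume is optional here (the snapshot can be taken live); it is
kept only to mirror the outline above.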
>
> Thing is, if that entire process is not built into the underlying
> kvm/qemu/libvirt infrastructure as tested functionality with a defined API,
> there's no guarantee that it will work seamlessly and will continue working
> with the next release of the underlying infrastructure. This is using
> multiple different tools to manipulate the qcow2 file and attach/detach
> base disks to the running (but paused) kvm domain, and would have to be
> tested against all variations of those tools on all supported Cloudstack
> KVM host platforms. The test matrix looks pretty grim.
>
> By contrast, the migrate-with-local-storage process is built into virsh
> and is tested by the distribution vendor and the set of tools provided with
> the distribution is guaranteed to work with the virsh / libvirt/ qemu
> distributed by the distribution vendor. That makes the test matrix for
> move-with-local-storage look a lot simpler -- "is this functionality
> supported by that version of virsh on that distribution? Yes? Enable it.
> No? Don't enable it."
>
> I'd love to have live migration of disks on shared storage with Cloudstack
> KVM, but not at the expense of reliability. Shutting down a virtual machine
> in order to migrate one of its disks from one shared datastore to another
> is not ideal, but at least it's guaranteed reliable.
>
>
> > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
> rafaelweingartner@gmail.com> wrote:
> >
> > Hey Marc,
> > It is very interesting that you are going to pick this up for KVM. I am
> > working on a related issue for XenServer [1].
> > If you can confirm that KVM is able to live migrate local volumes to
> other
> > local storage or shared storage I could make the feature I am working on
> > available to KVM as well.
> >
> >
> > [1] https://issues.apache.org/jira/browse/CLOUDSTACK-10240
> >
> > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle Brothier <
> marco@exoscale.ch>
> > wrote:
> >
> >> There's a PR waiting to be fixed about live migration with local volume
> for
> >> KVM. So it will come at some point. I'm the one who made this PR but I'm
> >> not using the upstream release so it's hard for me to debug the problem.
> >> You can add yourself to the PR to get notified when things are moving on
> it.
> >>
> >> https://github.com/apache/cloudstack/pull/1709
> >>
> >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <er...@gmail.com>
> >> wrote:
> >>
> >>> Theoretically on Centos 7 as the host KVM OS it could be done with a
> >>> couple of pauses and the snapshotting mechanism built into qcow2, but
> >> there
> >>> is no simple way to do it directly via virsh, the libvirtd/qemu control
> >>> program that is used to manage virtualization. It's not like
> issuing a
> >>> simple vMotion 'migrate volume' call in VMware.
> >>>
> >>> I scripted out how it would work without that direct support in
> >>> libvirt/virsh and after looking at all the points where things could go
> >>> wrong, honestly, I think we need to wait until there is support in
> >>> libvirt/virsh to do this. virsh clearly has the capability internally
> to
> >> do
> >>> live migration of storage, since it does this for live domain migration
> >> of
> >>> local storage between machines when migrating KVM domains from one host
> >> to
> >>> another, but that capability is not currently exposed in a way
> Cloudstack
> >>> could use, at least not on Centos 7.
> >>>
> >>>
> >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <pp...@pulab.pl> wrote:
> >>>>
> >>>> Hello,
> >>>>
> >>>> Is there a chance that one day it will be possible to migrate volume
> >>> (root disk) of a live VM in KVM between storage pools (in CloudStack)?
> >>>> Like a storage vMotion in Vmware.
> >>>>
> >>>> Best regards,
> >>>> Piotr
> >>>>
> >>>
> >>>
> >>
> >
> >
> >
> > --
> > Rafael Weingärtner
>
>


-- 

Andrija Panić

Re: kvm live volume migration

Posted by Andrija Panic <an...@gmail.com>.
Cheers Melanie,

glad to hear that it works for you :)

So, as a test, I would make sure that things like the following are working
fine: creating volume snapshots, stop/start of the VM, stopping the VM and
then migrating it to another storage - i.e. any operations that include
reading/modifying that migrated volume (the one on the new storage). If there
are no failures, I would expect that the DB is happy with the changes you made
(also, make sure that the source volume on the OLD storage is actually deleted
after the migration you did previously!!!)

As mentioned, you MAYBE need to check the usage DB - to see if the old pool_id
is referenced anywhere (though I expect it's not).
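
A quick check for leftovers in the main DB could look like this (a sketch
only - the placeholder pool id is an assumption; table/column names are from
a stock CloudStack schema):

    # any volumes still pointing at the old primary storage pool?
    mysql -u cloud -p cloud -e \
      "SELECT id, name, state, pool_id FROM volumes WHERE pool_id = <OLD_POOL_ID>;"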

I will send this .docx in a separate email.

Cheers

On Thu, 26 Sep 2019 at 13:01, Melanie Desaive <m....@heinlein-support.de>
wrote:

> Hi Andrija,
>
> thank you so much for your support.
>
> It worked perfectly.
>
> I used the oVirt 4.3 Repository:
>         yum install
> https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm
>
> And limited the repo to the libvirt and qemu packages:
>         includepkgs=qemu-* libvirt-*
>
> I had some issues with the configuration of my 2nd Storage NFS and had
> to enable rpc.statd on the nfs server, otherwise I was not able to
> mount ISO images from 2nd storage. With the packages from CentOS-Base
> this was no problem.
>
> After changing to the oVirt packages, I was able to online migrate a
> volume between two storage repositories using the "virsh --copy-
> storage-all --xml" mechanism.
>
> Afterwards I updated the CloudStack database, setting pool_id in volumes
> to the new storage for the migrated volume, and everything looked
> perfect.
>
> I am still unsure how far to use this "hack" in production, but I feel
> reassured that I now have the option to use this feature for
> urgent cases where no VM downtime is possible for a storage migration.
>
> If there is interest I can offer to translate my notes to English to
> provide them to others with the same need.
>
> And @Andrija, you mentioned a "detailed step-by-step .docx guide"... I
> would really be interested; maybe there is further information I missed.
> I would really like you to forward it to me.
>
> Greetings,
>
> Melanie
>
> Am Mittwoch, den 25.09.2019, 13:51 +0200 schrieb Andrija Panic:
> > Hi Melanie,
> >
> > so Ubuntu 14.04+  - i.e. 16.04 working fine, 18.04 also being
> > supported in
> > later releases...
> > CentOS7 is THE recommended OS (or more recent Ubuntu) - but yes, RHEL
> > makes
> > small surprises sometimes (until CentOS 7.1, if not mistaken, they
> > also
> > didn't provide RBD/Ceph support, only in paid RHEV - won't comment on
> > this
> > lousy behaviour...)
> >
> > Afaik, for KVM specifically, no polling of volumes' location, and you
> > would
> > need to update DB (pay attention also to usage records if that is of
> > your
> > interest)
> > You'll need to test this kind of migration and DB schema thoroughly
> > (including changing disk offerings and such in DB, in case your
> > source/destination storage solution have different Storage TAGs in
> > ACS)
> >
> > I'm trying to stay away from any clustered file systems, 'cause when
> > they
> > break, they break bad...so can't comment there.
> > You are using those as preSetup in KVM/CloudStack I guess - if it
> > works,
> > then all good.
> > But...move on I suggest, if possible :)
> >
> > Best
> > Andrija
> >
> > On Wed, 25 Sep 2019 at 13:16, Melanie Desaive <
> > m.desaive@heinlein-support.de>
> > wrote:
> >
> > > Hi Andrija,
> > >
> > > thank you so much for your detailed explanation! Looks like my
> > > problem can be solved. :)
> > >
> > > To summarize the information you provided:
> > >
> > > As long as CloudStack does not support volume live migration I
> > > could be
> > > using
> > >
> > > virsh with --copy-storage --xml.
> > >
> > > BUT: CentOS7 is lacking necessary features! Bad luck. I started out
> > > with CentOS7 as Distro.
> > >
> > > You suggest, that it could be worth trying the qemu/libvirt
> > > packages
> > > from the oVirt repository. I will look into this now.
> > >
> > > But if that gets complicated: Cloudstack documentation lists
> > > CentOS7
> > > and Ubuntu 14.04 as supported Distros. Are there other not
> > > officially
> > > supported Distros/Version I could be using? I wanted to avoid the
> > > quite
> > > outdated Ubuntu 14.04 and did for that reason decide towards
> > > CentOS7.
> > >
> > > And another general question: How is CloudStack getting along with
> > > the
> > > Volumes of its VMs changing the storage repository without being
> > > informed about it. Does it get this information through polling, or
> > > do
> > > I have to manipulate the database?
> > >
> > > And to make things clearer: At the moment I am using storage
> > > attached
> > > through Fibrechannel using clustered LVM logic. Could also be
> > > changing
> > > to GFS2 on cLVM. Never heard anyone mentioning such a setup by now.
> > > Am
> > > I the only one running KVM on a proprietary storage system over
> > > Fibrechannel, are there limitation/problems to be expected from
> > > such a
> > > setup?
> > >
> > > Greetings,
> > >
> > > Melanie
> > >
> > >
> > > Am Mittwoch, den 25.09.2019, 11:46 +0200 schrieb Andrija Panic:
> > > > So, let me explain.
> > > >
> > > > Doing "online storage migration" aka live storage migration is
> > > > working for
> > > > CEPH/NFS --> SolidFire, starting from 4.11+
> > > > Internally it is done in the same way as "virsh with --copy-
> > > > storage-
> > > > all
> > > > --xml" in short
> > > >
> > > > Longer explanation:
> > > > Steps:
> > > > You create new volumes on the destination storage (SolidFire in
> > > > this
> > > > case),
> > > > set QoS etc - simply prepare the destination volumes (empty
> > > > volumes
> > > > atm).
> > > > On the source host/VM, dump the VM XML, edit the XML, change the disk
> > > > section to point to
> > > > the new volume path, protocol, etc. - and also the IP address for the
> > > > VNC
> > > > (a CloudStack requirement), and save the XML.
> > > > Then you do the "virsh migrate --copy-storage-all --xml myEditedVM.xml
> > > > ..."
> > > > stuff
> > > > that does the job.
> > > > Then NBD driver will be used to copy blocks from the source
> > > > volumes
> > > > to the
> > > > destination volumes while that virsh command is working...
> > > > (here's my
> > > > demo,
> > > > in details..
> > > >
> > >
> https://www.youtube.com/watch?v=Eo8BuHBnVgg&list=PLEr0fbgkyLKyiPnNzPz7XDjxnmQNxjJWT&index=5&t=2s
> > > > )
> > > >
> > > > This is yet to be extended/coded to support NFS-->NFS or CEPH
> > > > -->CEPH
> > > > or
> > > > CEPH/NFS-->CEPH/NFS... should not be that much work, the logic is
> > > > there
> > > > (bit part of the code)
> > > > Also, starting from 4.12, you can actually  (I believe using
> > > > identical
> > > > logic) migrate only ROOT volumes that are on the LOCAL storage
> > > > (local
> > > > disks)
> > > > to another host/local storage - but DATA disks are not supported.
> > > >
> > > > Now...imagine the feature is there - if using CentOS7, our
> > > > friends at
> > > > RedHat have removed support for actually using live storage
> > > > migration
> > > > (unless you are paying for RHEV - but it does work fine on
> > > > CentOS6,
> > > > and
> > > > Ubuntu 14.04+).
> > > >
> > > > I recall "we" had to use qemu/libvirt from the "oVirt" repo which
> > > > DOES
> > > > (DID) support storage live migration (normal EV packages from the
> > > > Special
> > > > Interest Group (2.12 tested) - did NOT include this...)
> > > >
> > > > I can send you step-by-step .docx guide for manually mimicking
> > > > what
> > > > is done
> > > > (in SolidFire, but identical logic for other storages) - but not
> > > > sure
> > > > if
> > > > that still helps you...
> > > >
> > > >
> > > > Andrija
> > > >
> > > > On Wed, 25 Sep 2019 at 10:51, Melanie Desaive <
> > > > m.desaive@heinlein-support.de>
> > > > wrote:
> > > >
> > > > > Hi all,
> > > > >
> > > > > I am currently doing my first steps with KVM as hypervisor for
> > > > > CloudStack. I was shocked to realize that currently live volume
> > > > > migration between different shared storages is not supported
> > > > > with
> > > > > KVM.
> > > > > This is a feature I use intensively with XenServer.
> > > > >
> > > > > How do you get along with this limitation? I do really expect
> > > > > you
> > > > > to
> > > > > use some workarounds, or do you all only accept vm downtimes
> > > > > for a
> > > > > storage migration?
> > > > >
> > > > > With my first investigation I found three techniques mentioned
> > > > > and
> > > > > would like to ask for suggestions which to investigate deeper:
> > > > >
> > > > >  x Eric describes a technique using snapshots and pauses to do
> > > > > a
> > > > > live
> > > > > storage migration in this mailing list thread.
> > > > >  x Dag suggests using virsh with --copy-storage-all --xml.
> > > > >  x I found articles about using virsh blockcopy for storage
> > > > > live
> > > > > migration.
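
For the blockcopy option in the last item, the single-command form would be
roughly the following (a sketch only - domain name, disk target and the
destination path are made up; older libvirt only allows this on a transient
domain, and the stock CentOS 7 qemu-kvm ships with these block jobs disabled,
which is what the rest of this thread is about):

    virsh blockcopy i-2-14-VM vda /mnt/NEW-POOL/i-2-14-VM-root.qcow2 \
          --wait --verbose --pivot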
> > > > >
> > > > > Greetings,
> > > > >
> > > > > Melanie
> > > > >
> > > > > Am Freitag, den 02.02.2018, 15:55 +0100 schrieb Andrija Panic:
> > > > > > @Dag, you might want to check with Mike Tutkowski, how he
> > > > > > implemented
> > > > > > this
> > > > > > for the "online storage migration" from other storages (CEPH
> > > > > > and
> > > > > > NFS
> > > > > > implemented so far as sources) to SolidFire.
> > > > > >
> > > > > > We are doing exactly the same demo/manual way (this is what
> > > > > > Mike
> > > > > > has
> > > > > > sent
> > > > > > me back in the days), so perhaps you want to see how to
> > > > > > translate
> > > > > > this into
> > > > > > general things (so ANY to ANY storage migration) inside
> > > > > > CloudStack.
> > > > > >
> > > > > > Cheers
> > > > > >
> > > > > > On 2 February 2018 at 10:28, Dag Sonstebo <
> > > > > > Dag.Sonstebo@shapeblue.com
> > > > > > wrote:
> > > > > >
> > > > > > > All
> > > > > > >
> > > > > > > I am doing a bit of R&D around this for a client at the
> > > > > > > moment.
> > > > > > > I
> > > > > > > am
> > > > > > > semi-successful in getting live migrations to different
> > > > > > > storage
> > > > > > > pools to
> > > > > > > work. The method I’m using is as follows – this does not
> > > > > > > take
> > > > > > > into
> > > > > > > account
> > > > > > > any efficiency optimisation around the disk transfer (which
> > > > > > > is
> > > > > > > next
> > > > > > > on my
> > > > > > > list). The below should answer your question Eric about
> > > > > > > moving
> > > > > > > to a
> > > > > > > different location – and I am also working with your steps
> > > > > > > to
> > > > > > > see
> > > > > > > where I
> > > > > > > can improve the following. Keep in mind all of this is
> > > > > > > external
> > > > > > > to
> > > > > > > CloudStack – although CloudStack picks up the destination
> > > > > > > KVM
> > > > > > > host
> > > > > > > automatically it does not update the volume tables etc.,
> > > > > > > neither
> > > > > > > does it do
> > > > > > > any housekeeping.
> > > > > > >
> > > > > > > 1) Ensure the same network bridges are up on source and
> > > > > > > destination
> > > > > > > –
> > > > > > > these are found with:
> > > > > > >
> > > > > > > [root@kvm1 ~]# virsh dumpxml 9 | grep source
> > > > > > >       <source file='/mnt/00e88a7b-985f-3be8-b717-
> > > > > > > 0a59d8197640/d0ab5dd5-
> > > > > > > e3dd-47ac-a326-5ce3d47d194d'/>
> > > > > > >       <source bridge='breth1-725'/>
> > > > > > >       <source path='/dev/pts/3'/>
> > > > > > >       <source path='/dev/pts/3'/>
> > > > > > >
> > > > > > > So from this make sure breth1-725 is up on the destination
> > > > > > > host
> > > > > > > (do it
> > > > > > > the hard way or cheat and spin up a VM from same account
> > > > > > > and
> > > > > > > network on
> > > > > > > that host)
> > > > > > >
> > > > > > > 2) Find size of source disk and create stub disk in
> > > > > > > destination
> > > > > > > (this part
> > > > > > > can be made more efficient to speed up disk transfer – by
> > > > > > > doing
> > > > > > > similar
> > > > > > > things to what Eric is doing):
> > > > > > >
> > > > > > > [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-
> > > > > > > 0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > > > image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-
> > > > > > > e3dd-
> > > > > > > 47ac-a326-5ce3d47d194d
> > > > > > > file format: qcow2
> > > > > > > virtual size: 8.0G (8589934592 bytes)
> > > > > > > disk size: 32M
> > > > > > > cluster_size: 65536
> > > > > > > backing file: /mnt/00e88a7b-985f-3be8-b717-
> > > > > > > 0a59d8197640/3caaf4c9-
> > > > > > > eaec-
> > > > > > > 11e7-800b-06b4a401075c
> > > > > > >
> > > > > > > ######################
> > > > > > >
> > > > > > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img
> > > > > > > create
> > > > > > > -f
> > > > > > > qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> > > > > > > Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d',
> > > > > > > fmt=qcow2
> > > > > > > size=8589934592 encryption=off cluster_size=65536
> > > > > > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img
> > > > > > > info
> > > > > > > d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > > > image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > > > file format: qcow2
> > > > > > > virtual size: 8.0G (8589934592 bytes)
> > > > > > > disk size: 448K
> > > > > > > cluster_size: 65536
> > > > > > >
> > > > > > > 3) Rewrite the new VM XML file for the destination with:
> > > > > > > a) New disk location, in this case this is just a new path
> > > > > > > (Eric –
> > > > > > > this
> > > > > > > answers your question)
> > > > > > > b) Different IP addresses for VNC – in this case 10.0.0.1
> > > > > > > to
> > > > > > > 10.0.0.2
> > > > > > > and carry out migration.
> > > > > > >
> > > > > > > [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-
> > > > > > > 3be8-
> > > > > > > b717-
> > > > > > > 0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed
> > > > > > > -e
> > > > > > > 's/
> > > > > > > 10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
> > > > > > >
> > > > > > > [root@kvm1 ~]# virsh migrate --live --persistent --copy-
> > > > > > > storage-all
> > > > > > > --xml
> > > > > > > /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --
> > > > > > > verbose
> > > > > > > --abort-on-error
> > > > > > > Migration: [ 25 %]
> > > > > > >
> > > > > > > 4) Once complete delete the source file. This can be done
> > > > > > > with
> > > > > > > extra
> > > > > > > switches on the virsh migrate command if need be.
> > > > > > > = = =
> > > > > > >
> > > > > > > In the simplest tests this works – destination VM remains
> > > > > > > online
> > > > > > > and has
> > > > > > > storage in new location – but it’s not persistent –
> > > > > > > sometimes
> > > > > > > the
> > > > > > > destination VM ends up in a paused state, and I’m working
> > > > > > > on
> > > > > > > how to
> > > > > > > get
> > > > > > > around this. I also noted virsh migrate has a  migrate-
> > > > > > > setmaxdowntime which
> > > > > > > I think can be useful here.
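
For reference, that downtime cap is applied to the running migration job from
the source host; a hedged example with an arbitrary 500 ms limit:

    virsh migrate-setmaxdowntime i-2-14-VM 500   # value is in milliseconds
    virsh domjobinfo i-2-14-VM                   # watch the copy/migration progress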
> > > > > > >
> > > > > > > Regards,
> > > > > > > Dag Sonstebo
> > > > > > > Cloud Architect
> > > > > > > ShapeBlue
> > > > > > >
> > > > > > > On 01/02/2018, 20:30, "Andrija Panic" <
> > > > > > > andrija.panic@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > >     Actually,  we have this feature (we call this
> > > > > > > internally
> > > > > > >     online-storage-migration) to migrate volume from
> > > > > > > CEPH/NFS
> > > > > > > to
> > > > > > > SolidFire
> > > > > > >     (thanks to Mike Tutkowski)
> > > > > > >
> > > > > > >     There is libvirt mechanism, where basically you start
> > > > > > > another
> > > > > > > PAUSED
> > > > > > > VM on
> > > > > > >     another host (same name and same XML file, except the
> > > > > > > storage
> > > > > > > volumes
> > > > > > > are
> > > > > > >     pointing to new storage, different paths, etc and maybe
> > > > > > > VNC
> > > > > > > listening
> > > > > > >     address needs to be changed or so) and then you issue
> > > > > > > on
> > > > > > > original
> > > > > > > host/VM
> > > > > > >     the live migrate command with few parameters... the
> > > > > > > libvirt
> > > > > > > will
> > > > > > >     transaprently handle the copy data process from Soruce
> > > > > > > to
> > > > > > > New
> > > > > > > volumes,
> > > > > > > and
> > > > > > >     after migration the VM will be alive (with new XML
> > > > > > > since
> > > > > > > have
> > > > > > > new
> > > > > > > volumes)
> > > > > > >     on new host, while the original VM on original host is
> > > > > > > destroyed....
> > > > > > >
> > > > > > >     (I can send you manual for this, that is realted to SF,
> > > > > > > but
> > > > > > > idea is the
> > > > > > >     same and you can exercies this on i.e. 2 NFS volumes on
> > > > > > > 2
> > > > > > > different
> > > > > > >     storages)
> > > > > > >
> > > > > > >     This mechanism doesn't exist in ACS in general (AFAIK),
> > > > > > > except
> > > > > > > for when
> > > > > > >     migrating to SolidFire.
> > > > > > >
> > > > > > >     Perhaps community/DEV can help extend Mike's code to do
> > > > > > > same
> > > > > > > work on
> > > > > > >     different storage types...
> > > > > > >
> > > > > > >     Cheers
> > > > > > >
> > > > > > >
> > > > > > > Dag.Sonstebo@shapeblue.com
> > > > > > > www.shapeblue.com
> > > > > > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > > > > > @shapeblue
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > > > On 19 January 2018 at 18:45, Eric Green <
> > > > > > > eric.lee.green@gmail.com>
> > > > > > > wrote:
> > > > > > >
> > > > > > >     > KVM is able to live migrate entire virtual machines
> > > > > > > complete
> > > > > > > with
> > > > > > > local
> > > > > > >     > volumes (see 'man virsh') but does require nbd
> > > > > > > (Network
> > > > > > > Block
> > > > > > > Device) to be
> > > > > > >     > installed on the destination host to do so. It may
> > > > > > > need
> > > > > > > installation
> > > > > > > of
> > > > > > >     > later libvirt / qemu packages from OpenStack
> > > > > > > repositories
> > > > > > > on
> > > > > > > Centos
> > > > > > > 6, I'm
> > > > > > >     > not sure, but just works on Centos 7. In any event, I
> > > > > > > have
> > > > > > > used this
> > > > > > >     > functionality to move virtual machines between
> > > > > > > virtualization
> > > > > > > hosts
> > > > > > > on my
> > > > > > >     > home network. It works.
> > > > > > >     >
> > > > > > >     > What is missing is the ability to live-migrate a disk
> > > > > > > from
> > > > > > > one shared
> > > > > > >     > storage to another. The functionality built into
> > > > > > > virsh
> > > > > > > live-
> > > > > > > migrates
> > > > > > > the
> > > > > > >     > volume ***to the exact same location on the new
> > > > > > > host***,
> > > > > > > so
> > > > > > > obviously is
> > > > > > >     > useless for migrating the disk to a new location on
> > > > > > > shared
> > > > > > > storage. I
> > > > > > >     > looked everywhere for the ability of KVM to live
> > > > > > > migrate
> > > > > > > a
> > > > > > > disk from
> > > > > > > point
> > > > > > >     > A to point B all by itself, and found no such thing.
> > > > > > > libvirt/qemu
> > > > > > > has the
> > > > > > >     > raw capabilities needed to do this, but it is not
> > > > > > > currently
> > > > > > > exposed
> > > > > > > as a
> > > > > > >     > single API via the qemu console or virsh. It can be
> > > > > > > emulated
> > > > > > > via
> > > > > > > scripting
> > > > > > >     > however:
> > > > > > >     >
> > > > > > >     > 1. Pause virtual machine
> > > > > > >     > 2. Do qcow2 snapshot.
> > > > > > >     > 3. Detach base disk, attach qcow2 snapshot
> > > > > > >     > 4. unpause virtual machine
> > > > > > >     > 5. copy qcow2 base file to new location
> > > > > > >     > 6. pause virtual machine
> > > > > > >     > 7. detach snapshot
> > > > > > >     > 8. unsnapshot qcow2 snapshot at its new location.
> > > > > > >     > 9. attach new base at new location.
> > > > > > >     > 10. unpause virtual machine.
> > > > > > >     >
> > > > > > >     > Thing is, if that entire process is not built into
> > > > > > > the
> > > > > > > underlying
> > > > > > >     > kvm/qemu/libvirt infrastructure as tested
> > > > > > > functionality
> > > > > > > with
> > > > > > > a
> > > > > > > defined API,
> > > > > > >     > there's no guarantee that it will work seamlessly and
> > > > > > > will
> > > > > > > continue
> > > > > > > working
> > > > > > >     > with the next release of the underlying
> > > > > > > infrastructure.
> > > > > > > This
> > > > > > > is using
> > > > > > >     > multiple different tools to manipulate the qcow2 file
> > > > > > > and
> > > > > > > attach/detach
> > > > > > >     > base disks to the running (but paused) kvm domain,
> > > > > > > and
> > > > > > > would
> > > > > > > have to
> > > > > > > be
> > > > > > >     > tested against all variations of those tools on all
> > > > > > > supported
> > > > > > > Cloudstack
> > > > > > >     > KVM host platforms. The test matrix looks pretty
> > > > > > > grim.
> > > > > > >     >
> > > > > > >     > By contrast, the migrate-with-local-storage process
> > > > > > > is
> > > > > > > built
> > > > > > > into
> > > > > > > virsh
> > > > > > >     > and is tested by the distribution vendor and the set
> > > > > > > of
> > > > > > > tools
> > > > > > > provided with
> > > > > > >     > the distribution is guaranteed to work with the virsh
> > > > > > > /
> > > > > > > libvirt/ qemu
> > > > > > >     > distributed by the distribution vendor. That makes
> > > > > > > the
> > > > > > > test
> > > > > > > matrix
> > > > > > > for
> > > > > > >     > move-with-local-storage look a lot simpler -- "is
> > > > > > > this
> > > > > > > functionality
> > > > > > >     > supported by that version of virsh on that
> > > > > > > distribution?
> > > > > > > Yes?
> > > > > > > Enable
> > > > > > > it.
> > > > > > >     > No? Don't enable it."
> > > > > > >     >
> > > > > > >     > I'd love to have live migration of disks on shared
> > > > > > > storage
> > > > > > > with
> > > > > > > Cloudstack
> > > > > > >     > KVM, but not at the expense of reliability. Shutting
> > > > > > > down
> > > > > > > a
> > > > > > > virtual
> > > > > > > machine
> > > > > > >     > in order to migrate one of its disks from one shared
> > > > > > > datastore to
> > > > > > > another
> > > > > > >     > is not ideal, but at least it's guaranteed reliable.
> > > > > > >     >
> > > > > > >     >
> > > > > > >     > > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
> > > > > > >     > rafaelweingartner@gmail.com> wrote:
> > > > > > >     > >
> > > > > > >     > > Hey Marc,
> > > > > > >     > > It is very interesting that you are going to pick
> > > > > > > this
> > > > > > > up
> > > > > > > for KVM.
> > > > > > > I am
> > > > > > >     > > working in a related issue for XenServer [1].
> > > > > > >     > > If you can confirm that KVM is able to live migrate
> > > > > > > local
> > > > > > > volumes
> > > > > > > to
> > > > > > >     > other
> > > > > > >     > > local storage or shared storage I could make the
> > > > > > > feature I
> > > > > > > am
> > > > > > > working on
> > > > > > >     > > available to KVM as well.
> > > > > > >     > >
> > > > > > >     > >
> > > > > > >     > > [1]
> > > > > > > https://issues.apache.org/jira/browse/CLOUDSTACK-10240
> > > > > > >     > >
> > > > > > >     > > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle
> > > > > > > Brothier
> > > > > > > <
> > > > > > >     > marco@exoscale.ch>
> > > > > > >     > > wrote:
> > > > > > >     > >
> > > > > > >     > >> There's a PR waiting to be fixed about live
> > > > > > > migration
> > > > > > > with
> > > > > > > local
> > > > > > > volume
> > > > > > >     > for
> > > > > > >     > >> KVM. So it will come at some point. I'm the one
> > > > > > > who
> > > > > > > made
> > > > > > > this PR
> > > > > > > but I'm
> > > > > > >     > >> not using the upstream release so it's hard for me
> > > > > > > to
> > > > > > > debug the
> > > > > > > problem.
> > > > > > >     > >> You can add yourself to the PR to get notify when
> > > > > > > things
> > > > > > > are
> > > > > > > moving on
> > > > > > >     > it.
> > > > > > >     > >>
> > > > > > >     > >> https://github.com/apache/cloudstack/pull/1709
> > > > > > >     > >>
> > > > > > >     > >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <
> > > > > > > eric.lee.green@gmail.com>
> > > > > > >     > >> wrote:
> > > > > > >     > >>
> > > > > > >     > >>> Theoretically on Centos 7 as the host KVM OS it
> > > > > > > could
> > > > > > > be
> > > > > > > done
> > > > > > > with a
> > > > > > >     > >>> couple of pauses and the snapshotting mechanism
> > > > > > > built
> > > > > > > into
> > > > > > > qcow2, but
> > > > > > >     > >> there
> > > > > > >     > >>> is no simple way to do it directly via virsh, the
> > > > > > > libvirtd/qemu
> > > > > > > control
> > > > > > >     > >>> program that is used to manage virtualization.
> > > > > > > It's
> > > > > > > not
> > > > > > > as with
> > > > > > >     > issuing a
> > > > > > >     > >>> simple vmotion 'migrate volume' call in Vmware.
> > > > > > >     > >>>
> > > > > > >     > >>> I scripted out how it would work without that
> > > > > > > direct
> > > > > > > support in
> > > > > > >     > >>> libvirt/virsh and after looking at all the points
> > > > > > > where
> > > > > > > things
> > > > > > > could go
> > > > > > >     > >>> wrong, honestly, I think we need to wait until
> > > > > > > there
> > > > > > > is
> > > > > > > support
> > > > > > > in
> > > > > > >     > >>> libvirt/virsh to do this. virsh clearly has the
> > > > > > > capability
> > > > > > > internally
> > > > > > >     > to
> > > > > > >     > >> do
> > > > > > >     > >>> live migration of storage, since it does this for
> > > > > > > live
> > > > > > > domain
> > > > > > > migration
> > > > > > >     > >> of
> > > > > > >     > >>> local storage between machines when migrating KVM
> > > > > > > domains
> > > > > > > from
> > > > > > > one host
> > > > > > >     > >> to
> > > > > > >     > >>> another, but that capability is not currently
> > > > > > > exposed
> > > > > > > in
> > > > > > > a way
> > > > > > >     > Cloudstack
> > > > > > >     > >>> could use, at least not on Centos 7.
> > > > > > >     > >>>
> > > > > > >     > >>>
> > > > > > >     > >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <
> > > > > > > ppisz@pulab.pl>
> > > > > > > wrote:
> > > > > > >     > >>>>
> > > > > > >     > >>>> Hello,
> > > > > > >     > >>>>
> > > > > > >     > >>>> Is there a chance that one day it will be
> > > > > > > possible
> > > > > > > to
> > > > > > > migrate
> > > > > > > volume
> > > > > > >     > >>> (root disk) of a live VM in KVM between storage
> > > > > > > pools
> > > > > > > (in
> > > > > > > CloudStack)?
> > > > > > >     > >>>> Like a storage vMotion in Vmware.
> > > > > > >     > >>>>
> > > > > > >     > >>>> Best regards,
> > > > > > >     > >>>> Piotr
> > > > > > >     > >>>>
> > > > > > >     > >>>
> > > > > > >     > >>>
> > > > > > >     > >>
> > > > > > >     > >
> > > > > > >     > >
> > > > > > >     > >
> > > > > > >     > > --
> > > > > > >     > > Rafael Weingärtner
> > > > > > >     >
> > > > > > >     >
> > > > > > >
> > > > > > >
> > > > > > >     --
> > > > > > >
> > > > > > >     Andrija Panić
> > > > > > >
> > > > > > >
> > > > > > >
> > > > > --
> > > > > --
> > > > > Heinlein Support GmbH
> > > > > Schwedter Str. 8/9b, 10119 Berlin
> > > > >
> > > > > https://www.heinlein-support.de
> > > > >
> > > > > Tel: 030 / 40 50 51 - 62
> > > > > Fax: 030 / 40 50 51 - 19
> > > > >
> > > > > Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> > > > > Geschäftsführer: Peer Heinlein - Sitz: Berlin
> > > > >
> > > --
> > > --
> > > Heinlein Support GmbH
> > > Schwedter Str. 8/9b, 10119 Berlin
> > >
> > > https://www.heinlein-support.de
> > >
> > > Tel: 030 / 40 50 51 - 62
> > > Fax: 030 / 40 50 51 - 19
> > >
> > > Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> > > Geschäftsführer: Peer Heinlein - Sitz: Berlin
> > >
> >
> >
> --
> --
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: 030 / 40 50 51 - 62
> Fax: 030 / 40 50 51 - 19
>
> Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
>


-- 

Andrija Panić

Re: kvm live volume migration

Posted by Melanie Desaive <m....@heinlein-support.de>.
Hi Andrija,

thank you so much for your support. 

It worked perfectly.

I used the oVirt 4.3 Repository:
	yum install 
https://resources.ovirt.org/pub/yum-repo/ovirt-release43.rpm

And limited the repo to the libvirt and qemu packages:
	includepkgs=qemu-* libvirt-*
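
A quick sanity check that the newer builds really got picked up might look
like this (a sketch only; note that running guests keep using the old qemu
binary until they are stopped/started or live-migrated):

    # after pulling in the newer packages, e.g. yum upgrade 'qemu-*' 'libvirt-*'
    rpm -qa 'qemu-kvm*' 'libvirt-daemon*' | sort
    virsh version
    systemctl restart libvirtd    # does not touch running guests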

I had some issues with the configuration of my 2nd Storage NFS and had
to enable rpc.statd on the nfs server, otherwise I was not able to
mount ISO images from 2nd storage. With the packages from CentOS-Base
this was no problem.
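
On a CentOS 7 NFS server that would be roughly the following (unit name as
shipped with nfs-utils on CentOS 7; verify the exports from a KVM host
afterwards):

    systemctl enable --now rpc-statd
    showmount -e <nfs-server>     # run from a KVM host to check the exports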

After changing to the oVirt packages, I was able to online migrate a
volume between two storage repositories using the "virsh --copy-
storage-all --xml" mechanism. 

Afterwards I updated the CloudStack database, setting pool_id in volumes
to the new storage for the migrated volume, and everything looked
perfect.
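
The update described above would be something like this (a sketch only - the
IDs are placeholders, take a dump of the cloud DB first, and make sure no
operation is running against the volume while you change it):

    # find the id of the destination primary storage
    mysql -u cloud -p cloud -e "SELECT id, name FROM storage_pool WHERE name = '<NEW_POOL_NAME>';"
    # repoint the migrated volume at it
    mysql -u cloud -p cloud -e "UPDATE volumes SET pool_id = <NEW_POOL_ID> WHERE id = <VOLUME_ID>;"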

I am still unsure how far to use this "hack" in production, but I feel
reassured that I now have the option to use this feature for
urgent cases where no VM downtime is possible for a storage migration.

If there is interest I can offer to translate my notes to English to
provide them to others with the same need.

And @Andrija, you mentioned a "detailed step-by-step .docx guide"... I
would really be interested; maybe there is further information I missed.
I would really like you to forward it to me.

Greetings,

Melanie

Am Mittwoch, den 25.09.2019, 13:51 +0200 schrieb Andrija Panic:
> Hi Melanie,
> 
> so Ubuntu 14.04+  - i.e. 16.04 working fine, 18.04 also being
> supported in
> later releases...
> CentOS7 is THE recommended OS (or more recent Ubuntu) - but yes, RHEL
> makes
> small surprises sometimes (until CentOS 7.1, if not mistaken, they
> also
> didn't provide RBD/Ceph support, only in paid RHEV - won't comment on
> this
> lousy behaviour...)
> 
> Afaik, for KVM specifically, no polling of volumes' location, and you
> would
> need to update DB (pay attention also to usage records if that is of
> your
> interest)
> You'll need to test this kind of migration and DB schema thoroughly
> (including changing disk offerings and such in DB, in case your
> source/destination storage solution have different Storage TAGs in
> ACS)
> 
> I'm trying to stay away from any clustered file systems, 'cause when
> they
> break, they break bad...so can't comment there.
> You are using those as preSetup in KVM/CloudStack I guess - if it
> works,
> then all good.
> But...move on I suggest, if possible :)
> 
> Best
> Andrija
> 
> On Wed, 25 Sep 2019 at 13:16, Melanie Desaive <
> m.desaive@heinlein-support.de>
> wrote:
> 
> > Hi Andrija,
> > 
> > thank you so much for your detailled explanation! Looks like the my
> > problem can be solved. :)
> > 
> > To summarize the information you provided:
> > 
> > As long as CloudStack does not support volume live migration I
> > could be
> > using
> > 
> > virsh with --copy-storage --xml.
> > 
> > BUT: CentOS7 is lacking necessary features! Bad luck. I started out
> > with CentOS7 as Disto.
> > 
> > You suggest, that it could be worth trying the qemu/libvirt
> > packages
> > from the oVirt repository. I will look into this now.
> > 
> > But if that gets complicated: Cloudstack documentation lists
> > CentOS7
> > and Ubuntu 14.04 as supported Distros. Are there other not
> > officially
> > supported Distros/Version I could be using? I wanted to avoid the
> > quite
> > outdated Ubuntu 14.04 and did for that reason decide towards
> > CentOS7.
> > 
> > And another general question: How is CloudStack getting along with
> > the
> > Volumes of its VMs changing the storage repository without beeing
> > informed about it. Does it get this information through polling, or
> > do
> > I have to manipulate the database?
> > 
> > And to make things clearer: At the moment I am using storage
> > attached
> > through Gibrechannel using clustered LVM logic. Could also be
> > changing
> > to GFS2 on cLVM. Never heard anyone mentioning such a setup by now.
> > Am
> > I the only one running KVM on a proprietary storage system over
> > Fibrechannel, are there limitation/problems to be expected from
> > such a
> > setup?
> > 
> > Greetings,
> > 
> > Melanie
> > 
> > 
> > Am Mittwoch, den 25.09.2019, 11:46 +0200 schrieb Andrija Panic:
> > > So, let me explain.
> > > 
> > > Doing "online storage migration" aka live storage migration is
> > > working for
> > > CEPH/NFS --> SolidFire, starting from 4.11+
> > > Internally it is done in the same way as "virsh with --copy-
> > > storage-
> > > all
> > > --xml" in short
> > > 
> > > Longer explanation:
> > > Steps:
> > > You create new volumes on the destination storage (SolidFire in
> > > this
> > > case),
> > > set QoS etc - simply prepare the destination volumes (empty
> > > volumes
> > > atm).
> > > On source host/VM, dump VM XML, edit XML, change disk section to
> > > point to
> > > new volume path, protocol, etc - and also the IP address for the
> > > VNC
> > > (cloudstack requirement), save XMLT
> > > Then you do "virsh with --copy-storage-all --xml. myEditedVM.xml
> > > ..."
> > > stuff
> > > that does the job.
> > > Then NBD driver will be used to copy blocks from the source
> > > volumes
> > > to the
> > > destination volumes while that virsh command is working...
> > > (here's my
> > > demo,
> > > in details..
> > > 
> > https://www.youtube.com/watch?v=Eo8BuHBnVgg&list=PLEr0fbgkyLKyiPnNzPz7XDjxnmQNxjJWT&index=5&t=2s
> > > )
> > > 
> > > This is yet to be extended/coded to support NFS-->NFS or CEPH
> > > -->CEPH
> > > or
> > > CEPH/NFS-->CEPH/NFS... should not be that much work, the logic is
> > > there
> > > (bit part of the code)
> > > Also, starting from 4.12, you can actually  (I believe using
> > > identical
> > > logic) migrate only ROOT volume that are on the LOCAL storage
> > > (local
> > > disks)
> > > to another host/local storage - but DATA disks are not supported.
> > > 
> > > Now...imagine the feature is there - if using CentOS7, our
> > > friends at
> > > RedHat have removed support for actually using live storage
> > > migration
> > > (unless you are paying for RHEV - but it does work fine on
> > > CentOS6,
> > > and
> > > Ubuntu 14.04+
> > > 
> > > I recall "we" had to use qemu/libvirt from the "oVirt" repo which
> > > DOES
> > > (DID) support storage live migration (normal EV packages from the
> > > Special
> > > Interest Group (2.12 tested) - did NOT include this...)
> > > 
> > > I can send you step-by-step .docx guide for manually mimicking
> > > what
> > > is done
> > > (in SolidFire, but identical logic for other storages) - but not
> > > sure
> > > if
> > > that still helps you...
> > > 
> > > 
> > > Andrija
> > > 
> > > On Wed, 25 Sep 2019 at 10:51, Melanie Desaive <
> > > m.desaive@heinlein-support.de>
> > > wrote:
> > > 
> > > > Hi all,
> > > > 
> > > > I am currently doing my first steps with KVM as hypervisor for
> > > > CloudStack. I was shocked to realize that currently live volume
> > > > migration between different shared storages is not supported
> > > > with
> > > > KVM.
> > > > This is a feature I use intensively with XenServer.
> > > > 
> > > > How do you get along with this limitation? I do really expect
> > > > you
> > > > to
> > > > use some workarounds, or do you all only accept vm downtimes
> > > > for a
> > > > storage migration?
> > > > 
> > > > With my first investigation I found three techniques mentioned
> > > > and
> > > > would like to ask for suggestions which to investigate deeper:
> > > > 
> > > >  x Eric describes a technique using snapshosts and pauses to do
> > > > a
> > > > live
> > > > storage migration in this mailing list tread.
> > > >  x Dag suggests using virsh with --copy-storage-all --xml.
> > > >  x I found articles about using virsh blockcopy for storage
> > > > live
> > > > migration.
> > > > 
> > > > Greetings,
> > > > 
> > > > Melanie
> > > > 
> > > > Am Freitag, den 02.02.2018, 15:55 +0100 schrieb Andrija Panic:
> > > > > @Dag, you might want to check with Mike Tutkowski, how he
> > > > > implemented
> > > > > this
> > > > > for the "online storage migration" from other storages (CEPH
> > > > > and
> > > > > NFS
> > > > > implemented so far as sources) to SolidFire.
> > > > > 
> > > > > We are doing exactly the same demo/manual way (this is what
> > > > > Mike
> > > > > has
> > > > > sent
> > > > > me back in the days), so perhaps you want to see how to
> > > > > translate
> > > > > this into
> > > > > general things (so ANY to ANY storage migration) inside
> > > > > CloudStack.
> > > > > 
> > > > > Cheers
> > > > > 
> > > > > On 2 February 2018 at 10:28, Dag Sonstebo <
> > > > > Dag.Sonstebo@shapeblue.com
> > > > > wrote:
> > > > > 
> > > > > > All
> > > > > > 
> > > > > > I am doing a bit of R&D around this for a client at the
> > > > > > moment.
> > > > > > I
> > > > > > am
> > > > > > semi-successful in getting live migrations to different
> > > > > > storage
> > > > > > pools to
> > > > > > work. The method I’m using is as follows – this does not
> > > > > > take
> > > > > > into
> > > > > > account
> > > > > > any efficiency optimisation around the disk transfer (which
> > > > > > is
> > > > > > next
> > > > > > on my
> > > > > > list). The below should answer your question Eric about
> > > > > > moving
> > > > > > to a
> > > > > > different location – and I am also working with your steps
> > > > > > to
> > > > > > see
> > > > > > where I
> > > > > > can improve the following. Keep in mind all of this is
> > > > > > external
> > > > > > to
> > > > > > CloudStack – although CloudStack picks up the destination
> > > > > > KVM
> > > > > > host
> > > > > > automatically it does not update the volume tables etc.,
> > > > > > neither
> > > > > > does it do
> > > > > > any housekeeping.
> > > > > > 
> > > > > > 1) Ensure the same network bridges are up on source and
> > > > > > destination
> > > > > > –
> > > > > > these are found with:
> > > > > > 
> > > > > > [root@kvm1 ~]# virsh dumpxml 9 | grep source
> > > > > >       <source file='/mnt/00e88a7b-985f-3be8-b717-
> > > > > > 0a59d8197640/d0ab5dd5-
> > > > > > e3dd-47ac-a326-5ce3d47d194d'/>
> > > > > >       <source bridge='breth1-725'/>
> > > > > >       <source path='/dev/pts/3'/>
> > > > > >       <source path='/dev/pts/3'/>
> > > > > > 
> > > > > > So from this make sure breth1-725 is up on the destionation
> > > > > > host
> > > > > > (do it
> > > > > > the hard way or cheat and spin up a VM from same account
> > > > > > and
> > > > > > network on
> > > > > > that host)
> > > > > > 
> > > > > > 2) Find size of source disk and create stub disk in
> > > > > > destination
> > > > > > (this part
> > > > > > can be made more efficient to speed up disk transfer – by
> > > > > > doing
> > > > > > similar
> > > > > > things to what Eric is doing):
> > > > > > 
> > > > > > [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-
> > > > > > 0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > > image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-
> > > > > > e3dd-
> > > > > > 47ac-a326-5ce3d47d194d
> > > > > > file format: qcow2
> > > > > > virtual size: 8.0G (8589934592 bytes)
> > > > > > disk size: 32M
> > > > > > cluster_size: 65536
> > > > > > backing file: /mnt/00e88a7b-985f-3be8-b717-
> > > > > > 0a59d8197640/3caaf4c9-
> > > > > > eaec-
> > > > > > 11e7-800b-06b4a401075c
> > > > > > 
> > > > > > ######################
> > > > > > 
> > > > > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img
> > > > > > create
> > > > > > -f
> > > > > > qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> > > > > > Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d',
> > > > > > fmt=qcow2
> > > > > > size=8589934592 encryption=off cluster_size=65536
> > > > > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img
> > > > > > info
> > > > > > d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > > image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > > file format: qcow2
> > > > > > virtual size: 8.0G (8589934592 bytes)
> > > > > > disk size: 448K
> > > > > > cluster_size: 65536
> > > > > > 
> > > > > > 3) Rewrite the new VM XML file for the destination with:
> > > > > > a) New disk location, in this case this is just a new path
> > > > > > (Eric –
> > > > > > this
> > > > > > answers your question)
> > > > > > b) Different IP addresses for VNC – in this case 10.0.0.1
> > > > > > to
> > > > > > 10.0.0.2
> > > > > > and carry out migration.
> > > > > > 
> > > > > > [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-
> > > > > > 3be8-
> > > > > > b717-
> > > > > > 0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed
> > > > > > -e
> > > > > > 's/
> > > > > > 10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
> > > > > > 
> > > > > > [root@kvm1 ~]# virsh migrate --live --persistent --copy-
> > > > > > storage-all
> > > > > > --xml
> > > > > > /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --
> > > > > > verbose
> > > > > > --abort-on-error
> > > > > > Migration: [ 25 %]
> > > > > > 
> > > > > > 4) Once complete delete the source file. This can be done
> > > > > > with
> > > > > > extra
> > > > > > switches on the virsh migrate command if need be.
> > > > > > = = =
> > > > > > 
> > > > > > In the simplest tests this works – destination VM remains
> > > > > > online
> > > > > > and has
> > > > > > storage in new location – but it’s not persistent –
> > > > > > sometimes
> > > > > > the
> > > > > > destination VM ends up in a paused state, and I’m working
> > > > > > on
> > > > > > how to
> > > > > > get
> > > > > > around this. I also noted virsh migrate has a  migrate-
> > > > > > setmaxdowntime which
> > > > > > I think can be useful here.
> > > > > > 
> > > > > > Regards,
> > > > > > Dag Sonstebo
> > > > > > Cloud Architect
> > > > > > ShapeBlue
> > > > > > 
> > > > > > On 01/02/2018, 20:30, "Andrija Panic" <
> > > > > > andrija.panic@gmail.com>
> > > > > > wrote:
> > > > > > 
> > > > > >     Actually,  we have this feature (we call this
> > > > > > internally
> > > > > >     online-storage-migration) to migrate volume from
> > > > > > CEPH/NFS
> > > > > > to
> > > > > > SolidFire
> > > > > >     (thanks to Mike Tutkowski)
> > > > > > 
> > > > > >     There is libvirt mechanism, where basically you start
> > > > > > another
> > > > > > PAUSED
> > > > > > VM on
> > > > > >     another host (same name and same XML file, except the
> > > > > > storage
> > > > > > volumes
> > > > > > are
> > > > > >     pointing to new storage, different paths, etc and maybe
> > > > > > VNC
> > > > > > listening
> > > > > >     address needs to be changed or so) and then you issue
> > > > > > on
> > > > > > original
> > > > > > host/VM
> > > > > >     the live migrate command with few parameters... the
> > > > > > libvirt
> > > > > > will
> > > > > >     transaprently handle the copy data process from Soruce
> > > > > > to
> > > > > > New
> > > > > > volumes,
> > > > > > and
> > > > > >     after migration the VM will be alive (with new XML
> > > > > > since
> > > > > > have
> > > > > > new
> > > > > > volumes)
> > > > > >     on new host, while the original VM on original host is
> > > > > > destroyed....
> > > > > > 
> > > > > >     (I can send you manual for this, that is realted to SF,
> > > > > > but
> > > > > > idea is the
> > > > > >     same and you can exercies this on i.e. 2 NFS volumes on
> > > > > > 2
> > > > > > different
> > > > > >     storages)
> > > > > > 
> > > > > >     This mechanism doesn't exist in ACS in general (AFAIK),
> > > > > > except
> > > > > > for when
> > > > > >     migrating to SolidFire.
> > > > > > 
> > > > > >     Perhaps community/DEV can help extend Mike's code to do
> > > > > > same
> > > > > > work on
> > > > > >     different storage types...
> > > > > > 
> > > > > >     Cheers
> > > > > > 
> > > > > > 
> > > > > > Dag.Sonstebo@shapeblue.com
> > > > > > www.shapeblue.com
> > > > > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > > > > @shapeblue
> > > > > > 
> > > > > > 
> > > > > > 
> > > > > > On 19 January 2018 at 18:45, Eric Green <
> > > > > > eric.lee.green@gmail.com>
> > > > > > wrote:
> > > > > > 
> > > > > >     > KVM is able to live migrate entire virtual machines
> > > > > > complete
> > > > > > with
> > > > > > local
> > > > > >     > volumes (see 'man virsh') but does require nbd
> > > > > > (Network
> > > > > > Block
> > > > > > Device) to be
> > > > > >     > installed on the destination host to do so. It may
> > > > > > need
> > > > > > installation
> > > > > > of
> > > > > >     > later libvirt / qemu packages from OpenStack
> > > > > > repositories
> > > > > > on
> > > > > > Centos
> > > > > > 6, I'm
> > > > > >     > not sure, but just works on Centos 7. In any event, I
> > > > > > have
> > > > > > used this
> > > > > >     > functionality to move virtual machines between
> > > > > > virtualization
> > > > > > hosts
> > > > > > on my
> > > > > >     > home network. It works.
> > > > > >     >
> > > > > >     > What is missing is the ability to live-migrate a disk
> > > > > > from
> > > > > > one shared
> > > > > >     > storage to another. The functionality built into
> > > > > > virsh
> > > > > > live-
> > > > > > migrates
> > > > > > the
> > > > > >     > volume ***to the exact same location on the new
> > > > > > host***,
> > > > > > so
> > > > > > obviously is
> > > > > >     > useless for migrating the disk to a new location on
> > > > > > shared
> > > > > > storage. I
> > > > > >     > looked everywhere for the ability of KVM to live
> > > > > > migrate
> > > > > > a
> > > > > > disk from
> > > > > > point
> > > > > >     > A to point B all by itself, and found no such thing.
> > > > > > libvirt/qemu
> > > > > > has the
> > > > > >     > raw capabilities needed to do this, but it is not
> > > > > > currently
> > > > > > exposed
> > > > > > as a
> > > > > >     > single API via the qemu console or virsh. It can be
> > > > > > emulated
> > > > > > via
> > > > > > scripting
> > > > > >     > however:
> > > > > >     >
> > > > > >     > 1. Pause virtual machine
> > > > > >     > 2. Do qcow2 snapshot.
> > > > > >     > 3. Detach base disk, attach qcow2 snapshot
> > > > > >     > 4. unpause virtual machine
> > > > > >     > 5. copy qcow2 base file to new location
> > > > > >     > 6. pause virtual machine
> > > > > >     > 7. detach snapshot
> > > > > >     > 8. unsnapshot qcow2 snapshot at its new location.
> > > > > >     > 9. attach new base at new location.
> > > > > >     > 10. unpause virtual machine.
> > > > > >     >
> > > > > >     > Thing is, if that entire process is not built into
> > > > > > the
> > > > > > underlying
> > > > > >     > kvm/qemu/libvirt infrastructure as tested
> > > > > > functionality
> > > > > > with
> > > > > > a
> > > > > > defined API,
> > > > > >     > there's no guarantee that it will work seamlessly and
> > > > > > will
> > > > > > continue
> > > > > > working
> > > > > >     > with the next release of the underlying
> > > > > > infrastructure.
> > > > > > This
> > > > > > is using
> > > > > >     > multiple different tools to manipulate the qcow2 file
> > > > > > and
> > > > > > attach/detach
> > > > > >     > base disks to the running (but paused) kvm domain,
> > > > > > and
> > > > > > would
> > > > > > have to
> > > > > > be
> > > > > >     > tested against all variations of those tools on all
> > > > > > supported
> > > > > > Cloudstack
> > > > > >     > KVM host platforms. The test matrix looks pretty
> > > > > > grim.
> > > > > >     >
> > > > > >     > By contrast, the migrate-with-local-storage process
> > > > > > is
> > > > > > built
> > > > > > into
> > > > > > virsh
> > > > > >     > and is tested by the distribution vendor and the set
> > > > > > of
> > > > > > tools
> > > > > > provided with
> > > > > >     > the distribution is guaranteed to work with the virsh
> > > > > > /
> > > > > > libvirt/ qemu
> > > > > >     > distributed by the distribution vendor. That makes
> > > > > > the
> > > > > > test
> > > > > > matrix
> > > > > > for
> > > > > >     > move-with-local-storage look a lot simpler -- "is
> > > > > > this
> > > > > > functionality
> > > > > >     > supported by that version of virsh on that
> > > > > > distribution?
> > > > > > Yes?
> > > > > > Enable
> > > > > > it.
> > > > > >     > No? Don't enable it."
> > > > > >     >
> > > > > >     > I'd love to have live migration of disks on shared
> > > > > > storage
> > > > > > with
> > > > > > Cloudstack
> > > > > >     > KVM, but not at the expense of reliability. Shutting
> > > > > > down
> > > > > > a
> > > > > > virtual
> > > > > > machine
> > > > > >     > in order to migrate one of its disks from one shared
> > > > > > datastore to
> > > > > > another
> > > > > >     > is not ideal, but at least it's guaranteed reliable.
> > > > > >     >
> > > > > >     >
> > > > > >     > > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
> > > > > >     > rafaelweingartner@gmail.com> wrote:
> > > > > >     > >
> > > > > >     > > Hey Marc,
> > > > > >     > > It is very interesting that you are going to pick
> > > > > > this
> > > > > > up
> > > > > > for KVM.
> > > > > > I am
> > > > > >     > > working in a related issue for XenServer [1].
> > > > > >     > > If you can confirm that KVM is able to live migrate
> > > > > > local
> > > > > > volumes
> > > > > > to
> > > > > >     > other
> > > > > >     > > local storage or shared storage I could make the
> > > > > > feature I
> > > > > > am
> > > > > > working on
> > > > > >     > > available to KVM as well.
> > > > > >     > >
> > > > > >     > >
> > > > > >     > > [1]
> > > > > > https://issues.apache.org/jira/browse/CLOUDSTACK-10240
> > > > > >     > >
> > > > > >     > > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle
> > > > > > Brothier
> > > > > > <
> > > > > >     > marco@exoscale.ch>
> > > > > >     > > wrote:
> > > > > >     > >
> > > > > >     > >> There's a PR waiting to be fixed about live
> > > > > > migration
> > > > > > with
> > > > > > local
> > > > > > volume
> > > > > >     > for
> > > > > >     > >> KVM. So it will come at some point. I'm the one
> > > > > > who
> > > > > > made
> > > > > > this PR
> > > > > > but I'm
> > > > > >     > >> not using the upstream release so it's hard for me
> > > > > > to
> > > > > > debug the
> > > > > > problem.
> > > > > >     > >> You can add yourself to the PR to get notify when
> > > > > > things
> > > > > > are
> > > > > > moving on
> > > > > >     > it.
> > > > > >     > >>
> > > > > >     > >> https://github.com/apache/cloudstack/pull/1709
> > > > > >     > >>
> > > > > >     > >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <
> > > > > > eric.lee.green@gmail.com>
> > > > > >     > >> wrote:
> > > > > >     > >>
> > > > > >     > >>> Theoretically on Centos 7 as the host KVM OS it
> > > > > > could
> > > > > > be
> > > > > > done
> > > > > > with a
> > > > > >     > >>> couple of pauses and the snapshotting mechanism
> > > > > > built
> > > > > > into
> > > > > > qcow2, but
> > > > > >     > >> there
> > > > > >     > >>> is no simple way to do it directly via virsh, the
> > > > > > libvirtd/qemu
> > > > > > control
> > > > > >     > >>> program that is used to manage virtualization.
> > > > > > It's
> > > > > > not
> > > > > > as with
> > > > > >     > issuing a
> > > > > >     > >>> simple vmotion 'migrate volume' call in Vmware.
> > > > > >     > >>>
> > > > > >     > >>> I scripted out how it would work without that
> > > > > > direct
> > > > > > support in
> > > > > >     > >>> libvirt/virsh and after looking at all the points
> > > > > > where
> > > > > > things
> > > > > > could go
> > > > > >     > >>> wrong, honestly, I think we need to wait until
> > > > > > there
> > > > > > is
> > > > > > support
> > > > > > in
> > > > > >     > >>> libvirt/virsh to do this. virsh clearly has the
> > > > > > capability
> > > > > > internally
> > > > > >     > to
> > > > > >     > >> do
> > > > > >     > >>> live migration of storage, since it does this for
> > > > > > live
> > > > > > domain
> > > > > > migration
> > > > > >     > >> of
> > > > > >     > >>> local storage between machines when migrating KVM
> > > > > > domains
> > > > > > from
> > > > > > one host
> > > > > >     > >> to
> > > > > >     > >>> another, but that capability is not currently
> > > > > > exposed
> > > > > > in
> > > > > > a way
> > > > > >     > Cloudstack
> > > > > >     > >>> could use, at least not on Centos 7.
> > > > > >     > >>>
> > > > > >     > >>>
> > > > > >     > >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <
> > > > > > ppisz@pulab.pl>
> > > > > > wrote:
> > > > > >     > >>>>
> > > > > >     > >>>> Hello,
> > > > > >     > >>>>
> > > > > >     > >>>> Is there a chance that one day it will be
> > > > > > possible
> > > > > > to
> > > > > > migrate
> > > > > > volume
> > > > > >     > >>> (root disk) of a live VM in KVM between storage
> > > > > > pools
> > > > > > (in
> > > > > > CloudStack)?
> > > > > >     > >>>> Like a storage vMotion in Vmware.
> > > > > >     > >>>>
> > > > > >     > >>>> Best regards,
> > > > > >     > >>>> Piotr
> > > > > >     > >>>>
> > > > > >     > >>>
> > > > > >     > >>>
> > > > > >     > >>
> > > > > >     > >
> > > > > >     > >
> > > > > >     > >
> > > > > >     > > --
> > > > > >     > > Rafael Weingärtner
> > > > > >     >
> > > > > >     >
> > > > > > 
> > > > > > 
> > > > > >     --
> > > > > > 
> > > > > >     Andrija Panić
> > > > > > 
> > > > > > 
> > > > > > 
> > > > --
> > > > --
> > > > Heinlein Support GmbH
> > > > Schwedter Str. 8/9b, 10119 Berlin
> > > > 
> > > > https://www.heinlein-support.de
> > > > 
> > > > Tel: 030 / 40 50 51 - 62
> > > > Fax: 030 / 40 50 51 - 19
> > > > 
> > > > Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> > > > Geschäftsführer: Peer Heinlein - Sitz: Berlin
> > > > 
> > --
> > --
> > Heinlein Support GmbH
> > Schwedter Str. 8/9b, 10119 Berlin
> > 
> > https://www.heinlein-support.de
> > 
> > Tel: 030 / 40 50 51 - 62
> > Fax: 030 / 40 50 51 - 19
> > 
> > Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> > Geschäftsführer: Peer Heinlein - Sitz: Berlin
> > 
> 
> 
-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin

Re: kvm live volume migration

Posted by Andrija Panic <an...@gmail.com>.
Hi Melanie,

so Ubuntu 14.04+ - i.e. 16.04 works fine, and 18.04 is also supported in
later releases...
CentOS7 is THE recommended OS (or a more recent Ubuntu) - but yes, RHEL
springs small surprises sometimes (until CentOS 7.1, if not mistaken, they
also didn't provide RBD/Ceph support, only in paid RHEV - won't comment on
this lousy behaviour...)

Afaik, for KVM specifically, there is no polling of volumes' locations, so you
would need to update the DB yourself (pay attention also to usage records if
that is of interest to you).
You'll need to test this kind of migration and the DB changes thoroughly
(including changing disk offerings and such in the DB, in case your
source/destination storage solutions have different storage tags in ACS).
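
As an illustration only, the kind of DB update meant here boils down to
something like the sketch below - it assumes the usual "cloud" database and
its "volumes" table with "pool_id"/"path" columns, so verify table and column
names against your own schema and ACS version, take a DB backup first, and do
it while the volume is not in use:

    # hypothetical example - IDs and paths are placeholders
    mysql -u cloud -p cloud -e "
      UPDATE volumes
      SET    pool_id = <new_pool_id>,
             path    = '<new_volume_path>'
      WHERE  uuid    = '<volume_uuid>';"

Usage records live in the separate cloud_usage database, so if billing
accuracy matters, double-check those as well.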

I'm trying to stay away from any clustered file systems, 'cause when they
break, they break badly... so I can't comment there.
You are using those as preSetup storage in KVM/CloudStack, I guess - if it
works, then all good.
But... I'd suggest moving away from them, if possible :)

Best
Andrija

On Wed, 25 Sep 2019 at 13:16, Melanie Desaive <m....@heinlein-support.de>
wrote:

> Hi Andrija,
>
> thank you so much for your detailed explanation! Looks like my
> problem can be solved. :)
>
> To summarize the information you provided:
>
> As long as CloudStack does not support volume live migration I could be
> using
>
> virsh with --copy-storage-all --xml.
>
> BUT: CentOS7 is lacking necessary features! Bad luck. I started out
> with CentOS7 as my distro.
>
> You suggest, that it could be worth trying the qemu/libvirt packages
> from the oVirt repository. I will look into this now.
>
> But if that gets complicated: Cloudstack documentation lists CentOS7
> and Ubuntu 14.04 as supported Distros. Are there other not officially
> supported Distros/Version I could be using? I wanted to avoid the quite
> outdated Ubuntu 14.04 and did for that reason decide towards CentOS7.
>
> And another general question: How is CloudStack getting along with the
> Volumes of its VMs changing the storage repository without beeing
> informed about it. Does it get this information through polling, or do
> I have to manipulate the database?
>
> And to make things clearer: At the moment I am using storage attached
> through Fibrechannel using clustered LVM logic. Could also be changing
> to GFS2 on cLVM. Never heard anyone mentioning such a setup by now. Am
> I the only one running KVM on a proprietary storage system over
> Fibrechannel, are there limitation/problems to be expected from such a
> setup?
>
> Greetings,
>
> Melanie
>
>
> Am Mittwoch, den 25.09.2019, 11:46 +0200 schrieb Andrija Panic:
> > So, let me explain.
> >
> > Doing "online storage migration" aka live storage migration is
> > working for
> > CEPH/NFS --> SolidFire, starting from 4.11+
> > Internally it is done in the same way as "virsh with --copy-storage-
> > all
> > --xml" in short
> >
> > Longer explanation:
> > Steps:
> > You create new volumes on the destination storage (SolidFire in this
> > case),
> > set QoS etc - simply prepare the destination volumes (empty volumes
> > atm).
> > On source host/VM, dump VM XML, edit XML, change disk section to
> > point to
> > new volume path, protocol, etc - and also the IP address for the VNC
> > (cloudstack requirement), save the XML
> > Then you do "virsh with --copy-storage-all --xml myEditedVM.xml ..."
> > stuff
> > that does the job.
> > Then NBD driver will be used to copy blocks from the source volumes
> > to the
> > destination volumes while that virsh command is working... (here's my
> > demo,
> > in details..
> >
> https://www.youtube.com/watch?v=Eo8BuHBnVgg&list=PLEr0fbgkyLKyiPnNzPz7XDjxnmQNxjJWT&index=5&t=2s
> > )
> >
> > This is yet to be extended/coded to support NFS-->NFS or CEPH-->CEPH
> > or
> > CEPH/NFS-->CEPH/NFS... should not be that much work, the logic is
> > there
> > (bit part of the code)
> > Also, starting from 4.12, you can actually  (I believe using
> > identical
> > logic) migrate only ROOT volume that are on the LOCAL storage (local
> > disks)
> > to another host/local storage - but DATA disks are not supported.
> >
> > Now...imagine the feature is there - if using CentOS7, our friends at
> > RedHat have removed support for actually using live storage migration
> > (unless you are paying for RHEV - but it does work fine on CentOS6,
> > and
> > Ubuntu 14.04+
> >
> > I recall "we" had to use qemu/libvirt from the "oVirt" repo which
> > DOES
> > (DID) support storage live migration (normal EV packages from the
> > Special
> > Interest Group (2.12 tested) - did NOT include this...)
> >
> > I can send you step-by-step .docx guide for manually mimicking what
> > is done
> > (in SolidFire, but identical logic for other storages) - but not sure
> > if
> > that still helps you...
> >
> >
> > Andrija
> >
> > On Wed, 25 Sep 2019 at 10:51, Melanie Desaive <
> > m.desaive@heinlein-support.de>
> > wrote:
> >
> > > Hi all,
> > >
> > > I am currently doing my first steps with KVM as hypervisor for
> > > CloudStack. I was shocked to realize that currently live volume
> > > migration between different shared storages is not supported with
> > > KVM.
> > > This is a feature I use intensively with XenServer.
> > >
> > > How do you get along with this limitation? I do really expect you
> > > to
> > > use some workarounds, or do you all only accept vm downtimes for a
> > > storage migration?
> > >
> > > With my first investigation I found three techniques mentioned and
> > > would like to ask for suggestions which to investigate deeper:
> > >
> > >  x Eric describes a technique using snapshots and pauses to do a
> > > live
> > > storage migration in this mailing list thread.
> > >  x Dag suggests using virsh with --copy-storage-all --xml.
> > >  x I found articles about using virsh blockcopy for storage live
> > > migration.
> > >
> > > Greetings,
> > >
> > > Melanie
> > >
> > > Am Freitag, den 02.02.2018, 15:55 +0100 schrieb Andrija Panic:
> > > > @Dag, you might want to check with Mike Tutkowski, how he
> > > > implemented
> > > > this
> > > > for the "online storage migration" from other storages (CEPH and
> > > > NFS
> > > > implemented so far as sources) to SolidFire.
> > > >
> > > > We are doing exactly the same demo/manual way (this is what Mike
> > > > has
> > > > sent
> > > > me back in the days), so perhaps you want to see how to translate
> > > > this into
> > > > general things (so ANY to ANY storage migration) inside
> > > > CloudStack.
> > > >
> > > > Cheers
> > > >
> > > > On 2 February 2018 at 10:28, Dag Sonstebo <
> > > > Dag.Sonstebo@shapeblue.com
> > > > wrote:
> > > >
> > > > > All
> > > > >
> > > > > I am doing a bit of R&D around this for a client at the moment.
> > > > > I
> > > > > am
> > > > > semi-successful in getting live migrations to different storage
> > > > > pools to
> > > > > work. The method I’m using is as follows – this does not take
> > > > > into
> > > > > account
> > > > > any efficiency optimisation around the disk transfer (which is
> > > > > next
> > > > > on my
> > > > > list). The below should answer your question Eric about moving
> > > > > to a
> > > > > different location – and I am also working with your steps to
> > > > > see
> > > > > where I
> > > > > can improve the following. Keep in mind all of this is external
> > > > > to
> > > > > CloudStack – although CloudStack picks up the destination KVM
> > > > > host
> > > > > automatically it does not update the volume tables etc.,
> > > > > neither
> > > > > does it do
> > > > > any housekeeping.
> > > > >
> > > > > 1) Ensure the same network bridges are up on source and
> > > > > destination
> > > > > –
> > > > > these are found with:
> > > > >
> > > > > [root@kvm1 ~]# virsh dumpxml 9 | grep source
> > > > >       <source file='/mnt/00e88a7b-985f-3be8-b717-
> > > > > 0a59d8197640/d0ab5dd5-
> > > > > e3dd-47ac-a326-5ce3d47d194d'/>
> > > > >       <source bridge='breth1-725'/>
> > > > >       <source path='/dev/pts/3'/>
> > > > >       <source path='/dev/pts/3'/>
> > > > >
> > > > > So from this make sure breth1-725 is up on the destination
> > > > > host
> > > > > (do it
> > > > > the hard way or cheat and spin up a VM from same account and
> > > > > network on
> > > > > that host)
> > > > >
> > > > > 2) Find size of source disk and create stub disk in destination
> > > > > (this part
> > > > > can be made more efficient to speed up disk transfer – by doing
> > > > > similar
> > > > > things to what Eric is doing):
> > > > >
> > > > > [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-
> > > > > 0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-
> > > > > 47ac-a326-5ce3d47d194d
> > > > > file format: qcow2
> > > > > virtual size: 8.0G (8589934592 bytes)
> > > > > disk size: 32M
> > > > > cluster_size: 65536
> > > > > backing file: /mnt/00e88a7b-985f-3be8-b717-
> > > > > 0a59d8197640/3caaf4c9-
> > > > > eaec-
> > > > > 11e7-800b-06b4a401075c
> > > > >
> > > > > ######################
> > > > >
> > > > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img
> > > > > create
> > > > > -f
> > > > > qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> > > > > Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d', fmt=qcow2
> > > > > size=8589934592 encryption=off cluster_size=65536
> > > > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img info
> > > > > d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > > file format: qcow2
> > > > > virtual size: 8.0G (8589934592 bytes)
> > > > > disk size: 448K
> > > > > cluster_size: 65536
> > > > >
> > > > > 3) Rewrite the new VM XML file for the destination with:
> > > > > a) New disk location, in this case this is just a new path
> > > > > (Eric –
> > > > > this
> > > > > answers your question)
> > > > > b) Different IP addresses for VNC – in this case 10.0.0.1 to
> > > > > 10.0.0.2
> > > > > and carry out migration.
> > > > >
> > > > > [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-3be8-
> > > > > b717-
> > > > > 0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed -e
> > > > > 's/
> > > > > 10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
> > > > >
> > > > > [root@kvm1 ~]# virsh migrate --live --persistent --copy-
> > > > > storage-all
> > > > > --xml
> > > > > /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --
> > > > > verbose
> > > > > --abort-on-error
> > > > > Migration: [ 25 %]
> > > > >
> > > > > 4) Once complete delete the source file. This can be done with
> > > > > extra
> > > > > switches on the virsh migrate command if need be.
> > > > > = = =
> > > > >
> > > > > In the simplest tests this works – destination VM remains
> > > > > online
> > > > > and has
> > > > > storage in new location – but it’s not persistent – sometimes
> > > > > the
> > > > > destination VM ends up in a paused state, and I’m working on
> > > > > how to
> > > > > get
> > > > > around this. I also noted virsh migrate has a  migrate-
> > > > > setmaxdowntime which
> > > > > I think can be useful here.
> > > > >
> > > > > Regards,
> > > > > Dag Sonstebo
> > > > > Cloud Architect
> > > > > ShapeBlue
> > > > >
> > > > > On 01/02/2018, 20:30, "Andrija Panic" <an...@gmail.com>
> > > > > wrote:
> > > > >
> > > > >     Actually,  we have this feature (we call this internally
> > > > >     online-storage-migration) to migrate volume from CEPH/NFS
> > > > > to
> > > > > SolidFire
> > > > >     (thanks to Mike Tutkowski)
> > > > >
> > > > >     There is libvirt mechanism, where basically you start
> > > > > another
> > > > > PAUSED
> > > > > VM on
> > > > >     another host (same name and same XML file, except the
> > > > > storage
> > > > > volumes
> > > > > are
> > > > >     pointing to new storage, different paths, etc and maybe VNC
> > > > > listening
> > > > >     address needs to be changed or so) and then you issue on
> > > > > original
> > > > > host/VM
> > > > >     the live migrate command with few parameters... the libvirt
> > > > > will
> > > > >     transaprently handle the copy data process from Soruce to
> > > > > New
> > > > > volumes,
> > > > > and
> > > > >     after migration the VM will be alive (with new XML since
> > > > > have
> > > > > new
> > > > > volumes)
> > > > >     on new host, while the original VM on original host is
> > > > > destroyed....
> > > > >
> > > > >     (I can send you manual for this, that is realted to SF, but
> > > > > idea is the
> > > > >     same and you can exercies this on i.e. 2 NFS volumes on 2
> > > > > different
> > > > >     storages)
> > > > >
> > > > >     This mechanism doesn't exist in ACS in general (AFAIK),
> > > > > except
> > > > > for when
> > > > >     migrating to SolidFire.
> > > > >
> > > > >     Perhaps community/DEV can help extend Mike's code to do
> > > > > same
> > > > > work on
> > > > >     different storage types...
> > > > >
> > > > >     Cheers
> > > > >
> > > > >
> > > > > Dag.Sonstebo@shapeblue.com
> > > > > www.shapeblue.com
> > > > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > > > @shapeblue
> > > > >
> > > > >
> > > > >
> > > > > On 19 January 2018 at 18:45, Eric Green <
> > > > > eric.lee.green@gmail.com>
> > > > > wrote:
> > > > >
> > > > >     > KVM is able to live migrate entire virtual machines
> > > > > complete
> > > > > with
> > > > > local
> > > > >     > volumes (see 'man virsh') but does require nbd (Network
> > > > > Block
> > > > > Device) to be
> > > > >     > installed on the destination host to do so. It may need
> > > > > installation
> > > > > of
> > > > >     > later libvirt / qemu packages from OpenStack repositories
> > > > > on
> > > > > Centos
> > > > > 6, I'm
> > > > >     > not sure, but just works on Centos 7. In any event, I
> > > > > have
> > > > > used this
> > > > >     > functionality to move virtual machines between
> > > > > virtualization
> > > > > hosts
> > > > > on my
> > > > >     > home network. It works.
> > > > >     >
> > > > >     > What is missing is the ability to live-migrate a disk
> > > > > from
> > > > > one shared
> > > > >     > storage to another. The functionality built into virsh
> > > > > live-
> > > > > migrates
> > > > > the
> > > > >     > volume ***to the exact same location on the new host***,
> > > > > so
> > > > > obviously is
> > > > >     > useless for migrating the disk to a new location on
> > > > > shared
> > > > > storage. I
> > > > >     > looked everywhere for the ability of KVM to live migrate
> > > > > a
> > > > > disk from
> > > > > point
> > > > >     > A to point B all by itself, and found no such thing.
> > > > > libvirt/qemu
> > > > > has the
> > > > >     > raw capabilities needed to do this, but it is not
> > > > > currently
> > > > > exposed
> > > > > as a
> > > > >     > single API via the qemu console or virsh. It can be
> > > > > emulated
> > > > > via
> > > > > scripting
> > > > >     > however:
> > > > >     >
> > > > >     > 1. Pause virtual machine
> > > > >     > 2. Do qcow2 snapshot.
> > > > >     > 3. Detach base disk, attach qcow2 snapshot
> > > > >     > 4. unpause virtual machine
> > > > >     > 5. copy qcow2 base file to new location
> > > > >     > 6. pause virtual machine
> > > > >     > 7. detach snapshot
> > > > >     > 8. unsnapshot qcow2 snapshot at its new location.
> > > > >     > 9. attach new base at new location.
> > > > >     > 10. unpause virtual machine.
> > > > >     >
> > > > >     > Thing is, if that entire process is not built into the
> > > > > underlying
> > > > >     > kvm/qemu/libvirt infrastructure as tested functionality
> > > > > with
> > > > > a
> > > > > defined API,
> > > > >     > there's no guarantee that it will work seamlessly and
> > > > > will
> > > > > continue
> > > > > working
> > > > >     > with the next release of the underlying infrastructure.
> > > > > This
> > > > > is using
> > > > >     > multiple different tools to manipulate the qcow2 file and
> > > > > attach/detach
> > > > >     > base disks to the running (but paused) kvm domain, and
> > > > > would
> > > > > have to
> > > > > be
> > > > >     > tested against all variations of those tools on all
> > > > > supported
> > > > > Cloudstack
> > > > >     > KVM host platforms. The test matrix looks pretty grim.
> > > > >     >
> > > > >     > By contrast, the migrate-with-local-storage process is
> > > > > built
> > > > > into
> > > > > virsh
> > > > >     > and is tested by the distribution vendor and the set of
> > > > > tools
> > > > > provided with
> > > > >     > the distribution is guaranteed to work with the virsh /
> > > > > libvirt/ qemu
> > > > >     > distributed by the distribution vendor. That makes the
> > > > > test
> > > > > matrix
> > > > > for
> > > > >     > move-with-local-storage look a lot simpler -- "is this
> > > > > functionality
> > > > >     > supported by that version of virsh on that distribution?
> > > > > Yes?
> > > > > Enable
> > > > > it.
> > > > >     > No? Don't enable it."
> > > > >     >
> > > > >     > I'd love to have live migration of disks on shared
> > > > > storage
> > > > > with
> > > > > Cloudstack
> > > > >     > KVM, but not at the expense of reliability. Shutting down
> > > > > a
> > > > > virtual
> > > > > machine
> > > > >     > in order to migrate one of its disks from one shared
> > > > > datastore to
> > > > > another
> > > > >     > is not ideal, but at least it's guaranteed reliable.
> > > > >     >
> > > > >     >
> > > > >     > > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
> > > > >     > rafaelweingartner@gmail.com> wrote:
> > > > >     > >
> > > > >     > > Hey Marc,
> > > > >     > > It is very interesting that you are going to pick this
> > > > > up
> > > > > for KVM.
> > > > > I am
> > > > >     > > working in a related issue for XenServer [1].
> > > > >     > > If you can confirm that KVM is able to live migrate
> > > > > local
> > > > > volumes
> > > > > to
> > > > >     > other
> > > > >     > > local storage or shared storage I could make the
> > > > > feature I
> > > > > am
> > > > > working on
> > > > >     > > available to KVM as well.
> > > > >     > >
> > > > >     > >
> > > > >     > > [1]
> > > > > https://issues.apache.org/jira/browse/CLOUDSTACK-10240
> > > > >     > >
> > > > >     > > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle Brothier
> > > > > <
> > > > >     > marco@exoscale.ch>
> > > > >     > > wrote:
> > > > >     > >
> > > > >     > >> There's a PR waiting to be fixed about live migration
> > > > > with
> > > > > local
> > > > > volume
> > > > >     > for
> > > > >     > >> KVM. So it will come at some point. I'm the one who
> > > > > made
> > > > > this PR
> > > > > but I'm
> > > > >     > >> not using the upstream release so it's hard for me to
> > > > > debug the
> > > > > problem.
> > > > >     > >> You can add yourself to the PR to get notify when
> > > > > things
> > > > > are
> > > > > moving on
> > > > >     > it.
> > > > >     > >>
> > > > >     > >> https://github.com/apache/cloudstack/pull/1709
> > > > >     > >>
> > > > >     > >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <
> > > > > eric.lee.green@gmail.com>
> > > > >     > >> wrote:
> > > > >     > >>
> > > > >     > >>> Theoretically on Centos 7 as the host KVM OS it could
> > > > > be
> > > > > done
> > > > > with a
> > > > >     > >>> couple of pauses and the snapshotting mechanism built
> > > > > into
> > > > > qcow2, but
> > > > >     > >> there
> > > > >     > >>> is no simple way to do it directly via virsh, the
> > > > > libvirtd/qemu
> > > > > control
> > > > >     > >>> program that is used to manage virtualization. It's
> > > > > not
> > > > > as with
> > > > >     > issuing a
> > > > >     > >>> simple vmotion 'migrate volume' call in Vmware.
> > > > >     > >>>
> > > > >     > >>> I scripted out how it would work without that direct
> > > > > support in
> > > > >     > >>> libvirt/virsh and after looking at all the points
> > > > > where
> > > > > things
> > > > > could go
> > > > >     > >>> wrong, honestly, I think we need to wait until there
> > > > > is
> > > > > support
> > > > > in
> > > > >     > >>> libvirt/virsh to do this. virsh clearly has the
> > > > > capability
> > > > > internally
> > > > >     > to
> > > > >     > >> do
> > > > >     > >>> live migration of storage, since it does this for
> > > > > live
> > > > > domain
> > > > > migration
> > > > >     > >> of
> > > > >     > >>> local storage between machines when migrating KVM
> > > > > domains
> > > > > from
> > > > > one host
> > > > >     > >> to
> > > > >     > >>> another, but that capability is not currently exposed
> > > > > in
> > > > > a way
> > > > >     > Cloudstack
> > > > >     > >>> could use, at least not on Centos 7.
> > > > >     > >>>
> > > > >     > >>>
> > > > >     > >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <
> > > > > ppisz@pulab.pl>
> > > > > wrote:
> > > > >     > >>>>
> > > > >     > >>>> Hello,
> > > > >     > >>>>
> > > > >     > >>>> Is there a chance that one day it will be possible
> > > > > to
> > > > > migrate
> > > > > volume
> > > > >     > >>> (root disk) of a live VM in KVM between storage pools
> > > > > (in
> > > > > CloudStack)?
> > > > >     > >>>> Like a storage vMotion in Vmware.
> > > > >     > >>>>
> > > > >     > >>>> Best regards,
> > > > >     > >>>> Piotr
> > > > >     > >>>>
> > > > >     > >>>
> > > > >     > >>>
> > > > >     > >>
> > > > >     > >
> > > > >     > >
> > > > >     > >
> > > > >     > > --
> > > > >     > > Rafael Weingärtner
> > > > >     >
> > > > >     >
> > > > >
> > > > >
> > > > >     --
> > > > >
> > > > >     Andrija Panić
> > > > >
> > > > >
> > > > >
> > > --
> > > --
> > > Heinlein Support GmbH
> > > Schwedter Str. 8/9b, 10119 Berlin
> > >
> > > https://www.heinlein-support.de
> > >
> > > Tel: 030 / 40 50 51 - 62
> > > Fax: 030 / 40 50 51 - 19
> > >
> > > Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> > > Geschäftsführer: Peer Heinlein - Sitz: Berlin
> > >
> >
> >
> --
> --
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: 030 / 40 50 51 - 62
> Fax: 030 / 40 50 51 - 19
>
> Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
>


-- 

Andrija Panić

Re: kvm live volume migration

Posted by Melanie Desaive <m....@heinlein-support.de>.
Hi Andrija,

thank you so much for your detailed explanation! Looks like my
problem can be solved. :)

To summarize the information you provided:

As long as CloudStack does not support volume live migration I could be
using 

virsh with --copy-storage-all --xml.

BUT: CentOS7 is lacking the necessary features! Bad luck. I started out
with CentOS7 as my distro.

You suggest that it could be worth trying the qemu/libvirt packages
from the oVirt repository. I will look into this now.

But if that gets complicated: the CloudStack documentation lists CentOS7
and Ubuntu 14.04 as supported distros. Are there other, not officially
supported, distros/versions I could be using? I wanted to avoid the quite
outdated Ubuntu 14.04 and for that reason decided on CentOS7.

And another general question: how does CloudStack cope with the volumes
of its VMs changing their storage repository without it being informed
about it? Does it pick this up through polling, or do I have to
manipulate the database?
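
For inspection at least, what CloudStack has recorded for a volume can be
read straight from the DB - a sketch only, assuming the usual "cloud"
database and its "volumes" table, with column names to be verified against
the running version:

    mysql -u cloud -p cloud -e "
      SELECT id, name, state, pool_id, path
      FROM   volumes
      WHERE  instance_id = <vm_id>;"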

And to make things clearer: at the moment I am using storage attached
through Fibrechannel with clustered LVM logic. I could also change
to GFS2 on cLVM. I have never heard anyone mention such a setup so far. Am
I the only one running KVM on a proprietary storage system over
Fibrechannel, and are there limitations/problems to be expected from such a
setup?

Greetings,

Melanie 


Am Mittwoch, den 25.09.2019, 11:46 +0200 schrieb Andrija Panic:
> So, let me explain.
> 
> Doing "online storage migration" aka live storage migration is
> working for
> CEPH/NFS --> SolidFire, starting from 4.11+
> Internally it is done in the same way as "virsh with --copy-storage-
> all
> --xml" in short
> 
> Longer explanation:
> Steps:
> You create new volumes on the destination storage (SolidFire in this
> case),
> set QoS etc - simply prepare the destination volumes (empty volumes
> atm).
> On source host/VM, dump VM XML, edit XML, change disk section to
> point to
> new volume path, protocol, etc - and also the IP address for the VNC
> (cloudstack requirement), save the XML
> Then you do "virsh with --copy-storage-all --xml myEditedVM.xml ..."
> stuff
> that does the job.
> Then NBD driver will be used to copy blocks from the source volumes
> to the
> destination volumes while that virsh command is working... (here's my
> demo,
> in details..
> https://www.youtube.com/watch?v=Eo8BuHBnVgg&list=PLEr0fbgkyLKyiPnNzPz7XDjxnmQNxjJWT&index=5&t=2s
> )
> 
> This is yet to be extended/coded to support NFS-->NFS or CEPH-->CEPH
> or
> CEPH/NFS-->CEPH/NFS... should not be that much work, the logic is
> there
> (bit part of the code)
> Also, starting from 4.12, you can actually  (I believe using
> identical
> logic) migrate only ROOT volume that are on the LOCAL storage (local
> disks)
> to another host/local storage - but DATA disks are not supported.
> 
> Now...imagine the feature is there - if using CentOS7, our friends at
> RedHat have removed support for actually using live storage migration
> (unless you are paying for RHEV - but it does work fine on CentOS6,
> and
> Ubuntu 14.04+
> 
> I recall "we" had to use qemu/libvirt from the "oVirt" repo which
> DOES
> (DID) support storage live migration (normal EV packages from the
> Special
> Interest Group (2.12 tested) - did NOT include this...)
> 
> I can send you step-by-step .docx guide for manually mimicking what
> is done
> (in SolidFire, but identical logic for other storages) - but not sure
> if
> that still helps you...
> 
> 
> Andrija
> 
> On Wed, 25 Sep 2019 at 10:51, Melanie Desaive <
> m.desaive@heinlein-support.de>
> wrote:
> 
> > Hi all,
> > 
> > I am currently doing my first steps with KVM as hypervisor for
> > CloudStack. I was shocked to realize that currently live volume
> > migration between different shared storages is not supported with
> > KVM.
> > This is a feature I use intensively with XenServer.
> > 
> > How do you get along with this limitation? I do really expect you
> > to
> > use some workarounds, or do you all only accept vm downtimes for a
> > storage migration?
> > 
> > With my first investigation I found three techniques mentioned and
> > would like to ask for suggestions which to investigate deeper:
> > 
> >  x Eric describes a technique using snapshots and pauses to do a
> > live
> > storage migration in this mailing list thread.
> >  x Dag suggests using virsh with --copy-storage-all --xml.
> >  x I found articles about using virsh blockcopy for storage live
> > migration.
> > 
> > Greetings,
> > 
> > Melanie
> > 
> > Am Freitag, den 02.02.2018, 15:55 +0100 schrieb Andrija Panic:
> > > @Dag, you might want to check with Mike Tutkowski, how he
> > > implemented
> > > this
> > > for the "online storage migration" from other storages (CEPH and
> > > NFS
> > > implemented so far as sources) to SolidFire.
> > > 
> > > We are doing exactly the same demo/manual way (this is what Mike
> > > has
> > > sent
> > > me back in the days), so perhaps you want to see how to translate
> > > this into
> > > general things (so ANY to ANY storage migration) inside
> > > CloudStack.
> > > 
> > > Cheers
> > > 
> > > On 2 February 2018 at 10:28, Dag Sonstebo <
> > > Dag.Sonstebo@shapeblue.com
> > > wrote:
> > > 
> > > > All
> > > > 
> > > > I am doing a bit of R&D around this for a client at the moment.
> > > > I
> > > > am
> > > > semi-successful in getting live migrations to different storage
> > > > pools to
> > > > work. The method I’m using is as follows – this does not take
> > > > into
> > > > account
> > > > any efficiency optimisation around the disk transfer (which is
> > > > next
> > > > on my
> > > > list). The below should answer your question Eric about moving
> > > > to a
> > > > different location – and I am also working with your steps to
> > > > see
> > > > where I
> > > > can improve the following. Keep in mind all of this is external
> > > > to
> > > > CloudStack – although CloudStack picks up the destination KVM
> > > > host
> > > > automatically it does not update the volume tables etc.,
> > > > neither
> > > > does it do
> > > > any housekeeping.
> > > > 
> > > > 1) Ensure the same network bridges are up on source and
> > > > destination
> > > > –
> > > > these are found with:
> > > > 
> > > > [root@kvm1 ~]# virsh dumpxml 9 | grep source
> > > >       <source file='/mnt/00e88a7b-985f-3be8-b717-
> > > > 0a59d8197640/d0ab5dd5-
> > > > e3dd-47ac-a326-5ce3d47d194d'/>
> > > >       <source bridge='breth1-725'/>
> > > >       <source path='/dev/pts/3'/>
> > > >       <source path='/dev/pts/3'/>
> > > > 
> > > > So from this make sure breth1-725 is up on the destination
> > > > host
> > > > (do it
> > > > the hard way or cheat and spin up a VM from same account and
> > > > network on
> > > > that host)
> > > > 
> > > > 2) Find size of source disk and create stub disk in destination
> > > > (this part
> > > > can be made more efficient to speed up disk transfer – by doing
> > > > similar
> > > > things to what Eric is doing):
> > > > 
> > > > [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-
> > > > 0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-
> > > > 47ac-a326-5ce3d47d194d
> > > > file format: qcow2
> > > > virtual size: 8.0G (8589934592 bytes)
> > > > disk size: 32M
> > > > cluster_size: 65536
> > > > backing file: /mnt/00e88a7b-985f-3be8-b717-
> > > > 0a59d8197640/3caaf4c9-
> > > > eaec-
> > > > 11e7-800b-06b4a401075c
> > > > 
> > > > ######################
> > > > 
> > > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img
> > > > create
> > > > -f
> > > > qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> > > > Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d', fmt=qcow2
> > > > size=8589934592 encryption=off cluster_size=65536
> > > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img info
> > > > d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > > file format: qcow2
> > > > virtual size: 8.0G (8589934592 bytes)
> > > > disk size: 448K
> > > > cluster_size: 65536
> > > > 
> > > > 3) Rewrite the new VM XML file for the destination with:
> > > > a) New disk location, in this case this is just a new path
> > > > (Eric –
> > > > this
> > > > answers your question)
> > > > b) Different IP addresses for VNC – in this case 10.0.0.1 to
> > > > 10.0.0.2
> > > > and carry out migration.
> > > > 
> > > > [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-3be8-
> > > > b717-
> > > > 0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed -e
> > > > 's/
> > > > 10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
> > > > 
> > > > [root@kvm1 ~]# virsh migrate --live --persistent --copy-
> > > > storage-all
> > > > --xml
> > > > /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --
> > > > verbose
> > > > --abort-on-error
> > > > Migration: [ 25 %]
> > > > 
> > > > 4) Once complete delete the source file. This can be done with
> > > > extra
> > > > switches on the virsh migrate command if need be.
> > > > = = =
> > > > 
> > > > In the simplest tests this works – destination VM remains
> > > > online
> > > > and has
> > > > storage in new location – but it’s not persistent – sometimes
> > > > the
> > > > destination VM ends up in a paused state, and I’m working on
> > > > how to
> > > > get
> > > > around this. I also noted virsh migrate has a  migrate-
> > > > setmaxdowntime which
> > > > I think can be useful here.
> > > > 
> > > > Regards,
> > > > Dag Sonstebo
> > > > Cloud Architect
> > > > ShapeBlue
> > > > 
> > > > On 01/02/2018, 20:30, "Andrija Panic" <an...@gmail.com>
> > > > wrote:
> > > > 
> > > >     Actually,  we have this feature (we call this internally
> > > >     online-storage-migration) to migrate volume from CEPH/NFS
> > > > to
> > > > SolidFire
> > > >     (thanks to Mike Tutkowski)
> > > > 
> > > >     There is libvirt mechanism, where basically you start
> > > > another
> > > > PAUSED
> > > > VM on
> > > >     another host (same name and same XML file, except the
> > > > storage
> > > > volumes
> > > > are
> > > >     pointing to new storage, different paths, etc and maybe VNC
> > > > listening
> > > >     address needs to be changed or so) and then you issue on
> > > > original
> > > > host/VM
> > > >     the live migrate command with few parameters... the libvirt
> > > > will
> > > >     transaprently handle the copy data process from Soruce to
> > > > New
> > > > volumes,
> > > > and
> > > >     after migration the VM will be alive (with new XML since
> > > > have
> > > > new
> > > > volumes)
> > > >     on new host, while the original VM on original host is
> > > > destroyed....
> > > > 
> > > >     (I can send you manual for this, that is realted to SF, but
> > > > idea is the
> > > >     same and you can exercies this on i.e. 2 NFS volumes on 2
> > > > different
> > > >     storages)
> > > > 
> > > >     This mechanism doesn't exist in ACS in general (AFAIK),
> > > > except
> > > > for when
> > > >     migrating to SolidFire.
> > > > 
> > > >     Perhaps community/DEV can help extend Mike's code to do
> > > > same
> > > > work on
> > > >     different storage types...
> > > > 
> > > >     Cheers
> > > > 
> > > > 
> > > > Dag.Sonstebo@shapeblue.com
> > > > www.shapeblue.com
> > > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > > @shapeblue
> > > > 
> > > > 
> > > > 
> > > > On 19 January 2018 at 18:45, Eric Green <
> > > > eric.lee.green@gmail.com>
> > > > wrote:
> > > > 
> > > >     > KVM is able to live migrate entire virtual machines
> > > > complete
> > > > with
> > > > local
> > > >     > volumes (see 'man virsh') but does require nbd (Network
> > > > Block
> > > > Device) to be
> > > >     > installed on the destination host to do so. It may need
> > > > installation
> > > > of
> > > >     > later libvirt / qemu packages from OpenStack repositories
> > > > on
> > > > Centos
> > > > 6, I'm
> > > >     > not sure, but just works on Centos 7. In any event, I
> > > > have
> > > > used this
> > > >     > functionality to move virtual machines between
> > > > virtualization
> > > > hosts
> > > > on my
> > > >     > home network. It works.
> > > >     >
> > > >     > What is missing is the ability to live-migrate a disk
> > > > from
> > > > one shared
> > > >     > storage to another. The functionality built into virsh
> > > > live-
> > > > migrates
> > > > the
> > > >     > volume ***to the exact same location on the new host***,
> > > > so
> > > > obviously is
> > > >     > useless for migrating the disk to a new location on
> > > > shared
> > > > storage. I
> > > >     > looked everywhere for the ability of KVM to live migrate
> > > > a
> > > > disk from
> > > > point
> > > >     > A to point B all by itself, and found no such thing.
> > > > libvirt/qemu
> > > > has the
> > > >     > raw capabilities needed to do this, but it is not
> > > > currently
> > > > exposed
> > > > as a
> > > >     > single API via the qemu console or virsh. It can be
> > > > emulated
> > > > via
> > > > scripting
> > > >     > however:
> > > >     >
> > > >     > 1. Pause virtual machine
> > > >     > 2. Do qcow2 snapshot.
> > > >     > 3. Detach base disk, attach qcow2 snapshot
> > > >     > 4. unpause virtual machine
> > > >     > 5. copy qcow2 base file to new location
> > > >     > 6. pause virtual machine
> > > >     > 7. detach snapshot
> > > >     > 8. unsnapshot qcow2 snapshot at its new location.
> > > >     > 9. attach new base at new location.
> > > >     > 10. unpause virtual machine.
> > > >     >
> > > >     > Thing is, if that entire process is not built into the
> > > > underlying
> > > >     > kvm/qemu/libvirt infrastructure as tested functionality
> > > > with
> > > > a
> > > > defined API,
> > > >     > there's no guarantee that it will work seamlessly and
> > > > will
> > > > continue
> > > > working
> > > >     > with the next release of the underlying infrastructure.
> > > > This
> > > > is using
> > > >     > multiple different tools to manipulate the qcow2 file and
> > > > attach/detach
> > > >     > base disks to the running (but paused) kvm domain, and
> > > > would
> > > > have to
> > > > be
> > > >     > tested against all variations of those tools on all
> > > > supported
> > > > Cloudstack
> > > >     > KVM host platforms. The test matrix looks pretty grim.
> > > >     >
> > > >     > By contrast, the migrate-with-local-storage process is
> > > > built
> > > > into
> > > > virsh
> > > >     > and is tested by the distribution vendor and the set of
> > > > tools
> > > > provided with
> > > >     > the distribution is guaranteed to work with the virsh /
> > > > libvirt/ qemu
> > > >     > distributed by the distribution vendor. That makes the
> > > > test
> > > > matrix
> > > > for
> > > >     > move-with-local-storage look a lot simpler -- "is this
> > > > functionality
> > > >     > supported by that version of virsh on that distribution?
> > > > Yes?
> > > > Enable
> > > > it.
> > > >     > No? Don't enable it."
> > > >     >
> > > >     > I'd love to have live migration of disks on shared
> > > > storage
> > > > with
> > > > Cloudstack
> > > >     > KVM, but not at the expense of reliability. Shutting down
> > > > a
> > > > virtual
> > > > machine
> > > >     > in order to migrate one of its disks from one shared
> > > > datastore to
> > > > another
> > > >     > is not ideal, but at least it's guaranteed reliable.
> > > >     >
> > > >     >
> > > >     > > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
> > > >     > rafaelweingartner@gmail.com> wrote:
> > > >     > >
> > > >     > > Hey Marc,
> > > >     > > It is very interesting that you are going to pick this
> > > > up
> > > > for KVM.
> > > > I am
> > > >     > > working in a related issue for XenServer [1].
> > > >     > > If you can confirm that KVM is able to live migrate
> > > > local
> > > > volumes
> > > > to
> > > >     > other
> > > >     > > local storage or shared storage I could make the
> > > > feature I
> > > > am
> > > > working on
> > > >     > > available to KVM as well.
> > > >     > >
> > > >     > >
> > > >     > > [1] 
> > > > https://issues.apache.org/jira/browse/CLOUDSTACK-10240
> > > >     > >
> > > >     > > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle Brothier
> > > > <
> > > >     > marco@exoscale.ch>
> > > >     > > wrote:
> > > >     > >
> > > >     > >> There's a PR waiting to be fixed about live migration
> > > > with
> > > > local
> > > > volume
> > > >     > for
> > > >     > >> KVM. So it will come at some point. I'm the one who
> > > > made
> > > > this PR
> > > > but I'm
> > > >     > >> not using the upstream release so it's hard for me to
> > > > debug the
> > > > problem.
> > > >     > >> You can add yourself to the PR to get notify when
> > > > things
> > > > are
> > > > moving on
> > > >     > it.
> > > >     > >>
> > > >     > >> https://github.com/apache/cloudstack/pull/1709
> > > >     > >>
> > > >     > >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <
> > > > eric.lee.green@gmail.com>
> > > >     > >> wrote:
> > > >     > >>
> > > >     > >>> Theoretically on Centos 7 as the host KVM OS it could
> > > > be
> > > > done
> > > > with a
> > > >     > >>> couple of pauses and the snapshotting mechanism built
> > > > into
> > > > qcow2, but
> > > >     > >> there
> > > >     > >>> is no simple way to do it directly via virsh, the
> > > > libvirtd/qemu
> > > > control
> > > >     > >>> program that is used to manage virtualization. It's
> > > > not
> > > > as with
> > > >     > issuing a
> > > >     > >>> simple vmotion 'migrate volume' call in Vmware.
> > > >     > >>>
> > > >     > >>> I scripted out how it would work without that direct
> > > > support in
> > > >     > >>> libvirt/virsh and after looking at all the points
> > > > where
> > > > things
> > > > could go
> > > >     > >>> wrong, honestly, I think we need to wait until there
> > > > is
> > > > support
> > > > in
> > > >     > >>> libvirt/virsh to do this. virsh clearly has the
> > > > capability
> > > > internally
> > > >     > to
> > > >     > >> do
> > > >     > >>> live migration of storage, since it does this for
> > > > live
> > > > domain
> > > > migration
> > > >     > >> of
> > > >     > >>> local storage between machines when migrating KVM
> > > > domains
> > > > from
> > > > one host
> > > >     > >> to
> > > >     > >>> another, but that capability is not currently exposed
> > > > in
> > > > a way
> > > >     > Cloudstack
> > > >     > >>> could use, at least not on Centos 7.
> > > >     > >>>
> > > >     > >>>
> > > >     > >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <
> > > > ppisz@pulab.pl>
> > > > wrote:
> > > >     > >>>>
> > > >     > >>>> Hello,
> > > >     > >>>>
> > > >     > >>>> Is there a chance that one day it will be possible
> > > > to
> > > > migrate
> > > > volume
> > > >     > >>> (root disk) of a live VM in KVM between storage pools
> > > > (in
> > > > CloudStack)?
> > > >     > >>>> Like a storage vMotion in Vmware.
> > > >     > >>>>
> > > >     > >>>> Best regards,
> > > >     > >>>> Piotr
> > > >     > >>>>
> > > >     > >>>
> > > >     > >>>
> > > >     > >>
> > > >     > >
> > > >     > >
> > > >     > >
> > > >     > > --
> > > >     > > Rafael Weingärtner
> > > >     >
> > > >     >
> > > > 
> > > > 
> > > >     --
> > > > 
> > > >     Andrija Panić
> > > > 
> > > > 
> > > > 
> > --
> > --
> > Heinlein Support GmbH
> > Schwedter Str. 8/9b, 10119 Berlin
> > 
> > https://www.heinlein-support.de
> > 
> > Tel: 030 / 40 50 51 - 62
> > Fax: 030 / 40 50 51 - 19
> > 
> > Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> > Geschäftsführer: Peer Heinlein - Sitz: Berlin
> > 
> 
> 
-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin

Re: kvm live volume migration

Posted by Andrija Panic <an...@gmail.com>.
So, let me explain.

Doing "online storage migration" aka live storage migration works for
CEPH/NFS --> SolidFire, starting from 4.11+.
Internally it is done in the same way as "virsh with --copy-storage-all
--xml", in short.

Longer explanation:
Steps:
You create new volumes on the destination storage (SolidFire in this case),
set QoS etc - simply prepare the destination volumes (empty volumes atm).
On source host/VM, dump VM XML, edit XML, change disk section to point to
new volume path, protocol, etc - and also the IP address for the VNC
(cloudstack requirement), save the XML.
Then you do "virsh with --copy-storage-all --xml myEditedVM.xml ..." stuff
that does the job.
Then the NBD driver will be used to copy blocks from the source volumes to the
destination volumes while that virsh command is working... (here's my demo,
in details..
https://www.youtube.com/watch?v=Eo8BuHBnVgg&list=PLEr0fbgkyLKyiPnNzPz7XDjxnmQNxjJWT&index=5&t=2s
)
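
Spelled out as commands, that manual flow looks roughly like the sketch
below (names are placeholders taken from Dag's example further down; it
assumes the empty destination volumes already exist and the edited XML
points at them):

    # on the source host
    virsh dumpxml i-2-14-VM > /root/i-2-14-VM-new.xml
    # edit /root/i-2-14-VM-new.xml: point each <disk>/<source> at the new
    # volumes and adjust the VNC listen address, then start the block copy
    # plus live migration in one go:
    virsh migrate --live --persistent --copy-storage-all \
        --xml /root/i-2-14-VM-new.xml i-2-14-VM \
        qemu+tcp://<destination-host>/system --verbose --abort-on-error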

This is yet to be extended/coded to support NFS-->NFS or CEPH-->CEPH or
CEPH/NFS-->CEPH/NFS... it should not be that much work, the logic is there
(a big part of the code).
Also, starting from 4.12, you can actually (I believe using identical
logic) migrate only ROOT volumes that are on LOCAL storage (local disks)
to another host/local storage - but DATA disks are not supported.
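
On the API side, that 4.12 local-storage case is presumably driven through
the migrateVirtualMachineWithVolume call - treat the CloudMonkey line below
as an assumption to be checked against the API docs of the version you run:

    cmk migrate virtualmachinewithvolume \
        virtualmachineid=<vm-uuid> hostid=<destination-host-uuid>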

Now... imagine the feature is there - if you are using CentOS7, our friends at
RedHat have removed support for actually using live storage migration
(unless you are paying for RHEV) - but it does work fine on CentOS6 and
Ubuntu 14.04+.

I recall "we" had to use qemu/libvirt from the "oVirt" repo, which DOES
(DID) support storage live migration - the normal EV packages from the Special
Interest Group (2.12 tested) did NOT include this...
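
A quick way to check which qemu/libvirt builds are actually installed on a
host before relying on any of this (package names differ between the base,
EV/SIG and oVirt repos, so treat these as examples):

    rpm -qa | egrep -i 'qemu-kvm|libvirt-daemon' | sort
    virsh version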

I can send you a step-by-step .docx guide for manually mimicking what is done
(in SolidFire, but identical logic for other storages) - but not sure if
that still helps you...


Andrija

On Wed, 25 Sep 2019 at 10:51, Melanie Desaive <m....@heinlein-support.de>
wrote:

> Hi all,
>
> I am currently doing my first steps with KVM as hypervisor for
> CloudStack. I was shocked to realize that currently live volume
> migration between different shared storages is not supported with KVM.
> This is a feature I use intensively with XenServer.
>
> How do you get along with this limitation? I do really expect you to
> use some workarounds, or do you all only accept vm downtimes for a
> storage migration?
>
> With my first investigation I found three techniques mentioned and
> would like to ask for suggestions which to investigate deeper:
>
>  x Eric describes a technique using snapshots and pauses to do a live
> storage migration in this mailing list thread.
>  x Dag suggests using virsh with --copy-storage-all --xml.
>  x I found articles about using virsh blockcopy for storage live
> migration.
>
> Greetings,
>
> Melanie
>
> Am Freitag, den 02.02.2018, 15:55 +0100 schrieb Andrija Panic:
> > @Dag, you might want to check with Mike Tutkowski, how he implemented
> > this
> > for the "online storage migration" from other storages (CEPH and NFS
> > implemented so far as sources) to SolidFire.
> >
> > We are doing exactly the same demo/manual way (this is what Mike has
> > sent
> > me back in the days), so perhaps you want to see how to translate
> > this into
> > general things (so ANY to ANY storage migration) inside CloudStack.
> >
> > Cheers
> >
> > On 2 February 2018 at 10:28, Dag Sonstebo <Dag.Sonstebo@shapeblue.com
> > >
> > wrote:
> >
> > > All
> > >
> > > I am doing a bit of R&D around this for a client at the moment. I
> > > am
> > > semi-successful in getting live migrations to different storage
> > > pools to
> > > work. The method I’m using is as follows – this does not take into
> > > account
> > > any efficiency optimisation around the disk transfer (which is next
> > > on my
> > > list). The below should answer your question Eric about moving to a
> > > different location – and I am also working with your steps to see
> > > where I
> > > can improve the following. Keep in mind all of this is external to
> > > CloudStack – although CloudStack picks up the destination KVM host
> > > automatically it does not update the volume tables etc., neither
> > > does it do
> > > any housekeeping.
> > >
> > > 1) Ensure the same network bridges are up on source and destination
> > > –
> > > these are found with:
> > >
> > > [root@kvm1 ~]# virsh dumpxml 9 | grep source
> > >       <source file='/mnt/00e88a7b-985f-3be8-b717-
> > > 0a59d8197640/d0ab5dd5-
> > > e3dd-47ac-a326-5ce3d47d194d'/>
> > >       <source bridge='breth1-725'/>
> > >       <source path='/dev/pts/3'/>
> > >       <source path='/dev/pts/3'/>
> > >
> > > So from this make sure breth1-725 is up on the destination host
> > > (do it
> > > the hard way or cheat and spin up a VM from same account and
> > > network on
> > > that host)
> > >
> > > 2) Find size of source disk and create stub disk in destination
> > > (this part
> > > can be made more efficient to speed up disk transfer – by doing
> > > similar
> > > things to what Eric is doing):
> > >
> > > [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-
> > > 0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-
> > > 47ac-a326-5ce3d47d194d
> > > file format: qcow2
> > > virtual size: 8.0G (8589934592 bytes)
> > > disk size: 32M
> > > cluster_size: 65536
> > > backing file: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/3caaf4c9-
> > > eaec-
> > > 11e7-800b-06b4a401075c
> > >
> > > ######################
> > >
> > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img create
> > > -f
> > > qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> > > Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d', fmt=qcow2
> > > size=8589934592 encryption=off cluster_size=65536
> > > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img info
> > > d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > > file format: qcow2
> > > virtual size: 8.0G (8589934592 bytes)
> > > disk size: 448K
> > > cluster_size: 65536
> > >
> > > 3) Rewrite the new VM XML file for the destination with:
> > > a) New disk location, in this case this is just a new path (Eric –
> > > this
> > > answers your question)
> > > b) Different IP addresses for VNC – in this case 10.0.0.1 to
> > > 10.0.0.2
> > > and carry out migration.
> > >
> > > [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-3be8-b717-
> > > 0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed -e 's/
> > > 10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
> > >
> > > [root@kvm1 ~]# virsh migrate --live --persistent --copy-storage-all
> > > --xml
> > > /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --verbose
> > > --abort-on-error
> > > Migration: [ 25 %]
> > >
> > > 4) Once complete delete the source file. This can be done with
> > > extra
> > > switches on the virsh migrate command if need be.
> > > = = =
> > >
> > > In the simplest tests this works – destination VM remains online
> > > and has
> > > storage in new location – but it’s not persistent – sometimes the
> > > destination VM ends up in a paused state, and I’m working on how to
> > > get
> > > around this. I also noted virsh migrate has a  migrate-
> > > setmaxdowntime which
> > > I think can be useful here.
> > >
> > > Regards,
> > > Dag Sonstebo
> > > Cloud Architect
> > > ShapeBlue
> > >
> > > On 01/02/2018, 20:30, "Andrija Panic" <an...@gmail.com>
> > > wrote:
> > >
> > >     Actually,  we have this feature (we call this internally
> > >     online-storage-migration) to migrate volume from CEPH/NFS to
> > > SolidFire
> > >     (thanks to Mike Tutkowski)
> > >
> > >     There is libvirt mechanism, where basically you start another
> > > PAUSED
> > > VM on
> > >     another host (same name and same XML file, except the storage
> > > volumes
> > > are
> > >     pointing to new storage, different paths, etc and maybe VNC
> > > listening
> > >     address needs to be changed or so) and then you issue on
> > > original
> > > host/VM
> > >     the live migrate command with few parameters... the libvirt
> > > will
> > >     transaprently handle the copy data process from Soruce to New
> > > volumes,
> > > and
> > >     after migration the VM will be alive (with new XML since have
> > > new
> > > volumes)
> > >     on new host, while the original VM on original host is
> > > destroyed....
> > >
> > >     (I can send you manual for this, that is realted to SF, but
> > > idea is the
> > >     same and you can exercies this on i.e. 2 NFS volumes on 2
> > > different
> > >     storages)
> > >
> > >     This mechanism doesn't exist in ACS in general (AFAIK), except
> > > for when
> > >     migrating to SolidFire.
> > >
> > >     Perhaps community/DEV can help extend Mike's code to do same
> > > work on
> > >     different storage types...
> > >
> > >     Cheers
> > >
> > >
> > > Dag.Sonstebo@shapeblue.com
> > > www.shapeblue.com
> > > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > > @shapeblue
> > >
> > >
> > >
> > > On 19 January 2018 at 18:45, Eric Green <er...@gmail.com>
> > > wrote:
> > >
> > >     > KVM is able to live migrate entire virtual machines complete
> > > with
> > > local
> > >     > volumes (see 'man virsh') but does require nbd (Network Block
> > > Device) to be
> > >     > installed on the destination host to do so. It may need
> > > installation
> > > of
> > >     > later libvirt / qemu packages from OpenStack repositories on
> > > Centos
> > > 6, I'm
> > >     > not sure, but just works on Centos 7. In any event, I have
> > > used this
> > >     > functionality to move virtual machines between virtualization
> > > hosts
> > > on my
> > >     > home network. It works.
> > >     >
> > >     > What is missing is the ability to live-migrate a disk from
> > > one shared
> > >     > storage to another. The functionality built into virsh live-
> > > migrates
> > > the
> > >     > volume ***to the exact same location on the new host***, so
> > > obviously is
> > >     > useless for migrating the disk to a new location on shared
> > > storage. I
> > >     > looked everywhere for the ability of KVM to live migrate a
> > > disk from
> > > point
> > >     > A to point B all by itself, and found no such thing.
> > > libvirt/qemu
> > > has the
> > >     > raw capabilities needed to do this, but it is not currently
> > > exposed
> > > as a
> > >     > single API via the qemu console or virsh. It can be emulated
> > > via
> > > scripting
> > >     > however:
> > >     >
> > >     > 1. Pause virtual machine
> > >     > 2. Do qcow2 snapshot.
> > >     > 3. Detach base disk, attach qcow2 snapshot
> > >     > 4. unpause virtual machine
> > >     > 5. copy qcow2 base file to new location
> > >     > 6. pause virtual machine
> > >     > 7. detach snapshot
> > >     > 8. unsnapshot qcow2 snapshot at its new location.
> > >     > 9. attach new base at new location.
> > >     > 10. unpause virtual machine.
> > >     >
> > >     > Thing is, if that entire process is not built into the
> > > underlying
> > >     > kvm/qemu/libvirt infrastructure as tested functionality with
> > > a
> > > defined API,
> > >     > there's no guarantee that it will work seamlessly and will
> > > continue
> > > working
> > >     > with the next release of the underlying infrastructure. This
> > > is using
> > >     > multiple different tools to manipulate the qcow2 file and
> > > attach/detach
> > >     > base disks to the running (but paused) kvm domain, and would
> > > have to
> > > be
> > >     > tested against all variations of those tools on all supported
> > > Cloudstack
> > >     > KVM host platforms. The test matrix looks pretty grim.
> > >     >
> > >     > By contrast, the migrate-with-local-storage process is built
> > > into
> > > virsh
> > >     > and is tested by the distribution vendor and the set of tools
> > > provided with
> > >     > the distribution is guaranteed to work with the virsh /
> > > libvirt/ qemu
> > >     > distributed by the distribution vendor. That makes the test
> > > matrix
> > > for
> > >     > move-with-local-storage look a lot simpler -- "is this
> > > functionality
> > >     > supported by that version of virsh on that distribution? Yes?
> > > Enable
> > > it.
> > >     > No? Don't enable it."
> > >     >
> > >     > I'd love to have live migration of disks on shared storage
> > > with
> > > Cloudstack
> > >     > KVM, but not at the expense of reliability. Shutting down a
> > > virtual
> > > machine
> > >     > in order to migrate one of its disks from one shared
> > > datastore to
> > > another
> > >     > is not ideal, but at least it's guaranteed reliable.
> > >     >
> > >     >
> > >     > > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
> > >     > rafaelweingartner@gmail.com> wrote:
> > >     > >
> > >     > > Hey Marc,
> > >     > > It is very interesting that you are going to pick this up
> > > for KVM.
> > > I am
> > >     > > working in a related issue for XenServer [1].
> > >     > > If you can confirm that KVM is able to live migrate local
> > > volumes
> > > to
> > >     > other
> > >     > > local storage or shared storage I could make the feature I
> > > am
> > > working on
> > >     > > available to KVM as well.
> > >     > >
> > >     > >
> > >     > > [1] https://issues.apache.org/jira/browse/CLOUDSTACK-10240
> > >     > >
> > >     > > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle Brothier <
> > >     > marco@exoscale.ch>
> > >     > > wrote:
> > >     > >
> > >     > >> There's a PR waiting to be fixed about live migration with
> > > local
> > > volume
> > >     > for
> > >     > >> KVM. So it will come at some point. I'm the one who made
> > > this PR
> > > but I'm
> > >     > >> not using the upstream release so it's hard for me to
> > > debug the
> > > problem.
> > >     > >> You can add yourself to the PR to get notify when things
> > > are
> > > moving on
> > >     > it.
> > >     > >>
> > >     > >> https://github.com/apache/cloudstack/pull/1709
> > >     > >>
> > >     > >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <
> > > eric.lee.green@gmail.com>
> > >     > >> wrote:
> > >     > >>
> > >     > >>> Theoretically on Centos 7 as the host KVM OS it could be
> > > done
> > > with a
> > >     > >>> couple of pauses and the snapshotting mechanism built
> > > into
> > > qcow2, but
> > >     > >> there
> > >     > >>> is no simple way to do it directly via virsh, the
> > > libvirtd/qemu
> > > control
> > >     > >>> program that is used to manage virtualization. It's not
> > > as with
> > >     > issuing a
> > >     > >>> simple vmotion 'migrate volume' call in Vmware.
> > >     > >>>
> > >     > >>> I scripted out how it would work without that direct
> > > support in
> > >     > >>> libvirt/virsh and after looking at all the points where
> > > things
> > > could go
> > >     > >>> wrong, honestly, I think we need to wait until there is
> > > support
> > > in
> > >     > >>> libvirt/virsh to do this. virsh clearly has the
> > > capability
> > > internally
> > >     > to
> > >     > >> do
> > >     > >>> live migration of storage, since it does this for live
> > > domain
> > > migration
> > >     > >> of
> > >     > >>> local storage between machines when migrating KVM domains
> > > from
> > > one host
> > >     > >> to
> > >     > >>> another, but that capability is not currently exposed in
> > > a way
> > >     > Cloudstack
> > >     > >>> could use, at least not on Centos 7.
> > >     > >>>
> > >     > >>>
> > >     > >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <pp...@pulab.pl>
> > > wrote:
> > >     > >>>>
> > >     > >>>> Hello,
> > >     > >>>>
> > >     > >>>> Is there a chance that one day it will be possible to
> > > migrate
> > > volume
> > >     > >>> (root disk) of a live VM in KVM between storage pools (in
> > > CloudStack)?
> > >     > >>>> Like a storage vMotion in Vmware.
> > >     > >>>>
> > >     > >>>> Best regards,
> > >     > >>>> Piotr
> > >     > >>>>
> > >     > >>>
> > >     > >>>
> > >     > >>
> > >     > >
> > >     > >
> > >     > >
> > >     > > --
> > >     > > Rafael Weingärtner
> > >     >
> > >     >
> > >
> > >
> > >     --
> > >
> > >     Andrija Panić
> > >
> > >
> > >
> >
> >
> --
> --
> Heinlein Support GmbH
> Schwedter Str. 8/9b, 10119 Berlin
>
> https://www.heinlein-support.de
>
> Tel: 030 / 40 50 51 - 62
> Fax: 030 / 40 50 51 - 19
>
> Amtsgericht Berlin-Charlottenburg - HRB 93818 B
> Geschäftsführer: Peer Heinlein - Sitz: Berlin
>


-- 

Andrija Panić

Re: kvm live volume migration

Posted by Melanie Desaive <m....@heinlein-support.de>.
Hi all,

I am currently taking my first steps with KVM as the hypervisor for
CloudStack. I was shocked to realize that live volume migration between
different shared storages is currently not supported with KVM. This is
a feature I use intensively with XenServer.

How do you cope with this limitation? I would expect some of you to use
workarounds, or do you all simply accept VM downtime for a storage
migration?

In my first investigation I found three techniques mentioned and would
like to ask for suggestions on which to investigate more deeply:

 x Eric describes a technique using snapshots and pauses to do a live
storage migration in this mailing list thread.
 x Dag suggests using virsh with --copy-storage-all --xml.
 x I found articles about using virsh blockcopy for storage live
migration (see the sketch below).
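
As a rough sketch of the blockcopy approach (not taken from this thread; it borrows the
domain name i-2-14-VM from Dag's example, assumes the disk is attached as vda, uses a
placeholder destination path, and assumes an older libvirt that only allows blockcopy on
transient domains):

virsh dumpxml i-2-14-VM > /root/i-2-14-VM.xml        # keep a copy of the definition
virsh undefine i-2-14-VM                             # make the running domain transient
virsh blockcopy i-2-14-VM vda \
    /mnt/<new-pool>/<new-volume>.qcow2 --wait --verbose --pivot
virsh define /root/i-2-14-VM.xml                     # re-define, after pointing the XML at the new path

Once the --pivot completes the guest is writing to the new volume and the old file can be
removed; treat this as a direction to explore rather than a tested recipe.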

Greetings,

Melanie

Am Freitag, den 02.02.2018, 15:55 +0100 schrieb Andrija Panic:
> @Dag, you might want to check with Mike Tutkowski, how he implemented
> this
> for the "online storage migration" from other storages (CEPH and NFS
> implemented so far as sources) to SolidFire.
> 
> We are doing exactly the same demo/manual way (this is what Mike has
> sent
> me back in the days), so perhaps you want to see how to translate
> this into
> general things (so ANY to ANY storage migration) inside CloudStack.
> 
> Cheers
> 
> On 2 February 2018 at 10:28, Dag Sonstebo <Dag.Sonstebo@shapeblue.com
> >
> wrote:
> 
> > All
> > 
> > I am doing a bit of R&D around this for a client at the moment. I
> > am
> > semi-successful in getting live migrations to different storage
> > pools to
> > work. The method I’m using is as follows – this does not take into
> > account
> > any efficiency optimisation around the disk transfer (which is next
> > on my
> > list). The below should answer your question Eric about moving to a
> > different location – and I am also working with your steps to see
> > where I
> > can improve the following. Keep in mind all of this is external to
> > CloudStack – although CloudStack picks up the destination KVM host
> > automatically it does not update the volume tables etc., neither
> > does it do
> > any housekeeping.
> > 
> > 1) Ensure the same network bridges are up on source and destination
> > –
> > these are found with:
> > 
> > [root@kvm1 ~]# virsh dumpxml 9 | grep source
> >       <source file='/mnt/00e88a7b-985f-3be8-b717-
> > 0a59d8197640/d0ab5dd5-
> > e3dd-47ac-a326-5ce3d47d194d'/>
> >       <source bridge='breth1-725'/>
> >       <source path='/dev/pts/3'/>
> >       <source path='/dev/pts/3'/>
> > 
> > So from this make sure breth1-725 is up on the destionation host
> > (do it
> > the hard way or cheat and spin up a VM from same account and
> > network on
> > that host)
> > 
> > 2) Find size of source disk and create stub disk in destination
> > (this part
> > can be made more efficient to speed up disk transfer – by doing
> > similar
> > things to what Eric is doing):
> > 
> > [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-
> > 0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-
> > 47ac-a326-5ce3d47d194d
> > file format: qcow2
> > virtual size: 8.0G (8589934592 bytes)
> > disk size: 32M
> > cluster_size: 65536
> > backing file: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/3caaf4c9-
> > eaec-
> > 11e7-800b-06b4a401075c
> > 
> > ######################
> > 
> > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img create
> > -f
> > qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> > Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d', fmt=qcow2
> > size=8589934592 encryption=off cluster_size=65536
> > [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img info
> > d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> > file format: qcow2
> > virtual size: 8.0G (8589934592 bytes)
> > disk size: 448K
> > cluster_size: 65536
> > 
> > 3) Rewrite the new VM XML file for the destination with:
> > a) New disk location, in this case this is just a new path (Eric –
> > this
> > answers your question)
> > b) Different IP addresses for VNC – in this case 10.0.0.1 to
> > 10.0.0.2
> > and carry out migration.
> > 
> > [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-3be8-b717-
> > 0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed -e 's/
> > 10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
> > 
> > [root@kvm1 ~]# virsh migrate --live --persistent --copy-storage-all 
> > --xml
> > /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --verbose
> > --abort-on-error
> > Migration: [ 25 %]
> > 
> > 4) Once complete delete the source file. This can be done with
> > extra
> > switches on the virsh migrate command if need be.
> > = = =
> > 
> > In the simplest tests this works – destination VM remains online
> > and has
> > storage in new location – but it’s not persistent – sometimes the
> > destination VM ends up in a paused state, and I’m working on how to
> > get
> > around this. I also noted virsh migrate has a  migrate-
> > setmaxdowntime which
> > I think can be useful here.
> > 
> > Regards,
> > Dag Sonstebo
> > Cloud Architect
> > ShapeBlue
> > 
> > On 01/02/2018, 20:30, "Andrija Panic" <an...@gmail.com>
> > wrote:
> > 
> >     Actually,  we have this feature (we call this internally
> >     online-storage-migration) to migrate volume from CEPH/NFS to
> > SolidFire
> >     (thanks to Mike Tutkowski)
> > 
> >     There is libvirt mechanism, where basically you start another
> > PAUSED
> > VM on
> >     another host (same name and same XML file, except the storage
> > volumes
> > are
> >     pointing to new storage, different paths, etc and maybe VNC
> > listening
> >     address needs to be changed or so) and then you issue on
> > original
> > host/VM
> >     the live migrate command with few parameters... the libvirt
> > will
> >     transaprently handle the copy data process from Soruce to New
> > volumes,
> > and
> >     after migration the VM will be alive (with new XML since have
> > new
> > volumes)
> >     on new host, while the original VM on original host is
> > destroyed....
> > 
> >     (I can send you manual for this, that is realted to SF, but
> > idea is the
> >     same and you can exercies this on i.e. 2 NFS volumes on 2
> > different
> >     storages)
> > 
> >     This mechanism doesn't exist in ACS in general (AFAIK), except
> > for when
> >     migrating to SolidFire.
> > 
> >     Perhaps community/DEV can help extend Mike's code to do same
> > work on
> >     different storage types...
> > 
> >     Cheers
> > 
> > 
> > Dag.Sonstebo@shapeblue.com
> > www.shapeblue.com
> > 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> > @shapeblue
> > 
> > 
> > 
> > On 19 January 2018 at 18:45, Eric Green <er...@gmail.com>
> > wrote:
> > 
> >     > KVM is able to live migrate entire virtual machines complete
> > with
> > local
> >     > volumes (see 'man virsh') but does require nbd (Network Block
> > Device) to be
> >     > installed on the destination host to do so. It may need
> > installation
> > of
> >     > later libvirt / qemu packages from OpenStack repositories on
> > Centos
> > 6, I'm
> >     > not sure, but just works on Centos 7. In any event, I have
> > used this
> >     > functionality to move virtual machines between virtualization
> > hosts
> > on my
> >     > home network. It works.
> >     >
> >     > What is missing is the ability to live-migrate a disk from
> > one shared
> >     > storage to another. The functionality built into virsh live-
> > migrates
> > the
> >     > volume ***to the exact same location on the new host***, so
> > obviously is
> >     > useless for migrating the disk to a new location on shared
> > storage. I
> >     > looked everywhere for the ability of KVM to live migrate a
> > disk from
> > point
> >     > A to point B all by itself, and found no such thing.
> > libvirt/qemu
> > has the
> >     > raw capabilities needed to do this, but it is not currently
> > exposed
> > as a
> >     > single API via the qemu console or virsh. It can be emulated
> > via
> > scripting
> >     > however:
> >     >
> >     > 1. Pause virtual machine
> >     > 2. Do qcow2 snapshot.
> >     > 3. Detach base disk, attach qcow2 snapshot
> >     > 4. unpause virtual machine
> >     > 5. copy qcow2 base file to new location
> >     > 6. pause virtual machine
> >     > 7. detach snapshot
> >     > 8. unsnapshot qcow2 snapshot at its new location.
> >     > 9. attach new base at new location.
> >     > 10. unpause virtual machine.
> >     >
> >     > Thing is, if that entire process is not built into the
> > underlying
> >     > kvm/qemu/libvirt infrastructure as tested functionality with
> > a
> > defined API,
> >     > there's no guarantee that it will work seamlessly and will
> > continue
> > working
> >     > with the next release of the underlying infrastructure. This
> > is using
> >     > multiple different tools to manipulate the qcow2 file and
> > attach/detach
> >     > base disks to the running (but paused) kvm domain, and would
> > have to
> > be
> >     > tested against all variations of those tools on all supported
> > Cloudstack
> >     > KVM host platforms. The test matrix looks pretty grim.
> >     >
> >     > By contrast, the migrate-with-local-storage process is built
> > into
> > virsh
> >     > and is tested by the distribution vendor and the set of tools
> > provided with
> >     > the distribution is guaranteed to work with the virsh /
> > libvirt/ qemu
> >     > distributed by the distribution vendor. That makes the test
> > matrix
> > for
> >     > move-with-local-storage look a lot simpler -- "is this
> > functionality
> >     > supported by that version of virsh on that distribution? Yes?
> > Enable
> > it.
> >     > No? Don't enable it."
> >     >
> >     > I'd love to have live migration of disks on shared storage
> > with
> > Cloudstack
> >     > KVM, but not at the expense of reliability. Shutting down a
> > virtual
> > machine
> >     > in order to migrate one of its disks from one shared
> > datastore to
> > another
> >     > is not ideal, but at least it's guaranteed reliable.
> >     >
> >     >
> >     > > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
> >     > rafaelweingartner@gmail.com> wrote:
> >     > >
> >     > > Hey Marc,
> >     > > It is very interesting that you are going to pick this up
> > for KVM.
> > I am
> >     > > working in a related issue for XenServer [1].
> >     > > If you can confirm that KVM is able to live migrate local
> > volumes
> > to
> >     > other
> >     > > local storage or shared storage I could make the feature I
> > am
> > working on
> >     > > available to KVM as well.
> >     > >
> >     > >
> >     > > [1] https://issues.apache.org/jira/browse/CLOUDSTACK-10240
> >     > >
> >     > > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle Brothier <
> >     > marco@exoscale.ch>
> >     > > wrote:
> >     > >
> >     > >> There's a PR waiting to be fixed about live migration with
> > local
> > volume
> >     > for
> >     > >> KVM. So it will come at some point. I'm the one who made
> > this PR
> > but I'm
> >     > >> not using the upstream release so it's hard for me to
> > debug the
> > problem.
> >     > >> You can add yourself to the PR to get notify when things
> > are
> > moving on
> >     > it.
> >     > >>
> >     > >> https://github.com/apache/cloudstack/pull/1709
> >     > >>
> >     > >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <
> > eric.lee.green@gmail.com>
> >     > >> wrote:
> >     > >>
> >     > >>> Theoretically on Centos 7 as the host KVM OS it could be
> > done
> > with a
> >     > >>> couple of pauses and the snapshotting mechanism built
> > into
> > qcow2, but
> >     > >> there
> >     > >>> is no simple way to do it directly via virsh, the
> > libvirtd/qemu
> > control
> >     > >>> program that is used to manage virtualization. It's not
> > as with
> >     > issuing a
> >     > >>> simple vmotion 'migrate volume' call in Vmware.
> >     > >>>
> >     > >>> I scripted out how it would work without that direct
> > support in
> >     > >>> libvirt/virsh and after looking at all the points where
> > things
> > could go
> >     > >>> wrong, honestly, I think we need to wait until there is
> > support
> > in
> >     > >>> libvirt/virsh to do this. virsh clearly has the
> > capability
> > internally
> >     > to
> >     > >> do
> >     > >>> live migration of storage, since it does this for live
> > domain
> > migration
> >     > >> of
> >     > >>> local storage between machines when migrating KVM domains
> > from
> > one host
> >     > >> to
> >     > >>> another, but that capability is not currently exposed in
> > a way
> >     > Cloudstack
> >     > >>> could use, at least not on Centos 7.
> >     > >>>
> >     > >>>
> >     > >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <pp...@pulab.pl>
> > wrote:
> >     > >>>>
> >     > >>>> Hello,
> >     > >>>>
> >     > >>>> Is there a chance that one day it will be possible to
> > migrate
> > volume
> >     > >>> (root disk) of a live VM in KVM between storage pools (in
> > CloudStack)?
> >     > >>>> Like a storage vMotion in Vmware.
> >     > >>>>
> >     > >>>> Best regards,
> >     > >>>> Piotr
> >     > >>>>
> >     > >>>
> >     > >>>
> >     > >>
> >     > >
> >     > >
> >     > >
> >     > > --
> >     > > Rafael Weingärtner
> >     >
> >     >
> > 
> > 
> >     --
> > 
> >     Andrija Panić
> > 
> > 
> > 
> 
> 
-- 
-- 
Heinlein Support GmbH
Schwedter Str. 8/9b, 10119 Berlin
  
https://www.heinlein-support.de
 
Tel: 030 / 40 50 51 - 62
Fax: 030 / 40 50 51 - 19
  
Amtsgericht Berlin-Charlottenburg - HRB 93818 B
Geschäftsführer: Peer Heinlein - Sitz: Berlin

Re: kvm live volume migration

Posted by Andrija Panic <an...@gmail.com>.
@Dag, you might want to check with Mike Tutkowski how he implemented this
for the "online storage migration" from other storages (CEPH and NFS are
implemented as sources so far) to SolidFire.

We do exactly the same thing in a demo/manual way (this is the procedure Mike
sent me back in the day), so perhaps you can use it to work out how to translate
this into something general (ANY to ANY storage migration) inside CloudStack.

Cheers

On 2 February 2018 at 10:28, Dag Sonstebo <Da...@shapeblue.com>
wrote:

> All
>
> I am doing a bit of R&D around this for a client at the moment. I am
> semi-successful in getting live migrations to different storage pools to
> work. The method I’m using is as follows – this does not take into account
> any efficiency optimisation around the disk transfer (which is next on my
> list). The below should answer your question Eric about moving to a
> different location – and I am also working with your steps to see where I
> can improve the following. Keep in mind all of this is external to
> CloudStack – although CloudStack picks up the destination KVM host
> automatically it does not update the volume tables etc., neither does it do
> any housekeeping.
>
> 1) Ensure the same network bridges are up on source and destination –
> these are found with:
>
> [root@kvm1 ~]# virsh dumpxml 9 | grep source
>       <source file='/mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-
> e3dd-47ac-a326-5ce3d47d194d'/>
>       <source bridge='breth1-725'/>
>       <source path='/dev/pts/3'/>
>       <source path='/dev/pts/3'/>
>
> So from this make sure breth1-725 is up on the destionation host (do it
> the hard way or cheat and spin up a VM from same account and network on
> that host)
>
> 2) Find size of source disk and create stub disk in destination (this part
> can be made more efficient to speed up disk transfer – by doing similar
> things to what Eric is doing):
>
> [root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-
> 0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-
> 47ac-a326-5ce3d47d194d
> file format: qcow2
> virtual size: 8.0G (8589934592 bytes)
> disk size: 32M
> cluster_size: 65536
> backing file: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/3caaf4c9-eaec-
> 11e7-800b-06b4a401075c
>
> ######################
>
> [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img create -f
> qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
> Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d', fmt=qcow2
> size=8589934592 encryption=off cluster_size=65536
> [root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img info
> d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
> file format: qcow2
> virtual size: 8.0G (8589934592 bytes)
> disk size: 448K
> cluster_size: 65536
>
> 3) Rewrite the new VM XML file for the destination with:
> a) New disk location, in this case this is just a new path (Eric – this
> answers your question)
> b) Different IP addresses for VNC – in this case 10.0.0.1 to 10.0.0.2
> and carry out migration.
>
> [root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-3be8-b717-
> 0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed -e 's/
> 10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
>
> [root@kvm1 ~]# virsh migrate --live --persistent --copy-storage-all --xml
> /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --verbose
> --abort-on-error
> Migration: [ 25 %]
>
> 4) Once complete delete the source file. This can be done with extra
> switches on the virsh migrate command if need be.
> = = =
>
> In the simplest tests this works – destination VM remains online and has
> storage in new location – but it’s not persistent – sometimes the
> destination VM ends up in a paused state, and I’m working on how to get
> around this. I also noted virsh migrate has a  migrate-setmaxdowntime which
> I think can be useful here.
>
> Regards,
> Dag Sonstebo
> Cloud Architect
> ShapeBlue
>
> On 01/02/2018, 20:30, "Andrija Panic" <an...@gmail.com> wrote:
>
>     Actually,  we have this feature (we call this internally
>     online-storage-migration) to migrate volume from CEPH/NFS to SolidFire
>     (thanks to Mike Tutkowski)
>
>     There is libvirt mechanism, where basically you start another PAUSED
> VM on
>     another host (same name and same XML file, except the storage volumes
> are
>     pointing to new storage, different paths, etc and maybe VNC listening
>     address needs to be changed or so) and then you issue on original
> host/VM
>     the live migrate command with few parameters... the libvirt will
>     transaprently handle the copy data process from Soruce to New volumes,
> and
>     after migration the VM will be alive (with new XML since have new
> volumes)
>     on new host, while the original VM on original host is destroyed....
>
>     (I can send you manual for this, that is realted to SF, but idea is the
>     same and you can exercies this on i.e. 2 NFS volumes on 2 different
>     storages)
>
>     This mechanism doesn't exist in ACS in general (AFAIK), except for when
>     migrating to SolidFire.
>
>     Perhaps community/DEV can help extend Mike's code to do same work on
>     different storage types...
>
>     Cheers
>
>
> Dag.Sonstebo@shapeblue.com
> www.shapeblue.com
> 53 Chandos Place, Covent Garden, London  WC2N 4HSUK
> @shapeblue
>
>
>
> On 19 January 2018 at 18:45, Eric Green <er...@gmail.com> wrote:
>
>     > KVM is able to live migrate entire virtual machines complete with
> local
>     > volumes (see 'man virsh') but does require nbd (Network Block
> Device) to be
>     > installed on the destination host to do so. It may need installation
> of
>     > later libvirt / qemu packages from OpenStack repositories on Centos
> 6, I'm
>     > not sure, but just works on Centos 7. In any event, I have used this
>     > functionality to move virtual machines between virtualization hosts
> on my
>     > home network. It works.
>     >
>     > What is missing is the ability to live-migrate a disk from one shared
>     > storage to another. The functionality built into virsh live-migrates
> the
>     > volume ***to the exact same location on the new host***, so
> obviously is
>     > useless for migrating the disk to a new location on shared storage. I
>     > looked everywhere for the ability of KVM to live migrate a disk from
> point
>     > A to point B all by itself, and found no such thing. libvirt/qemu
> has the
>     > raw capabilities needed to do this, but it is not currently exposed
> as a
>     > single API via the qemu console or virsh. It can be emulated via
> scripting
>     > however:
>     >
>     > 1. Pause virtual machine
>     > 2. Do qcow2 snapshot.
>     > 3. Detach base disk, attach qcow2 snapshot
>     > 4. unpause virtual machine
>     > 5. copy qcow2 base file to new location
>     > 6. pause virtual machine
>     > 7. detach snapshot
>     > 8. unsnapshot qcow2 snapshot at its new location.
>     > 9. attach new base at new location.
>     > 10. unpause virtual machine.
>     >
>     > Thing is, if that entire process is not built into the underlying
>     > kvm/qemu/libvirt infrastructure as tested functionality with a
> defined API,
>     > there's no guarantee that it will work seamlessly and will continue
> working
>     > with the next release of the underlying infrastructure. This is using
>     > multiple different tools to manipulate the qcow2 file and
> attach/detach
>     > base disks to the running (but paused) kvm domain, and would have to
> be
>     > tested against all variations of those tools on all supported
> Cloudstack
>     > KVM host platforms. The test matrix looks pretty grim.
>     >
>     > By contrast, the migrate-with-local-storage process is built into
> virsh
>     > and is tested by the distribution vendor and the set of tools
> provided with
>     > the distribution is guaranteed to work with the virsh / libvirt/ qemu
>     > distributed by the distribution vendor. That makes the test matrix
> for
>     > move-with-local-storage look a lot simpler -- "is this functionality
>     > supported by that version of virsh on that distribution? Yes? Enable
> it.
>     > No? Don't enable it."
>     >
>     > I'd love to have live migration of disks on shared storage with
> Cloudstack
>     > KVM, but not at the expense of reliability. Shutting down a virtual
> machine
>     > in order to migrate one of its disks from one shared datastore to
> another
>     > is not ideal, but at least it's guaranteed reliable.
>     >
>     >
>     > > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
>     > rafaelweingartner@gmail.com> wrote:
>     > >
>     > > Hey Marc,
>     > > It is very interesting that you are going to pick this up for KVM.
> I am
>     > > working in a related issue for XenServer [1].
>     > > If you can confirm that KVM is able to live migrate local volumes
> to
>     > other
>     > > local storage or shared storage I could make the feature I am
> working on
>     > > available to KVM as well.
>     > >
>     > >
>     > > [1] https://issues.apache.org/jira/browse/CLOUDSTACK-10240
>     > >
>     > > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle Brothier <
>     > marco@exoscale.ch>
>     > > wrote:
>     > >
>     > >> There's a PR waiting to be fixed about live migration with local
> volume
>     > for
>     > >> KVM. So it will come at some point. I'm the one who made this PR
> but I'm
>     > >> not using the upstream release so it's hard for me to debug the
> problem.
>     > >> You can add yourself to the PR to get notify when things are
> moving on
>     > it.
>     > >>
>     > >> https://github.com/apache/cloudstack/pull/1709
>     > >>
>     > >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <
> eric.lee.green@gmail.com>
>     > >> wrote:
>     > >>
>     > >>> Theoretically on Centos 7 as the host KVM OS it could be done
> with a
>     > >>> couple of pauses and the snapshotting mechanism built into
> qcow2, but
>     > >> there
>     > >>> is no simple way to do it directly via virsh, the libvirtd/qemu
> control
>     > >>> program that is used to manage virtualization. It's not as with
>     > issuing a
>     > >>> simple vmotion 'migrate volume' call in Vmware.
>     > >>>
>     > >>> I scripted out how it would work without that direct support in
>     > >>> libvirt/virsh and after looking at all the points where things
> could go
>     > >>> wrong, honestly, I think we need to wait until there is support
> in
>     > >>> libvirt/virsh to do this. virsh clearly has the capability
> internally
>     > to
>     > >> do
>     > >>> live migration of storage, since it does this for live domain
> migration
>     > >> of
>     > >>> local storage between machines when migrating KVM domains from
> one host
>     > >> to
>     > >>> another, but that capability is not currently exposed in a way
>     > Cloudstack
>     > >>> could use, at least not on Centos 7.
>     > >>>
>     > >>>
>     > >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <pp...@pulab.pl> wrote:
>     > >>>>
>     > >>>> Hello,
>     > >>>>
>     > >>>> Is there a chance that one day it will be possible to migrate
> volume
>     > >>> (root disk) of a live VM in KVM between storage pools (in
> CloudStack)?
>     > >>>> Like a storage vMotion in Vmware.
>     > >>>>
>     > >>>> Best regards,
>     > >>>> Piotr
>     > >>>>
>     > >>>
>     > >>>
>     > >>
>     > >
>     > >
>     > >
>     > > --
>     > > Rafael Weingärtner
>     >
>     >
>
>
>     --
>
>     Andrija Panić
>
>
>


-- 

Andrija Panić

Re: kvm live volume migration

Posted by Dag Sonstebo <Da...@shapeblue.com>.
All

I am doing a bit of R&D around this for a client at the moment. I am semi-successful in getting live migrations to different storage pools to work. The method I’m using is as follows – it does not yet take into account any efficiency optimisation around the disk transfer (which is next on my list). The below should answer your question, Eric, about moving to a different location – and I am also working through your steps to see where I can improve the following. Keep in mind all of this is external to CloudStack – although CloudStack picks up the destination KVM host automatically, it does not update the volume tables etc., nor does it do any housekeeping.

1) Ensure the same network bridges are up on source and destination – these are found with:

[root@kvm1 ~]# virsh dumpxml 9 | grep source
      <source file='/mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d'/>
      <source bridge='breth1-725'/>
      <source path='/dev/pts/3'/>
      <source path='/dev/pts/3'/>

So from this, make sure breth1-725 is up on the destination host (do it the hard way, or cheat and spin up a VM from the same account and network on that host).
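
As a quick illustration (just one way of checking, and assuming standard Linux bridging rather than Open vSwitch), the bridge can be verified directly on the destination host:

[root@kvm3 ~]# ip link show breth1-725          # or: brctl show | grep breth1-725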

2) Find the size of the source disk and create a stub disk on the destination (this part can be made more efficient to speed up the disk transfer – by doing similar things to what Eric is doing; one possible optimisation is sketched after the listings below):

[root@kvm1 ~]# qemu-img info /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
image: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 32M
cluster_size: 65536
backing file: /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/3caaf4c9-eaec-11e7-800b-06b4a401075c

######################

[root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img create -f qcow2 d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G
Formatting 'd0ab5dd5-e3dd-47ac-a326-5ce3d47d194d', fmt=qcow2 size=8589934592 encryption=off cluster_size=65536
[root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img info d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
image: d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d
file format: qcow2
virtual size: 8.0G (8589934592 bytes)
disk size: 448K
cluster_size: 65536
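
One possible optimisation – purely a sketch, and it assumes the template backing file shown in the qemu-img output above is present (or copied) under the same name on the destination pool – is to create the stub against that backing file and migrate incrementally, so only the delta on top of the shared backing file is streamed:

# copy the backing/template file to the destination pool first (assumption: it is not already there)
[root@kvm1 ~]# scp /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/3caaf4c9-eaec-11e7-800b-06b4a401075c \
    10.0.0.2:/mnt/50848ff7-c6aa-3fdd-b487-27899bf2129c/

# create the stub on the destination against the same backing file instead of a flat 8G image
[root@kvm3 50848ff7-c6aa-3fdd-b487-27899bf2129c]# qemu-img create -f qcow2 \
    -b /mnt/50848ff7-c6aa-3fdd-b487-27899bf2129c/3caaf4c9-eaec-11e7-800b-06b4a401075c \
    d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d 8G

The migrate command in step 3 would then use --copy-storage-inc rather than --copy-storage-all. This combination has not been verified end to end here, so treat it as a direction to explore rather than a tested recipe.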

3) Rewrite the VM XML file for the destination with:
a) the new disk location – in this case this is just a new path (Eric – this answers your question)
b) a different IP address for VNC – in this case 10.0.0.1 to 10.0.0.2
and then carry out the migration.

[root@kvm1 ~]# virsh dumpxml 9 | sed -e 's/00e88a7b-985f-3be8-b717-0a59d8197640/50848ff7-c6aa-3fdd-b487-27899bf2129c/g' | sed -e 's/10.0.0.1/10.0.0.2/g' > /root/i-2-14-VM.xml
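
Before launching the migration it may be worth sanity-checking the rewritten XML (an illustrative check only) to confirm the sed substitutions landed on the disk path and the VNC listen address:

[root@kvm1 ~]# grep -E "source file|listen" /root/i-2-14-VM.xml
# expect the new pool UUID (50848ff7-...) in the disk path and listen='10.0.0.2' on the graphics element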

[root@kvm1 ~]# virsh migrate --live --persistent --copy-storage-all --xml /root/i-2-14-VM.xml i-2-14-VM qemu+tcp://10.0.0.2/system --verbose --abort-on-error
Migration: [ 25 %]

4) Once complete, delete the source file. (Cleaning up the source domain definition can be handled with extra switches on the virsh migrate command, e.g. --undefinesource, if need be; the source disk file itself still has to be removed manually.)
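
A minimal cleanup sketch, assuming the migration completed and using the paths from the example above:

# confirm the domain is now running on the destination
[root@kvm1 ~]# virsh -c qemu+tcp://10.0.0.2/system list | grep i-2-14-VM

# then remove the now-unused source volume from the old primary storage
[root@kvm1 ~]# rm -i /mnt/00e88a7b-985f-3be8-b717-0a59d8197640/d0ab5dd5-e3dd-47ac-a326-5ce3d47d194d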
= = = 

In the simplest tests this works – the destination VM remains online and has its storage in the new location – but it is not consistent: sometimes the destination VM ends up in a paused state, and I’m working on how to get around this. I also noted virsh has a migrate-setmaxdowntime command which I think can be useful here.
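
Two commands that may help here (illustrative only; the downtime value is an arbitrary example in milliseconds): migrate-setmaxdowntime can be issued from a second shell on the source while the migration is running, and if the destination VM does land in a paused state it can usually be resumed manually:

[root@kvm1 ~]# virsh migrate-setmaxdowntime i-2-14-VM 1000
[root@kvm3 ~]# virsh resume i-2-14-VM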

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

On 01/02/2018, 20:30, "Andrija Panic" <an...@gmail.com> wrote:

    Actually,  we have this feature (we call this internally
    online-storage-migration) to migrate volume from CEPH/NFS to SolidFire
    (thanks to Mike Tutkowski)
    
    There is libvirt mechanism, where basically you start another PAUSED VM on
    another host (same name and same XML file, except the storage volumes are
    pointing to new storage, different paths, etc and maybe VNC listening
    address needs to be changed or so) and then you issue on original host/VM
    the live migrate command with few parameters... the libvirt will
    transaprently handle the copy data process from Soruce to New volumes, and
    after migration the VM will be alive (with new XML since have new volumes)
    on new host, while the original VM on original host is destroyed....
    
    (I can send you manual for this, that is realted to SF, but idea is the
    same and you can exercies this on i.e. 2 NFS volumes on 2 different
    storages)
    
    This mechanism doesn't exist in ACS in general (AFAIK), except for when
    migrating to SolidFire.
    
    Perhaps community/DEV can help extend Mike's code to do same work on
    different storage types...
    
    Cheers
    
    
Dag.Sonstebo@shapeblue.com 
www.shapeblue.com
53 Chandos Place, Covent Garden, London  WC2N 4HSUK
@shapeblue
  
 

On 19 January 2018 at 18:45, Eric Green <er...@gmail.com> wrote:
    
    > KVM is able to live migrate entire virtual machines complete with local
    > volumes (see 'man virsh') but does require nbd (Network Block Device) to be
    > installed on the destination host to do so. It may need installation of
    > later libvirt / qemu packages from OpenStack repositories on Centos 6, I'm
    > not sure, but just works on Centos 7. In any event, I have used this
    > functionality to move virtual machines between virtualization hosts on my
    > home network. It works.
    >
    > What is missing is the ability to live-migrate a disk from one shared
    > storage to another. The functionality built into virsh live-migrates the
    > volume ***to the exact same location on the new host***, so obviously is
    > useless for migrating the disk to a new location on shared storage. I
    > looked everywhere for the ability of KVM to live migrate a disk from point
    > A to point B all by itself, and found no such thing. libvirt/qemu has the
    > raw capabilities needed to do this, but it is not currently exposed as a
    > single API via the qemu console or virsh. It can be emulated via scripting
    > however:
    >
    > 1. Pause virtual machine
    > 2. Do qcow2 snapshot.
    > 3. Detach base disk, attach qcow2 snapshot
    > 4. unpause virtual machine
    > 5. copy qcow2 base file to new location
    > 6. pause virtual machine
    > 7. detach snapshot
    > 8. unsnapshot qcow2 snapshot at its new location.
    > 9. attach new base at new location.
    > 10. unpause virtual machine.
    >
    > Thing is, if that entire process is not built into the underlying
    > kvm/qemu/libvirt infrastructure as tested functionality with a defined API,
    > there's no guarantee that it will work seamlessly and will continue working
    > with the next release of the underlying infrastructure. This is using
    > multiple different tools to manipulate the qcow2 file and attach/detach
    > base disks to the running (but paused) kvm domain, and would have to be
    > tested against all variations of those tools on all supported Cloudstack
    > KVM host platforms. The test matrix looks pretty grim.
    >
    > By contrast, the migrate-with-local-storage process is built into virsh
    > and is tested by the distribution vendor and the set of tools provided with
    > the distribution is guaranteed to work with the virsh / libvirt/ qemu
    > distributed by the distribution vendor. That makes the test matrix for
    > move-with-local-storage look a lot simpler -- "is this functionality
    > supported by that version of virsh on that distribution? Yes? Enable it.
    > No? Don't enable it."
    >
    > I'd love to have live migration of disks on shared storage with Cloudstack
    > KVM, but not at the expense of reliability. Shutting down a virtual machine
    > in order to migrate one of its disks from one shared datastore to another
    > is not ideal, but at least it's guaranteed reliable.
    >
    >
    > > On Jan 19, 2018, at 04:54, Rafael Weingärtner <
    > rafaelweingartner@gmail.com> wrote:
    > >
    > > Hey Marc,
    > > It is very interesting that you are going to pick this up for KVM. I am
    > > working in a related issue for XenServer [1].
    > > If you can confirm that KVM is able to live migrate local volumes to
    > other
    > > local storage or shared storage I could make the feature I am working on
    > > available to KVM as well.
    > >
    > >
    > > [1] https://issues.apache.org/jira/browse/CLOUDSTACK-10240
    > >
    > > On Thu, Jan 18, 2018 at 11:35 AM, Marc-Aurèle Brothier <
    > marco@exoscale.ch>
    > > wrote:
    > >
    > >> There's a PR waiting to be fixed about live migration with local volume
    > for
    > >> KVM. So it will come at some point. I'm the one who made this PR but I'm
    > >> not using the upstream release so it's hard for me to debug the problem.
    > >> You can add yourself to the PR to get notify when things are moving on
    > it.
    > >>
    > >> https://github.com/apache/cloudstack/pull/1709
    > >>
    > >> On Wed, Jan 17, 2018 at 10:56 AM, Eric Green <er...@gmail.com>
    > >> wrote:
    > >>
    > >>> Theoretically on Centos 7 as the host KVM OS it could be done with a
    > >>> couple of pauses and the snapshotting mechanism built into qcow2, but
    > >> there
    > >>> is no simple way to do it directly via virsh, the libvirtd/qemu control
    > >>> program that is used to manage virtualization. It's not as with
    > issuing a
    > >>> simple vmotion 'migrate volume' call in Vmware.
    > >>>
    > >>> I scripted out how it would work without that direct support in
    > >>> libvirt/virsh and after looking at all the points where things could go
    > >>> wrong, honestly, I think we need to wait until there is support in
    > >>> libvirt/virsh to do this. virsh clearly has the capability internally
    > to
    > >> do
    > >>> live migration of storage, since it does this for live domain migration
    > >> of
    > >>> local storage between machines when migrating KVM domains from one host
    > >> to
    > >>> another, but that capability is not currently exposed in a way
    > Cloudstack
    > >>> could use, at least not on Centos 7.
    > >>>
    > >>>
    > >>>> On Jan 17, 2018, at 01:05, Piotr Pisz <pp...@pulab.pl> wrote:
    > >>>>
    > >>>> Hello,
    > >>>>
    > >>>> Is there a chance that one day it will be possible to migrate volume
    > >>> (root disk) of a live VM in KVM between storage pools (in CloudStack)?
    > >>>> Like a storage vMotion in Vmware.
    > >>>>
    > >>>> Best regards,
    > >>>> Piotr
    > >>>>
    > >>>
    > >>>
    > >>
    > >
    > >
    > >
    > > --
    > > Rafael Weingärtner
    >
    >
    
    
    -- 
    
    Andrija Panić