Posted to dev@cloudstack.apache.org by Sateesh Chodapuneedi <sa...@citrix.com> on 2013/03/11 18:05:17 UTC

RE: [DISCUSS] Enabling storage xenmotion on xenserver 6.1

It seems each hypervisor has its own limit on how many parallel storage motion operations it supports.
Should CloudStack be aware of these limits, or provide a configurable limit on parallel operations?
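
A minimal sketch of how such a configurable cap could be enforced (the class
name and the setting it would read, e.g. "storage.motion.max.parallel", are
hypothetical):

import java.util.concurrent.Semaphore;

public class StorageMotionThrottle {
    private final Semaphore slots;

    // maxParallel would come from a per-hypervisor configurable setting.
    public StorageMotionThrottle(int maxParallel) {
        this.slots = new Semaphore(maxParallel);
    }

    public void run(Runnable storageMotionTask) throws InterruptedException {
        slots.acquire();              // block until a slot frees up
        try {
            storageMotionTask.run();  // the actual storage motion operation
        } finally {
            slots.release();          // free the slot even on failure
        }
    }
}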

Regards,
Sateesh

> -----Original Message-----
> From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> Sent: 18 January 2013 07:35
> To: cloudstack-dev@incubator.apache.org
> Subject: RE: [DISCUSS] Enabling storage xenmotion on xenserver 6.1
> 
> I will double check; maybe things have changed in newer versions, but last I
> checked the qemu storage-copy option doesn't support the fancy stuff like
> backing images, so there is no need to create the template for root. I will
> follow up on that though.
> 
> One other caveat, I think: the source format probably has to match the
> destination format. So you can't live migrate between RBD and qcow2, but you
> could between NFS and local storage, or some other primary storage that
> supports the same format.
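>
> A minimal sketch of that guard (a hypothetical helper; the format names are
> illustrative):
>
> class FormatGuard {
>     enum DiskFormat { QCOW2, RAW, RBD }
>
>     static void checkLiveStorageMigration(DiskFormat src, DiskFormat dst) {
>         // A live storage copy can't convert formats in flight, so refuse
>         // mismatched source/destination formats up front.
>         if (src != dst) {
>             throw new IllegalArgumentException(
>                     "live storage migration needs matching formats: "
>                     + src + " -> " + dst);
>         }
>     }
> }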
> On Jan 17, 2013 6:41 PM, "Edison Su" <Ed...@citrix.com> wrote:
> 
> >
> >
> > > -----Original Message-----
> > > From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> > > Sent: Wednesday, January 16, 2013 4:35 PM
> > > To: cloudstack-dev@incubator.apache.org
> > > Subject: Re: [DISCUSS] Enabling storage xenmotion on xenserver 6.1
> > >
> > > The main blocker for KVM is that you can't start the VM with the
> > > -snapshot option (which is a way to start a VM with a base image and use
> > > a temp file for any changes, more of a temporary VM). Also, if you
> > > storage migrate a qcow2 image (or any sparse file), it gets fattened in
> > > the copy, which is bad, but people may still prefer the option to be
> > > able to migrate. For those of us using LUNs on a SAN or something like
> > > that, though, it will be business as usual: both clusters see the
> > > storage, and the VM hops from one host to the next.
> > >
> > > It actually works almost identically to the existing migrate, with the
> > > additions that identical new storage needs to be set up on the
> > > destination, and an extra flag set in the migrate command.
> > >
> > > I'm not sure about RBD support, but I'm assuming it works so long as
> > > the storage is available (Wido? Can you see me waving my hand?). I
> > > bring that one up because it's the one I'm aware of that is baked into
> > > qemu, where it speaks to the storage directly through librados rather
> > > than using some sort of block device or filesystem that the host is
> > > aware of.
> > >
> > > I think this partially goes with zone-wide primary storage. In that
> > > case there's no longer a need to storage migrate when moving between
> > > clusters.
> > >
> > > As far as orchestration goes, I'll have to read through the other
> > > hypervisor requirements, but on the KVM side the storage migration is
> > > triggered via the agent (via libvirt). If I remember correctly, there's
> > > a portion of the cs server that preps the destination host (passing
> > > PrepareForMigrationCommand) and then one that triggers the migration on
> > > the source host (passing MigrateCommand). We'd just add to those
> > > sections in the server to create any volumes necessary on the
> > > destination storage first,
> >
> >
> > Good to know that we can control the whole process in KVM.
> > Is the following what we need to do for storage migration in KVM?
> > 1. Create the template on the destination storage pool, if the source
> >    root disk's template is not already on the destination storage pool.
> > 2. Create volumes on the destination storage pool for each volume of the
> >    migrating VM.
> > 3. Send the storage migration command to libvirt on the source KVM host.
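> >
> > A rough sketch of those three steps (every type and method name here is a
> > hypothetical placeholder, not an existing CloudStack class):
> >
> > void migrateWithStorage(Vm vm, StoragePool destPool, Host destHost) {
> >     // 1. Copy the root disk's template to the destination pool if missing.
> >     if (!destPool.hasTemplate(vm.getRootDiskTemplate())) {
> >         destPool.copyTemplate(vm.getRootDiskTemplate());
> >     }
> >     // 2. Create a destination volume for each volume of the migrating VM.
> >     for (Volume vol : vm.getVolumes()) {
> >         destPool.createVolume(vol.getName(), vol.getSize(), vol.getFormat());
> >     }
> >     // 3. Ask the agent on the source host to start the libvirt migration.
> >     sourceAgent.send(new MigrateCommand(vm.getName(), destHost, true /* copy storage */));
> > }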
> >
> > > along with the nic/network prep it already does. Then on the source
> > > host it does a MigrateCommand with the libvirt flag to copy storage
> > > (VIR_MIGRATE_NON_SHARED_DISK in the libvirt API, passed along to
> > > virDomainMigrate).
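> > >
> > > A minimal sketch of that call through the libvirt Java bindings (the
> > > host URI is a placeholder; the flag values mirror libvirt's public
> > > virDomainMigrateFlags):
> > >
> > > import org.libvirt.Connect;
> > > import org.libvirt.Domain;
> > > import org.libvirt.LibvirtException;
> > >
> > > public class DiskCopyMigration {
> > >     static final long VIR_MIGRATE_LIVE            = 1L << 0; // keep guest running
> > >     static final long VIR_MIGRATE_NON_SHARED_DISK = 1L << 6; // full disk copy
> > >
> > >     static void migrate(Connect srcConn, String vmName) throws LibvirtException {
> > >         Connect destConn = new Connect("qemu+tcp://dest-host/system");
> > >         Domain dom = srcConn.domainLookupByName(vmName);
> > >         // Live-migrate the VM and copy its non-shared disks over.
> > >         dom.migrate(destConn, VIR_MIGRATE_LIVE | VIR_MIGRATE_NON_SHARED_DISK,
> > >                     vmName, null, 0);
> > >     }
> > > }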
> > >
> > > I actually think it could be added to migrateVirtualMachine without too
> > > much effort, just adding optional parameters "clusterid" and a boolean
> > > "copystorage". That would make it flexible enough to copy local storage
> > > between hosts by providing a hostid and copystorage=1, as well as to
> > > migrate between clusters. That's what I'd do if we only had to deal
> > > with KVM anyway; it might make sense to do something entirely different
> > > if that isn't sufficiently hypervisor agnostic.
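> > >
> > > For illustration, a call under that scheme might look like the
> > > following ("copystorage" is the proposed addition, not an existing
> > > parameter of the migrateVirtualMachine API):
> > >
> > > /client/api?command=migrateVirtualMachine&virtualmachineid=<vm-id>&hostid=<dest-host-id>&copystorage=1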
> > >
> > > On Wed, Jan 16, 2013 at 2:46 PM, Alex Huang <Al...@citrix.com>
> > > wrote:
> > >
> > > > Marcus,
> > > >
> > > > Nice.  Can you take it for the KVM resource then?  Do you have any
> > > > comments on the orchestration flow of this?
> > > >
> > > > --Alex
> > > >
> > > > > -----Original Message-----
> > > > > From: Marcus Sorensen [mailto:shadowsor@gmail.com]
> > > > > Sent: Wednesday, January 16, 2013 1:33 PM
> > > > > To: cloudstack-dev@incubator.apache.org
> > > > > Subject: Re: [DISCUSS] Enabling storage xenmotion on xenserver
> > > > > 6.1
> > > > >
> > > > > We want this for KVM as well. libvirt has been able to do a
> > > > > storage copy along with a live migrate for a while now, so it's
> > > > > just a matter of making sure that the implementation is compatible
> > > > > so we can plumb in the libvirt stuff.
> > > > >
> > > > > On Wed, Jan 16, 2013 at 9:35 AM, Chris Sears
> > > > > <ch...@sungard.com> wrote:
> > > > >
> > > > > > On Wed, Jan 16, 2013 at 11:07 AM, Koushik Das <koushik.das@citrix.com> wrote:
> > > > > >
> > > > > > > Looks like storage vMotion is not possible for VMs with linked
> > > > > > > clones. Check this KB:
> > > > > > > http://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=1014249
> > > > > > > This is for 4.x, not sure if the latest version supports it.
> > > > > > >
> > > > > >
> > > > > > That is no longer a limitation in vSphere 5.x; VMs with
> > > > > > snapshots and linked clones can be storage vMotioned.
> > > > > >
> > > > > > Mentioned here:
> > > > > >
> > > > > > http://blogs.vmware.com/vsphere/2011/07/new-vsphere-50-storage-features-part-2-storage-vmotion.html
> > > > > >
> > > > > >  - Chris
> > > > > >
> > > >
> >