Posted to users@cloudstack.apache.org by cs user <ac...@gmail.com> on 2016/08/24 14:20:50 UTC

Cloudstack - volume migration between clusters (primary storage)

Hi All,

XenServer 6.5, CloudStack 4.5.2, NFS primary storage volumes.

Let's say I have 1 pod with 2 clusters, and each cluster has its own primary
storage.

If I migrate a volume from one primary storage to the other one, using
cloudstack, what aspect of the environment is responsible for this copy?

I'm trying to identify bottlenecks, but I can't see what is responsible for
this copying. Is it the Xen hosts themselves or the secondary storage VM?

Thanks!

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by cs user <ac...@gmail.com>.
Hi All,

So after 24 hours I hit this:

Caused by: com.mysql.jdbc.exceptions.jdbc4.CommunicationsException: The
last packet successfully received from the server was 86,290,759
milliseconds ago.  The last packet sent successfully to the server was
86,290,760 milliseconds ago. is longer than the server configured value of
'wait_timeout'. You should consider either expiring and/or testing
connection validity before use in your application, increasing the server
configured values for client timeouts, or using the Connector/J connection
property 'autoReconnect=true' to avoid this problem.

I think I hit a cloudstack timeout of 24 hours somewhere; I need to check
which one it might have been. But I hit the above error as it was cleaning
up, so even if the copy had worked, it looks like it would have failed
anyway when it tried to update the db.

This looks like a jdbc setting to me, or perhaps a mysql setting? How can I
increase it?
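For reference, the reported idle time works out to just under 24 hours, which fits a connection sitting idle for the whole copy and only being reused during cleanup. A quick check, plus a hedged sketch of inspecting and raising the MySQL side (the 172800 value is purely illustrative):

```shell
# ~86.29 million milliseconds is just short of 24 hours:
awk 'BEGIN { printf "%.2f hours\n", 86290759 / 1000 / 3600 }'

# Inspect and raise the server-side idle timeout (needs MySQL privileges;
# 172800 s = 48 h is only an example value, and a SET GLOBAL change does
# not survive a mysqld restart unless it is also put in my.cnf):
# mysql -e "SHOW VARIABLES LIKE 'wait_timeout';"
# mysql -e "SET GLOBAL wait_timeout = 172800;"
```

The Connector/J `autoReconnect=true` property the message mentions would be a management-server-side change; check your installation's db.properties rather than taking these names as authoritative.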

Cheers!


On Thu, Aug 25, 2016 at 7:41 AM, cs user <ac...@gmail.com> wrote:

> Thanks Makrand, I will take a look at these values, much appreciated!
>
> On Thu, Aug 25, 2016 at 7:36 AM, Makrand <ma...@gmail.com> wrote:
>
>> Hi,
>>
>> Timeouts while moving big volumes on top of XENServer is pretty common
>> thing. Last time for bigger volumes we had and issue and following global
>> settings were changed. (Just added one  extra zero at end of each
>> settings.
>> this was on XEN 6.2 and ACS4.4.2)
>>
>> migratewait: 3600
>> storage.pool.max.waitseconds: 3600
>> vm.op.cancel.interval: 3600
>> vm.op.cleanup.wait: 3600
>> wait:1800
>> vm.tranisition.wait.interval:3600
>>
>> --
>> Makrand
>>
>>
>> On Thu, Aug 25, 2016 at 11:56 AM, cs user <ac...@gmail.com> wrote:
>>
>> > Hi Makrand,
>> >
>> > Thanks for responding!
>> >
>> > Yes this does work for us, for small disks. Even with small disks
>> however
>> > it seems to take a while, 5 mins or so, on a pretty fast 10GB network.
>> >
>> > However currently I am trying to move 2TB disks which is taking a while,
>> > and was hitting a 3 hour time limit which seemed to be a cloudstack
>> default
>> > for cloning disks.
>> >
>> > Migrations within the cluster, as you say, seem to use Storage Xen
>> motion
>> > and these work fine with small disks. Haven't really tried huge ones
>> with
>> > that.
>> >
>> > I think you are right that it is using the the SSVM, despite the fact I
>> > can't seem to see much activity when the copy is occurring.
>> >
>> > Thanks!
>> >
>> >
>> >
>> > On Wed, Aug 24, 2016 at 7:36 PM, Makrand <ma...@gmail.com>
>> wrote:
>> >
>> > > Hi,
>> > >
>> > > I think you must be seeing the option like (storage migration
>> required)
>> > > while you move the volumes between primary storage. I've seen in past
>> > > people complaining about this option not working (using GUI or API)
>> with
>> > > setup similar as yours. Did you get this working?
>> > >
>> > > Anyways, I think it has to be system VM, coz primary storage A have
>> not
>> > > Idea about primary storage B via hypervisor, only cloudstack ( (SSVM)
>> can
>> > > see it as part as one cloud zone.
>> > >
>> > > In normal case of moving volume within cluster, Storge XEN motion is
>> what
>> > > it uses.
>> > >
>> > >
>> > >
>> > > --
>> > > Makrand
>> > >
>> > >
>> > > On Wed, Aug 24, 2016 at 7:50 PM, cs user <ac...@gmail.com>
>> wrote:
>> > >
>> > > > Hi All,
>> > > >
>> > > > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
>> > > >
>> > > > Lets say I have 1 pod, with 2 clusters, each cluster has its own
>> > primary
>> > > > storage.
>> > > >
>> > > > If I migrate a volume from one primary storage to the other one,
>> using
>> > > > cloudstack, what aspect of the environment is responsible for this
>> > copy?
>> > > >
>> > > > I'm trying to identify bottlenecks but I can't see what is
>> responsible
>> > > for
>> > > > this copying. Is it is the xen hosts themselves or the secondary
>> > storage
>> > > > vm?
>> > > >
>> > > > Thanks!
>> > > >
>> > >
>> >
>>
>
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by cs user <ac...@gmail.com>.
Thanks Makrand, I will take a look at these values, much appreciated!

On Thu, Aug 25, 2016 at 7:36 AM, Makrand <ma...@gmail.com> wrote:

> Hi,
>
> Timeouts while moving big volumes on top of XENServer is pretty common
> thing. Last time for bigger volumes we had and issue and following global
> settings were changed. (Just added one  extra zero at end of each settings.
> this was on XEN 6.2 and ACS4.4.2)
>
> migratewait: 3600
> storage.pool.max.waitseconds: 3600
> vm.op.cancel.interval: 3600
> vm.op.cleanup.wait: 3600
> wait:1800
> vm.tranisition.wait.interval:3600
>
> --
> Makrand
>
>
> On Thu, Aug 25, 2016 at 11:56 AM, cs user <ac...@gmail.com> wrote:
>
> > Hi Makrand,
> >
> > Thanks for responding!
> >
> > Yes this does work for us, for small disks. Even with small disks however
> > it seems to take a while, 5 mins or so, on a pretty fast 10GB network.
> >
> > However currently I am trying to move 2TB disks which is taking a while,
> > and was hitting a 3 hour time limit which seemed to be a cloudstack
> default
> > for cloning disks.
> >
> > Migrations within the cluster, as you say, seem to use Storage Xen motion
> > and these work fine with small disks. Haven't really tried huge ones with
> > that.
> >
> > I think you are right that it is using the the SSVM, despite the fact I
> > can't seem to see much activity when the copy is occurring.
> >
> > Thanks!
> >
> >
> >
> > On Wed, Aug 24, 2016 at 7:36 PM, Makrand <ma...@gmail.com> wrote:
> >
> > > Hi,
> > >
> > > I think you must be seeing the option like (storage migration required)
> > > while you move the volumes between primary storage. I've seen in past
> > > people complaining about this option not working (using GUI or API)
> with
> > > setup similar as yours. Did you get this working?
> > >
> > > Anyways, I think it has to be system VM, coz primary storage A have not
> > > Idea about primary storage B via hypervisor, only cloudstack ( (SSVM)
> can
> > > see it as part as one cloud zone.
> > >
> > > In normal case of moving volume within cluster, Storge XEN motion is
> what
> > > it uses.
> > >
> > >
> > >
> > > --
> > > Makrand
> > >
> > >
> > > On Wed, Aug 24, 2016 at 7:50 PM, cs user <ac...@gmail.com> wrote:
> > >
> > > > Hi All,
> > > >
> > > > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
> > > >
> > > > Lets say I have 1 pod, with 2 clusters, each cluster has its own
> > primary
> > > > storage.
> > > >
> > > > If I migrate a volume from one primary storage to the other one,
> using
> > > > cloudstack, what aspect of the environment is responsible for this
> > copy?
> > > >
> > > > I'm trying to identify bottlenecks but I can't see what is
> responsible
> > > for
> > > > this copying. Is it is the xen hosts themselves or the secondary
> > storage
> > > > vm?
> > > >
> > > > Thanks!
> > > >
> > >
> >
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by Makrand <ma...@gmail.com>.
Hi,

Timeouts while moving big volumes on top of XenServer are a pretty common
thing. Last time, for bigger volumes, we had an issue and the following
global settings were changed (we just added one extra zero at the end of
each setting; this was on XenServer 6.2 and ACS 4.4.2):

migratewait: 3600
storage.pool.max.waitseconds: 3600
vm.op.cancel.interval: 3600
vm.op.cleanup.wait: 3600
wait: 1800
vm.tranisition.wait.interval: 3600
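A sketch of bumping such globals from the CLI, assuming CloudMonkey is pointed at your management server (setting names as listed above; the 10x values are illustrative, and most global settings only take effect after a management-server restart):

```shell
# Hypothetical example: raise each wait-style global via the
# updateConfiguration API, using CloudMonkey.
for setting in migratewait storage.pool.max.waitseconds vm.op.cancel.interval \
               vm.op.cleanup.wait vm.tranisition.wait.interval; do
    cloudmonkey update configuration name="$setting" value=36000
done
cloudmonkey update configuration name=wait value=18000
```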

--
Makrand


On Thu, Aug 25, 2016 at 11:56 AM, cs user <ac...@gmail.com> wrote:

> Hi Makrand,
>
> Thanks for responding!
>
> Yes this does work for us, for small disks. Even with small disks however
> it seems to take a while, 5 mins or so, on a pretty fast 10GB network.
>
> However currently I am trying to move 2TB disks which is taking a while,
> and was hitting a 3 hour time limit which seemed to be a cloudstack default
> for cloning disks.
>
> Migrations within the cluster, as you say, seem to use Storage Xen motion
> and these work fine with small disks. Haven't really tried huge ones with
> that.
>
> I think you are right that it is using the the SSVM, despite the fact I
> can't seem to see much activity when the copy is occurring.
>
> Thanks!
>
>
>
> On Wed, Aug 24, 2016 at 7:36 PM, Makrand <ma...@gmail.com> wrote:
>
> > Hi,
> >
> > I think you must be seeing the option like (storage migration required)
> > while you move the volumes between primary storage. I've seen in past
> > people complaining about this option not working (using GUI or API) with
> > setup similar as yours. Did you get this working?
> >
> > Anyways, I think it has to be system VM, coz primary storage A have not
> > Idea about primary storage B via hypervisor, only cloudstack ( (SSVM) can
> > see it as part as one cloud zone.
> >
> > In normal case of moving volume within cluster, Storge XEN motion is what
> > it uses.
> >
> >
> >
> > --
> > Makrand
> >
> >
> > On Wed, Aug 24, 2016 at 7:50 PM, cs user <ac...@gmail.com> wrote:
> >
> > > Hi All,
> > >
> > > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
> > >
> > > Lets say I have 1 pod, with 2 clusters, each cluster has its own
> primary
> > > storage.
> > >
> > > If I migrate a volume from one primary storage to the other one, using
> > > cloudstack, what aspect of the environment is responsible for this
> copy?
> > >
> > > I'm trying to identify bottlenecks but I can't see what is responsible
> > for
> > > this copying. Is it is the xen hosts themselves or the secondary
> storage
> > > vm?
> > >
> > > Thanks!
> > >
> >
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by cs user <ac...@gmail.com>.
Hi Makrand,

Thanks for responding!

Yes, this does work for us for small disks. Even with small disks, however,
it seems to take a while, 5 mins or so, on a pretty fast 10Gb network.

However, currently I am trying to move 2TB disks, which is taking a while,
and I was hitting a 3 hour time limit which seemed to be a cloudstack
default for cloning disks.

Migrations within the cluster, as you say, seem to use Storage XenMotion,
and these work fine with small disks. Haven't really tried huge ones with
that.

I think you are right that it is using the SSVM, despite the fact that I
can't seem to see much activity on it when the copy is occurring.

Thanks!



On Wed, Aug 24, 2016 at 7:36 PM, Makrand <ma...@gmail.com> wrote:

> Hi,
>
> I think you must be seeing the option like (storage migration required)
> while you move the volumes between primary storage. I've seen in past
> people complaining about this option not working (using GUI or API) with
> setup similar as yours. Did you get this working?
>
> Anyways, I think it has to be system VM, coz primary storage A have not
> Idea about primary storage B via hypervisor, only cloudstack ( (SSVM) can
> see it as part as one cloud zone.
>
> In normal case of moving volume within cluster, Storge XEN motion is what
> it uses.
>
>
>
> --
> Makrand
>
>
> On Wed, Aug 24, 2016 at 7:50 PM, cs user <ac...@gmail.com> wrote:
>
> > Hi All,
> >
> > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
> >
> > Lets say I have 1 pod, with 2 clusters, each cluster has its own primary
> > storage.
> >
> > If I migrate a volume from one primary storage to the other one, using
> > cloudstack, what aspect of the environment is responsible for this copy?
> >
> > I'm trying to identify bottlenecks but I can't see what is responsible
> for
> > this copying. Is it is the xen hosts themselves or the secondary storage
> > vm?
> >
> > Thanks!
> >
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by Makrand <ma...@gmail.com>.
Hi,

I think you must be seeing an option like "storage migration required" when
you move volumes between primary storages. I've seen people in the past
complaining about this option not working (via GUI or API) with a setup
similar to yours. Did you get it working?

Anyway, I think it has to be the system VM, because primary storage A has
no idea about primary storage B at the hypervisor level; only cloudstack
(the SSVM) can see both as part of one cloud zone.

In the normal case of moving a volume within a cluster, Storage XenMotion
is what it uses.



--
Makrand


On Wed, Aug 24, 2016 at 7:50 PM, cs user <ac...@gmail.com> wrote:

> Hi All,
>
> Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
>
> Lets say I have 1 pod, with 2 clusters, each cluster has its own primary
> storage.
>
> If I migrate a volume from one primary storage to the other one, using
> cloudstack, what aspect of the environment is responsible for this copy?
>
> I'm trying to identify bottlenecks but I can't see what is responsible for
> this copying. Is it is the xen hosts themselves or the secondary storage
> vm?
>
> Thanks!
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by cs user <ac...@gmail.com>.
Hi Ilya,

Thanks for getting back to me. Hmmm, perhaps then it isn't using the SSVM,
which might explain why I can't see anything happening on it when the
migration is happening. So if it's just the Xen hosts involved, perhaps, as
you say, first a host in the source cluster copies the disk to secondary
storage, and then a host in the target cluster transfers this disk from
secondary storage to its own primary storage.

It would be great if a dev could confirm this; someone on this list must
know for sure! :-)

Also, for very large disks, does anyone have a list of parameters which
would need to be changed to make sure there is enough time for a copy to
complete?

When I first tried it, it timed out after 3 hours. It said the following:

Resource [StoragePool:220] is unreachable: Migrate volume failed: Failed to
copy volume to secondary:
java.util.concurrent.TimeoutException: Async 10800 seconds timeout for task
com.xensource.xenapi.Task@1b417e44

I found the following parameters which are set to 10800:

copy.volume.wait                           10800
create.private.template.from.snapshot.wait 10800
create.private.template.from.volume.wait   10800
create.volume.from.snapshot.wait           10800
primary.storage.download.wait              10800

Any idea which one of those is relevant? I presume it's copy.volume.wait.
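As a sanity check, 10800 seconds is exactly the 3-hour Async timeout from the error above, and a "one extra zero" bump would give 30 hours:

```shell
# Current default vs. a 10x bump, in hours:
awk 'BEGIN { printf "%d hours\n", 10800 / 3600 }'
awk 'BEGIN { printf "%d hours\n", 108000 / 3600 }'
```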

Thanks!




On Wed, Aug 24, 2016 at 11:54 PM, ilya <il...@gmail.com> wrote:

> Not certain how Xen Storage Migration is implemented in 4.5.2
>
> I'd suspect legacy mode would be
>
> 1) copy disks from primary store to secondary NFS
> 2) copy disks from secondary NFS to new primary store
>
> it might be slow... but if you have enough space - it should work...
>
> My understanding is that NFS is mounted directly on hypervisors. I'd ask
> someone else to confirm though...
>
> On 8/24/16 7:20 AM, cs user wrote:
> > Hi All,
> >
> > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
> >
> > Lets say I have 1 pod, with 2 clusters, each cluster has its own primary
> > storage.
> >
> > If I migrate a volume from one primary storage to the other one, using
> > cloudstack, what aspect of the environment is responsible for this copy?
> >
> > I'm trying to identify bottlenecks but I can't see what is responsible
> for
> > this copying. Is it is the xen hosts themselves or the secondary storage
> vm?
> >
> > Thanks!
> >
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by Makrand <ma...@gmail.com>.
AFAIK, the concept of Storage XenMotion, while the volume is attached to a
running VM, works like this:

1) Placeholder volumes are created on the destination storage: a base copy
and a disk.
2) The original volume is snapshotted and a base copy is created. A new
disk is created to receive the current writes to the disk; the same writes
are synchronised in real time to the placeholder disk on the destination.
3) The base copy is then synced to the base copy on the destination volume.
4) Once the whole process is complete, things are cleaned up on the source
and the base copy on the destination is merged with the disk (a kind of
coalescing).

The above is for the case where both the source and destination primary
LUNs are mounted under the cluster.

If a storage migration is needed and secondary storage is involved (moving
path: volume on source primary >> secondary >> volume on destination
primary), then it could take a very long time. I am not sure how Storage
XenMotion operations are handled in that case, especially the current
writes to the disk that is being moved.

--
Makrand


On Wed, Aug 31, 2016 at 1:18 PM, cs user <ac...@gmail.com> wrote:

> Yep, I've narrowed this down to it running on the xen host. You can see it
> in two ways on the xen host:
>
> # xe task-list
> uuid ( RO)                : 69w52567-124h-14r5-16vb-2c4567143541
>           name-label ( RO): Async.VDI.copy
>     name-description ( RO):
>               status ( RO): pending
>             progress ( RO): 0.700
>
> This actually gives you the percentage completed as well.
>
>
> Also look for the process with:
>
>  ps -ef|grep sparse_dd
>
>
> You will see a process running with a parameter which shows the size of the
> disk, like:
>
> -size 2147483648000 -prezeroed
>
> (   2T in my case :-)   )
>
>
> So once the migration begins, you can look for it on your xen hosts in the
> cluster, it usually starts on the same host as the VM to which the disk was
> attached to.
>
>
> On Tue, Aug 30, 2016 at 6:00 PM, Yiping Zhang <yz...@marketo.com> wrote:
>
> > I think the work is mostly performed by the hypervisors. I had seen
> > following during storage live migration in XenCenter:
> >
> > Highlight the primary storage for the departing cluster, then select the
> > “Storage” tab on the right side panel.  You should see disk volumes on
> that
> > primary storage. The far right column is the “Virtual Machine” the disk
> > belongs to.
> >
> > While the live storage migration is running, the migrating volume is
> shown
> > as attached to a VM with the name “control domain for host xxx”, instead
> of
> > the VM name it actually belongs to.
> >
> > To me, this is pretty convincing that Xen cluster is doing the migration.
> >
> > Yiping
> >
> > On 8/27/16, 5:10 AM, "Makrand" <ma...@gmail.com> wrote:
> >
> >     Hello ilya,
> >
> >     If I am not mistaken, while adding secondary storage NFS server ip
> and
> > path
> >     is all one specifies in cloud-stack. Running df -h on ACS management
> > server
> >     shows you secondary storage mounted there. Don't think hypervisor
> sees
> > NFS
> >     (even if primary storage and NFS coming from same storage box). Plus,
> > while
> >     doing activities like VM deploy and snapshot things always move from
> >     secondary to primary via SSVM.
> >
> >     Have you actually seen any setup where you have verified this?
> >
> >     @ cs user,
> >     When you're moving the volumes, are those attached to running VM? or
> > those
> >     are just standalone orphan volumes?
> >
> >
> >
> >     --
> >     Makrand
> >
> >
> >     On Thu, Aug 25, 2016 at 4:24 AM, ilya <il...@gmail.com>
> > wrote:
> >
> >     > Not certain how Xen Storage Migration is implemented in 4.5.2
> >     >
> >     > I'd suspect legacy mode would be
> >     >
> >     > 1) copy disks from primary store to secondary NFS
> >     > 2) copy disks from secondary NFS to new primary store
> >     >
> >     > it might be slow... but if you have enough space - it should
> work...
> >     >
> >     > My understanding is that NFS is mounted directly on hypervisors.
> I'd
> > ask
> >     > someone else to confirm though...
> >     >
> >     > On 8/24/16 7:20 AM, cs user wrote:
> >     > > Hi All,
> >     > >
> >     > > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
> >     > >
> >     > > Lets say I have 1 pod, with 2 clusters, each cluster has its own
> > primary
> >     > > storage.
> >     > >
> >     > > If I migrate a volume from one primary storage to the other one,
> > using
> >     > > cloudstack, what aspect of the environment is responsible for
> this
> > copy?
> >     > >
> >     > > I'm trying to identify bottlenecks but I can't see what is
> > responsible
> >     > for
> >     > > this copying. Is it is the xen hosts themselves or the secondary
> > storage
> >     > vm?
> >     > >
> >     > > Thanks!
> >     > >
> >     >
> >
> >
> >
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by cs user <ac...@gmail.com>.
Yep, I've narrowed this down to it running on the xen host. You can see it
in two ways on the xen host:

# xe task-list
uuid ( RO)                : 69w52567-124h-14r5-16vb-2c4567143541
          name-label ( RO): Async.VDI.copy
    name-description ( RO):
              status ( RO): pending
            progress ( RO): 0.700

This actually gives you the percentage completed as well.


Also look for the process with:

 ps -ef|grep sparse_dd


You will see a process running with a parameter which shows the size of the
disk, like:

-size 2147483648000 -prezeroed

(   2T in my case :-)   )


So once the migration begins, you can look for it on your Xen hosts in the
cluster; it usually starts on the same host as the VM to which the disk was
attached.
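Incidentally, the -size argument sparse_dd is given is in bytes, so you can confirm what is being copied straight from the process listing (the value below is the one from the ps output above):

```shell
# sparse_dd's -size argument is in bytes; 2147483648000 bytes is
# exactly 2000 GiB, i.e. the ~2 TB disk being migrated.
size_bytes=2147483648000
echo "$((size_bytes / 1024 / 1024 / 1024)) GiB"
```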


On Tue, Aug 30, 2016 at 6:00 PM, Yiping Zhang <yz...@marketo.com> wrote:

> I think the work is mostly performed by the hypervisors. I had seen
> following during storage live migration in XenCenter:
>
> Highlight the primary storage for the departing cluster, then select the
> “Storage” tab on the right side panel.  You should see disk volumes on that
> primary storage. The far right column is the “Virtual Machine” the disk
> belongs to.
>
> While the live storage migration is running, the migrating volume is shown
> as attached to a VM with the name “control domain for host xxx”, instead of
> the VM name it actually belongs to.
>
> To me, this is pretty convincing that Xen cluster is doing the migration.
>
> Yiping
>
> On 8/27/16, 5:10 AM, "Makrand" <ma...@gmail.com> wrote:
>
>     Hello ilya,
>
>     If I am not mistaken, while adding secondary storage NFS server ip and
> path
>     is all one specifies in cloud-stack. Running df -h on ACS management
> server
>     shows you secondary storage mounted there. Don't think hypervisor sees
> NFS
>     (even if primary storage and NFS coming from same storage box). Plus,
> while
>     doing activities like VM deploy and snapshot things always move from
>     secondary to primary via SSVM.
>
>     Have you actually seen any setup where you have verified this?
>
>     @ cs user,
>     When you're moving the volumes, are those attached to running VM? or
> those
>     are just standalone orphan volumes?
>
>
>
>     --
>     Makrand
>
>
>     On Thu, Aug 25, 2016 at 4:24 AM, ilya <il...@gmail.com>
> wrote:
>
>     > Not certain how Xen Storage Migration is implemented in 4.5.2
>     >
>     > I'd suspect legacy mode would be
>     >
>     > 1) copy disks from primary store to secondary NFS
>     > 2) copy disks from secondary NFS to new primary store
>     >
>     > it might be slow... but if you have enough space - it should work...
>     >
>     > My understanding is that NFS is mounted directly on hypervisors. I'd
> ask
>     > someone else to confirm though...
>     >
>     > On 8/24/16 7:20 AM, cs user wrote:
>     > > Hi All,
>     > >
>     > > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
>     > >
>     > > Lets say I have 1 pod, with 2 clusters, each cluster has its own
> primary
>     > > storage.
>     > >
>     > > If I migrate a volume from one primary storage to the other one,
> using
>     > > cloudstack, what aspect of the environment is responsible for this
> copy?
>     > >
>     > > I'm trying to identify bottlenecks but I can't see what is
> responsible
>     > for
>     > > this copying. Is it is the xen hosts themselves or the secondary
> storage
>     > vm?
>     > >
>     > > Thanks!
>     > >
>     >
>
>
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by Yiping Zhang <yz...@marketo.com>.
I think the work is mostly performed by the hypervisors. I have seen the following during storage live migration in XenCenter:

Highlight the primary storage for the departing cluster, then select the “Storage” tab on the right side panel.  You should see disk volumes on that primary storage. The far right column is the “Virtual Machine” the disk belongs to.

While the live storage migration is running, the migrating volume is shown as attached to a VM with the name “control domain for host xxx”, instead of the VM name it actually belongs to.

To me, this is pretty convincing that Xen cluster is doing the migration.

Yiping

On 8/27/16, 5:10 AM, "Makrand" <ma...@gmail.com> wrote:

    Hello ilya,
    
    If I am not mistaken, while adding secondary storage NFS server ip and path
    is all one specifies in cloud-stack. Running df -h on ACS management server
    shows you secondary storage mounted there. Don't think hypervisor sees NFS
    (even if primary storage and NFS coming from same storage box). Plus, while
    doing activities like VM deploy and snapshot things always move from
    secondary to primary via SSVM.
    
    Have you actually seen any setup where you have verified this?
    
    @ cs user,
    When you're moving the volumes, are those attached to running VM? or those
    are just standalone orphan volumes?
    
    
    
    --
    Makrand
    
    
    On Thu, Aug 25, 2016 at 4:24 AM, ilya <il...@gmail.com> wrote:
    
    > Not certain how Xen Storage Migration is implemented in 4.5.2
    >
    > I'd suspect legacy mode would be
    >
    > 1) copy disks from primary store to secondary NFS
    > 2) copy disks from secondary NFS to new primary store
    >
    > it might be slow... but if you have enough space - it should work...
    >
    > My understanding is that NFS is mounted directly on hypervisors. I'd ask
    > someone else to confirm though...
    >
    > On 8/24/16 7:20 AM, cs user wrote:
    > > Hi All,
    > >
    > > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
    > >
    > > Lets say I have 1 pod, with 2 clusters, each cluster has its own primary
    > > storage.
    > >
    > > If I migrate a volume from one primary storage to the other one, using
    > > cloudstack, what aspect of the environment is responsible for this copy?
    > >
    > > I'm trying to identify bottlenecks but I can't see what is responsible
    > for
    > > this copying. Is it is the xen hosts themselves or the secondary storage
    > vm?
    > >
    > > Thanks!
    > >
    >
    


Re: Cloudstack - volume migration between clusters (primary storage)

Posted by Makrand <ma...@gmail.com>.
Hello ilya,

If I am not mistaken, when adding secondary storage, the NFS server IP and
path are all one specifies in cloudstack. Running df -h on the ACS
management server shows you secondary storage mounted there. I don't think
the hypervisor sees that NFS share (even if primary storage and the NFS
export come from the same storage box). Plus, during activities like VM
deploys and snapshots, things always move from secondary to primary via the
SSVM.

Have you actually seen a setup where you have verified this?

@ cs user,
When you're moving the volumes, are they attached to a running VM, or are
they just standalone, detached volumes?
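Regarding the df -h point above: one way to check directly, assuming NFS storage, is to compare what the management server and a hypervisor each have mounted (run the first command on the management server and the second on a XenServer host; mount points and SR names vary by installation):

```shell
# On the ACS management server: secondary storage appears as an NFS mount.
mount -t nfs,nfs4

# On a XenServer host: list the SRs the hypervisor itself has attached
# (NFS primary-storage SRs appear here; secondary storage normally will not).
xe sr-list type=nfs params=name-label,type
```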



--
Makrand


On Thu, Aug 25, 2016 at 4:24 AM, ilya <il...@gmail.com> wrote:

> Not certain how Xen Storage Migration is implemented in 4.5.2
>
> I'd suspect legacy mode would be
>
> 1) copy disks from primary store to secondary NFS
> 2) copy disks from secondary NFS to new primary store
>
> it might be slow... but if you have enough space - it should work...
>
> My understanding is that NFS is mounted directly on hypervisors. I'd ask
> someone else to confirm though...
>
> On 8/24/16 7:20 AM, cs user wrote:
> > Hi All,
> >
> > Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
> >
> > Lets say I have 1 pod, with 2 clusters, each cluster has its own primary
> > storage.
> >
> > If I migrate a volume from one primary storage to the other one, using
> > cloudstack, what aspect of the environment is responsible for this copy?
> >
> > I'm trying to identify bottlenecks but I can't see what is responsible
> for
> > this copying. Is it is the xen hosts themselves or the secondary storage
> vm?
> >
> > Thanks!
> >
>

Re: Cloudstack - volume migration between clusters (primary storage)

Posted by ilya <il...@gmail.com>.
Not certain how Xen Storage Migration is implemented in 4.5.2

I'd suspect legacy mode would be

1) copy disks from primary store to secondary NFS
2) copy disks from secondary NFS to new primary store

it might be slow... but if you have enough space - it should work...

My understanding is that NFS is mounted directly on hypervisors. I'd ask
someone else to confirm though...

On 8/24/16 7:20 AM, cs user wrote:
> Hi All,
> 
> Xenserver 6.5, cloudstack 4.5.2. NFS primary storage volumes
> 
> Lets say I have 1 pod, with 2 clusters, each cluster has its own primary
> storage.
> 
> If I migrate a volume from one primary storage to the other one, using
> cloudstack, what aspect of the environment is responsible for this copy?
> 
> I'm trying to identify bottlenecks but I can't see what is responsible for
> this copying. Is it is the xen hosts themselves or the secondary storage vm?
> 
> Thanks!
>