Posted to dev@cloudstack.apache.org by Lucas Hughes <lu...@ecommerce.com> on 2012/07/20 17:16:49 UTC

Support for Citrix StorageLink in CloudStack

Hello,

 

We are using CloudStack in an environment with Citrix XenServer (Enterprise
Edition) and Dell EqualLogic SANs.

 

We are trying to take advantage of the Citrix StorageLink features, especially
the one that gives us a thin-provisioned volume/LUN on the SAN per virtual
machine, rather than LVM over iSCSI with one big LUN on the SAN for all the
VMs, and we also want to use the SAN-level snapshots available in CloudStack.

 

For the first part of our problem, we found that using the "Presetup"
configuration for iSCSI connections to our SAN with StorageLink technology
gives us the desired outcome when creating VMs from an ISO. However, when we
try to use templates, things do not work.

 

Further investigation showed that CloudStack's model involves copying all
data from "primary" storage to "secondary" storage when creating a template,
and from "secondary" storage back to "primary" storage when creating a VM
from a template (and from "primary1" to "secondary" to "primary2" when moving
a VM from one primary storage to another). It turns out that the scripts that
perform this copying and reside on the Xen host (in /opt/xensource/bin,
specifically copy_vhd_to_secondarystorage.sh and
copy_vhd_from_secondarystorage.sh) do not support StorageLink operations.

 

From our perspective (and please correct me if I am wrong), it appears these
are the only scripts that need to be modified to add StorageLink support to
CloudStack. We have successfully modified copy_vhd_to_secondarystorage.sh to
correctly copy from a StorageLink LUN to secondary storage (patch included
below [1]), and with it we can successfully migrate VMs from a StorageLink
LUN to an LVM-over-iSCSI LUN. We lack the understanding of the technology
needed to modify copy_vhd_from_secondarystorage.sh to perform the reverse
operation (and here we need your help). We can provide a test environment for
anybody willing to help us.

 

Also, if this problem has already been solved but not yet committed to a
"production" version of CloudStack, we would gladly do beta testing in our
environment. We are aware there is a bug open about this issue in CloudStack
(CS-11486).

 

We have not yet pursued the second part of our problem (SAN-level snapshots
in CloudStack), but since we believe these problems are related, we are
willing to offer all the help we can in fixing them.

 

[1]

-- start here --

[root@a11-3-05 bin]# diff copy_vhd_to_secondarystorage.sh.orig copy_vhd_to_secondarystorage.sh
41c41
<   echo "2#no uuid of the source sr"
---
>   echo "2#no uuid of the source vdi"
85a86,106
> elif [ $type == "cslg" -o $type == "equal" ]; then
>   idstr=$(xe host-list name-label=$(hostname) params=uuid)
>   hostuuid=$(echo $idstr | awk -F: '$1 != ""{print $2}' | awk '{print $1}')
>   CONTROL_DOMAIN_UUID=$(xe vm-list is-control-domain=true resident-on=$hostuuid params=uuid | awk '$1 == "uuid"{print $5}')
>   vbd_uuid=$(xe vbd-create vm-uuid=${CONTROL_DOMAIN_UUID} vdi-uuid=${vdiuuid} device=autodetect)
>   if [ $? -ne 0 ]; then
>     echo "999#failed to create VBD for vdi uuid ${vdiuuid}"
>     cleanup
>     exit 0
>   fi
>   xe vbd-plug uuid=${vbd_uuid}
>   svhdfile=/dev/$(xe vbd-param-get uuid=${vbd_uuid} param-name=device)
>   dd if=${svhdfile} of=${vhdfile} bs=2M
>   if [ $? -ne 0 ]; then
>     echo "998#failed to dd $svhdfile to $vhdfile"
>     xe vbd-unplug uuid=${vbd_uuid}
>     xe vbd-destroy uuid=${vbd_uuid}
>     cleanup
>     exit 0
>   fi
>
123a145,147
> xe vbd-unplug uuid=${vbd_uuid}
> xe vbd-destroy uuid=${vbd_uuid}
>
-- end here --

 

 

Lucas Hughes

Cloud Engineer

Ecommerce Inc

 


RE: Support for Citrix StorageLink in CloudStack

Posted by Anthony Xu <Xu...@citrix.com>.
Hi Lucas,

Great work!

Basically, that is a VDI-per-LUN SR.
You are right: only copy_vhd_to_secondarystorage.sh and copy_vhd_from_secondarystorage.sh need to handle the new type of SR.
The reason we use these scripts to copy VDIs is that vdi-copy is slow:
it plugs two VDIs into dom0 and copies between the two block devices, which introduces a lot of context switches.
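
For reference, a minimal sketch of the one-call alternative these scripts work around (placeholder uuids, not the exact invocation CloudStack would use):

  # copies a VDI into the destination SR in a single xe call, but slowly
  xe vdi-copy uuid=<source-vdi-uuid> sr-uuid=<destination-sr-uuid>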


There is a VM allocator in CloudStack that decides which primary storage a new VM will be deployed on; it assumes there is no thin provisioning in the backend of the primary storage.
That also means CloudStack doesn't handle the out-of-space condition on the SAN side.
It might be nice if the allocator could take backend thin provisioning into consideration and make the allocation decision based on real storage space.
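
For example (a sketch of where the numbers could come from, not current allocator behaviour), the real usage of an SR is already exposed through the xe CLI, so an allocator could compare what has been promised to VDIs against what the backend actually provides:

  # space promised to VDIs vs. space the backend really has
  xe sr-param-get uuid=<sr-uuid> param-name=virtual-allocation
  xe sr-param-get uuid=<sr-uuid> param-name=physical-utilisation
  xe sr-param-get uuid=<sr-uuid> param-name=physical-size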


As for the patch, I have several comments. I have not used StorageLink before, so I might be wrong; please correct me.
I assume each LUN is just a raw disk, with no VHD format on top of the LUN:

1. The output file in secondary storage is a raw file. CloudStack doesn't support that right now, which means we cannot create a template or a volume from it.
2. Since it is a raw file, there is no thin provisioning.

In this case, I would prefer to call the vdi-copy API from the Java code directly; then you don't need to deal with copy_vhd_from_secondarystorage.sh.


As for StorageLink, I came across the page below; it says some StorageLink features were dropped in XenServer 6.
Do you want to add StorageLink support for XenServer 6 or for XenServer 5.6?

http://forums.citrix.com/thread.jspa?threadID=301945&tstart=0


Thanks,
Anthony





RE: Support for Citrix StorageLink in CloudStack

Posted by Edison Su <Ed...@citrix.com>.
copy_vhd_from_secondarystorage.sh should:

1. Take three parameters:
   - the URL of the template, e.g. nfs://your-secondary-storage/path/templatename.vhd
   - the destination primary storage SR uuid
   - the name-label of the VHD to be created
2. Mount the secondary storage.
3. Create a VHD on the primary storage.
4. If the destination primary storage is on NFS or EXT, dd the template to the primary VHD.
5. If the destination primary storage is on another kind of device, e.g. LVM, dd the template to the primary VHD.
   Caveat: also copy the last 512 bytes (the VHD footer) to the end of the primary VHD, to make sure the primary VHD has a correct VHD footer (see the sketch below).
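
A minimal sketch of steps 3-5 in shell, assuming the template is already mounted at ${vhdfile} and ${sruuid} is the destination SR; the variable names are illustrative, not the script's real ones, and error handling is omitted:

  # step 3: create the destination VDI on the primary SR
  vdiuuid=$(xe vdi-create sr-uuid=${sruuid} name-label=${namelabel} \
            type=user virtual-size=$(stat -c %s ${vhdfile}))

  # plug it into dom0 so it shows up as a block device
  vbd_uuid=$(xe vbd-create vm-uuid=${CONTROL_DOMAIN_UUID} \
             vdi-uuid=${vdiuuid} device=autodetect)
  xe vbd-plug uuid=${vbd_uuid}
  device=/dev/$(xe vbd-param-get uuid=${vbd_uuid} param-name=device)

  # steps 4/5: bulk-copy the template onto the device
  dd if=${vhdfile} of=${device} bs=2M

  # step 5 caveat: re-write the 512-byte VHD footer at the very end of the
  # device, since the device may be larger than the template file
  filesize=$(stat -c %s ${vhdfile})
  devsize=$(blockdev --getsize64 ${device})
  dd if=${vhdfile} of=${device} bs=512 count=1 \
     skip=$(( filesize / 512 - 1 )) seek=$(( devsize / 512 - 1 ))

  xe vbd-unplug uuid=${vbd_uuid}
  xe vbd-destroy uuid=${vbd_uuid}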
