Posted to dev@cloudstack.apache.org by Chiradeep Vittal <Ch...@citrix.com> on 2012/06/01 07:16:45 UTC

making VM startup more fine-grained

I was helping someone with their integration of an SDN controller with
CloudStack. The requirement was that the SDN controller needed the uuid of
the virtual interface (vif) of the virtual machine so that it could plug
it into the right softswitch, manage the vif etc. This vif uuid is
generated by the XenServer.

My recommendation was to write a plugin (implement NetworkElement) that
would get the vif uuid after the vm started by making a XAPI call (via the
agent manager) and then call the SDN controller API with this value.
The response: 
"Unfortunately, the mechanism you describe wouldn't be sufficient  as we
would require the the VIF uuid before the VM boots, otherwise there might
be a race condition where sometimes VMs will boot up and lack network
connectivity and therefore might not even receive their DHCP addresses and
such.
"
Currently, when CloudStack starts a VM, all information regarding the VM
(including nics and storage) is passed down in a single StartCommand to
the hypervisor resource. The hypervisor resource (e.g., CitrixResourceBase
or LibVirtComputingResource) takes appropriate actions to create vifs and
plug them into the vm and start the vm.

One way to solve the integration problem would be to split the
StartCommand into multiple commands, e.g., CreateVif, CreateVolume,
CreateVm, StartVm. This changes the agent API and affects all hypervisor
resources.
Another way is to modify the specific hypervisor resource to do something
just after creating the vifs but prior to starting the vm.
A third way is to split the agent api into 2 commands: CreateVm and
StartVm.
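
To make the first option concrete, here is a minimal sketch of what the
split might look like in the agent API. Command, Answer, and NicTO are
existing agent-API types; the two classes below and their fields are
assumptions for illustration, not existing CloudStack code:

    import com.cloud.agent.api.Answer;
    import com.cloud.agent.api.Command;
    import com.cloud.agent.api.to.NicTO;

    // Hypothetical per-step command: create one vif for a vm.
    public class CreateVifCommand extends Command {
        private String vmName; // the vm the vif will belong to
        private NicTO nic;     // the nic spec from the orchestrator

        public CreateVifCommand(String vmName, NicTO nic) {
            this.vmName = vmName;
            this.nic = nic;
        }

        public String getVmName() { return vmName; }
        public NicTO getNic() { return nic; }

        @Override
        public boolean executeInSequence() {
            return true; // the vif must exist before a StartVm command is sent
        }
    }

    // Hypothetical answer carrying the hypervisor-generated vif uuid back
    // up, so a NetworkElement plugin could hand it to the SDN controller
    // before the vm is started.
    class CreateVifAnswer extends Answer {
        private String vifUuid;

        public CreateVifAnswer(Command cmd, String vifUuid) {
            super(cmd);
            this.vifUuid = vifUuid;
        }

        public String getVifUuid() { return vifUuid; }
    }

CreateVolume, CreateVm, and StartVm would follow the same pattern.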

Thoughts?
--
Chiradeep


Re: making VM startup more fine-grained

Posted by Chiradeep Vittal <Ch...@citrix.com>.

On 8/7/12 1:59 AM, "Tomoe Sugihara" <to...@midokura.com> wrote:

>On Tue, Aug 7, 2012 at 4:30 PM, Murali Reddy <Mu...@citrix.com>
>wrote:
>> On 07/08/12 12:11 PM, "Tomoe Sugihara" <to...@midokura.com> wrote:
>>
>>>On Tue, Aug 7, 2012 at 3:34 PM, Alex Huang <Al...@citrix.com>
>>>wrote:
>>>>> I have looked at the code in more detail and found a slightly
>>>>> tricky thing.
>>>>>
>>>>> Inside the createVif() method, it calls getNetwork(conn, nic) to set
>>>>> the vifr.network record. And inside getNetwork(), it differentiates by
>>>>> BroadcastDomainType.
>>>>>
>>>>> Now I'm wondering whether the getNetwork() method should live inside
>>>>> the to-be-created default vif driver's implementation, or outside of
>>>>> the vif driver. So, I'd like to ask for comments or suggestions from
>>>>> the community.
>>>>>
>>>> Tomoe,
>>>>
>>>> I think you might be looking at obsolete code.   Starting in 3.x,
>>>>CloudStack shouldn't be looking at the broadcast type to determine the
>>>>network.  It is determined by the name tag set when you set up the
>>>>physical network during zone setup.  Let me know if you have any
>>>>questions.
>>>
>>>Hi Alex,
>>>
>>>Thanks for your comments.
>>>
>>>Actually, I'm looking at this code, called by the createVif method:
>>>
>>>https://git-wip-us.apache.org/repos/asf?p=incubator-cloudstack.git;a=blo
>>>b;
>>>f=plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixRe
>>>so
>>>urceBase.java;h=19cb79618197c042606185fbe4300acf477c4b31;hb=HEAD#l719
>>>
>>>And it does look at the broadcast type.
>>>Does that mean this code is obsolete and should be updated?
>>>
>>>Tomoe
>>>
>>
>> Actually, this function is in use. Creating a Xen virtual network is
>> rolled into getNetwork() and is set up based on the isolation/SDN
>> controller, as identified by BroadcastDomainType.
>>
>> IMO, with multiple isolation mechanisms/SDN controllers integrating, it
>> makes sense to push the setup/destroy of the hypervisor network object
>> for the guest networks out of the hypervisor server resource and into
>> drivers as well.
>
>Thanks Murali for your opinion.
>
>As I looked further into the code, the host-network management parts seem
>to be closely coupled to CitrixResourceBase and scattered inside the class.
>
>How about doing the vif driver (NicTO handler) as a first
>incremental step, and then cleaning up the
>networking parts so they are cleanly separated out from the resource?
>
>Thanks,
>Tomoe

I think this is a good first step. 


Re: making VM startup more fine-grained

Posted by Tomoe Sugihara <to...@midokura.com>.
On Tue, Aug 7, 2012 at 4:30 PM, Murali Reddy <Mu...@citrix.com> wrote:
> On 07/08/12 12:11 PM, "Tomoe Sugihara" <to...@midokura.com> wrote:
>
>>On Tue, Aug 7, 2012 at 3:34 PM, Alex Huang <Al...@citrix.com> wrote:
>>>> I have looked at the code in more detail and found a slightly tricky thing.
>>>>
>>>> Inside the createVif() method, it calls getNetwork(conn, nic) to set
>>>> the vifr.network record. And inside getNetwork(), it differentiates by
>>>> BroadcastDomainType.
>>>>
>>>> Now I'm wondering whether the getNetwork() method should live inside the
>>>> to-be-created default vif driver's implementation, or outside of the vif
>>>> driver. So, I'd like to ask for comments or suggestions from the
>>>> community.
>>>>
>>> Tomoe,
>>>
>>> I think you might be looking at obsolete code.   Starting in 3.x,
>>>CloudStack shouldn't be looking at the broadcast type to determine the
>>>network.  It is determined by the name tag set when you set up the
>>>physical network during zone setup.  Let me know if you have any
>>>questions.
>>
>>Hi Alex,
>>
>>Thanks for your comments.
>>
>>Actually, I'm looking at this code, called by the createVif method:
>>
>>https://git-wip-us.apache.org/repos/asf?p=incubator-cloudstack.git;a=blob;
>>f=plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixReso
>>urceBase.java;h=19cb79618197c042606185fbe4300acf477c4b31;hb=HEAD#l719
>>
>>And it does look at the broadcast type.
>>Does that mean this code is obsolete and should be updated?
>>
>>Tomoe
>>
>
> Actually, this function is in use. Creating a Xen virtual network is
> rolled into getNetwork() and is set up based on the isolation/SDN
> controller, as identified by BroadcastDomainType.
>
> IMO, with multiple isolation mechanisms/SDN controllers integrating, it makes
> sense to push the setup/destroy of the hypervisor network object for the guest
> networks out of the hypervisor server resource and into drivers as well.

Thanks Murali for your opinion.

As I looked further into the code, the host-network management parts seem
to be closely coupled to CitrixResourceBase and scattered inside the class.

How about doing the vif driver (NicTO handler) as a first incremental
step, and then cleaning up the networking parts so they are cleanly
separated out from the resource?

Thanks,
Tomoe

Re: making VM startup more fine-grained

Posted by Murali Reddy <Mu...@citrix.com>.
On 07/08/12 12:11 PM, "Tomoe Sugihara" <to...@midokura.com> wrote:

>On Tue, Aug 7, 2012 at 3:34 PM, Alex Huang <Al...@citrix.com> wrote:
>>> I have looked at the code in more detail and found a slightly tricky thing.
>>>
>>> Inside the createVif() method, it calls getNetwork(conn, nic) to set
>>> the vifr.network record. And inside getNetwork(), it differentiates by
>>> BroadcastDomainType.
>>>
>>> Now I'm wondering whether the getNetwork() method should live inside the
>>> to-be-created default vif driver's implementation, or outside of the vif
>>> driver. So, I'd like to ask for comments or suggestions from the community.
>>>
>> Tomoe,
>>
>> I think you might be looking at obsolete code.   Starting in 3.x,
>>CloudStack shouldn't be looking at the broadcast type to determine the
>>network.  It is determined by the name tag set when you set up the
>>physical network during zone setup.  Let me know if you have any
>>questions.
>
>Hi Alex,
>
>Thanks for your comments.
>
>Actually, I'm looking at this code, called by the createVif method:
>
>https://git-wip-us.apache.org/repos/asf?p=incubator-cloudstack.git;a=blob;
>f=plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixReso
>urceBase.java;h=19cb79618197c042606185fbe4300acf477c4b31;hb=HEAD#l719
>
>And it does look at the broadcast type.
>Does that mean this code is obsolete and should be updated?
>
>Tomoe
>

Actually, this function is in use. Creating a Xen virtual network is
rolled into getNetwork() and is set up based on the isolation/SDN
controller, as identified by BroadcastDomainType.

IMO, with multiple isolation mechanisms/SDN controllers integrating, it makes
sense to push the setup/destroy of the hypervisor network object for the guest
networks out of the hypervisor server resource and into drivers as well.


Re: making VM startup more fine-grained

Posted by Tomoe Sugihara <to...@midokura.com>.
On Tue, Aug 7, 2012 at 3:34 PM, Alex Huang <Al...@citrix.com> wrote:
>> I have looked at the code in more detail and found a slightly tricky thing.
>>
>> Inside the createVif() method, it calls getNetwork(conn, nic) to set
>> the vifr.network record. And inside getNetwork(), it differentiates by
>> BroadcastDomainType.
>>
>> Now I'm wondering whether the getNetwork() method should live inside the
>> to-be-created default vif driver's implementation, or outside of the vif
>> driver. So, I'd like to ask for comments or suggestions from the community.
>>
> Tomoe,
>
> I think you might be looking at obsolete code.   Starting in 3.x, CloudStack shouldn't be looking at the broadcast type to determine the network.  It is determined by the name tag set when you set up the physical network during zone setup.  Let me know if you have any questions.

Hi Alex,

Thanks for your comments.

Actually, I'm looking at this code, called by the createVif method:

https://git-wip-us.apache.org/repos/asf?p=incubator-cloudstack.git;a=blob;f=plugins/hypervisors/xen/src/com/cloud/hypervisor/xen/resource/CitrixResourceBase.java;h=19cb79618197c042606185fbe4300acf477c4b31;hb=HEAD#l719

And it does look at the broadcast type.
Does that mean this code is obsolete and should be updated?

Tomoe

RE: making VM startup more fine-grained

Posted by Alex Huang <Al...@citrix.com>.
> I have looked at the code in more detail and found a slightly tricky thing.
> 
> Inside the createVif() method, it calls getNetwork(conn, nic) to set
> the vifr.network record. And inside getNetwork(), it differentiates by
> BroadcastDomainType.
> 
> Now I'm wondering whether the getNetwork() method should live inside the
> to-be-created default vif driver's implementation, or outside of the vif
> driver. So, I'd like to ask for comments or suggestions from the community.
> 
Tomoe,

I think you might be looking at obsolete code.   Starting in 3.x, CloudStack shouldn't be looking at the broadcast type to determine the network.  It is determined by the name tag set when you set up the physical network during zone setup.  Let me know if you have any questions.

--Alex

Re: making VM startup more fine-grained

Posted by Tomoe Sugihara <to...@midokura.com>.
On Mon, Aug 6, 2012 at 6:18 PM, Tomoe Sugihara <to...@midokura.com> wrote:
> On Fri, Jul 27, 2012 at 10:13 AM, Chiradeep Vittal
> <Ch...@citrix.com> wrote:
>>
>>
>> On 7/25/12 10:52 PM, "Ishimoto, Ryu" <ry...@midokura.com> wrote:
>>
>>>On Mon, Jun 4, 2012 at 3:02 PM, Chiradeep Vittal <
>>>Chiradeep.Vittal@citrix.com> wrote:
>>>
>>>>
>>>> Also note that in order to support hotplug and hot-detach of nics, we
>>>>need
>>>> commands like CreateNic and AttachNic.
>>>>
>>>>
>>>This is a great point.  I feel that the right approach is to consider the
>>>NIC to exist only within the VM lifetime, and thus the APIs that the cloud
>>>orchestrator needs to expose are:
>>>
>>>- PlugNIC
>>>- UnplugNIC
>>
>> If you consider the Elastic Network Interface feature in AWS, the NIC
>> could exist independent of a VM.
>> There are several interesting applications that this enables.
>>
>>>
>>>Where the hypervisor resources must implement these methods in the
>>>hypervisor-specific way.  Depending on the hypervisor, this may include
>>>creating a VIF, hot-attaching it to the VM, and plugging it into the
>>>appropriate network.  These are only necessary when CloudStack needs to
>>>support hot-attach and hot-detaching VIFs.
>>
>> +1. There are some issues to consider, like whether it is possible to have
>> a VM without a NIC, and how to issue an IP after hot-attach.
>>
>>>
>>>On a related but different topic, during the VM launch, VIF plugging also
>>>has to occur, and it has to be designed in a way that both Xen and
>>>Libvirt/KVM can agree on.
>>
>> And vSphere and OVM...
>>
>> [snip]
>>>This VIF attachment logic should be done in a driver model in which
>>>vendors can supply their own logic, and I think this is essential for SDN
>>>integration.  Each hypervisor should have its own VIF driver interface, so
>>>there should be LibvirtVifDriver and XenVifDriver interfaces.  They both
>>>define 'plug' and 'unplug' methods but perhaps differ in signatures.
>>
>> This seems similar to what Murali proposed during the NVP integration
>> effort.
>> The NVP component wished to add specific meta-data to the hypervisor db.
>
> Hi Chiradeep,
>
> Regarding Xen, I just took a look at the TODO section here:
> https://cwiki.apache.org/CLOUDSTACK/feature-nicira-nvp-integration.html.
>
> I'm thinking of moving the following method:
>
> 761:     protected VIF createVif(Connection conn, String vmName, VM
> vm, NicTO nic) throws XmlRpcException, XenAPIException {
>
>
> to the vif driver's responsibility (the same idea as the handler in the
> wiki) and making the default implementation set more generic
> other-config values like "cs-iface-id" and "cs-vm-id".
> That way, vendors can put whatever information they want into
> other-config and/or even make other xen-api calls in their vif
> drivers.

I have looked at the code in more detail and found a slightly tricky thing.

Inside the createVif() method, it calls getNetwork(conn, nic) to
set the vifr.network record. And inside getNetwork(), it differentiates by
BroadcastDomainType.

Now I'm wondering whether the getNetwork() method should live inside
the to-be-created default vif driver's implementation, or outside of the
vif driver. So, I'd like to ask for comments or suggestions from the
community.

Thanks in advance,
Tomoe

>
> Any comments?
>
> Thanks,
> Tomoe

Re: making VM startup more fine-grained

Posted by Tomoe Sugihara <to...@midokura.com>.
On Fri, Jul 27, 2012 at 10:13 AM, Chiradeep Vittal
<Ch...@citrix.com> wrote:
>
>
> On 7/25/12 10:52 PM, "Ishimoto, Ryu" <ry...@midokura.com> wrote:
>
>>On Mon, Jun 4, 2012 at 3:02 PM, Chiradeep Vittal <
>>Chiradeep.Vittal@citrix.com> wrote:
>>
>>>
>>> Also note that in order to support hotplug and hot-detach of nics, we
>>>need
>>> commands like CreateNic and AttachNic.
>>>
>>>
>>This is a great point.  I feel that the right approach is to consider the
>>NIC to exist only within the VM lifetime, and thus the APIs that the cloud
>>orchestrator needs to expose are:
>>
>>- PlugNIC
>>- UnplugNIC
>
> If you consider the Elastic Network Interface feature in AWS, the NIC
> could exist independent of a VM.
> There are several interesting applications that this enables.
>
>>
>>Where the hypervisor resources must implement these methods in the
>>hypervisor-specific way.  Depending on the hypervisor, this may include
>>creating a VIF, hot-attaching it to the VM, and plugging it into the
>>appropriate network.  These are only necessary when CloudStack needs to
>>support hot-attach and hot-detaching VIFs.
>
> +1. There are some issues to consider, like whether it is possible to have
> a VM without a NIC, and how to issue an IP after hot-attach.
>
>>
>>On a related but different topic, during the VM launch, VIF plugging also
>>has to occur, and it has to be designed in a way that both Xen and
>>Libvirt/KVM can agree on.
>
> And vSphere and OVM...
>
> [snip]
>>This VIF attachment logic should be done in a driver model in which
>>vendors can supply their own logic, and I think this is essential for SDN
>>integration.  Each hypervisor should have its own VIF driver interface, so
>>there should be LibvirtVifDriver and XenVifDriver interfaces.  They both
>>define 'plug' and 'unplug' methods but perhaps differ in signatures.
>
> This seems similar to what Murali proposed during the NVP integration
> effort.
> The NVP component wished to add specific meta-data to the hypervisor db.

Hi Chiradeep,

Regarding Xen, I just took a look at the TODO section here:
https://cwiki.apache.org/CLOUDSTACK/feature-nicira-nvp-integration.html.

I'm thinking of moving the following method:

761:     protected VIF createVif(Connection conn, String vmName, VM
vm, NicTO nic) throws XmlRpcException, XenAPIException {


to the vif driver's responsibility (the same idea as the handler in the
wiki) and making the default implementation set more generic
other-config values like "cs-iface-id" and "cs-vm-id".
That way, vendors can put whatever information they want into
other-config and/or even make other xen-api calls in their vif
drivers.
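
To illustrate, a sketch of what such a default driver could look like.
XenVifDriver is only the name suggested in this thread; the method shapes
and the NicTO accessors are assumptions modeled on the existing createVif():

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.xmlrpc.XmlRpcException;

    import com.cloud.agent.api.to.NicTO;
    import com.xensource.xenapi.Connection;
    import com.xensource.xenapi.Network;
    import com.xensource.xenapi.Types.XenAPIException;
    import com.xensource.xenapi.VIF;
    import com.xensource.xenapi.VM;

    // Hypothetical vif driver interface for Xen, per the proposal above.
    interface XenVifDriver {
        VIF plug(Connection conn, VM vm, String vmName, NicTO nic)
                throws XenAPIException, XmlRpcException;

        void unplug(Connection conn, VIF vif)
                throws XenAPIException, XmlRpcException;
    }

    // Default implementation: does what createVif() does today, but also
    // stamps generic correlation keys into other-config so a vendor driver
    // or an SDN controller reading xapi can map the vif back to CloudStack's
    // nic and vm. Whether getNetwork() belongs inside the driver is exactly
    // the open question above, so it is left abstract here.
    abstract class DefaultXenVifDriver implements XenVifDriver {

        protected abstract Network getNetwork(Connection conn, NicTO nic)
                throws XenAPIException, XmlRpcException;

        @Override
        public VIF plug(Connection conn, VM vm, String vmName, NicTO nic)
                throws XenAPIException, XmlRpcException {
            VIF.Record vifr = new VIF.Record();
            vifr.VM = vm;
            vifr.device = Integer.toString(nic.getDeviceId());
            vifr.MAC = nic.getMac();
            vifr.network = getNetwork(conn, nic);

            // The generic other-config values proposed above; a vendor
            // driver could add its own keys here.
            Map<String, String> config = new HashMap<String, String>();
            config.put("cs-vm-id", vmName);
            config.put("cs-iface-id", nic.getUuid()); // assumes a uuid accessor on NicTO
            vifr.otherConfig = config;

            return VIF.create(conn, vifr);
        }

        @Override
        public void unplug(Connection conn, VIF vif)
                throws XenAPIException, XmlRpcException {
            vif.unplug(conn);   // hot-detach if the vm is running
            vif.destroy(conn);
        }
    }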

Any comments?

Thanks,
Tomoe

Re: making VM startup more fine-grained

Posted by Tomoe Sugihara <to...@midokura.com>.
On Fri, Jul 27, 2012 at 10:13 AM, Chiradeep Vittal
<Ch...@citrix.com> wrote:
>
>
> On 7/25/12 10:52 PM, "Ishimoto, Ryu" <ry...@midokura.com> wrote:
>
>>On Mon, Jun 4, 2012 at 3:02 PM, Chiradeep Vittal <
>>Chiradeep.Vittal@citrix.com> wrote:
>>
>>>
>>> Also note that in order to support hotplug and hot-detach of nics, we
>>>need
>>> commands like CreateNic and AttachNic.
>>>
>>>
>>This is a great point.  I feel that the right approach is to consider the
>>NIC to exist only within the VM lifetime, and thus the APIs that the cloud
>>orchestrator needs to expose are:
>>
>>- PlugNIC
>>- UnplugNIC
>
> If you consider the Elastic Network Interface feature in AWS, the NIC
> could exist independent of a VM.
> There are several interesting applications that this enables.
>
>>
>>Where the hypervisor resources must implement these methods in the
>>hypervisor-specific way.  Depending on the hypervisor, this may include
>>creating a VIF, hot-attaching it to the VM, and plugging it into the
>>appropriate network.  These are only necessary when CloudStack needs to
>>support hot-attach and hot-detaching VIFs.
>
> +1. There are some issues to consider, like whether it is possible to have
> a VM without a NIC, and how to issue an IP after hot-attach.
>
>>
>>On a related but different topic, during the VM launch, VIF plugging also
>>has to occur, and it has to be designed in a way that both Xen and
>>Libvirt/KVM can agree on.
>
> And vSphere and OVM...
>
> [snip]
>>This VIF attachment logic should be done in a driver model in which
>>vendors can supply their own logic, and I think this is essential for SDN
>>integration.  Each hypervisor should have its own VIF driver interface, so
>>there should be LibvirtVifDriver and XenVifDriver interfaces.  They both
>>define 'plug' and 'unplug' methods but perhaps differ in signatures.
>
> This seems similar to what Murali proposed during the NVP integration
> effort.
> The NVP component wished to add specific meta-data to the hypervisor db.
>
>>For Xen, you'd only need to make xapi calls, but this VIF
>>driver gives the vendors a place to customize the parameters sent to
>>the VIF.create call, such as setting 'other-config' values.
>
> Yes, absolutely.
>
>>
>>Any feedback would be greatly appreciated.  I've only recently started
>>looking at the CloudStack architecture so please correct me if I said
>>something off-base.
>
> I would welcome a patch with this kind of functionality.

Hi Chiradeep,

I just posted a review request for a vif plugin PoC implementation:
https://reviews.apache.org/r/6285/

If you could take a look and give me some comments, that'd be great.

Thanks,
Tomoe

>
> --
> Chiradeep
>

Re: making VM startup more fine-grained

Posted by Tomoe Sugihara <to...@midokura.com>.
On Fri, Jul 27, 2012 at 10:13 AM, Chiradeep Vittal
<Ch...@citrix.com> wrote:
>
>
> On 7/25/12 10:52 PM, "Ishimoto, Ryu" <ry...@midokura.com> wrote:
>
>>On Mon, Jun 4, 2012 at 3:02 PM, Chiradeep Vittal <
>>Chiradeep.Vittal@citrix.com> wrote:
>>
>>>
>>> Also note that in order to support hotplug and hot-detach of nics, we
>>>need
>>> commands like CreateNic and AttachNic.
>>>
>>>
>>This is a great point.  I feel that the right approach is to consider the
>>NIC to exist only within the VM lifetime, and thus the APIs that the cloud
>>orchestrator needs to expose are:
>>
>>- PlugNIC
>>- UnplugNIC
>
> If you consider the Elastic Network Interface feature in AWS, the NIC
> could exist independent of a VM.
> There are several interesting applications that this enables.
>
>>
>>Where the hypervisor resources must implement these methods in the
>>hypervisor-specific way.  Depending on the hypervisor, this may include
>>creating a VIF, hot-attaching it to the VM, and plugging it into the
>>appropriate network.  These are only necessary when CloudStack needs to
>>support hot-attach and hot-detaching VIFs.
>
> +1. There are some issues to consider, like whether it is possible to have
> a VM without a NIC, and how to issue an IP after hot-attach.
>
>>
>>On a related but different topic, during the VM launch, VIF plugging also
>>has to occur, and it has to be designed in a way that both Xen and
>>Libvirt/KVM can agree on.
>
> And vSphere and OVM...
>
> [snip]
>>This VIF attachment logic should be done in a driver model in which
>>vendors can supply their own logic, and I think this is essential for SDN
>>integration.  Each hypervisor should have its own VIF driver interface, so
>>there should be LibvirtVifDriver and XenVifDriver interfaces.  They both
>>define 'plug' and 'unplug' methods but perhaps differ in signatures.
>
> This seems similar to what Murali proposed during the NVP integration
> effort.
> The NVP component wished to add specific meta-data to the hypervisor db.
>
>>For Xen, you'd only need to make xapi calls, but this VIF
>>driver gives the vendors a place to customize the parameters sent to
>>the VIF.create call, such as setting 'other-config' values.
>
> Yes, absolutely.
>
>>
>>Any feedback would be greatly appreciated.  I've only recently started
>>looking at the CloudStack architecture so please correct me if I said
>>something off-base.
>
> I would welcome a patch with this kind of functionality.

I'll submit patches for this shortly.
Thanks for the comments.

Tomoe

Re: making VM startup more fine-grained

Posted by Chiradeep Vittal <Ch...@citrix.com>.

On 7/25/12 10:52 PM, "Ishimoto, Ryu" <ry...@midokura.com> wrote:

>On Mon, Jun 4, 2012 at 3:02 PM, Chiradeep Vittal <
>Chiradeep.Vittal@citrix.com> wrote:
>
>>
>> Also note that in order to support hotplug and hot-detach of nics, we
>>need
>> commands like CreateNic and AttachNic.
>>
>>
>This is a great point.  I feel that the right approach is to consider the
>NIC to exist only within the VM lifetime, and thus the APIs that the cloud
>orchestrator needs to expose are:
>
>- PlugNIC
>- UnplugNIC

If you consider the Elastic Network Interface feature in AWS, the NIC
could exist independent of a VM.
There are several interesting applications that this enables.

>
>Where the hypervisor resources must implement these methods in the
>hypervisor-specific way.  Depending on the hypervisor, this may include
>creating a VIF, hot-attaching it to the VM, and plugging it into the
>appropriate network.  These are only necessary when CloudStack needs to
>support hot-attach and hot-detaching VIFs.

+1. There are some issues to consider, like whether it is possible to have a
VM without a NIC, and how to issue an IP after hot-attach.

>
>On a related but different topic, during the VM launch, VIF plugging also
>has to occur, and it has to be designed in a way that both Xen and
>Libvirt/KVM can agree on.

And vSphere and OVM...

[snip]
>This VIF attachment logic should be done in a driver model in which
>vendors can supply their own logic, and I think this is essential for SDN
>integration.  Each hypervisor should have its own VIF driver interface, so
>there should be LibvirtVifDriver and XenVifDriver interfaces.  They both
>define 'plug' and 'unplug' methods but perhaps differ in signatures.

This seems similar to what Murali proposed during the NVP integration
effort.
The NVP component wished to add specific meta-data to the hypervisor db.

>For Xen, you'd only need to make xapi calls, but this VIF
>driver gives the vendors a place to customize the parameters sent to
>the VIF.create call, such as setting 'other-config' values.

Yes, absolutely.

>
>Any feedback would be greatly appreciated.  I've only recently started
>looking at the CloudStack architecture so please correct me if I said
>something off-base.

I would welcome a patch with this kind of functionality.

--
Chiradeep


Re: making VM startup more fine-grained

Posted by "Ishimoto, Ryu" <ry...@midokura.com>.
On Mon, Jun 4, 2012 at 3:02 PM, Chiradeep Vittal <
Chiradeep.Vittal@citrix.com> wrote:

>
> Also note that in order to support hotplug and hot-detach of nics, we need
> commands like CreateNic and AttachNic.
>
>
This is a great point.  I feel that the right approach is to consider the
NIC to exist only within the VM lifetime, and thus the APIs that the cloud
orchestrator needs to expose are:

- PlugNIC
- UnplugNIC

Where the hypervisor resources must implement these methods in the
hypervisor-specific way.  Depending on the hypervisor, this may include
creating a VIF, hot-attaching it to the VM, and plugging it into the
appropriate network.  These are only necessary when CloudStack needs to
support hot-attach and hot-detaching VIFs.

On a related but different topic, during the VM launch, VIF plugging also
has to occur, and it has to be designed in a way that both Xen and
Libvirt/KVM can agree on.  If you look at the way libvirt generates the VM
definition, which is an XML configuration, it seems to make sense that you
perform the plug operation in the same place as the XML generation.  This
means that it's ok to keep the 'startVM' at the orchestration level and let
the individual hypervisor resources implement their own VIF attachment
logic.  This VIF attachment logic should be done in a driver model in which
vendors can supply their own logic, and I think this is essential for SDN
integration.  Each hypervisor should have its own VIF driver interface, so
there should be LibvirtVifDriver and XenVifDriver interfaces.  They both
define 'plug' and 'unplug' methods but perhaps differ in signatures.  As
one example, an implementation of an OpenvswitchLibvirtVIF driver might
use 'ethernet' mode instead of 'bridge' mode, create a tap interface on the
host, create a port on the bridge, and attach the VIF to it before
launching the VM.  For Xen, you'd only need to make xapi calls, but this VIF
driver gives the vendors a place to customize the parameters sent to
the VIF.create call, such as setting 'other-config' values.
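
To sketch the libvirt side of this driver model (the interface shape, the
String return type, and the helper methods are all assumptions for
illustration only):

    import com.cloud.agent.api.to.NicTO;

    // Hypothetical libvirt-side driver, per the paragraph above: plug()
    // prepares the vif on the host and returns the <interface> element to
    // splice into the domain XML that LibVirtComputingResource generates.
    interface LibvirtVifDriver {
        String plug(NicTO nic) throws Exception;

        void unplug(NicTO nic) throws Exception;
    }

    // Example vendor implementation: 'ethernet' mode with a tap device on
    // an Open vSwitch bridge instead of the default 'bridge' mode. The two
    // private helpers stand in for vendor-specific host-side commands.
    class OpenvswitchLibvirtVifDriver implements LibvirtVifDriver {
        @Override
        public String plug(NicTO nic) throws Exception {
            String tap = createTapDevice(nic);  // e.g., shell out to "ip tuntap add"
            addPortToBridge(tap);               // e.g., shell out to "ovs-vsctl add-port"
            return "<interface type='ethernet'>"
                 + "<mac address='" + nic.getMac() + "'/>"
                 + "<target dev='" + tap + "'/>"
                 + "</interface>";
        }

        @Override
        public void unplug(NicTO nic) throws Exception {
            // remove the bridge port and tap device (vendor-specific)
        }

        private String createTapDevice(NicTO nic) throws Exception {
            return "tap-" + nic.getDeviceId(); // placeholder
        }

        private void addPortToBridge(String tap) throws Exception {
            // placeholder
        }
    }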

Any feedback would be greatly appreciated.  I've only recently started
looking at the CloudStack architecture so please correct me if I said
something off-base.

Cheers!
Ryu



>
> The other alternative is to launch the vm in a stopped state. Obtain the
> vif uuid and then start it.
>
> From the latest docs:
> "CloudStack now supports creating a VM without starting it on the
> backend. You can determine whether the VM need to
> be started as part of the VM deployment. A VM can now be deployed in two
> ways: create and start a VM (the default
> method); create a VM and leave it in stopped state.
>
>                         A new request parameter, startVM, is introduced in
> the deployVm API to
> support the stopped VM feature. The possible
> values are:
>
>                         true - The VM starts as a part of the VM
> deployment.
> false - The VM is left in stopped state at the end of the VM deployment.
>
>                         The default value is true"
>
>
> On 6/1/12 12:16 PM, "Alex Huang" <Al...@citrix.com> wrote:
>
> >Even in this plan, the resource is required to have knowledge of someone
> >wanting to know about the vif.  I think Chiradeep's proposal is trying
> >to avoid having the Resource itself changed.
> >
> >To the original proposal, I think breaking it down to that level makes it
> >very difficult to manage.  We can't dictate the apis on the hypervisors
> >and to what level they actually support an api-by-api construction of a
> >virtual machine.  It works out well for XenServer but if a certain
> >hypervisor supports only a XML based virtual machine description, then it
> >won't work.  Therefore, it's best to send down a machine description and
> >let the resource do the translation.
> >
> >For the original problem, I don't think there's any way to get around
> >changing either the Resource or the hypervisor itself to implement that
> >feature.  I think XenServer team actually mentioned that they're willing
> >to put in script callouts around vif being brought up and down and that
> >might be one approach but we'll have to investigate what version it has
> >been put into.
> >
> >--Alex
> >
> >> -----Original Message-----
> >> From: Kelven Yang [mailto:kelven.yang@citrix.com]
> >> Sent: Thursday, May 31, 2012 11:30 PM
> >> To: cloudstack-dev@incubator.apache.org
> >> Subject: RE: making VM startup more fine-grained
> >>
> >> Another way to state my point - don't let CloudStack orchestrators do
> >>micro-
> >> management. It is impossible to handle every case cleanly if we do
> >>micro-
> >> management at one level. Let these orchestrators behave like people
> >> managers,
> >>
> >>      Hey, this is the user's configuration (network config, CPU, memory,
> >> disk etc),
> >>      This is what I have with my available facilities (physical
> >>infrastructure),
> >>      We need to realize an execution plan (orchestration flow)
> >>      Chiradeep, I need you to work on the network (resource realization)
> >>      Kelven, I need you to work on storage (resource realization)
> >>      Do whatever you need to, you have access to the lab (service
> >> callbacks)
> >>      but please fulfill the plan (try to keep high-level orchestration
> flow
> >> intact)
> >>
> >> Kelven
> >>
> >>
> >> > -----Original Message-----
> >> > From: Kelven Yang [mailto:kelven.yang@citrix.com]
> >> > Sent: Thursday, May 31, 2012 11:07 PM
> >> > To: cloudstack-dev@incubator.apache.org
> >> > Subject: RE: making VM startup more fine-grained
> >> >
> >> > > Another way is to modify the specific hypervisor resource to do
> >> > something
> >> > > just after creating the vifs but prior to starting the vm.
> >> >
> >> > I would go with this way. I'm proposing bi-directional communication
> >> > between resource agent and the CloudStack kernel. Let CloudStack
> >> > kernel only manage meta database for network configuration,
> >> > virtual-to-physical mapping configuration etc, information that is
> >> > generic, stable and independent of underlying resource realization
> >> > technologies. Let resource provisioning orchestrators manage and
> >> > orchestrate the process at flow- level, but leave the resource
> >> > realization details to down-level components. If a down-level
> >> > component needs to access the configuration data related to the
> >> > operation, it calls back into the service API provided by CloudStack
> >>kernel.
> >> >
> >> > In this SDN example, the overall orchestration flow should not be
> >> > affected by its implementation details, changes can be scoped at
> >> > resource level if it has the access to the information it needs from
> >> > common service API provided by CloudStack kernel.
> >> >
> >> > Kelven
> >> >
> >> >
> >> >
> >> >
> >> >
> >> > > -----Original Message-----
> >> > > From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> >> > > Sent: Thursday, May 31, 2012 10:17 PM
> >> > > To: CloudStack DeveloperList
> >> > > Subject: making VM startup more fine-grained
> >> > >
> >> > > I was helping someone with their integration of an SDN controller
> >> > > with CloudStack. The requirement was that the SDN controller needed
> >> > > the uuid of the virtual interface (vif) of the virtual machine so
> >> > > that it could
> >> > plug
> >> > > it into the right softswitch, manage the vif etc. This vif uuid is
> >> > > generated by the XenServer.
> >> > >
> >> > > My recommendation was to write a plugin (implement NetworkElement)
> >> > > that would get the vif uuid after the vm started by making a XAPI
> >> > > call (via the agent manager) and then call the SDN controller API
> >> > > with this value.
> >> > > The response:
> >> > > "Unfortunately, the mechanism you describe wouldn't be sufficient
> >> > > as
> >> > we
> >> > > would require the VIF uuid before the VM boots, otherwise there
> >> > might
> >> > > be a race condition where sometimes VMs will boot up and lack
> >> > > network connectivity and therefore might not even receive their DHCP
> >> > > addresses and such.
> >> > > "
> >> > > Currently, when CloudStack starts a VM, all information regarding
> >> > > the
> >> > VM
> >> > > (including nics and storage) is passed down in a single StartCommand
> >> > > to the hypervisor resource. The hypervisor resource (e.g.,
> >> > > CitrixResourceBase or LibVirtComputingResource) takes appropriate
> >> > > actions to create vifs
> >> > and
> >> > > plug them into the vm and start the vm.
> >> > >
> >> > > One way to solve the integration problem would be to split the
> >> > > StartCommand into multiple commands, e.g., CreateVif,
> >> > > CreateVolume, CreateVm, StartVm. This changes the agent API and
> >> > > affects all
> >> > hypervisor
> >> > > resources.
> >> > > Another way is to modify the specific hypervisor resource to do
> >> > something
> >> > > just after creating the vifs but prior to starting the vm.
> >> > > A third way is to split the agent api into 2 commands: CreateVm and
> >> > > StartVm.
> >> > >
> >> > > Thoughts?
> >> > > --
> >> > > Chiradeep
> >
>
>

Re: making VM startup more fine-grained

Posted by Chiradeep Vittal <Ch...@citrix.com>.
Anybody else integrating their SDN controllers / network orchestrators
with CloudStack? 
Do you require the vif uuid similarly?
What is the strategy for KVM?

Although I get Alex's point, I think at a logical level, it makes sense
to break it up in such a fashion?
Perhaps it should be up to the hypervisor resource to stitch together
these units of orchestration?

Also note that in order to support hotplug and hot-detach of nics, we need
commands like CreateNic and AttachNic.


The other alternative is to launch the vm in a stopped state. Obtain the
vif uuid and then start it.

From the latest docs:
"CloudStack now supports creating a VM without starting it on the
backend. You can determine whether the VM need to
be started as part of the VM deployment. A VM can now be deployed in two
ways: create and start a VM (the default
method); create a VM and leave it in stopped state.

			A new request parameter, startVM, is introduced in the deployVm API to
support the stopped VM feature. The possible
values are:

			true - The VM starts as a part of the VM deployment.
false - The VM is left in stopped state at the end of the VM deployment.

			The default value is true"


On 6/1/12 12:16 PM, "Alex Huang" <Al...@citrix.com> wrote:

>Even in this plan, the resource is required to have knowledge of someone
>wanting to know about the vif.  I think Chiradeep's proposal is trying
>to avoid having the Resource itself changed.
>
>To the original proposal, I think breaking it down to that level makes it
>very difficult to manage.  We can't dictate the apis on the hypervisors
>and to what level they actually support an api-by-api construction of a
>virtual machine.  It works out well for XenServer but if a certain
>hypervisor supports only a XML based virtual machine description, then it
>won't work.  Therefore, it's best to send down a machine description and
>let the resource do the translation.
>
>For the original problem, I don't think there's any way to get around
>changing either the Resource or the hypervisor itself to implement that
>feature.  I think XenServer team actually mentioned that they're willing
>to put in script callouts around vif being brought up and down and that
>might be one approach but we'll have to investigate what version it has
>been put into.
>
>--Alex
>
>> -----Original Message-----
>> From: Kelven Yang [mailto:kelven.yang@citrix.com]
>> Sent: Thursday, May 31, 2012 11:30 PM
>> To: cloudstack-dev@incubator.apache.org
>> Subject: RE: making VM startup more fine-grained
>> 
>> Another way to state my point - don't let CloudStack orchestrators do
>>micro-
>> management. It is impossible to handle every case cleanly if we do
>>micro-
>> management at one level. Let these orchestrators behave like people
>> managers,
>> 
>> 	Hey, this is the user's configuration (network config, CPU, memory,
>> disk etc),
>> 	This is what I have with my available facilities (physical
>>infrastructure),
>> 	We need to realize an execution plan (orchestration flow)
>> 	Chiradeep, I need you to work on the network (resource realization)
>> 	Kelven, I need you to work on storage (resource realization)
>> 	Do whatever you need to, you have access to the lab (service
>> callbacks)
>> 	but please fulfill the plan (try to keep high-level orchestration flow
>> intact)
>> 
>> Kelven
>> 
>> 
>> > -----Original Message-----
>> > From: Kelven Yang [mailto:kelven.yang@citrix.com]
>> > Sent: Thursday, May 31, 2012 11:07 PM
>> > To: cloudstack-dev@incubator.apache.org
>> > Subject: RE: making VM startup more fine-grained
>> >
>> > > Another way is to modify the specific hypervisor resource to do
>> > something
>> > > just after creating the vifs but prior to starting the vm.
>> >
>> > I would go with this way. I'm proposing bi-directional communication
>> > between resource agent and the CloudStack kernel. Let CloudStack
>> > kernel only manage meta database for network configuration,
>> > virtual-to-physical mapping configuration etc, information that is
>> > generic, stable and independent of underlying resource realization
>> > technologies. Let resource provisioning orchestrators manage and
>> > orchestrate the process at flow- level, but leave the resource
>> > realization details to down-level components. If a down-level
>> > component needs to access the configuration data related to the
>> > operation, it calls back into the service API provided by CloudStack
>>kernel.
>> >
>> > In this SDN example, the overall orchestration flow should not be
>> > affected by its implementation details, changes can be scoped at
>> > resource level if it has the access to the information it needs from
>> > common service API provided by CloudStack kernel.
>> >
>> > Kelven
>> >
>> >
>> >
>> >
>> >
>> > > -----Original Message-----
>> > > From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
>> > > Sent: Thursday, May 31, 2012 10:17 PM
>> > > To: CloudStack DeveloperList
>> > > Subject: making VM startup more fine-grained
>> > >
>> > > I was helping someone with their integration of an SDN controller
>> > > with CloudStack. The requirement was that the SDN controller needed
>> > > the uuid of the virtual interface (vif) of the virtual machine so
>> > > that it could
>> > plug
>> > > it into the right softswitch, manage the vif etc. This vif uuid is
>> > > generated by the XenServer.
>> > >
>> > > My recommendation was to write a plugin (implement NetworkElement)
>> > > that would get the vif uuid after the vm started by making a XAPI
>> > > call (via the agent manager) and then call the SDN controller API
>> > > with this value.
>> > > The response:
>> > > "Unfortunately, the mechanism you describe wouldn't be sufficient
>> > > as
>> > we
>> > > would require the VIF uuid before the VM boots, otherwise there
>> > might
>> > > be a race condition where sometimes VMs will boot up and lack
>> > > network connectivity and therefore might not even receive their DHCP
>> > > addresses and such.
>> > > "
>> > > Currently, when CloudStack starts a VM, all information regarding
>> > > the
>> > VM
>> > > (including nics and storage) is passed down in a single StartCommand
>> > > to the hypervisor resource. The hypervisor resource (e.g.,
>> > > CitrixResourceBase or LibVirtComputingResource) takes appropriate
>> > > actions to create vifs
>> > and
>> > > plug them into the vm and start the vm.
>> > >
>> > > One way to solve the integration problem would be to split the
>> > > StartCommand into multiple commands, e.g., CreateVif,
>> > > CreateVolume, CreateVm, StartVm. This changes the agent API and
>> > > affects all
>> > hypervisor
>> > > resources.
>> > > Another way is to modify the specific hypervisor resource to do
>> > something
>> > > just after creating the vifs but prior to starting the vm.
>> > > A third way is to split the agent api into 2 commands: CreateVm and
>> > > StartVm.
>> > >
>> > > Thoughts?
>> > > --
>> > > Chiradeep
>


RE: making VM startup more fine-grained

Posted by Alex Huang <Al...@citrix.com>.
Even in this plan, the resource is required to have knowledge of someone wanting to know about the vif.  I think Chiradeep's proposal is trying to avoid having the Resource itself changed.

To the original proposal, I think breaking it down to that level makes it very difficult to manage.  We can't dictate the apis on the hypervisors, or to what level they actually support an api-by-api construction of a virtual machine.  It works out well for XenServer, but if a certain hypervisor supports only an XML-based virtual machine description, then it won't work.  Therefore, it's best to send down a machine description and let the resource do the translation.

For the original problem, I don't think there's any way to get around changing either the Resource or the hypervisor itself to implement that feature.  I think the XenServer team actually mentioned that they're willing to put in script callouts around vifs being brought up and down; that might be one approach, but we'll have to investigate which version it has been put into.

--Alex

> -----Original Message-----
> From: Kelven Yang [mailto:kelven.yang@citrix.com]
> Sent: Thursday, May 31, 2012 11:30 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: RE: making VM startup more fine-grained
> 
> Another way to state my point - don't let CloudStack orchestrators do micro-
> management. It is impossible to handle every case cleanly if we do micro-
> management at one level. Let these orchestrators behave like people
> managers,
> 
> 	Hey, this is the user's configuration (network config, CPU, memory,
> disk etc),
> 	This is what I have with my available facilities (physical infrastructure),
> 	We need to realize an execution plan (orchestration flow)
> 	Chiradeep, I need you to work on the network (resource realization)
> 	Kelven, I need you to work on storage (resource realization)
> 	Do whatever you need to, you have access to the lab (service
> callbacks)
> 	but please fulfill the plan (try to keep high-level orchestration flow
> intact)
> 
> Kelven
> 
> 
> > -----Original Message-----
> > From: Kelven Yang [mailto:kelven.yang@citrix.com]
> > Sent: Thursday, May 31, 2012 11:07 PM
> > To: cloudstack-dev@incubator.apache.org
> > Subject: RE: making VM startup more fine-grained
> >
> > > Another way is to modify the specific hypervisor resource to do
> > something
> > > just after creating the vifs but prior to starting the vm.
> >
> > I would go with this way. I'm proposing bi-directional communication
> > between resource agent and the CloudStack kernel. Let CloudStack
> > kernel only manage meta database for network configuration,
> > virtual-to-physical mapping configuration etc, information that is
> > generic, stable and independent of underlying resource realization
> > technologies. Let resource provisioning orchestrators manage and
> > orchestrate the process at flow- level, but leave the resource
> > realization details to down-level components. If a down-level
> > component needs to access the configuration data related to the
> > operation, it calls back into the service API provided by CloudStack kernel.
> >
> > In this SDN example, the overall orchestration flow should not be
> > affected by its implementation details, changes can be scoped at
> > resource level if it has the access to the information it needs from
> > common service API provided by CloudStack kernel.
> >
> > Kelven
> >
> >
> >
> >
> >
> > > -----Original Message-----
> > > From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> > > Sent: Thursday, May 31, 2012 10:17 PM
> > > To: CloudStack DeveloperList
> > > Subject: making VM startup more fine-grained
> > >
> > > I was helping someone with their integration of an SDN controller
> > > with CloudStack. The requirement was that the SDN controller needed
> > > the uuid of the virtual interface (vif) of the virtual machine so
> > > that it could
> > plug
> > > it into the right softswitch, manage the vif etc. This vif uuid is
> > > generated by the XenServer.
> > >
> > > My recommendation was to write a plugin (implement NetworkElement)
> > > that would get the vif uuid after the vm started by making a XAPI
> > > call (via the agent manager) and then call the SDN controller API
> > > with this value.
> > > The response:
> > > "Unfortunately, the mechanism you describe wouldn't be sufficient
> > > as
> > we
> > > would require the VIF uuid before the VM boots, otherwise there
> > might
> > > be a race condition where sometimes VMs will boot up and lack
> > > network connectivity and therefore might not even receive their DHCP
> > > addresses and such.
> > > "
> > > Currently, when CloudStack starts a VM, all information regarding
> > > the
> > VM
> > > (including nics and storage) is passed down in a single StartCommand
> > > to the hypervisor resource. The hypervisor resource (e.g.,
> > > CitrixResourceBase or LibVirtComputingResource) takes appropriate
> > > actions to create vifs
> > and
> > > plug them into the vm and start the vm.
> > >
> > > One way to solve the integration problem would be to split the
> > > StartCommand into multiple commands, e.g., CreateVif,
> > > CreateVolume, CreateVm, StartVm. This changes the agent API and
> > > affects all
> > hypervisor
> > > resources.
> > > Another way is to modify the specific hypervisor resource to do
> > something
> > > just after creating the vifs but prior to starting the vm.
> > > A third way is to split the agent api into 2 commands: CreateVm and
> > > StartVm.
> > >
> > > Thoughts?
> > > --
> > > Chiradeep


RE: making VM startup more fine-grained

Posted by Kelven Yang <ke...@citrix.com>.
Another way to state my point - don't let CloudStack orchestrators do micro-management. It is impossible to handle every case cleanly if we do micro-management at one level. Let these orchestrators behave like people managers, 

	Hey, this is the user's configuration (network config, CPU, memory, disk etc),
	This is what I have with my available facilities (physical infrastructure),
	We need to realize an execution plan (orchestration flow)
	Chiradeep, I need you to work on the network (resource realization)
	Kelven, I need you to work on storage (resource realization)
	Do whatever you need to, you have access to the lab (service callbacks)
	but please fulfill the plan (try to keep high-level orchestration flow intact)

Kelven


> -----Original Message-----
> From: Kelven Yang [mailto:kelven.yang@citrix.com]
> Sent: Thursday, May 31, 2012 11:07 PM
> To: cloudstack-dev@incubator.apache.org
> Subject: RE: making VM startup more fine-grained
> 
> > Another way is to modify the specific hypervisor resource to do
> something
> > just after creating the vifs but prior to starting the vm.
> 
> I would go with this way. I'm proposing bi-directional communication
> between resource agent and the CloudStack kernel. Let CloudStack kernel
> only manage meta database for network configuration, virtual-to-physical
> mapping configuration etc, information that is generic, stable and
> independent of underlying resource realization technologies. Let resource
> provisioning orchestrators manage and orchestrate the process at flow-
> level, but leave the resource realization details to down-level
> components. If a down-level component needs to access the configuration
> data related to the operation, it calls back into the service API
> provided by CloudStack kernel.
> 
> In this SDN example, the overall orchestration flow should not be
> affected by its implementation details, changes can be scoped at resource
> level if it has the access to the information it needs from common
> service API provided by CloudStack kernel.
> 
> Kelven
> 
> 
> 
> 
> 
> > -----Original Message-----
> > From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> > Sent: Thursday, May 31, 2012 10:17 PM
> > To: CloudStack DeveloperList
> > Subject: making VM startup more fine-grained
> >
> > I was helping someone with their integration of an SDN controller with
> > CloudStack. The requirement was that the SDN controller needed the uuid
> > of
> > the virtual interface (vif) of the virtual machine so that it could
> plug
> > it into the right softswitch, manage the vif etc. This vif uuid is
> > generated by the XenServer.
> >
> > My recommendation was to write a plugin (implement NetworkElement) that
> > would get the vif uuid after the vm started by making a XAPI call (via
> > the
> > agent manager) and then call the SDN controller API with this value.
> > The response:
> > "Unfortunately, the mechanism you describe wouldn't be sufficient  as
> we
> > would require the VIF uuid before the VM boots, otherwise there
> might
> > be a race condition where sometimes VMs will boot up and lack network
> > connectivity and therefore might not even receive their DHCP addresses
> > and
> > such.
> > "
> > Currently, when CloudStack starts a VM, all information regarding the
> VM
> > (including nics and storage) is passed down in a single StartCommand to
> > the hypervisor resource. The hypervisor resource (e.g.,
> > CitrixResourceBase
> > or LibVirtComputingResource) takes appropriate actions to create vifs
> and
> > plug them into the vm and start the vm.
> >
> > One way to solve the integration problem would be to split the
> > StartCommand into multiple commands, e.g., CreateVif, CreateVolume,
> > CreateVm, StartVm. This changes the agent API and affects all
> hypervisor
> > resources.
> > Another way is to modify the specific hypervisor resource to do
> something
> > just after creating the vifs but prior to starting the vm.
> > A third way is to split the agent api into 2 commands: CreateVm and
> > StartVm.
> >
> > Thoughts?
> > --
> > Chiradeep


RE: making VM startup more fine-grained

Posted by Kelven Yang <ke...@citrix.com>.
> Another way is to modify the specific hypervisor resource to do something
> just after creating the vifs but prior to starting the vm.

I would go with this way. I'm proposing bi-directional communication between the resource agent and the CloudStack kernel. Let the CloudStack kernel manage only the meta database for network configuration, virtual-to-physical mapping configuration, etc., information that is generic, stable, and independent of the underlying resource realization technologies. Let resource provisioning orchestrators manage and orchestrate the process at the flow level, but leave the resource realization details to down-level components. If a down-level component needs to access the configuration data related to the operation, it calls back into the service API provided by the CloudStack kernel.

In this SDN example, the overall orchestration flow should not be affected by its implementation details; changes can be scoped at the resource level if the resource has access to the information it needs from the common service API provided by the CloudStack kernel.
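
To illustrate, a sketch of what such a callback service API might look
like; none of these names exist in CloudStack, they are assumptions only:

    import java.util.Map;

    // Hypothetical kernel-side service API that a hypervisor resource
    // could call back into during resource realization.
    public interface KernelConfigService {

        // Generic, technology-independent configuration lookup, e.g. the
        // network config or virtual-to-physical mapping for a given nic.
        Map<String, String> getNicConfiguration(String nicUuid);

        // Lets a down-level component report realization details back up,
        // e.g. the vif uuid that XenServer generated, without changing
        // the high-level orchestration flow.
        void reportRealizedVif(String nicUuid, String hypervisorVifUuid);
    }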

Kelven





> -----Original Message-----
> From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> Sent: Thursday, May 31, 2012 10:17 PM
> To: CloudStack DeveloperList
> Subject: making VM startup more fine-grained
> 
> I was helping someone with their integration of an SDN controller with
> CloudStack. The requirement was that the SDN controller needed the uuid
> of
> the virtual interface (vif) of the virtual machine so that it could plug
> it into the right softswitch, manage the vif etc. This vif uuid is
> generated by the XenServer.
> 
> My recommendation was to write a plugin (implement NetworkElement) that
> would get the vif uuid after the vm started by making a XAPI call (via
> the
> agent manager) and then call the SDN controller API with this value.
> The response:
> "Unfortunately, the mechanism you describe wouldn't be sufficient  as we
> would require the VIF uuid before the VM boots, otherwise there might
> be a race condition where sometimes VMs will boot up and lack network
> connectivity and therefore might not even receive their DHCP addresses
> and
> such.
> "
> Currently, when CloudStack starts a VM, all information regarding the VM
> (including nics and storage) is passed down in a single StartCommand to
> the hypervisor resource. The hypervisor resource (e.g.,
> CitrixResourceBase
> or LibVirtComputingResource) takes appropriate actions to create vifs and
> plug them into the vm and start the vm.
> 
> One way to solve the integration problem would be to split the
> StartCommand into multiple commands, e.g., CreateVif, CreateVolume,
> CreateVm, StartVm. This changes the agent API and affects all hypervisor
> resources.
> Another way is to modify the specific hypervisor resource to do something
> just after creating the vifs but prior to starting the vm.
> A third way is to split the agent api into 2 commands: CreateVm and
> StartVm.
> 
> Thoughts?
> --
> Chiradeep


RE: making VM startup more fine-grained

Posted by Kelven Yang <ke...@citrix.com>.
> If volumes are prepared as part of CreateVm, is there a reason why nics
> cannot be as well? Is it because the volumes are prepared before the
> destination host is chosen?

It depends on whether nics are first-class or second-class objects in the hypervisor; being a first-class object means a nic can be created and managed in its own life cycle. In certain hypervisors, nics may only be created along with the VM.

Kelven



> -----Original Message-----
> From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> Sent: Sunday, June 03, 2012 11:05 PM
> To: CloudStack DeveloperList
> Subject: Re: making VM startup more fine-grained
> 
> This could work as well.
> If volumes are prepared as part of CreateVm, is there a reason why nics
> cannot be as well? Is it because the volumes are prepared before the
> destination host is chosen?
> 
> On 6/1/12 5:38 AM, "Murali Reddy" <Mu...@citrix.com> wrote:
> 
> >On 01/06/12 10:46 AM, "Chiradeep Vittal" <Ch...@citrix.com>
> >wrote:
> >
> >>A third way is to split the agent api into 2 commands: CreateVm and
> >>StartVm.
> >
> >CloudStack already has two separate agent api commands for
> >creating (CreateCommand) and starting (StartCommand) VM operations.
> >Not sure if any optimization influenced this, but unfortunately VM
> >creation operations are spread across both CreateCommand and
> >StartCommand implementations. So volumes for the VM are prepared as
> >part of CreateCommand, while the actual VM creation on the hypervisor,
> >the nics for the VM, the ISO, etc. are created as part of StartCommand.
> >Also, there is no state transition for the VM from Created/Creating to
> >Starting. If we had one, a pluggable service could register for the
> >state change and act on it.
> >


RE: making VM startup more fine-grained

Posted by Alex Huang <Al...@citrix.com>.

> -----Original Message-----
> From: Hugo Trippaers [mailto:HTrippaers@schubergphilis.com]
> Sent: Monday, June 04, 2012 11:11 AM
> To: cloudstack-dev@incubator.apache.org
> Subject: RE: making VM startup more fine-grained
> 
> Heya,
> 
> What would be the best way to get this done, or worked around on short
> notice? We have some dedicated resources to work on the SDN integration
> this week so we would like to move this forward.

Hugo, 

The fastest way would definitely be to change CitrixResourceBase to provide this functionality.

--Alex


RE: making VM startup more fine-grained

Posted by Hugo Trippaers <HT...@schubergphilis.com>.
Heya,

What would be the best way to get this done, or worked around on short notice? We have some dedicated resources to work on the SDN integration this week so we would like to move this forward.

Cheers,

Hugo

-----Original Message-----
From: Alex Huang [mailto:Alex.Huang@citrix.com] 
Sent: Monday, June 04, 2012 3:10 AM
To: cloudstack-dev@incubator.apache.org
Subject: RE: making VM startup more fine-grained

Volumes can outlive VMs but nics cannot.  

I like your idea in the other email.  Break the StartCommand into two commands, a ConstructCommand and a StartCommand.  That makes sense to me and would resolve this problem.

I think createVif and addVif support will depend on the hypervisor capabilities.  Something we should develop for each hypervisor.

--Alex

> -----Original Message-----
> From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> Sent: Sunday, June 03, 2012 11:05 PM
> To: CloudStack DeveloperList
> Subject: Re: making VM startup more fine-grained
> 
> This could work as well.
> If volumes are prepared as part of CreateVm, is there a reason why 
> nics cannot be as well? Is it because the volumes are prepared before 
> the destination host is chosen?
> 
> On 6/1/12 5:38 AM, "Murali Reddy" <Mu...@citrix.com> wrote:
> 
> >On 01/06/12 10:46 AM, "Chiradeep Vittal" 
> ><Ch...@citrix.com>
> >wrote:
> >
> >>A third way is to split the agent api into 2 commands: CreateVm and 
> >>StartVm.
> >
> >CloudStack already has two separate agent api commands for
> >creating (CreateCommand) and starting (StartCommand) VM operations.
> >Not sure if any optimization influenced this, but unfortunately VM
> >creation operations are spread across both CreateCommand and
> >StartCommand implementations. So volumes for the VM are prepared as
> >part of CreateCommand, while the actual VM creation on the hypervisor,
> >the nics for the VM, the ISO, etc. are created as part of StartCommand.
> >Also, there is no state transition for the VM from Created/Creating to
> >Starting. If we had one, a pluggable service could register for the
> >state change and act on it.
> >


RE: making VM startup more fine-grained

Posted by Alex Huang <Al...@citrix.com>.
Volumes can outlive VMs but nics cannot.  

I like your idea in the other email.  Break the StartCommand into two commands, a ConstructCommand and a StartCommand.  That makes sense to me and would resolve this problem.

I think createVif and addVif support will depend on the hypervisor capabilities.  Something we should develop for each hypervisor.

--Alex

> -----Original Message-----
> From: Chiradeep Vittal [mailto:Chiradeep.Vittal@citrix.com]
> Sent: Sunday, June 03, 2012 11:05 PM
> To: CloudStack DeveloperList
> Subject: Re: making VM startup more fine-grained
> 
> This could work as well.
> If volumes are prepared as part of CreateVm, is there a reason why nics
> cannot be as well? Is it because the volumes are prepared before the
> destination host is chosen?
> 
> On 6/1/12 5:38 AM, "Murali Reddy" <Mu...@citrix.com> wrote:
> 
> >On 01/06/12 10:46 AM, "Chiradeep Vittal" <Ch...@citrix.com>
> >wrote:
> >
> >>A third way is to split the agent api into 2 commands: CreateVm and
> >>StartVm.
> >
> >CloudStack already has two separate agent api commands for
> >creating (CreateCommand) and starting (StartCommand) VM operations.
> >Not sure if any optimization influenced this, but unfortunately VM
> >creation operations are spread across both CreateCommand and
> >StartCommand implementations. So volumes for the VM are prepared as
> >part of CreateCommand, while the actual VM creation on the hypervisor,
> >the nics for the VM, the ISO, etc. are created as part of StartCommand.
> >Also, there is no state transition for the VM from Created/Creating to
> >Starting. If we had one, a pluggable service could register for the
> >state change and act on it.
> >


Re: making VM startup more fine-grained

Posted by Chiradeep Vittal <Ch...@citrix.com>.
This could work as well.
If volumes are prepared as part of CreateVm, is there a reason why nics
cannot be as well? Is it because the volumes are prepared before the
destination host is chosen?

On 6/1/12 5:38 AM, "Murali Reddy" <Mu...@citrix.com> wrote:

>On 01/06/12 10:46 AM, "Chiradeep Vittal" <Ch...@citrix.com>
>wrote:
>
>>A third way is to split the agent api into 2 commands: CreateVm and
>>StartVm.
>
>CloudStack already has two separate agent api commands for
>creating (CreateCommand) and starting (StartCommand) VM operations.
>Not sure if any optimization influenced this, but unfortunately VM
>creation operations are spread across both CreateCommand and
>StartCommand implementations. So volumes for the VM are prepared as
>part of CreateCommand, while the actual VM creation on the hypervisor,
>the nics for the VM, the ISO, etc. are created as part of StartCommand.
>Also, there is no state transition for the VM from Created/Creating to
>Starting. If we had one, a pluggable service could register for the
>state change and act on it.
>


Re: making VM startup more fine-grained

Posted by Murali Reddy <Mu...@citrix.com>.
On 01/06/12 10:46 AM, "Chiradeep Vittal" <Ch...@citrix.com>
wrote:

>A third way is to split the agent api into 2 commands: CreateVm and
>StartVm.

CloudStack already has two separate agent api commands for
creating (CreateCommand) and starting (StartCommand) VM operations.
Not sure if any optimization influenced this, but unfortunately VM
creation operations are spread across both CreateCommand and
StartCommand implementations. So volumes for the VM are prepared as
part of CreateCommand, while the actual VM creation on the hypervisor,
the nics for the VM, the ISO, etc. are created as part of StartCommand.
Also, there is no state transition for the VM from Created/Creating to
Starting. If we had one, a pluggable service could register for the
state change and act on it.
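
To illustrate that last point, a sketch of the kind of state-change hook a
pluggable service could register for; the interface and enum below are
hypothetical, not existing CloudStack code:

    // A pluggable service implementing this could be notified on the
    // Created/Creating -> Starting transition and, for example, push the
    // vif uuid to an SDN controller before the VM actually boots.
    public interface VmStateListener {
        enum State { Creating, Created, Starting, Running, Stopped }

        // Called before the transition is committed; returning false
        // would hold the start until the external system is ready.
        boolean preStateTransition(long vmId, State from, State to);

        void postStateTransition(long vmId, State from, State to);
    }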