Posted to dev@cloudstack.apache.org by Pierre-Luc Dion <pd...@cloudops.com> on 2018/01/12 14:13:48 UTC

[DISCUSS] running sVM and VR as HVM on XenServer

Hi,

We need to start an architecture discussion about running the SystemVMs and
Virtual Routers as HVM instances on XenServer. With the recent
Meltdown-Spectre disclosures, one of the current mitigation steps is to run
VMs as HVM on XenServer, to contain a user-space attack from a guest OS.

A recent Citrix XenServer hotfix (XS71ECU1009) forces VMs to start as HVM.
This is currently problematic for Virtual Routers and SystemVMs because
CloudStack uses the PV "OS boot Options" field to preconfigure the VR's eth0
(cloud_link_local). Under HVM the "OS boot Options" field is not accessible
to the VM, so the VR fails to be configured properly.
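
For illustration, this is roughly what that channel looks like today; the
exact field names and values below are from memory and should be read as an
approximation, not the actual strings:

    # dom0: the boot options CloudStack writes into the VR's VM record
    xe vm-param-get uuid=<vr-uuid> param-name=PV-args
    # hypothetical output, something along the lines of:
    #   template=domP name=r-4-VM eth0ip=169.254.3.15 eth0mask=255.255.0.0 type=router

    # inside a PV guest the same string is visible on the kernel command
    # line, which is what the VR's early-init parses; under HVM this channel
    # is simply not there:
    cat /proc/cmdline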

I currently see 2 potential approaches for this:
1. Run a DHCP server in dom0, managed by CloudStack, so the VR's eth0 would
receive its network configuration at boot.
2. Change the current way of managing VRs and SVMs on XenServer, potentially
doing the same as with VMware: use pod management networks and assign a pod
IP to each VR.

I don't know how this is implemented in KVM; maybe cloning the KVM approach
would work too. Could someone explain on this thread how it works there?

I'm somewhat of a fan of approach #2 because it could facilitate VR
monitoring and logging, although a migration path for an existing cloud
could be complex.

Cheers,


Pierre-Luc

Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Tim Mackey <tm...@gmail.com>.
There isn't anything I can think of wrt reliability. If the usage is
limited to VR boot, then I don't see anything on the surface to limit
performance.

In other words, xenstore as a solution seems a reasonable approach.

-tim

On Mon, Jan 15, 2018 at 8:26 PM, Pierre-Luc Dion <pd...@cloudops.com> wrote:

> Hi Tim,
>
> As long as it works, I think xenstore would do just fine, since it's only
> used for the initial instruction set at VR boot so that eth0 can be
> configured. Unless you are saying we should not rely on xenstore in terms
> of reliability?

Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Pierre-Luc Dion <pd...@cloudops.com>.
Hi Tim,

As long as it works, I think xenstore would do just fine, since it's only
used for the initial instruction set at VR boot so that eth0 can be
configured. Unless you are saying we should not rely on xenstore in terms of
reliability?


Pierre-Luc DION
Architecte de Solution Cloud | Cloud Solutions Architect
t 855.652.5683

CloudOps | Votre partenaire infonuagique | Cloud Solutions Experts
420 rue Guy | Montreal | Quebec | H3J 1S6
w cloudops.com | tw @CloudOps_


Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Tim Mackey <tm...@gmail.com>.
> We found that we can use xenstore-read / xenstore-write to send data from
> dom0 to the domUs, which in our case are the VRs and SVMs. Any reason not
> to use this approach?

xenstore has had some issues in the past. The most notable were limitations
on the number of event channels in use, followed by overall performance
impact. IIRC the event channel issues were fully resolved in XenServer 6.5,
but they do speak to a need to test whether there are any changes to the
maximum number of VMs which can be reliably supported. It also limits legacy
support (in case that matters).

Architecturally I think this is a reasonable approach to the problem. One
other thing to note is that xapi replicates xenstore information to all
members of a pool. That might impact RVRs.
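
(If it helps anyone testing this, dom0 ships CLI tools for poking at the
store; the key name below is illustrative:)

    # dom0: list a domain's xenstore subtree to verify what the guest sees
    xenstore-ls /local/domain/<domid>

    # xapi also keeps a per-VM xenstore-data map that is seeded into the
    # domain's subtree at VM start; as noted above, that is the piece xapi
    # replicates across the pool:
    xe vm-param-set uuid=<vr-uuid> xenstore-data:vm-data/cloudstack/init="..."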

-tim

[1] "xenstore is not a high-performance facility and should beused only for
small amounts of control plane data."
https://xenbits.xen.org/docs/4.6-testing/misc/xenstore.txt


Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Pierre-Luc Dion <pd...@cloudops.com>.
After some verification with Syed and Khosrow,

We found that we can use xenstore-read / xenstore-write to send data from
dom0 to the domUs, which in our case are the VRs and SVMs. Any reason not to
use this approach? That way we would not need an architectural change for
XenServer pods, and it would support both HVM and PV virtual routers. More
testing is required, for sure, and the VR would need to have the Xen tools
pre-installed.
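
A rough sketch of the mechanics we tested follows; the paths, keys, and
values are illustrative only, not final naming:

    # dom0: write the VR's boot configuration into its xenstore subtree
    xenstore-write /local/domain/<domid>/vm-data/cloudstack/init \
        "template=domP name=r-4-VM eth0ip=169.254.3.15 eth0mask=255.255.0.0 type=router"

    # inside the VR (this is where the Xen tools come in), read it back
    # during early boot and configure eth0 accordingly:
    xenstore-read vm-data/cloudstack/init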


Pierre-Luc DION
Architecte de Solution Cloud | Cloud Solutions Architect
t 855.652.5683

CloudOps | Votre partenaire infonuagique | Cloud Solutions Experts
420 rue Guy | Montreal | Quebec | H3J 1S6
w cloudops.com | tw @CloudOps_


Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Syed Ahmed <sa...@cloudops.com>.
KVM uses a VirtIO channel to send information about the IP address and
other params to the SystemVMs. We could use a similar strategy in XenServer
using XenStore. This would involve minimal changes to the code while
keeping backward compatibility.
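
For context, on KVM this is a virtio-serial channel in the system VM's
libvirt definition, and the guest reads its parameters from the matching
port device. Roughly like the following (the channel and device names here
are from memory and merely illustrative):

    # host: show the channel libvirt defines for a system VM
    virsh dumpxml r-4-VM | grep -A 3 '<channel'

    # guest: the channel appears as a virtio-serial character device that
    # the VR's early-init reads its boot parameters from
    ls /dev/virtio-ports/
    cat /dev/virtio-ports/<channel-name>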




Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Simon Weller <sw...@ena.com.INVALID>.
> Can someone confirm if VRs receive management IPs in KVM deployments?

They do not. They receive a link-local IP address that is used for host
agent to VR communication. All VR commands are proxied through the host
agent, and host agent to VR communication is over SSH.
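
For anyone unfamiliar with that path, the flow is roughly as follows; the
port, key path, and script name are from memory and may not be exact:

    # on the host: the agent reaches the VR over its link-local address,
    # using CloudStack's management key on a non-standard SSH port
    ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.3.15 \
        /opt/cloud/bin/update_config.py router.json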



Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Rafael Weingärtner <ra...@gmail.com>.
But we are already using this design in VMware deployments (not sure about
KVM). The management network is already an isolated network used only by
system VMs and ACS. Unless we are attacked by some internal agent, we are
safe from customer attacks through the management networks. Also, we can (if
we don't already) restrict access to the system VMs (VRs, SSVM, console
proxy, and others to come) so that they are only reachable via these
management interfaces.

Can someone confirm if VRs receive management IPs in KVM deployments?


-- 
Rafael Weingärtner

Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Syed Ahmed <sa...@cloudops.com>.
The reason why we used link local in the first place was to isolate the VR
from directly accessing the management network. This provides another layer
of security in case of a VR exploit. This will also have a side effect of
making all VRs visible to each other. Are we okay accepting this?

Thanks,
-Syed


Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Tim Mackey <tm...@gmail.com>.
dom0 already has a DHCP server listening for requests on internal
management networks. I'd be wary of trying to manage it from an external
service like CloudStack, lest it get reset by a XenServer patch. This alone
makes me favor option #2. I also think option #2 simplifies network design
for users.

Agreed on making this as consistent across flows as possible.




Re: [DISCUSS] running sVM and VR as HVM on XenServer

Posted by Rafael Weingärtner <ra...@gmail.com>.
It looks reasonable to manage VRs via the management IP network. We should
focus on using the same workflow across the different deployment scenarios.





-- 
Rafael Weingärtner