Posted to dev@cloudstack.apache.org by Nick LIVENS <ni...@nuagenetworks.net> on 2016/03/24 09:18:55 UTC

[DISCUSS] Request for comments : VPC Inline LoadBalancer (new plugin)

Hi all,

I'd like to propose a new plugin called the "VPC Inline LB" plugin.
The design document can be found at:
https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61340894

Looking forward to hearing your reviews / thoughts.

Thanks!

Kind regards,
Nick Livens

Re: [DISCUSS] Request for comments : VPC Inline LoadBalancer (new plugin)

Posted by Kris Sterckx <kr...@nuagenetworks.net>.
Thanks for this update, Ilya.

Best regards,

Kris




Re: [DISCUSS] Request for comments : VPC Inline LoadBalancer (new plugin)

Posted by ilya <il...@gmail.com>.
Kris and Nick

Noticed an update in the FS:

>> The VPC Inline LB appliance therefore is a regular System VM, exactly
the same as the Internal LB appliance today. Meaning it has 1 guest nic
and 1 control (link-local / management) nic.



This should be ok. The premise behind my concern was: if the Inline LB VM
were to get hacked (re: SSL Heartbleed) and an intruder gained root-level
privileges, he could try to go further into the network in an attempt to
get access to the MGMT layer. Since we have link-local, it's limited to 1
hypervisor only.

Assuming the iptables/firewall on the hypervisor blocks incoming traffic
from the VR link-local address, we should be ok. I guess I need to
double-check this.
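
For illustration, something along these lines on the KVM host should cover
it - a sketch only, assuming the usual cloud0 link-local bridge and the
169.254.0.0/16 control range (chains and ranges to be verified per
deployment):

    # keep replies to connections the host itself opened towards system VMs
    iptables -A INPUT -i cloud0 -m state --state ESTABLISHED,RELATED -j ACCEPT
    # drop everything else arriving from the VR link-local range
    iptables -A INPUT -i cloud0 -s 169.254.0.0/16 -j DROP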

Regards
ilya




Re: [DISCUSS] Request for comments : VPC Inline LoadBalancer (new plugin)

Posted by Kris Sterckx <kr...@nuagenetworks.net>.
Hi all


Thanks for reviewing the FS. Based on the received comments I clarified
further in the FS that the VPC Inline LB appliance solution is based on the
Internal LB appliance solution, only now extended with secondary IPs and
static NAT to a Public IP.

I also corrected the "management" nic to "control" nic. The text really
meant eth1, i.e. the link-local nic on KVM.

Pls find the updated text:

https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=61340894

Architecture and Design description

We will introduce a new CloudStack network plugin, “VpcInlineLbVm”, which is
based on the Internal LoadBalancer plugin. Just like the Internal LB plugin,
it implements load balancing through appliances deployed at run time from
the VR (Router VM) template (which defaults to the System VM template), but
with the LB solution now extended with static NAT to secondary IPs.

The VPC Inline LB appliance therefore is a regular System VM, exactly the
same as the Internal LB appliance today. Meaning it has 1 guest nic and 1
control (link-local / management) nic.

With the new proposed VpcInlineLbVm set as the Public LB provider of a VPC,
when a Public IP is acquired for this VPC and LB rules are configured on
this public IP, a VPC Inline LB appliance is deployed if one does not yet
exist, and an additional guest IP is allocated and set as a secondary IP on
the appliance guest nic, upon which static NAT is configured from the
Public IP to the secondary guest IP.  (See the outline below for the
detailed algorithm.)
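
As a hedged illustration from the API side (these are the existing
CloudStack API calls, with placeholder ids; the appliance deployment,
secondary IP allocation and static NAT setup are the new server-side steps
the plugin performs):

    # acquire a Public IP for the VPC
    associateIpAddress vpcid=<vpc-id>
    # create an LB rule on that Public IP; with VpcInlineLbVm as Public LB
    # provider, this deploys the appliance (if not yet existing), allocates a
    # secondary guest IP and configures static NAT Public IP -> secondary IP
    createLoadBalancerRule publicipid=<public-ip-id> name=web80 \
        algorithm=roundrobin publicport=80 privateport=80
    # assign the real-server guest VMs to the rule
    assignToLoadBalancerRule id=<lb-rule-id> virtualmachineids=<vm1-id>,<vm2-id>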

*In summary*, the VPC Inline LB appliance reuses the Internal LB appliance,
with its solution now extended with Static NAT from Public IPs to
secondary (load-balanced) IPs on the LB appliance guest nic.


Hi Ilya,

Pls let me know whether that clarifies things and sheds new light on the
questions asked.

Can you pls indicate, given the suggested approach of reusing the appliance
mechanism already used for the Internal LB, whether this addresses the
concern or, if it doesn't, pls further clarify the issue you see in this
approach.

Thanks!


Hi Sanjeev, to your 1st question:

Will this LB appliance be placed between guest VMs and the Nuage VSP
provider (Nuage VSP and lb appliance will have one nic in guest network)?

> Please note that the LB appliance is a standard System VM, having 1 nic
in the Guest network and 1 nic in Control. As such, there is no relation
between this appliance and the Nuage VSP.

In the case where Nuage VSP is the Connectivity provider, the appliance has
a guest nic in a Nuage VSP managed (VXLAN) network, like all guest VMs
would. But that depends on the provider selection.

In the specific case of Nuage VSP, publicly load balanced traffic will
indeed flow as follows (pls read on to your 2nd question also):
    -> incoming traffic arrives on the Public IP  (Nuage VSP managed)
    -> .. is static NAT'ted to the secondary IP on the VPC Inline LB VM
       (NAT'ting is Nuage VSP managed)
    -> .. is load balanced to the real-server guest VM IPs  (VPC Inline LB
       VM appliance managed)
    -> .. reaches the real-server guest VM IP

To your 2nd question:

Is there any specific reason for traffic filtering on the lb appliance
instead of on the Nuage VSP? If we configure firewall rules for LB services
on the Nuage VSP instead of on the inline lb appliance (iptables rules for
lb traffic), traffic can be filtered on the Nuage VSP before NAT'ting?

> Please note that the generic Static NAT delegation applies: the
realization of the Static NAT rules being set up depends on the Static NAT
provider in the VPC offering. In case Nuage VSP is the provider for Static
NAT (which it would be in the case of a Nuage SDN backed deployment), the
NAT'ting is effectively done by the Nuage VSP.  If anyone else is the
provider, then that provider is delegated to.
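
As a hedged sketch of where that choice is made - the Static NAT provider
declared in the VPC offering (map-style parameters as with the existing
createVPCOffering API; the exact service list is illustrative only):

    createVPCOffering name=vpc-inline-lb displaytext="VPC with inline LB" \
        supportedservices=Dhcp,Dns,SourceNat,StaticNat,Lb,UserData \
        serviceproviderlist[0].service=StaticNat \
        serviceproviderlist[0].provider=NuageVsp \
        serviceproviderlist[1].service=Lb \
        serviceproviderlist[1].provider=VpcInlineLbVm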

Thanks!


Let me know whether this clarifies things overall.

Pls don't hesitate to ask any further questions.


Best regards,


Kris Sterckx




RE: [DISCUSS] Request for comments : VPC Inline LoadBalancer (new plugin)

Posted by Sanjeev Neelarapu <sa...@accelerite.com>.
Hi Nick Livens,

I have gone through the FS and the following are my review comments:

1. Will this LB appliance be placed between guest VMs and the Nuage VSP provider (Nuage VSP and lb appliance will have one nic in guest network)?
2. Is there any specific reason for traffic filtering on the lb appliance instead of on the Nuage VSP? If we configure firewall rules for LB services on the Nuage VSP instead of on the inline lb appliance (iptables rules for lb traffic), traffic can be filtered on the Nuage VSP before NAT'ting?

Best Regards,
Sanjeev N
Chief Product Engineer, Accelerite
Off: +91 40 6722 9368 | EMail: sanjeev.neelarapu@accelerite.com 



Re: [DISCUSS] Request for comments : VPC Inline LoadBalancer (new plugin)

Posted by ilya <il...@gmail.com>.
Hi Nick,

Being a fan of SDN, I gave this proposal a thorough read.

I have only 1 comment, which you can perhaps use to reconsider:

"Each appliance will have 2 nics, one for management, and one in the
guest network. "

In general, 2 nics - one going to management and one going to guest - is
looked upon very negatively by internal InfoSec teams. This implementation
will make the LB non-compliant from a SOX or PCI perspective.

Proposed alternate solution:
Deploy a VM with 2 NICs but put them both on the same guest network (I
believe support for 2 NICs on the *same* guest network has already been
submitted upstream). 1 NIC for MGMT and 1 NIC for GUEST.

Using the SDN's ability to restrict communication flow (openvswitch or
whatnot), only allow specific connections from the CloudStack MS to the
Inline LB on the MGMT NIC. You will need to block all external GUEST
communication to the MGMT NIC and only let it talk to the CloudStack MS on
specific ports.
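
A minimal sketch of such a restriction with openvswitch - bridge name and
addresses are placeholders, and 3922 is assumed as the system-VM SSH port:

    # permit only the CloudStack MS to reach the LB's MGMT NIC
    ovs-ofctl add-flow br-guest \
        "priority=100,tcp,nw_src=<ms-ip>,nw_dst=<lb-mgmt-ip>,tp_dst=3922,actions=normal"
    # drop any other traffic destined to the MGMT NIC
    ovs-ofctl add-flow br-guest "priority=90,ip,nw_dst=<lb-mgmt-ip>,actions=drop"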

This approach should preserve the internal compliance and won't raise any
red flags.

Perhaps reach out to a client who requested this feature and ask what they
think; maybe they have not thought this through.

Regards
ilya

PS: If we were to entertain the idea of an Inline LB, we would most likely
ask for the approach mentioned above.



