Posted to users@cloudstack.apache.org by Grégoire Lamodière <g....@dimsi.fr> on 2017/07/04 20:15:14 UTC

Network architecture

Dear All,

In the process of implementing a new CloudStack advanced zone (4.9.2), I am wondering about the best network architecture to use.
Any ideas / advice would be highly appreciated.

1/ Each host has 4 network adapters: 2 x 1 GbE and 2 x 10 GbE
2/ The primary store is NFS-based, on 10 GbE
3/ The secondary store is NFS-based, on 10 GbE
4/ The maximum network offering is 1 Gbit/s to the Internet
5/ Hypervisor: XenServer 7
6/ Hardware: HP BladeSystem c7000

Right now, my choice would be:

1/ Bond the 2 gigabit network cards and use the bond for mgmt + public
2/ Use one 10 GbE NIC for the storage network (operations on the secondary store)
3/ Use one 10 GbE NIC for guest traffic (and primary store traffic by design)

This architecture sounds good in terms of performance (using 10 GbE where it makes sense, redundancy on mgmt + public thanks to the bond).

Another option would be to bond the two 10 GbE interfaces and use XenServer network labels to carry storage and guest traffic on the same physical network. This choice would give us failover on storage and guest traffic, but I am wondering whether performance would be badly affected.
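
For reference, here is a rough sketch of how the first layout could be expressed in CloudStack once the physical networks exist in the zone, using the third-party "cs" Python client. The credentials, physical network UUIDs and XenServer network labels (bond-1g, 10g-storage, 10g-guest) are placeholders, and job polling for the asynchronous API calls is omitted.

# Hypothetical sketch only: map CloudStack traffic types to XenServer network
# labels for the first layout, via the third-party "cs" API client.
# Credentials, UUIDs and labels below are placeholders, not real values.
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

# Physical network 1: the bonded 2 x 1 GbE NICs, carrying mgmt + public.
api.addTrafficType(physicalnetworkid="PHYSNET1_UUID",
                   traffictype="Management", xennetworklabel="bond-1g")
api.addTrafficType(physicalnetworkid="PHYSNET1_UUID",
                   traffictype="Public", xennetworklabel="bond-1g")

# Physical network 2: the first 10 GbE NIC, dedicated to storage traffic.
api.addTrafficType(physicalnetworkid="PHYSNET2_UUID",
                   traffictype="Storage", xennetworklabel="10g-storage")

# Physical network 3: the second 10 GbE NIC, carrying guest traffic.
api.addTrafficType(physicalnetworkid="PHYSNET3_UUID",
                   traffictype="Guest", xennetworklabel="10g-guest")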

Do you have any feedback on this?

Thanks all.

Best Regards.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


RE: Network architecture

Posted by Grégoire Lamodière <g....@dimsi.fr>.
Hi Rubens, 

Thank you for your feedback.
Right now, we are not so happy with Xen in terms of stability, upgrade process and HA.

Moving to KVM is an important decision for us, as it means big changes to our daily operations, but if it improves stability and performance, then we'll do it.

Does anyone have any feedback on instance backup with KVM?
In the Xen world, we had many options for live and incremental backups (solutions such as PHD, XenOrchestra, scripts using snapshots, etc.).

About the snapshots, is the freeze behaviour expected? Does it mean that each user taking a snapshot will have their instance frozen during the snapshot? If so, this is a huge issue, isn't it?
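
For context, with plain libvirt on KVM a live external (disk-only) snapshot normally keeps the guest running; with the quiesce flag it only briefly pauses I/O while the qemu-guest-agent freezes the filesystems. Below is a minimal sketch using the libvirt Python bindings; the domain and snapshot names are placeholders, and CloudStack's own KVM snapshot workflow may still behave differently.

# Hypothetical sketch: take a live, disk-only external snapshot of a KVM guest
# with libvirt-python. Requires qemu-guest-agent inside the guest for quiesce.
import libvirt

conn = libvirt.open("qemu:///system")
dom = conn.lookupByName("my-instance")          # placeholder domain name

snapshot_xml = """
<domainsnapshot>
  <name>backup-2017-07-06</name>
  <description>live disk-only snapshot</description>
</domainsnapshot>
"""

flags = (libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_DISK_ONLY |
         libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_ATOMIC |
         libvirt.VIR_DOMAIN_SNAPSHOT_CREATE_QUIESCE)

snap = dom.snapshotCreateXML(snapshot_xml, flags)
print("created snapshot:", snap.getName())
conn.close()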

Thanks all.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


Re: Network architecture

Posted by Rubens Malheiro <ru...@gmail.com>.
I'll give you an opinion; excuse my English, I'm using a translator.

I recently moved a whole pod of 6 Xen machines to KVM. I'd say it has been much quieter and seems to be more stable, for both Windows and Linux VMs.

But it is necessary to convert the machines from VHD to qcow2 before deploying.
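
For illustration, that conversion step usually comes down to a single qemu-img call, sketched here from Python; the file names are placeholders, and qemu-img refers to the VHD format as "vpc".

# Hypothetical sketch: convert an exported XenServer VHD disk to qcow2 with
# qemu-img before importing it on the KVM side. File names are placeholders.
import subprocess

src = "exported-disk.vhd"      # disk exported from XenServer
dst = "converted-disk.qcow2"   # image to use on KVM

# "vpc" is qemu-img's name for the VHD format; -p prints progress.
subprocess.run(
    ["qemu-img", "convert", "-p", "-f", "vpc", "-O", "qcow2", src, dst],
    check=True,
)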

Works well.

What is really bad are the snapshots that can be enabled in CloudStack: they take time and the VM is frozen.

I had to migrate away from XenServer because no version recognizes my new 10 GbE cards.

Sorry for my English; this is just an opinion.


RE: Network architecture

Posted by Grégoire Lamodière <g....@dimsi.fr>.
Dear Paul / Remi, 

Thank you for your feedback and the bonding advice.
We'll go in this direction.

@Remi, you are right about KVM.
Right now, we still use XenServer because of its snapshots and backup solutions.
If KVM does the job properly, we might give it a try in this new zone.
Do you have any feedback on migrating instances from a XenServer zone to a KVM zone? (Should we just uninstall XenServer Tools, export the VM as a template and download it in the new zone? Or is it a more complex process?)
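
In case it helps, the last step of such a migration (registering the converted qcow2 image as a template in the KVM zone) could look roughly like the sketch below, using the third-party "cs" client; the URL and UUIDs are placeholders and this is not a confirmed procedure.

# Hypothetical sketch: register a converted qcow2 image as a template in the
# new KVM zone through the CloudStack API. URL and UUIDs are placeholders.
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

api.registerTemplate(
    name="migrated-vm-01",
    displaytext="VM migrated from the XenServer zone",
    format="QCOW2",
    hypervisor="KVM",
    ostypeid="OSTYPE_UUID",
    url="http://fileserver.example.com/converted-disk.qcow2",
    zoneid="KVM_ZONE_UUID",
)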

Thanks again.

---
Grégoire Lamodière
T/ + 33 6 76 27 03 31
F/ + 33 1 75 43 89 71


RE: Network architecture

Posted by Paul Angus <pa...@shapeblue.com>.
Hi Grégoire,

With those NICs (and without any other background), I'd go with bonding your 1G NICs together and your 10G NICs together, and put primary and secondary storage over the 10G bond. Mgmt traffic is minimal and spread over all of your hosts, and so is public traffic, so these would be fine over the bonded 1 Gbps links. Finally, guest traffic would normally also be fine over the 1 Gb links, especially if you throttle the traffic a little, unless you know that you'll have especially high guest traffic.
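
As an illustration of the throttling part, one way to cap guest traffic in CloudStack is the networkrate parameter on a compute offering; here is a sketch with the third-party "cs" Python client, where the offering name and sizes are placeholders.

# Hypothetical sketch: create a compute offering whose guest NIC is throttled
# to 200 Mbit/s via the networkrate parameter. Values are placeholders.
from cs import CloudStack

api = CloudStack(endpoint="https://cloud.example.com/client/api",
                 key="API_KEY", secret="SECRET_KEY")

api.createServiceOffering(
    name="2vcpu-4gb-throttled",
    displaytext="2 vCPU, 4 GB RAM, guest NIC capped at 200 Mbit/s",
    cpunumber=2,
    cpuspeed=2000,      # MHz
    memory=4096,        # MB
    networkrate=200,    # Mbit/s cap applied to the guest NIC
)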



Kind regards,

Paul Angus

paul.angus@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue


Re: Network architecture

Posted by Remi Bergsma <RB...@schubergphilis.com>.
Hi,

My advice is to make it as resilient as possible while keeping it simple. Using a single 10G NIC towards primary storage means all your VMs will go down / be halted / risk corruption when the switch is rebooted for maintenance, or dies, etc. I'd always use an MLAG / port channel with 2 x 10G towards different switches. Then you can also use them active/active if your switches support it. We're using Arista, and that can handle this well. Having redundancy on public without having redundancy on the backend doesn't really help, in my opinion.
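
For reference, on the XenServer side an LACP bond of the two 10 GbE PIFs can be created with the xe CLI, roughly as in the sketch below; the UUIDs are placeholders, LACP requires the vSwitch network backend, and the switch side must be configured as an MLAG / port channel across both switches.

# Hypothetical sketch: create an LACP (802.3ad) bond of two 10 GbE PIFs on a
# XenServer host by driving the xe CLI from Python. UUIDs are placeholders.
import subprocess

def xe(*args):
    """Run an xe command on the host and return its trimmed output."""
    result = subprocess.run(["xe", *args], check=True,
                            capture_output=True, text=True)
    return result.stdout.strip()

network_uuid = "NETWORK_UUID"        # network the bond will attach to
pif_uuids = "PIF1_UUID,PIF2_UUID"    # the two 10 GbE physical interfaces

bond_uuid = xe("bond-create",
               "network-uuid=" + network_uuid,
               "pif-uuids=" + pif_uuids,
               "mode=lacp")
print("created bond:", bond_uuid)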

Is there a specific reason to use XenServer? KVM is very mature these days and I'd recommend it over XenServer. I have hundreds of both running, and in my experience KVM is faster on the same hardware and has fewer issues to deal with. XenServer will work, for sure. I just think KVM (for example on CentOS 7) will give you a better experience.

Regards,
Remi


