Posted to users@cloudstack.apache.org by Tanner Danzey <ar...@gmail.com> on 2014/02/04 18:50:17 UTC

Cloudstack 4.2.1, KVM & Advanced networking headaches

Hello all. My name is Tanner Danzey. Just a little introduction since this
will be my first post to this list. I am from Fargo, North Dakota and I
have had fairly significant Linux & Linux server experience for about four
years.

My coworker and I are working on rolling out a CloudStack cloud using
Ubuntu 13.10, Ceph, RADOS Gateway (S3), and KVM on commodity hardware.
Our Ceph and RADOS setup is solid as far as our testing can tell.
Installing the management server is a breeze, so that's not an issue
either. Our issue seems to be of a networking nature.

Our desire to use advanced networking has been complicated by our other
requirements. Originally we planned to use LACP & trunk configurations
throughout to get the highest bandwidth and redundancy possible, but we
discovered that the switches we are using (two 48-port Catalyst 2960s
in a stacked configuration) only allow six port channels, which
complicated that plan. We are still using bonded adapters, but in
active-backup mode so that we do not need switch-side configuration
tricks or port channels. I have attached an example of our KVM
hypervisor configuration.
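
For reference (in case the attachment gets stripped by the list), here
is a cut-down sketch of what that file looks like in
/etc/network/interfaces on Ubuntu 13.10. The bridge name, addresses and
VLAN handling are placeholders rather than our exact values:

    # needs the ifenslave and bridge-utils packages
    auto em1
    iface em1 inet manual
        bond-master bond0

    auto em2
    iface em2 inet manual
        bond-master bond0

    auto bond0
    iface bond0 inet manual
        bond-mode active-backup
        bond-miimon 100
        bond-slaves none

    # management bridge; if the management VLAN is tagged on the trunk,
    # bridge a bond0.<vlan> subinterface here instead (vlan package)
    auto cloudbr0
    iface cloudbr0 inet static
        address 192.168.100.11     # placeholder management IP
        netmask 255.255.255.0
        gateway 192.168.100.1
        bridge_ports bond0
        bridge_stp off
        bridge_fd 0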

We have interface bond0 in active-backup mode with em1 and em2 as its
slaves, and both physical ports are connected to switch trunk ports.
Here's where things get silly: our KVM nodes can be pinged and managed
on their assigned management IPs, even after we create a zone and
assign the bridges their respective traffic types. However, they are
not connected to the public network. Some variations of this
configuration even result in no connectivity other than link-local.
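
In case it helps with diagnosis, these are the host-side checks we can
run and post output from; VLAN 50 is the planned public VLAN from the
list below, and the bridge/bond names are only examples:

    # is the bond in the bridge, and which slave is currently active?
    brctl show
    cat /proc/net/bonding/bond0

    # are tagged frames for the public VLAN reaching the bond at all?
    tcpdump -e -nn -i bond0 vlan 50

    # is 8021q loaded, and do the expected VLAN subinterfaces exist?
    lsmod | grep 8021q
    ip -d link show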

Essentially, we are trying to find the best way to approach this
situation. We are open to using Open vSwitch. Our VLANs will be 50 for
public, 100 for management / storage, and 200-300 for guest VLANs,
unless there is a more pragmatic arrangement.
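
If Open vSwitch turns out to be the saner route, my (untested)
understanding is that the host side would look roughly like the
following, with CloudStack creating the per-VLAN ports itself
afterwards. The bridge name and the agent settings are what I could
glean from the docs, so corrections are welcome:

    # Ubuntu package: openvswitch-switch
    ovs-vsctl add-br cloudbr0
    ovs-vsctl add-bond cloudbr0 bond0 em1 em2
    ovs-vsctl set port bond0 bond_mode=active-backup

    # and point the CloudStack agent at OVS instead of Linux bridges,
    # in /etc/cloudstack/agent/agent.properties:
    #   network.bridge.type=openvswitch
    #   libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver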

Our plan after this deployment is to help brush up documentation where
possible regarding some of these edge-case scenarios and potentially do a
thorough writeup. Any and all help is appreciated, and if you require
compensation for the help I'm sure something can be arranged.

Thanks in advance, sorry for the overly long message :)

Re: Cloudstack 4.2.1, KVM & Advanced networking headaches

Posted by Zack Payton <zp...@gmail.com>.
Tanner,

I myself am very new to CloudStack, so take what I say with a few
grains of salt. I am using KVM with bonded interfaces as well, though
mine are in active/active mode (my switch supports it). Additionally, I
used CentOS rather than Ubuntu.

I have never been able to get CloudStack on KVM to work when I tried to
put management on the same bond as the guest/public networks. For my
configuration, I set up one dedicated interface for management
(configured as an access port on the switch) and bridged it via
cloudbr0. Then I set up my bond0 interface to bridge to cloudbr1. In
the advanced zone configuration, I set my management network KVM label
to cloudbr0 and both the public and guest KVM labels to cloudbr1 (note
that on the switch, the port channel connecting to the bond0 physical
ports was configured as an 802.1q trunk).
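
Spelled out with placeholder interface names (on CentOS the persistent
versions go in the ifcfg files, this is just the quick ad-hoc form):

    # dedicated management NIC, access port on the switch side
    brctl addbr cloudbr0
    brctl addif cloudbr0 eth0      # placeholder name for the mgmt NIC

    # bonded pair on an 802.1q trunk, carries guest + public
    brctl addbr cloudbr1
    brctl addif cloudbr1 bond0

    # zone wizard traffic labels:
    #   Management -> cloudbr0
    #   Guest      -> cloudbr1
    #   Public     -> cloudbr1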

After that, my biggest difficulties were getting the VLANs for the
guests up and running, and making sure the public network had access to
a DNS server and internet routing so that the SSVM could download the
default template and connect to secondary storage.
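
For the SSVM part, the quickest sanity check I know of is to log into
it over its link-local address and run the built-in check script. The
key path and port below are the stock CloudStack ones, so double-check
them against 4.2.1:

    # from the KVM host running the SSVM, using the link-local IP from the UI
    ssh -i /root/.ssh/id_rsa.cloud -p 3922 root@169.254.x.x

    # then, inside the SSVM: checks DNS, the management server port,
    # and the secondary storage mount
    /usr/local/cloud/systemvm/ssvm-check.sh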

If you have a 3rd NIC in those machines, my advice would be to use that
for management and leave the guest/public networks on the bonded
interface. If you don't have a 3rd NIC, forget about the bonding and
just use one NIC for guest/public and one for management. I'm sure
there is a way to get around this, but after an excruciating number of
attempts, this has been the only way I've found to make it work.

I'm sure the dudes on this list might have some better advice though,
because as I said, I'm rather new to CloudStack.

Good Luck,
Z

