Posted to users@cloudstack.apache.org by Valery Ciareszka <va...@gmail.com> on 2013/04/17 12:03:05 UTC

cloudstack advanced networking problems

Hi all,

I have the following problem.
Environment: CS 4.0.1, KVM, CentOS 6.4 (management + node1 + node2),
OpenIndiana NFS server as primary and secondary storage.
I use advanced networking in the zone. I split management/public/guest traffic
into different VLANs and use KVM network labels (bridge names):

# cat /etc/cloud/agent/agent.properties |grep device
guest.network.device=cloudbrguest
private.network.device=cloudbrmanage
public.network.device=cloudbrpublic

# brctl show|grep cloudbr
cloudbrguest            8000.90e2ba39f499       yes             eth3.211
cloudbrmanage           8000.90e2ba39f499       yes             eth3.210
cloudbrpublic           8000.90e2ba39f499       yes             eth3.221
cloudbrstor             8000.002590881420       yes             eth0

Everything works fine when all VMs are on the same node. But when a VM is
deployed on a different node, it does not "see" the virtual router.
I've made a scheme of the network: http://thesuki.org/temp/bridgevlan.png
Let's assume VM1 is the virtual router for the network with VLAN ID 1234 and
VM2 is a VM running CentOS.

When a new client deploys their first VM, the guest network is provisioned in a
separate VLAN (VLAN 1234 on the scheme). CloudStack creates an 802.1Q-in-Q
interface, eth3.211.1234, plus a virtual bridge, CloudVirBr1234, and puts
eth3.211.1234 into CloudVirBr1234; then it creates the virtual router VM and
plugs its vnet interface into that CloudVirBr1234.
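For reference, the setup the agent performs on each node can be reproduced by
hand roughly like this (a sketch using the vconfig/brctl tools of that CentOS 6
era; interface and bridge names match the scheme above, and the final brctl
line is normally done by libvirt when the VM starts):

```shell
# Create the inner (802.1Q-in-Q) VLAN interface on top of the guest trunk;
# with the default naming scheme this yields eth3.211.1234
vconfig add eth3.211 1234
ip link set eth3.211.1234 up

# Create the per-network guest bridge and attach the VLAN interface to it
brctl addbr CloudVirBr1234
brctl addif CloudVirBr1234 eth3.211.1234
ip link set CloudVirBr1234 up

# The virtual router's vnet interface is then plugged into the same bridge
# by libvirt when the VM starts, e.g.:
# brctl addif CloudVirBr1234 vnet0
```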

When a VM is deployed on node2 in the same network (1234), the same things are
done there with its interfaces (eth3.211.1234 plus the virtual bridge
CloudVirBr1234).

But if I try to ping 10.0.0.1 from 10.0.0.2, I can't see the packets on
VM1 (10.0.0.1). I can see them on node1 on interface eth3 (tcpdump -nei eth3)
and on interface eth3.211 (tcpdump -nei eth3.211), but I don't see them on
node1/eth3.211.1234 (tcpdump -nei eth3.211.1234), and ifconfig shows that
0 bytes were ever received by that interface:
eth3.211.1234 Link encap:Ethernet  HWaddr 90:E2:BA:39:F4:99
          inet6 addr: fe80::92e2:baff:fe39:f499/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1509 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:94830 (92.6 KiB)

If I remove eth3.211 from the cloudbrguest bridge on both nodes (red arrows on
the scheme) by running "brctl delif cloudbrguest eth3.211" on both hosts, I
can ping 10.0.0.1 from 10.0.0.2 and vice versa, and I can see packets from
10.0.0.2 on node1/eth3.211.1234:

eth3.211.1234 Link encap:Ethernet  HWaddr 90:E2:BA:39:F4:99
          inet6 addr: fe80::92e2:baff:fe39:f499/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:17 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1555 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:1412 (1.3 KiB)  TX bytes:98218 (95.9 KiB)

I tried to permanently remove eth3.211 from the bridge cloudbrguest, but that
breaks the CloudStack agent configuration after a reboot: a physical interface
must be connected to cloudbrguest so that the agent knows on which interface
to create the 802.1Q-in-Q VLANs.

I would appreciate any help.

-- 
Regards,
Valery

http://protocol.by/slayer

Re: cloudstack advanced networking problems

Posted by Valery Ciareszka <va...@gmail.com>.
I was finally able to fix this with ebtables, as described here:

http://www.spinics.net/lists/vlan/msg00607.html

by running "ebtables -t broute -A BROUTING -p 802_1Q -i eth3.211 -j DROP" on
both nodes.
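For anyone hitting the same thing: in the broute table, -j DROP does not
discard the frame; it tells the kernel not to bridge it, so the frame is
handed up the local stack, where the eth3.211.1234 VLAN device picks it up
instead of cloudbrguest consuming it. A sketch of applying and persisting the
rule on CentOS 6 (the save/chkconfig steps assume the stock ebtables init
script from the distribution package):

```shell
# Force 802.1Q-tagged frames arriving on eth3.211 up the local stack
# (in the broute table, -j DROP means "do not bridge", not "discard")
ebtables -t broute -A BROUTING -p 802_1Q -i eth3.211 -j DROP

# Persist across reboots with the stock CentOS 6 ebtables service
service ebtables save
chkconfig ebtables on
```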

On Wed, Apr 17, 2013 at 1:03 PM, Valery Ciareszka
<va...@gmail.com> wrote:

> Hi all,
>
> [snip: original message quoted in full]



-- 
Regards,
Valery

http://protocol.by/slayer

Re: cloudstack advanced networking problems

Posted by Valery Ciareszka <va...@gmail.com>.
Hi Sébastien,

The physical switch is OK: VLAN 211 is declared there, and I can see packets
from node2's eth3.211.1234 on node1's eth3.211. Also, if I pull eth3.211 out
of cloudbrguest, everything works, so the problem is not on the switch but
somewhere in the Linux VLAN/bridge internals.
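The checks above can be repeated hop by hop with tcpdump while pinging
10.0.0.1 from VM2 (a sketch; the VLAN filters assume the outer tag 211 is
still present on eth3 and already stripped on eth3.211):

```shell
# On node1: outer-tagged frames should be visible on the trunk
tcpdump -nei eth3 vlan 211
# Inner tag 1234 should be visible on the outer VLAN device
tcpdump -nei eth3.211 vlan 1234
# With eth3.211 enslaved to cloudbrguest, nothing shows up here
tcpdump -nei eth3.211.1234 icmp
# MAC learning on the bridge that is consuming the frames
brctl showmacs cloudbrguest
```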

@all:

Did anybody try a similar configuration (a VLAN for the guest network on a
physical NIC plus VLAN-in-VLAN)?


On Wed, Apr 17, 2013 at 1:12 PM, COCHE Sébastien <SC...@sigma.fr> wrote:

> Hi Valery,
>
> I had the same issue in the past, and the problem was that I had forgotten
> to declare the VLAN on the physical switch.
> So when a VM was created on the same hypervisor as the vrouter, it worked
> fine, but not when the hypervisor was different.
> Make sure that all the VLANs you declared in the CS pool are created on the
> physical switch.
>
> Regards
>
>
> -----Original message-----
> From: Valery Ciareszka [mailto:valery.tereshko@gmail.com]
> Sent: Wednesday, 17 April 2013 12:03
> To: users
> Subject: cloudstack advanced networking problems
>
> [snip: original message quoted in full]
>



-- 
Regards,
Valery

http://protocol.by/slayer

RE: cloudstack advanced networking problems

Posted by COCHE Sébastien <SC...@sigma.fr>.
Hi Valery,

I had the same issue in the past, and the problem was that I had forgotten to declare the VLAN on the physical switch.
So when a VM was created on the same hypervisor as the vrouter, it worked fine, but not when the hypervisor was different.
Make sure that all the VLANs you declared in the CS pool are created on the physical switch.

Regards


-----Original message-----
From: Valery Ciareszka [mailto:valery.tereshko@gmail.com]
Sent: Wednesday, 17 April 2013 12:03
To: users
Subject: cloudstack advanced networking problems

[snip: original message quoted in full]