Posted to users@cloudstack.apache.org by Mr Jazze <mr...@gmail.com> on 2020/04/12 00:43:00 UTC

Re: VXLAN Connectivity

ubtkvm2:~$ *netstat -gn*

IPv6/IPv4 Group Memberships
Interface       RefCnt Group
--------------- ------ ---------------------
lo              1      224.0.0.1
eth0            1      224.0.0.1
*eth1           1      239.0.7.227*
*eth1           1      239.0.7.220*
eth1            1      224.0.0.1
eth0.1001       1      224.0.0.1
cloudbr0        1      224.0.0.1
eth1.1003       1      224.0.0.1
cloudbr2        1      224.0.0.1
eth0.1002       1      224.0.0.1
cloudbr1        1      224.0.0.1
cloud0          1      224.0.0.1
vxlan2012       1      224.0.0.1
brvx-2012       1      224.0.0.1
vxlan2019       1      224.0.0.1
brvx-2019       1      224.0.0.1
vnet0           1      224.0.0.1
vnet1           1      224.0.0.1
vnet2           1      224.0.0.1
vnet3           1      224.0.0.1
vnet4           1      224.0.0.1
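Note the two 239.0.7.x memberships on eth1 above: they appear to encode the
VXLAN network IDs directly in the last three octets (VNI 2019 = 0x07E3 ->
239.0.7.227, VNI 2012 = 0x07DC -> 239.0.7.220). A quick sketch of that
apparent mapping (vni_to_group is a hypothetical helper name, not a
CloudStack script):

```shell
# Sketch: derive the multicast group that seems to correspond to a given
# VXLAN network ID (VNI), by packing the VNI into the low three octets
# of 239.0.0.0/8.
vni_to_group() {
  local vni=$1
  echo "239.$(( (vni >> 16) % 256 )).$(( (vni >> 8) % 256 )).$(( vni % 256 ))"
}

vni_to_group 2019   # 239.0.7.227  (matches the eth1 membership above)
vni_to_group 2012   # 239.0.7.220
```

If the hosts were subscribed to different groups for the same VNI, BUM
traffic (broadcast/unknown-unicast/multicast, including DHCP) could not
cross hosts, so these memberships are worth comparing on both hypervisors.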

On Thu, Mar 19, 2020 at 10:50 PM Mr Jazze <mr...@gmail.com> wrote:

> Hi @li jerry,
>
> There is no physical switch involved, as the whole setup is configured in
> a nested Hyper-V environment. Yes, the virtual switch is configured with
> MTU 9000 and trunked VLANs.
>
> Here is an overview:
>
> External vSwitch = CloudStack (MTU 9000)
> All ethX interfaces are VLAN ports off of the vSwitch:
>
> auto eth0.1001
> iface eth0.1001 inet manual
> mtu 9000
>
> auto eth0.1002
> iface eth0.1002 inet manual
> mtu 9000
>
> auto eth1.1003
> iface eth1.1003 inet manual
> mtu 9000
>
> # MANAGEMENT BRIDGE
> auto cloudbr0
> iface cloudbr0 inet static
>         address 192.168.101.11
>         netmask 255.255.255.0
>         gateway 192.168.101.1
>         dns-nameservers 192.168.101.1
>         bridge_ports eth0.1001
>         bridge_fd 5
>         bridge_stp off
>         bridge_maxwait 1
>
> # PUBLIC BRIDGE
> auto cloudbr1
> iface cloudbr1 inet manual
>         bridge_ports eth0.1002
>         bridge_fd 5
>         bridge_stp off
>         bridge_maxwait 1
>
> # GUEST (PRIVATE) BRIDGE
> auto cloudbr2
> iface cloudbr2 inet static
>         address 192.168.254.11
>         netmask 255.255.255.0
>         bridge_ports eth1.1003
>         bridge_fd 5
>         bridge_stp off
>         bridge_maxwait 1
>
> cloudbr0, cloudbr1, and cloudbr2 were assigned to their appropriate
> traffic labels.
>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode
> DEFAULT group default qlen 1
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode
> DEFAULT group default qlen 1000
>     link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc mq state UP mode
> DEFAULT group default qlen 1000
>     link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
> 4: eth1.1003@eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc
> noqueue master cloudbr2 state UP mode DEFAULT group default qlen 1000
>     link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
> 5: eth0.1001@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc
> noqueue master cloudbr0 state UP mode DEFAULT group default qlen 1000
>     link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 6: cloudbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue
> state UP mode DEFAULT group default qlen 1000
>     link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 7: cloudbr2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue
> state UP mode DEFAULT group default qlen 1000
>     link/ether 00:15:5d:0a:0d:80 brd ff:ff:ff:ff:ff:ff
> 8: eth0.1002@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc
> noqueue master cloudbr1 state UP mode DEFAULT group default qlen 1000
>     link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 9: cloudbr1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc noqueue
> state UP mode DEFAULT group default qlen 1000
>     link/ether 00:15:5d:0a:0d:7e brd ff:ff:ff:ff:ff:ff
> 10: cloud0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state
> UP mode DEFAULT group default qlen 1000
>     link/ether fe:00:a9:fe:44:96 brd ff:ff:ff:ff:ff:ff
> 11: vnet0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master cloud0 state UNKNOWN mode DEFAULT group default qlen 1000
>     link/ether fe:00:a9:fe:44:96 brd ff:ff:ff:ff:ff:ff
> 13: vnet2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast
> master cloudbr0 state UNKNOWN mode DEFAULT group default qlen 1000
>     link/ether fe:00:0e:00:00:1c brd ff:ff:ff:ff:ff:ff
> 15: vnet4: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc pfifo_fast
> master cloudbr1 state UNKNOWN mode DEFAULT group default qlen 1000
>     link/ether fe:00:87:00:00:84 brd ff:ff:ff:ff:ff:ff
> 17: vnet6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> master cloud0 state UNKNOWN mode DEFAULT group default qlen 1000
>     link/ether fe:00:a9:fe:62:dc brd ff:ff:ff:ff:ff:ff
> 18: vnet7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 9000 qdisc htb master
> cloudbr1 state UNKNOWN mode DEFAULT group default qlen 1000
>     link/ether fe:00:ee:00:00:86 brd ff:ff:ff:ff:ff:ff
> 19: vxlan2005: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue
> master brvx-2005 state UNKNOWN mode DEFAULT group default qlen 1000
>     link/ether 2e:08:0e:8f:da:2b brd ff:ff:ff:ff:ff:ff
> 20: brvx-2005: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc noqueue
> state UP mode DEFAULT group default qlen 1000
>     link/ether 2e:08:0e:8f:da:2b brd ff:ff:ff:ff:ff:ff
> 21: vnet8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 8950 qdisc htb master
> brvx-2005 state UNKNOWN mode DEFAULT group default qlen 1000
>     link/ether fe:00:1c:e9:00:06 brd ff:ff:ff:ff:ff:ff
>
> QUESTION: I understand the MTU requirement and, as you can see from the
> output above, it is being set. But does this same requirement also apply
> to the virtual router?
>
>
> On Tue, Mar 17, 2020 at 8:15 PM li jerry <di...@hotmail.com> wrote:
>
>> Please check your switch port MTU value; it should be 9000, but the
>> default is 1500.
>>
>> VXLAN adds an encapsulation header to each packet, so with the default
>> MTU it will not be able to carry full-size frames.
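>> For reference, the VXLAN overhead works out as follows (a quick sketch;
>> this assumes an IPv4 underlay with no 802.1Q tag on the outer frame):

```shell
# VXLAN encapsulation adds 50 bytes on an IPv4 underlay:
# outer Ethernet (14) + outer IPv4 (20) + UDP (8) + VXLAN header (8).
overhead=$(( 14 + 20 + 8 + 8 ))
echo "overhead=$overhead"     # 50 bytes per frame

# With the underlay at MTU 9000, the largest inner frame a vxlan device
# can carry is therefore 8950 -- matching the mtu 8950 the kernel shows
# on the vxlan/brvx devices in the ip link output in this thread.
inner_mtu=$(( 9000 - overhead ))
echo "inner_mtu=$inner_mtu"   # 8950
```

>> With a 1500-byte underlay the inner MTU drops to 1450, so 1500-byte
>> guest frames cannot fit, which is why the default MTU breaks transfers.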
>>
>> -----Original Message-----
>> From: Mr Jazze <mr...@gmail.com>
>> Sent: March 18, 2020 6:31
>> To: CloudStack Mailing-List <us...@cloudstack.apache.org>
>> Subject: VXLAN Connectivity
>>
>> Hello Again,
>>
>> I've reconfigured my test environment to use VXLAN instead of OVS, which
>> went nowhere. I've deployed in Advanced mode and put all the pieces in
>> place, which yielded a somewhat functional cloud. I was able to deploy a
>> Windows Server 2016 virtual machine. Initially, this VM didn't acquire
>> its DHCP address from the VPC router. I noticed the VM was running on the
>> 2nd host and the router on the 1st host, so I migrated the VM to the same
>> host as the router; it was then able to acquire a DHCP address and ping
>> 1.1.1.1. Then, while I was troubleshooting why there was no connectivity
>> across hosts, the router took a dump and I had to destroy it to get
>> another router deployed; now the VM is unable to get an IP address
>> regardless of which host it runs on.
>>
>> Does anyone have any experience with a similar issue with VXLAN
>> connectivity and/or advice on how to resolve it?
>>
>>
>
>
>


-- 

======================

My Search to Build a Private Cloud!