Posted to users@cloudstack.apache.org by Antoine Boucher <an...@haltondc.com> on 2023/02/26 20:47:28 UTC
SSVM routing issue on /23 storage network
Hello,
I'm having a networking issue on SSVMs. I have the following networks defined in “Zone 1”:
Management: 10.101.0.0/22
Storage: 10.101.6.0/23
All worked well until we configured new storage devices on 10.101.7.x; the hosts and the management server have no issue, but the SSVM is not able to reach them. Here are the SSVM's defined interfaces and its routing table:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
altname enp0s3
altname ens3
inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
valid_lft forever preferred_lft forever
3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
altname enp0s4
altname ens4
inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
valid_lft forever preferred_lft forever
4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
altname enp0s5
altname ens5
inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
valid_lft forever preferred_lft forever
5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state UP group default qlen 1000
link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
altname enp0s6
altname ens6
inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
valid_lft forever preferred_lft forever
default via 148.59.36.49 dev eth2
10.0.0.0/8 via 10.101.0.1 dev eth1
10.91.0.0/23 via 10.101.0.1 dev eth1
10.91.6.0/24 via 10.101.0.1 dev eth1
10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
10.101.6.0/23 via 10.101.0.1 dev eth1
148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
172.16.0.0/12 via 10.101.0.1 dev eth1
192.168.0.0/16 via 10.101.0.1 dev eth1
Why is 10.101.6.0/23 routed via eth1? Shouldn't it be using eth3? The SSVM seems to be bypassing the routing rule for 10.101.6.x, since I see no traffic going through the gateway for those destinations, but I do see traffic going through the gateway when the destination is 10.101.7.x.
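(The kernel picks routes by longest prefix match, which is why 10.101.7.x never reaches eth3: it is covered by the 10.101.6.0/23 entry pointing at eth1. A small sketch with Python's standard `ipaddress` module, with the routes copied from the table above:)

```python
import ipaddress

# (destination prefix, next hop) pairs copied from the SSVM routing table above
routes = [
    ("10.0.0.0/8", "via 10.101.0.1 dev eth1"),
    ("10.101.0.0/22", "dev eth1 (connected)"),
    ("10.101.6.0/23", "via 10.101.0.1 dev eth1"),
]

def lookup(dst):
    """Return the next hop of the most specific route containing dst."""
    addr = ipaddress.ip_address(dst)
    matches = [(ipaddress.ip_network(prefix), hop)
               for prefix, hop in routes
               if addr in ipaddress.ip_network(prefix)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

# Both 10.101.6.x and 10.101.7.x fall inside 10.101.6.0/23, so both
# follow the eth1 route; nothing in the table ever selects eth3.
print(lookup("10.101.7.5"))   # via 10.101.0.1 dev eth1
print(lookup("10.101.6.9"))   # via 10.101.0.1 dev eth1
```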
If I modify the route for 10.101.6.0/23 to use eth3, all is well.
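(For reference, the manual change mentioned above amounts to something like the following on the SSVM; a sketch only, with the addresses and device names taken from the output above, and it does not survive an SSVM rebuild:)

```shell
# Remove the route CloudStack programmed via the management gateway...
ip route del 10.101.6.0/23 via 10.101.0.1 dev eth1
# ...and send the storage /23 out the directly connected storage interface
ip route add 10.101.6.0/23 dev eth3 proto kernel scope link src 10.101.7.226
```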
Is this by design?
Regards,
Antoine Boucher
Re: SSVM routing issue
Posted by Antoine Boucher <an...@haltondc.com>.
Hi Jithin,
Thank you for your response. I'm not sure why more people aren't hitting the same issue. My storage network is also used for my primary storage; if I remove the tag, I assume the host will fail to reach my primary storage on VLAN 53.
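(As a quick sanity check before and after removing the tag, reachability of the primary storage from a host can be verified directly; a sketch, using 10.101.6.40 as an example NFS server address from this thread:)

```shell
# From a KVM host: confirm the storage subnet is still reachable over VLAN 53
ping -c 3 10.101.6.40
# List the exports the NFS server offers (showmount is in nfs-common/nfs-utils)
showmount -e 10.101.6.40
```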
We are going to migrate to a new zone using VXLAN soon. What are the best practices for the cloud bridges: cloudbr0, cloudbr1, and cloudbrX for storage? We use cloudbr0 for private and cloudbr1 for guest. I'm assuming that if the KVM host CloudStack agent is properly configured, all should be fine?
I will file a new bug report soon.
Regards,
Antoine
> On Jun 21, 2023, at 12:12 AM, Jithin Raju <ji...@shapeblue.com> wrote:
>
> Hi Antoine,
>
>> I also tried to define the storage traffic type with VLAN 53; the VLAN/VNI column shows blank,
>
> I see the same issue with CS 4.18 too; this appears to be a bug. Could you report it on GitHub? In my testing the VLAN is present in the cloud.dc_storage_network_ip_range record, and it is present in the response of the listStorageNetworkIpRange API as well; it is just not displayed in the UI.
>
> Could you try removing the VLAN tag 53 from the bridge cloudbr53 and letting CloudStack configure the storage VLAN?
>
>> Is the storage definition only or mainly used for the SSVM?
>
> Only for SSVM.
>
> -Jithin
>
> From: Antoine Boucher <an...@haltondc.com>
> Date: Wednesday, 21 June 2023 at 12:14 AM
> To: stanley.burkee@gmail.com <st...@gmail.com>, users <us...@cloudstack.apache.org>
> Subject: Re: SSVM routing issue
> Hi Stanley,
>
> You will find the answers below.
>
>
>
> Antoine Boucher
> AntoineB@haltondc.com
> [o] +1-226-505-9734
> www.haltondc.com<http://www.haltondc.com>
>
> “Data security made simple”
>
>
>
>
>
>> On May 24, 2023, at 8:59 PM, Stanley Burkee <st...@gmail.com> wrote:
>>
>> Hi Antoine,
>>
>>
>> Please share the cloudstack version you are using. Also check if you have
>> connectivity between your management network & storage network.
>
> 4.17.2.0
>
>
>
>
>>
>> Please share the management server logs & your zone cloudbr0 & other
>> interfaces configurations.
>
> Here is my CentOS network config: (Management Server and Some Clusters)
>
> [root@nimbus network-scripts]# cat ifcfg*
> DEVICE=bond0
> ONBOOT=yes
> BONDING_OPTS="mode=6"
> BRIDGE=cloudbr0
> NM_CONTROLLED=no
>
>
> DEVICE=bond0.53
> VLAN=yes
> BOOTPROTO=static
> ONBOOT=yes
> TYPE=Unknown
> BRIDGE=cloudbr53
>
>
> DEVICE=bond1
> ONBOOT=yes
> BONDING_OPTS="mode=6"
> BRIDGE=cloudbr1
> NM_CONTROLLED=no
>
>
> DEVICE=cloudbr0
> ONBOOT=yes
> TYPE=Bridge
> IPADDR=10.101.2.40
> NETMASK=255.255.252.0
> GATEWAY=10.101.0.1
> DOMAIN="haltondc.net"
> DEFROUTE=yes
> NM_CONTROLLED=no
> DELAY=0
>
>
> DEVICE=cloudbr1
> ONBOOT=yes
> TYPE=Bridge
> NM_CONTROLLED=no
> DELAY=0
>
>
> DEVICE=cloudbr53
> ONBOOT=yes
> TYPE=Bridge
> VLAN=yes
> IPADDR=10.101.6.40
> #GATEWAY=10.101.6.1
> NETMASK=255.255.254.0
> NM_CONTROLLED=no
> DELAY=0
>
>
> DEVICE=eno1
> TYPE=Ethernet
> USERCTL=no
> MASTER=bond1
> SLAVE=yes
> BOOTPROTO=none
> NM_CONTROLLED=no
> ONBOOT=yes
>
> DEVICE=eno2
> TYPE=Ethernet
> USERCTL=no
> MASTER=bond1
> SLAVE=yes
> BOOTPROTO=none
> NM_CONTROLLED=no
> ONBOOT=yes
> DEVICE=eno3
> TYPE=Ethernet
> USERCTL=no
> MASTER=bond1
> SLAVE=yes
> BOOTPROTO=none
> NM_CONTROLLED=no
> ONBOOT=yes
> DEVICE=eno4
> TYPE=Ethernet
> USERCTL=no
> #MASTER=bond1
> #SLAVE=yes
> #BOOTPROTO=none
> NM_CONTROLLED=no
> ONBOOT=no
> DEVICE=ens2f0
> TYPE=Ethernet
> USERCTL=no
> MASTER=bond0
> SLAVE=yes
> BOOTPROTO=none
> NM_CONTROLLED=no
> ONBOOT=yes
>
>
> DEVICE=ens2f1
> TYPE=Ethernet
> USERCTL=no
> MASTER=bond0
> SLAVE=yes
> BOOTPROTO=none
> NM_CONTROLLED=no
> ONBOOT=yes
>
>
> DEVICE=lo
> IPADDR=127.0.0.1
> NETMASK=255.0.0.0
> NETWORK=127.0.0.0
> # If you're having problems with gated making 127.0.0.0/8 a martian,
> # you can change this to something else (255.255.255.255, for example)
> BROADCAST=127.255.255.255
> ONBOOT=yes
> NAME=loopback
>
>
> Here is my Ubuntu 20/22 network config: (Most Clusters)
>
> root@cs-kvm01:~# cat /etc/netplan/00-installer-config.yaml
> network:
>   version: 2
>   ethernets:
>     eno1: {}
>     eno2: {}
>     ens2f0:
>       mtu: 1500
>     ens2f1:
>       mtu: 1500
>   bonds:
>     bond0:
>       interfaces:
>         - ens2f0
>         - ens2f1
>       mtu: 1500
>       parameters:
>         mode: balance-alb
>     bond1:
>       interfaces:
>         - eno1
>         - eno2
>       nameservers:
>         addresses: []
>         search: []
>       parameters:
>         mode: balance-alb
>   vlans:
>     bond0.53:
>       id: 53
>       link: bond0
>       mtu: 1500
>   bridges:
>     cloudbr0:
>       interfaces: [bond0]
>       mtu: 1500
>       addresses:
>         - 10.101.2.42/22
>       gateway4: 10.101.0.1
>       nameservers:
>         addresses:
>           - 10.101.0.1
>         search:
>           - haltondc.net
>           - haltondc.com
>       dhcp4: no
>       dhcp6: no
>     cloudbr1:
>       interfaces: [bond1]
>       mtu: 1500
>       dhcp4: no
>       dhcp6: no
>     cloudbr53:
>       interfaces: [bond0.53]
>       mtu: 1500
>       addresses:
>         - 10.101.6.42/23
>       dhcp4: no
>       dhcp6: no
>
>
>
>>
>> Thanks.
>>
>> Best Regards
>> Stanley
>>
>> On Mon, 15 May 2023, 6:00 pm Antoine Boucher <antoineb@haltondc.com> wrote:
>>
>>> Hello,
>>>
>>> Would anyone have clues on my ongoing SSVM issue below?
>>>
>>> However, I can work around the issue by deleting my Storage Network
>>> traffic definition and recreating the SSVM.
>>>
>>> What would be the impact of deleting the Storage Network traffic
>>> definition on other parts of the system? My Primary Storage configuration
>>> seems to be done entirely as part of my hosts' static configuration.
>>>
>>> Regards,
>>> Antoine
>>>
>>>
>>>> On May 11, 2023, at 10:27 AM, Antoine Boucher <an...@haltondc.com>
>>> wrote:
>>>>
>>>> Good morning/afternoon/evening,
>>>>
>>>> I am following up with my SSVM routing issue when a Storage Network is
>>> defined.
>>>>
>>>> I have a zone with Xen and KVM servers that have a Storage Network
>>> defined as Cloudbr53 with a storage network-specific subnet (Cloudbr0 is
>>> also defined for Management and Cloudbr1 for Guests)
>>>>
>>>> The Cloudbr53 bridge is “hard coded” to VLAN 53 on all hosts within the
>>> specific storage ip subnet range. The Storage traffic type for the Zone is
>>> defined with Cloudbr53 and VLAN as blank.
>>>>
>>>> You will see that the storage network route on the SSVM points to the
>>> wrong interface (eth1) when it should be eth3:
>>>>
>>>> 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>>>
>>>> root@s-394-VM:~# route
>>>> Kernel IP routing table
>>>> Destination Gateway Genmask Flags Metric Ref Use Iface
>>>> default 148.59.36.49 0.0.0.0 UG 0 0 0 eth2
>>>> 10.0.0.0 cloudrouter01.n 255.0.0.0 UG 0 0 0 eth1
>>>> 10.91.0.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>>> 10.91.6.0 cloudrouter01.n 255.255.255.0 UG 0 0 0 eth1
>>>> 10.101.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
>>>> nimbus.haltondc 10.101.6.1 255.255.255.255 UGH 0 0 0 eth3
>>>> 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>>> 148.59.36.48 0.0.0.0 255.255.255.240 U 0 0 0 eth2
>>>> link-local 0.0.0.0 255.255.0.0 U 0 0 0 eth0
>>>> 172.16.0.0 cloudrouter01.n 255.240.0.0 UG 0 0 0 eth1
>>>> 192.168.0.0 cloudrouter01.n 255.255.0.0 UG 0 0 0 eth1
>>>>
>>>>
>>>> I also tried to define the storage traffic type with VLAN 53; the
>>> VLAN/VNI column shows blank, but it does appear to change the routing to
>>> eth3. However, I experienced the same overall communication issue:
>>> traffic to the management network originates from the source IP on the
>>> storage network and dies on the way back, since I have no routing between
>>> the two networks.
>>>>
>>>> However, as a workaround, if I remove the storage traffic definition on
>>> the Zone, all traffic will be routed through the management network. All is
>>> well if I allow my secondary storage (NFS) on the management network.
>>>>
>>>>
>>>>
>>>> I’m using the host-configured “storage network” for primary storage on
>>> all my Zones without issues.
>>>>
>>>> What would be the potential issues of deleting the Storage Network
>>> traffic type definition in my zones, assuming I keep all my secondary
>>> storage on, or accessible from, the management network and recreate the SSVMs?
>>>>
>>>> Is the storage definition only or mainly used for the SSVM?
>>>>
>>>> Regards,
>>>> Antoine
>>>>
>>>>
>>>>
>>>>
>>>>> On Feb 28, 2023, at 11:39 AM, Antoine Boucher <an...@haltondc.com>
>>> wrote:
>>>>>
>>>>> # root@s-340-VM:~# cat /var/cache/cloud/cmdline
>>>>>
>>>>> template=domP type=secstorage host=10.101.2.40 port=8250 name=s-340-VM
>>> zone=1 pod=1 guid=s-340-VM workers=5 authorized_key=****
>>>>> resource=com.cloud.storage.resource.PremiumSecondaryStorageResource
>>> instance=SecStorage sslcopy=true role=templateProcessor mtu=1500
>>> eth2ip=148.59.36.60 eth2mask=255.255.255.240 gateway=148.59.36.49
>>> public.network.device=eth2 eth0ip=169.254.211.29 eth0mask=255.255.0.0
>>> eth1ip=10.101.3.231 eth1mask=255.255.252.0 mgmtcidr=10.101.0.0/22
>>> localgw=10.101.0.1 private.network.device=eth1 eth3ip=10.101.7.212
>>> eth3mask=255.255.254.0 storageip=10.101.7.212 storagenetmask=255.255.254.0
>>> storagegateway=10.101.6.1 internaldns1=10.101.0.1 dns1=1.1.1.1 dns2=8.8.8.8
>>> nfsVersion=null keystore_password=*****
>>>>>
>>>>>
>>>>> # cat /var/log/cloudstack/management/management-server.log.2023-02-*.gz
>>> | zgrep SecStorageSetupCommand
>>>>>
>>>>> 2023-02-18 14:35:38,699 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Seq
>>> 47-6546545008336437249: Sending { Cmd , MgmtId: 130593671224, via:
>>> 47(s-292-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-18 14:35:42,024 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-20 11:12:33,345 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq
>>> 43-719450040472436737: Sending { Cmd , MgmtId: 130593671224, via:
>>> 43(s-289-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-20 11:12:34,389 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Seq
>>> 47-4507540277044445185: Sending { Cmd , MgmtId: 130593671224, via:
>>> 47(s-292-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-20 11:12:37,406 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-20 11:59:14,400 WARN [c.c.a.m.AgentAttache]
>>> (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq
>>> 43-719450040472436737: Timed out on Seq 43-719450040472436737: { Cmd ,
>>> MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 10100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-20 12:25:40,060 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq
>>> 48-8498011021871415297: Sending { Cmd , MgmtId: 130593671224, via:
>>> 48(s-311-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-20 12:25:43,138 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-20 12:25:43,159 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq
>>> 48-8498011021871415298: Sending { Cmd , MgmtId: 130593671224, via:
>>> 48(s-311-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-20 12:25:45,730 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-20 12:53:59,308 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq
>>> 48-3231051257661620225: Sending { Cmd , MgmtId: 130593671224, via:
>>> 48(s-311-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-20 12:54:01,842 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-20 12:54:01,871 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq
>>> 48-3231051257661620226: Sending { Cmd , MgmtId: 130593671224, via:
>>> 48(s-311-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-20 12:54:04,295 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-22 15:23:33,561 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Seq
>>> 50-2542563464627355649: Sending { Cmd , MgmtId: 130593671224, via:
>>> 50(s-324-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-22 15:23:36,815 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-26 14:46:39,737 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq
>>> 52-8409064929230848001: Sending { Cmd , MgmtId: 130593671224, via:
>>> 52(s-339-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-26 14:46:42,926 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-26 14:46:42,945 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq
>>> 52-8409064929230848002: Sending { Cmd , MgmtId: 130593671224, via:
>>> 52(s-339-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-26 14:46:45,435 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-26 15:07:11,934 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq
>>> 52-7356067041356283905: Sending { Cmd , MgmtId: 130593671224, via:
>>> 52(s-339-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-26 15:07:14,985 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-26 15:07:15,001 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq
>>> 52-7356067041356283906: Sending { Cmd , MgmtId: 130593671224, via:
>>> 52(s-339-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-26 15:07:17,516 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-26 16:03:33,807 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq
>>> 53-603482350067646465: Sending { Cmd , MgmtId: 130593671224, via:
>>> 53(s-340-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-26 16:03:37,126 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>> 2023-02-26 16:03:37,142 DEBUG [c.c.a.t.Request]
>>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq
>>> 53-603482350067646466: Sending { Cmd , MgmtId: 130593671224, via:
>>> 53(s-340-VM), Ver: v1, Flags: 100111,
>>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>>> }
>>>>> 2023-02-26 16:03:39,890 DEBUG [c.c.a.m.AgentManagerImpl]
>>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from
>>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>>
>>>>>
>>>>> Antoine Boucher
>>>>>
>>>>>
>>>>>> On Feb 28, 2023, at 4:47 AM, Wei ZHOU <us...@gmail.com> wrote:
>>>>>>
>>>>>> The routes should use eth3 not eth1.
>>>>>>
>>>>>> Can you share the `/var/cache/cloud/cmdline` file in SSVM, and filter
>>>>>> management-server.log by keyword `SecStorageSetupCommand` ?
>>>>>>
>>>>>>
>>>>>> -Wei
>>>>>>
>>>>>> On Tue, 28 Feb 2023 at 10:42, Granwille Strauss
>>>>>> <granwille@namhost.com.invalid> wrote:
>>>>>>
>>>>>>> I recently had a similar issue and now that I look at my routing
>>> tables,
>>>>>>> storage goes via eth1 and not eth3. Full details on this here:
>>>>>>>
>>> https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
>>>>>>> This, therefore, also explains why I randomly got this error with my
>>> SSVM.
>>>>>>> On 2/28/23 11:35, Daan Hoogland wrote:
>>>>>>>
>>>>>>> does sound like a bug Antoine,
>>>>>>> I did the network calculation and it seems you are right.
>>>>>>> I wonder about the last two routes as well. did you do anything for
>>> those?
>>>>>>>
>>>>>> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <antoineb@haltondc.com> wrote:
>>>>>>>
>>>>>>>
>>>>>>> [original message quoted in full; snipped]
>>>>>>>
>>>>>>>
vlans:
bond0.53:
id: 53
link: bond0
mtu: 1500
bridges:
cloudbr0:
interfaces: [bond0]
mtu: 1500
addresses:
- 10.101.2.42/22
gateway4: 10.101.0.1
nameservers:
addresses:
- 10.101.0.1
search:
- haltondc.net
- haltondc.com
dhcp4: no
dhcp6: no
cloudbr1:
interfaces: [bond1]
mtu: 1500
dhcp4: no
dhcp6: no
cloudbr53:
interfaces: [bond0.53]
mtu: 1500
addresses:
- 10.101.6.42/23
dhcp4: no
dhcp6: no
>
> Thanks.
>
> Best Regards
> Stanley
>
> On Mon, 15 May 2023, 6:00 pm Antoine Boucher, <antoineb@haltondc.com <ma...@haltondc.com>> wrote:
>
>> Hello,
>>
>> Would anyone have clues on my ongoing SSVM issue below?
>>
>> However, I can work around the issue by deleting my Storage Network
>> traffic definition and recreating the SSVM.
>>
>> What would be the impact of deleting the Storage Network traffic
>> definition on other parts of the system? My primary storage configuration
>> seems to be done entirely as part of my hosts' static configuration.
>>
>> Regards,
>> Antoine
>>
>>
>>> On May 11, 2023, at 10:27 AM, Antoine Boucher <an...@haltondc.com>
>> wrote:
>>>
>>> Good morning/afternoon/evening,
>>>
>>> I am following up with my SSVM routing issue when a Storage Network is
>> defined.
>>>
>>> I have a zone with Xen and KVM servers that have a Storage Network
>> defined as Cloudbr53 with a storage network-specific subnet (Cloudbr0 is
>> also defined for Management and Cloudbr1 for Guests)
>>>
>>> The Cloudbr53 bridge is “hard coded” to VLAN 53 on all hosts within the
>> specific storage ip subnet range. The Storage traffic type for the Zone is
>> defined with Cloudbr53 and VLAN as blank.
>>>
>>> You will see that the storage network route on the SSVM points to
>> the wrong interface, eth1, when it should be eth3:
>>>
>>> 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>>
>>> root@s-394-VM:~# route
>>> Kernel IP routing table
>>> Destination Gateway Genmask Flags Metric Ref Use Iface
>>> default 148.59.36.49 0.0.0.0 UG 0 0 0 eth2
>>> 10.0.0.0 cloudrouter01.n 255.0.0.0 UG 0 0 0 eth1
>>> 10.91.0.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>> 10.91.6.0 cloudrouter01.n 255.255.255.0 UG 0 0 0 eth1
>>> 10.101.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
>>> nimbus.haltondc 10.101.6.1 255.255.255.255 UGH 0 0 0 eth3
>>> 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>> 148.59.36.48 0.0.0.0 255.255.255.240 U 0 0 0 eth2
>>> link-local 0.0.0.0 255.255.0.0 U 0 0 0 eth0
>>> 172.16.0.0 cloudrouter01.n 255.240.0.0 UG 0 0 0 eth1
>>> 192.168.0.0 cloudrouter01.n 255.255.0.0 UG 0 0 0 eth1
>>>
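The interface selection in the routing table above follows longest-prefix match. A minimal Python sketch (using only the 10.x routes quoted above; `pick_route` is a hypothetical helper, not anything from CloudStack) reproduces why a 10.101.7.x storage target is sent out eth1:

```python
import ipaddress

# Routes for 10.x destinations, as quoted from the SSVM routing table above.
routes = [
    (ipaddress.ip_network("10.0.0.0/8"),    "eth1"),  # via cloudrouter01
    (ipaddress.ip_network("10.101.0.0/22"), "eth1"),  # connected (management)
    (ipaddress.ip_network("10.101.6.0/23"), "eth1"),  # via cloudrouter01 (the suspect route)
]

def pick_route(dest, routes):
    """Longest-prefix match: the most specific network containing dest wins."""
    dest = ipaddress.ip_address(dest)
    matches = [(net, dev) for net, dev in routes if dest in net]
    return max(matches, key=lambda m: m[0].prefixlen)

# A storage target on 10.101.7.x matches the /23, so it leaves via eth1,
# even though eth3 holds 10.101.7.226/23.
net, dev = pick_route("10.101.7.212", routes)
print(net, dev)  # 10.101.6.0/23 eth1
```

So as long as the 10.101.6.0/23 route points at eth1, every storage destination takes the management path.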
>>>
>>> I also tried to define the storage traffic type with VLAN 53; the
>> VLAN/VNI column shows blank, but it does appear to change the routing to
>> eth3. However, I experienced the same overall communication issue: traffic
>> to the management network originates from the source IP on the storage
>> network and dies on the way back, since there is no routing between the
>> two networks.
>>>
>>> However, as a workaround, if I remove the storage traffic definition on
>> the Zone, all traffic will be routed through the management network. All is
>> well if I allow my secondary storage (NFS) on the management network.
>>>
>>>
>>>
>>> I’m using the host-configured “storage network” for primary storage on
>> all my Zones without issues.
>>>
>>> What would be the potential issues of deleting the Storage Network
>> traffic type definition in my zones and recreating the SSVMs, assuming I
>> keep all my secondary storage on, or accessible from, the management network?
>>>
>>> Is the storage definition only or mainly used for the SSVM?
>>>
>>> Regards,
>>> Antoine
>>>
>>>
>>> Confidentiality Warning: This message and any attachments are intended
>> only for the use of the intended recipient(s), are confidential, and may be
>> privileged. If you are not the intended recipient, you are hereby notified
>> that any review, retransmission, conversion to hard copy, copying,
>> circulation or other use of this message and any attachments is strictly
>> prohibited. If you are not the intended recipient, please notify the sender
>> immediately by return e-mail, and delete this message and any attachments
>> from your system.
>>>
>>>
>>>> On Feb 28, 2023, at 11:39 AM, Antoine Boucher <an...@haltondc.com>
>> wrote:
>>>>
>>>> # root@s-340-VM:~# cat /var/cache/cloud/cmdline
>>>>
>>>> template=domP type=secstorage host=10.101.2.40 port=8250 name=s-340-VM
>> zone=1 pod=1 guid=s-340-VM workers=5 authorized_key=****
>>>> resource=com.cloud.storage.resource.PremiumSecondaryStorageResource
>> instance=SecStorage sslcopy=true role=templateProcessor mtu=1500
>> eth2ip=148.59.36.60 eth2mask=255.255.255.240 gateway=148.59.36.49
>> public.network.device=eth2 eth0ip=169.254.211.29 eth0mask=255.255.0.0
>> eth1ip=10.101.3.231 eth1mask=255.255.252.0 mgmtcidr=10.101.0.0/22
>> localgw=10.101.0.1 private.network.device=eth1 eth3ip=10.101.7.212
>> eth3mask=255.255.254.0 storageip=10.101.7.212 storagenetmask=255.255.254.0
>> storagegateway=10.101.6.1 internaldns1=10.101.0.1 dns1=1.1.1.1 dns2=8.8.8.8
>> nfsVersion=null keystore_password=*****
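The cmdline above hands the SSVM storageip=10.101.7.212, storagenetmask=255.255.254.0 and storagegateway=10.101.6.1. A quick sanity check with Python's ipaddress module confirms that netmask is the /23 and that the gateway is on-link for the storage interface:

```python
import ipaddress

# Values taken from the SSVM's /var/cache/cloud/cmdline quoted above.
iface = ipaddress.ip_interface("10.101.7.212/255.255.254.0")

print(iface.network.prefixlen)  # 23
print(iface.network)            # 10.101.6.0/23
# The storage gateway 10.101.6.1 is inside the same /23:
print(ipaddress.ip_address("10.101.6.1") in iface.network)  # True
```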
>>>>
>>>>
>>>> # cat /var/log/cloudstack/management/management-server.log.2023-02-*.gz
>> | zgrep SecStorageSetupCommand
>>>>
>>>> 2023-02-18 14:35:38,699 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Seq
>> 47-6546545008336437249: Sending { Cmd , MgmtId: 130593671224, via:
>> 47(s-292-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-18 14:35:42,024 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 11:12:33,345 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq
>> 43-719450040472436737: Sending { Cmd , MgmtId: 130593671224, via:
>> 43(s-289-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 11:12:34,389 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Seq
>> 47-4507540277044445185: Sending { Cmd , MgmtId: 130593671224, via:
>> 47(s-292-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 11:12:37,406 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 11:59:14,400 WARN [c.c.a.m.AgentAttache]
>> (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq
>> 43-719450040472436737: Timed out on Seq 43-719450040472436737: { Cmd ,
>> MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 10100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:25:40,060 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq
>> 48-8498011021871415297: Sending { Cmd , MgmtId: 130593671224, via:
>> 48(s-311-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:25:43,138 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 12:25:43,159 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq
>> 48-8498011021871415298: Sending { Cmd , MgmtId: 130593671224, via:
>> 48(s-311-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:25:45,730 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 12:53:59,308 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq
>> 48-3231051257661620225: Sending { Cmd , MgmtId: 130593671224, via:
>> 48(s-311-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:54:01,842 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 12:54:01,871 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq
>> 48-3231051257661620226: Sending { Cmd , MgmtId: 130593671224, via:
>> 48(s-311-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:54:04,295 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-22 15:23:33,561 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Seq
>> 50-2542563464627355649: Sending { Cmd , MgmtId: 130593671224, via:
>> 50(s-324-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-22 15:23:36,815 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 14:46:39,737 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq
>> 52-8409064929230848001: Sending { Cmd , MgmtId: 130593671224, via:
>> 52(s-339-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 14:46:42,926 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 14:46:42,945 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq
>> 52-8409064929230848002: Sending { Cmd , MgmtId: 130593671224, via:
>> 52(s-339-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 14:46:45,435 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 15:07:11,934 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq
>> 52-7356067041356283905: Sending { Cmd , MgmtId: 130593671224, via:
>> 52(s-339-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 15:07:14,985 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 15:07:15,001 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq
>> 52-7356067041356283906: Sending { Cmd , MgmtId: 130593671224, via:
>> 52(s-339-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 15:07:17,516 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 16:03:33,807 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq
>> 53-603482350067646465: Sending { Cmd , MgmtId: 130593671224, via:
>> 53(s-340-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 16:03:37,126 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 16:03:37,142 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq
>> 53-603482350067646466: Sending { Cmd , MgmtId: 130593671224, via:
>> 53(s-340-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 16:03:39,890 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>
>>>>
>>>> Antoine Boucher
>>>>
>>>>
>>>>> On Feb 28, 2023, at 4:47 AM, Wei ZHOU <us...@gmail.com> wrote:
>>>>>
>>>>> The routes should use eth3, not eth1.
>>>>>
>>>>> Can you share the `/var/cache/cloud/cmdline` file in SSVM, and filter
>>>>> management-server.log by keyword `SecStorageSetupCommand` ?
>>>>>
>>>>>
>>>>> -Wei
>>>>>
>>>>> On Tue, 28 Feb 2023 at 10:42, Granwille Strauss
>>>>> <granwille@namhost.com.invalid> wrote:
>>>>>
>>>>>> I recently had a similar issue and now that I look at my routing
>> tables,
>>>>>> storage goes via eth1 and not eth3. Full details on this here:
>>>>>>
>> https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
>>>>>> This, therefore, also explains why I randomly got this error with my
>> SSVM.
>>>>>> On 2/28/23 11:35, Daan Hoogland wrote:
>>>>>>
>>>>>> does sound like a bug Antoine,
>>>>>> I did the network calculation and it seems you are right.
>>>>>> I wonder about the last two routes as well. did you do anything for
>> those?
>>>>>>
>>>>>> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <antoineb@haltondc.com>
>>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I'm having a networking issue on SSVMs, I have the following networks
>>>>>> defined in “Zone 1”.
>>>>>>
>>>>>> Management: 10.101.0.0/22
>>>>>> Storage: 10.101.6.0/23
>>>>>>
>>>>>> All worked well until we decided to configure new storage devices on
>>>>>> 10.101.7.x, the hosts and management server have no issue but the
>> SSVM is
>>>>>> not able to reach it. Here are the defined interfaces of the SSVM
>> and the
>>>>>> routing table of the SSVM:
>>>>>>
>>>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>> group
>>>>>> default qlen 1000
>>>>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>>>> inet 127.0.0.1/8 scope host lo
>>>>>> valid_lft forever preferred_lft forever
>>>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state
>>>>>> UP group default qlen 1000
>>>>>> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
>>>>>> altname enp0s3
>>>>>> altname ens3
>>>>>> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
>>>>>> valid_lft forever preferred_lft forever
>>>>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state
>>>>>> UP group default qlen 1000
>>>>>> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
>>>>>> altname enp0s4
>>>>>> altname ens4
>>>>>> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
>>>>>> valid_lft forever preferred_lft forever
>>>>>> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state
>>>>>> UP group default qlen 1000
>>>>>> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
>>>>>> altname enp0s5
>>>>>> altname ens5
>>>>>> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
>>>>>> valid_lft forever preferred_lft forever
>>>>>> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state
>>>>>> UP group default qlen 1000
>>>>>> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
>>>>>> altname enp0s6
>>>>>> altname ens6
>>>>>> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
>>>>>> valid_lft forever preferred_lft forever
>>>>>>
>>>>>> default via 148.59.36.49 dev eth2
>>>>>> 10.0.0.0/8 via 10.101.0.1 dev eth1
>>>>>> 10.91.0.0/23 via 10.101.0.1 dev eth1
>>>>>> 10.91.6.0/24 via 10.101.0.1 dev eth1
>>>>>> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
>>>>>> 10.101.6.0/23 via 10.101.0.1 dev eth1
>>>>>> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
>>>>>> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
>>>>>> 172.16.0.0/12 via 10.101.0.1 dev eth1
>>>>>> 192.168.0.0/16 via 10.101.0.1 dev eth1
>>>>>>
>>>>>> Why is the route for 10.101.6.0/23 going via eth1; shouldn't it be
>>>>>> using eth3? The router seems to be bypassing the routing rules for
>>>>>> 10.101.6.x, since I see no traffic going through the gateway, but I do
>>>>>> see traffic going through the gateway when the destination is 10.101.7.x.
>>>>>>
>>>>>> If I modify the routing for 10.101.6.0/23 to eth3 all is well.
>>>>>>
>>>>>> Is this by design?
>>>>>>
>>>>>> Regards,
>>>>>> Antoine Boucher
>>>>>>
>>>>>>
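The question above comes down to which interface network contains the new storage devices. A short sketch against the SSVM addresses quoted earlier (the target address is hypothetical, standing in for any new 10.101.7.x device) shows that such a target is on-link for eth3 but outside the management /22:

```python
import ipaddress

# SSVM interface addresses as quoted earlier in the thread.
eth1 = ipaddress.ip_interface("10.101.3.205/22")  # management
eth3 = ipaddress.ip_interface("10.101.7.226/23")  # storage

target = ipaddress.ip_address("10.101.7.10")      # hypothetical new storage device

# The target is on-link for eth3, so it should be reached directly:
print(target in eth3.network)  # True
# ...but it is not inside the management /22, so sending it out eth1
# relies on the gateway routing between the two networks:
print(target in eth1.network)  # False
```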
>>>>>> --
>>>>>> Regards / Groete
>>>>>>
>>>>>> Granwille Strauss // Senior Systems Admin
>>>>>>
>>>>>> *e:* granwille@namhost.com
>>>>>> *m:* +264 81 323 1260 <+264813231260>
>>>>>> *w:* www.namhost.com
>>>>>>
>>>>>>
>>>>>> Namhost Internet Services (Pty) Ltd,
>>>>>>
>>>>>> 24 Black Eagle Rd, Hermanus, 7210, RSA
>>>>>>
>>>>>>
>>>>>>
>>>>>> The content of this message is confidential. If you have received it
>> by
>>>>>> mistake, please inform us by email reply and then delete the message.
>> It is
>>>>>> forbidden to copy, forward, or in any way reveal the contents of this
>>>>>> message to anyone without our explicit consent. The integrity and
>> security
>>>>>> of this email cannot be guaranteed over the Internet. Therefore, the
>> sender
>>>>>> will not be held liable for any damage caused by the message. For our
>> full
>>>>>> privacy policy and disclaimers, please go to
>>>>>> https://www.namhost.com/privacy-policy
>>>>>>
Re: SSVM routing issue
Posted by Antoine Boucher <an...@haltondc.com>.
Hi Stanley,
You will find the answers below.
Antoine Boucher
AntoineB@haltondc.com
[o] +1-226-505-9734
www.haltondc.com
“Data security made simple”
> On May 24, 2023, at 8:59 PM, Stanley Burkee <st...@gmail.com> wrote:
>
> Hi Antoine,
>
>
> Please share the cloudstack version you are using. Also check if you have
> connectivity between your management network & storage network.
4.17.2.0
>
> Please share the management server logs & your zone cloudbr0 & other
> interfaces configurations.
Here is my CentOS network config: (Management Server and Some Clusters)
[root@nimbus network-scripts]# cat ifcfg*
DEVICE=bond0
ONBOOT=yes
BONDING_OPTS="mode=6"
BRIDGE=cloudbr0
NM_CONTROLLED=no
DEVICE=bond0.53
VLAN=yes
BOOTPROTO=static
ONBOOT=yes
TYPE=Unknown
BRIDGE=cloudbr53
DEVICE=bond1
ONBOOT=yes
BONDING_OPTS="mode=6"
BRIDGE=cloudbr1
NM_CONTROLLED=no
DEVICE=cloudbr0
ONBOOT=yes
TYPE=Bridge
IPADDR=10.101.2.40
NETMASK=255.255.252.0
GATEWAY=10.101.0.1
DOMAIN="haltondc.net"
DEFROUTE=yes
NM_CONTROLLED=no
DELAY=0
DEVICE=cloudbr1
ONBOOT=yes
TYPE=Bridge
NM_CONTROLLED=no
DELAY=0
DEVICE=cloudbr53
ONBOOT=yes
TYPE=Bridge
VLAN=yes
IPADDR=10.101.6.40
#GATEWAY=10.101.6.1
NETMASK=255.255.254.0
NM_CONTROLLED=no
DELAY=0
DEVICE=eno1
TYPE=Ethernet
USERCTL=no
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
DEVICE=eno2
TYPE=Ethernet
USERCTL=no
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
DEVICE=eno3
TYPE=Ethernet
USERCTL=no
MASTER=bond1
SLAVE=yes
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
DEVICE=eno4
TYPE=Ethernet
USERCTL=no
#MASTER=bond1
#SLAVE=yes
#BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=no
DEVICE=ens2f0
TYPE=Ethernet
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
DEVICE=ens2f1
TYPE=Ethernet
USERCTL=no
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
NM_CONTROLLED=no
ONBOOT=yes
DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback
Here is my Ubuntu 20/22 network config: (Most Clusters)
root@cs-kvm01:~# cat /etc/netplan/00-installer-config.yaml
network:
version: 2
ethernets:
eno1: {}
eno2: {}
ens2f0:
mtu: 1500
ens2f1:
mtu: 1500
bonds:
bond0:
interfaces:
- ens2f0
- ens2f1
mtu: 1500
parameters:
mode: balance-alb
bond1:
interfaces:
- eno1
- eno2
nameservers:
addresses: []
search: []
parameters:
mode: balance-alb
vlans:
bond0.53:
id: 53
link: bond0
mtu: 1500
bridges:
cloudbr0:
interfaces: [bond0]
mtu: 1500
addresses:
- 10.101.2.42/22
gateway4: 10.101.0.1
nameservers:
addresses:
- 10.101.0.1
search:
- haltondc.net
- haltondc.com
dhcp4: no
dhcp6: no
cloudbr1:
interfaces: [bond1]
mtu: 1500
dhcp4: no
dhcp6: no
cloudbr53:
interfaces: [bond0.53]
mtu: 1500
addresses:
- 10.101.6.42/23
dhcp4: no
dhcp6: no
>
> Thanks.
>
> Best Regards
> Stanley
>
> On Mon, 15 May 2023, 6:00 pm Antoine Boucher, <antoineb@haltondc.com <ma...@haltondc.com>> wrote:
>
>> Hello,
>>
>> Would anyone have clues on my on going SSVM issue below?
>>
>> However, I can work around the issue by deleting my Storage Network
>> traffic definition and recreating the SSVM..
>>
>> What would be the impact of deleting the Storage Network traffic
>> definition on other part of the system? My Primary Storage configuration
>> seems to all be done part of my hosts static configuration.
>>
>> Regards,
>> Antoine
>>
>>
>>> On May 11, 2023, at 10:27 AM, Antoine Boucher <an...@haltondc.com>
>> wrote:
>>>
>>> Good morning/afternoon/evening,
>>>
>>> I am following up with my SSVM routing issue when a Storage Network is
>> defined.
>>>
>>> I have a zone with Xen and KVM servers that have a Storage Network
>> defined as Cloudbr53 with a storage network-specific subnet (Cloudbr0 is
>> also defined for Management and Cloudbr1 for Guests)
>>>
>>> The Cloudbr53 bridge is “hard coded” to VLAN 53 on all hosts within the
>> specific storage ip subnet range. The Storage traffic type for the Zone is
>> defined with Cloudbr53 and VLAN as blank.
>>>
>>> You will see that the storage network route on the SSVM is pointed to
>> the wrong eth1 interface as it should be eth3
>>>
>>> 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>>
>>> root@s-394-VM:~# route
>>> Kernel IP routing table
>>> Destination Gateway Genmask Flags Metric Ref Use Iface
>>> default 148.59.36.49 0.0.0.0 UG 0 0 0 eth2
>>> 10.0.0.0 cloudrouter01.n 255.0.0.0 UG 0 0 0 eth1
>>> 10.91.0.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>> 10.91.6.0 cloudrouter01.n 255.255.255.0 UG 0 0 0 eth1
>>> 10.101.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
>>> nimbus.haltondc 10.101.6.1 255.255.255.255 UGH 0 0 0 eth3
>>> 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>>> 148.59.36.48 0.0.0.0 255.255.255.240 U 0 0 0 eth2
>>> link-local 0.0.0.0 255.255.0.0 U 0 0 0 eth0
>>> 172.16.0.0 cloudrouter01.n 255.240.0.0 UG 0 0 0 eth1
>>> 192.168.0.0 cloudrouter01.n 255.255.0.0 UG 0 0 0 eth1
>>>
>>>
>>> I also tried to define the storage traffic type with VLAN 53; the
>> VLAN/VNI column shows blank, but It looks to be changing the routing to
>> eth3; however, I experienced the same overall communication issue. When
>> communicating to the management network is from the source IP on the
>> storage network and dies coming back since I have no routing between the
>> two networks.
>>>
>>> However, as a workaround, if I remove the storage traffic definition on
>> the Zone, all traffic will be routed through the management network. All is
>> well if I allow my secondary storage (NFS) on the management network.
>>>
>>>
>>>
>>> I’m using the host-configured “storage network” for primary storage on
>> all my Zones without issues.
>>>
>>> What would be the potential issues of deleting the Storage Network
>> definition traffic type in my zones, assuming I would keep all my secondary
>> storage on or accessible on the management network and recreating the SSVMs?
>>>
>>> Is the storage definition only or mainly used for the SSVM?
>>>
>>> Regards,
>>> Antoine
>>>
>>>
>>> Confidentiality Warning: This message and any attachments are intended
>> only for the use of the intended recipient(s), are confidential, and may be
>> privileged. If you are not the intended recipient, you are hereby notified
>> that any review, retransmission, conversion to hard copy, copying,
>> circulation or other use of this message and any attachments is strictly
>> prohibited. If you are not the intended recipient, please notify the sender
>> immediately by return e-mail, and delete this message and any attachments
>> from your system.
>>>
>>>
>>>> On Feb 28, 2023, at 11:39 AM, Antoine Boucher <an...@haltondc.com>
>> wrote:
>>>>
>>>> # root@s-340-VM:~# cat /var/cache/cloud/cmdline
>>>>
>>>> template=domP type=secstorage host=10.101.2.40 port=8250 name=s-340-VM
>> zone=1 pod=1 guid=s-340-VM workers=5 authorized_key=****
>>>> resource=com.cloud.storage.resource.PremiumSecondaryStorageResource
>> instance=SecStorage sslcopy=true role=templateProcessor mtu=1500
>> eth2ip=148.59.36.60 eth2mask=255.255.255.240 gateway=148.59.36.49
>> public.network.device=eth2 eth0ip=169.254.211.29 eth0mask=255.255.0.0
>> eth1ip=10.101.3.231 eth1mask=255.255.252.0 mgmtcidr=10.101.0.0/22
>> localgw=10.101.0.1 private.network.device=eth1 eth3ip=10.101.7.212
>> eth3mask=255.255.254.0 storageip=10.101.7.212 storagenetmask=255.255.254.0
>> storagegateway=10.101.6.1 internaldns1=10.101.0.1 dns1=1.1.1.1 dns2=8.8.8.8
>> nfsVersion=null keystore_password=*****
>>>>
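The `storageip`/`storagenetmask` pair in the cmdline above can be decoded with a small sketch. This is a hypothetical illustration (the token values are copied from the cmdline; the parsing is just `key=value` splitting), showing that a 255.255.254.0 mask really does put the storage NIC on 10.101.6.0/23:

```python
import ipaddress

# A few key=value tokens copied from the SSVM's /var/cache/cloud/cmdline above
cmdline = ("type=secstorage host=10.101.2.40 eth3ip=10.101.7.212 "
           "storageip=10.101.7.212 storagenetmask=255.255.254.0 "
           "storagegateway=10.101.6.1")
args = dict(tok.split("=", 1) for tok in cmdline.split())

# strict=False masks off the host bits, yielding the network the NIC sits on
net = ipaddress.ip_network(f"{args['storageip']}/{args['storagenetmask']}",
                           strict=False)
print(net)  # 10.101.6.0/23
```

So the management server did hand the SSVM a /23 storage network with gateway 10.101.6.1; the question is only why the route for that /23 ends up on eth1.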
>>>>
>>>> # cat /var/log/cloudstack/management/management-server.log.2023-02-*.gz
>> | zgrep SecStorageSetupCommand
>>>>
>>>> 2023-02-18 14:35:38,699 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Seq
>> 47-6546545008336437249: Sending { Cmd , MgmtId: 130593671224, via:
>> 47(s-292-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-18 14:35:42,024 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 11:12:33,345 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq
>> 43-719450040472436737: Sending { Cmd , MgmtId: 130593671224, via:
>> 43(s-289-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 11:12:34,389 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Seq
>> 47-4507540277044445185: Sending { Cmd , MgmtId: 130593671224, via:
>> 47(s-292-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 11:12:37,406 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 11:59:14,400 WARN [c.c.a.m.AgentAttache]
>> (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq
>> 43-719450040472436737: Timed out on Seq 43-719450040472436737: { Cmd ,
>> MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 10100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:25:40,060 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq
>> 48-8498011021871415297: Sending { Cmd , MgmtId: 130593671224, via:
>> 48(s-311-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:25:43,138 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 12:25:43,159 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq
>> 48-8498011021871415298: Sending { Cmd , MgmtId: 130593671224, via:
>> 48(s-311-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:25:45,730 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 12:53:59,308 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq
>> 48-3231051257661620225: Sending { Cmd , MgmtId: 130593671224, via:
>> 48(s-311-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:54:01,842 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-20 12:54:01,871 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq
>> 48-3231051257661620226: Sending { Cmd , MgmtId: 130593671224, via:
>> 48(s-311-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-20 12:54:04,295 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-22 15:23:33,561 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Seq
>> 50-2542563464627355649: Sending { Cmd , MgmtId: 130593671224, via:
>> 50(s-324-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://
>> 10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-22 15:23:36,815 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 14:46:39,737 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq
>> 52-8409064929230848001: Sending { Cmd , MgmtId: 130593671224, via:
>> 52(s-339-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 14:46:42,926 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 14:46:42,945 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq
>> 52-8409064929230848002: Sending { Cmd , MgmtId: 130593671224, via:
>> 52(s-339-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 14:46:45,435 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 15:07:11,934 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq
>> 52-7356067041356283905: Sending { Cmd , MgmtId: 130593671224, via:
>> 52(s-339-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 15:07:14,985 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 15:07:15,001 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq
>> 52-7356067041356283906: Sending { Cmd , MgmtId: 130593671224, via:
>> 52(s-339-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 15:07:17,516 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 16:03:33,807 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq
>> 53-603482350067646465: Sending { Cmd , MgmtId: 130593671224, via:
>> 53(s-340-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 16:03:37,126 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>> 2023-02-26 16:03:37,142 DEBUG [c.c.a.t.Request]
>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq
>> 53-603482350067646466: Sending { Cmd , MgmtId: 130593671224, via:
>> 53(s-340-VM), Ver: v1, Flags: 100111,
>> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
>> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
>> }
>>>> 2023-02-26 16:03:39,890 DEBUG [c.c.a.m.AgentManagerImpl]
>> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from
>> executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>>>
>>>>
>>>> Antoine Boucher
>>>>
>>>>
>>>>> On Feb 28, 2023, at 4:47 AM, Wei ZHOU <us...@gmail.com> wrote:
>>>>>
>>>>> The routes should use eth3, not eth1.
>>>>>
>>>>> Can you share the `/var/cache/cloud/cmdline` file in SSVM, and filter
>>>>> management-server.log by keyword `SecStorageSetupCommand` ?
>>>>>
>>>>>
>>>>> -Wei
>>>>>
>>>>> On Tue, 28 Feb 2023 at 10:42, Granwille Strauss
>>>>>> <granwille@namhost.com.invalid>
>> wrote:
>>>>>
>>>>>> I recently had a similar issue and now that I look at my routing
>> tables,
>>>>>> storage goes via eth1 and not eth3. Full details on this here:
>>>>>>
>> https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
>>>>>> This, therefore, also explains why I randomly got this error with my
>> SSVM.
>>>>>> On 2/28/23 11:35, Daan Hoogland wrote:
>>>>>>
>>>>>> Does sound like a bug, Antoine.
>>>>>> I did the network calculation and it seems you are right.
>>>>>> I wonder about the last two routes as well. Did you do anything for
>> those?
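The subnet arithmetic being discussed can be reproduced with Python's `ipaddress` module. A quick sketch using the CIDRs from the thread:

```python
import ipaddress

storage = ipaddress.ip_network("10.101.6.0/23")   # Zone storage network
mgmt = ipaddress.ip_network("10.101.0.0/22")      # Zone management network

# The new storage devices on 10.101.7.x do fall inside the /23 ...
print(ipaddress.ip_address("10.101.7.212") in storage)  # True

# ... and the storage and management networks do not overlap, so traffic
# for 10.101.7.x has no business being routed out the management NIC
print(storage.overlaps(mgmt))                           # False
```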
>>>>>>
>>>>>> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <antoineb@haltondc.com>
>>>>>> wrote:
>>>>>>
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I'm having a networking issue on SSVMs, I have the following networks
>>>>>> defined in “Zone 1”.
>>>>>>
>>>>>> Management: 10.101.0.0/22
>>>>>> Storage: 10.101.6.0/23
>>>>>>
>>>>>> All worked well until we decided to configure new storage devices on
>>>>>> 10.101.7.x, the hosts and management server have no issue but the
>> SSVM is
>>>>>> not able to reach it. Here are the defined interfaces of the SSVM
>> and the
>>>>>> routing table of the SSVM:
>>>>>>
>>>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
>> group
>>>>>> default qlen 1000
>>>>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>>>> inet 127.0.0.1/8 scope host lo
>>>>>> valid_lft forever preferred_lft forever
>>>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state
>>>>>> UP group default qlen 1000
>>>>>> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
>>>>>> altname enp0s3
>>>>>> altname ens3
>>>>>> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
>>>>>> valid_lft forever preferred_lft forever
>>>>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state
>>>>>> UP group default qlen 1000
>>>>>> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
>>>>>> altname enp0s4
>>>>>> altname ens4
>>>>>> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
>>>>>> valid_lft forever preferred_lft forever
>>>>>> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state
>>>>>> UP group default qlen 1000
>>>>>> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
>>>>>> altname enp0s5
>>>>>> altname ens5
>>>>>> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
>>>>>> valid_lft forever preferred_lft forever
>>>>>> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
>> state
>>>>>> UP group default qlen 1000
>>>>>> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
>>>>>> altname enp0s6
>>>>>> altname ens6
>>>>>> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
>>>>>> valid_lft forever preferred_lft forever
>>>>>>
>>>>>> default via 148.59.36.49 dev eth2
>>>>>> 10.0.0.0/8 via 10.101.0.1 dev eth1
>>>>>> 10.91.0.0/23 via 10.101.0.1 dev eth1
>>>>>> 10.91.6.0/24 via 10.101.0.1 dev eth1
>>>>>> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
>>>>>> 10.101.6.0/23 via 10.101.0.1 dev eth1
>>>>>> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
>>>>>> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
>>>>>> 172.16.0.0/12 via 10.101.0.1 dev eth1
>>>>>> 192.168.0.0/16 via 10.101.0.1 dev eth1
>>>>>>
>>>>>> Why is the route for 10.101.6.0/23 going via eth1? Shouldn’t it be
>>>>>> using eth3? The SSVM also seems to bypass the routing rules for
>>>>>> 10.101.6.x, since I see no traffic going through the gateway, but I
>>>>>> do see traffic going through the gateway when the destination is
>>>>>> 10.101.7.x.
>>>>>>
>>>>>> If I modify the route for 10.101.6.0/23 to use eth3, all is well.
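The behaviour described matches plain longest-prefix matching. A minimal simulation (a hypothetical table reduced to the relevant entries; `pick` mimics the kernel's route selection, not any CloudStack code) shows why 10.101.7.x traffic leaves via eth1 until the /23 route is repointed at eth3:

```python
import ipaddress

# Miniature of the SSVM routing table above: (prefix, device)
routes = [
    ("0.0.0.0/0", "eth2"),        # default
    ("10.0.0.0/8", "eth1"),
    ("10.101.0.0/22", "eth1"),    # management
    ("10.101.6.0/23", "eth1"),    # storage -- the route in question
]

def pick(dest, table):
    """Longest-prefix match, as the kernel does for a destination lookup."""
    dest = ipaddress.ip_address(dest)
    matches = [(ipaddress.ip_network(p), dev) for p, dev in table
               if dest in ipaddress.ip_network(p)]
    return max(matches, key=lambda m: m[0].prefixlen)[1]

print(pick("10.101.7.40", routes))   # eth1 -- storage traffic exits the wrong NIC

# After something like `ip route replace 10.101.6.0/23 dev eth3`,
# the same lookup selects the storage NIC:
fixed = [r for r in routes if r[0] != "10.101.6.0/23"] + \
        [("10.101.6.0/23", "eth3")]
print(pick("10.101.7.40", fixed))    # eth3
```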
>>>>>>
>>>>>> Is this by design?
>>>>>>
>>>>>> Regards,
>>>>>> Antoine Boucher
>>>>>>
>>>>>>
>>>>>> --
>>>>>> Regards / Groete
>>>>>>
>>>>>> Granwille Strauss // Senior Systems Admin
>>>>>>
>>>>>> *e:* granwille@namhost.com
>>>>>> *m:* +264 81 323 1260
>>>>>> *w:* www.namhost.com
>>>>>>
>>>>>> Namhost Internet Services (Pty) Ltd,
>>>>>>
>>>>>> 24 Black Eagle Rd, Hermanus, 7210, RSA
>>>>>>
>>>>>>
>>>>>>
>>>>>> The content of this message is confidential. If you have received it
>> by
>>>>>> mistake, please inform us by email reply and then delete the message.
>> It is
>>>>>> forbidden to copy, forward, or in any way reveal the contents of this
>>>>>> message to anyone without our explicit consent. The integrity and
>> security
>>>>>> of this email cannot be guaranteed over the Internet. Therefore, the
>> sender
>>>>>> will not be held liable for any damage caused by the message. For our
>> full
>>>>>> privacy policy and disclaimers, please go to
>>>>>> https://www.namhost.com/privacy-policy
>>>>>>
Re: SSVM routing issue
Posted by Stanley Burkee <st...@gmail.com>.
Hi Antoine,
Please share the CloudStack version you are using, and check whether you
have connectivity between your management network and storage network.
Please also share the management server logs and your zone's cloudbr0 and
other interface configurations.
Thanks.
Best Regards
Stanley
On Mon, 15 May 2023, 6:00 pm Antoine Boucher, <an...@haltondc.com> wrote:
> Hello,
>
> Would anyone have clues on my ongoing SSVM issue below?
>
> I can work around the issue by deleting my Storage Network traffic
> definition and recreating the SSVM.
>
> What would be the impact of deleting the Storage Network traffic
> definition on other parts of the system? My Primary Storage configuration
> seems to be handled entirely by my hosts’ static configuration.
>
> Regards,
> Antoine
>
>
> > On May 11, 2023, at 10:27 AM, Antoine Boucher <an...@haltondc.com>
> wrote:
> >
> > Good morning/afternoon/evening,
> >
> > I am following up with my SSVM routing issue when a Storage Network is
> defined.
> >
> > I have a zone with Xen and KVM servers that have a Storage Network
> defined as Cloudbr53, with a storage-specific subnet (Cloudbr0 is also
> defined for Management and Cloudbr1 for Guests).
> >
> > The Cloudbr53 bridge is “hard-coded” to VLAN 53 on all hosts, within the
> specific storage IP subnet range. The Storage traffic type for the Zone is
> defined with Cloudbr53 and the VLAN left blank.
> >
> > You will see that the storage network route on the SSVM points to the
> wrong interface (eth1) when it should be eth3:
> >
> > 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
> >
> > root@s-394-VM:~# route
> > Kernel IP routing table
> > Destination Gateway Genmask Flags Metric Ref Use Iface
> > default 148.59.36.49 0.0.0.0 UG 0 0 0 eth2
> > 10.0.0.0 cloudrouter01.n 255.0.0.0 UG 0 0 0 eth1
> > 10.91.0.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
> > 10.91.6.0 cloudrouter01.n 255.255.255.0 UG 0 0 0 eth1
> > 10.101.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
> > nimbus.haltondc 10.101.6.1 255.255.255.255 UGH 0 0 0 eth3
> > 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
> > 148.59.36.48 0.0.0.0 255.255.255.240 U 0 0 0 eth2
> > link-local 0.0.0.0 255.255.0.0 U 0 0 0 eth0
> > 172.16.0.0 cloudrouter01.n 255.240.0.0 UG 0 0 0 eth1
> > 192.168.0.0 cloudrouter01.n 255.255.0.0 UG 0 0 0 eth1
> >
> >
> >
> >
> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from
> executing class com.cloud.agent.api.SecStorageSetupCommand: success
> >> 2023-02-26 16:03:37,142 DEBUG [c.c.a.t.Request]
> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq
> 53-603482350067646466: Sending { Cmd , MgmtId: 130593671224, via:
> 53(s-340-VM), Ver: v1, Flags: 100111,
> [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://
> 10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://
> 10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}]
> }
> >> 2023-02-26 16:03:39,890 DEBUG [c.c.a.m.AgentManagerImpl]
> (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from
> executing class com.cloud.agent.api.SecStorageSetupCommand: success
> >>
> >>
> >> Antoine Boucher
> >>
> >>
> >>> On Feb 28, 2023, at 4:47 AM, Wei ZHOU <us...@gmail.com> wrote:
> >>>
> >>> The routes should use eth3 not eth1.
> >>>
> >>> Can you share the `/var/cache/cloud/cmdline` file in SSVM, and filter
> >>> management-server.log by keyword `SecStorageSetupCommand` ?
> >>>
> >>>
> >>> -Wei
> >>>
> >>> On Tue, 28 Feb 2023 at 10:42, Granwille Strauss
> >>> <granwille@namhost.com.invalid <ma...@namhost.com.invalid>>
> wrote:
> >>>
> >>>> I recently had a similar issue and now that I look at my routing
> tables,
> >>>> storage goes via eth1 and not eth3. Full details on this here:
> >>>>
> https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
> >>>> This, therefore, also explains why I randomly got this error with my
> SSVM.
> >>>> On 2/28/23 11:35, Daan Hoogland wrote:
> >>>>
> >>>> does sound like a bug Antoine,
> >>>> I did the network calculation and it seems you are right.
> >>>> I wonder about the last two routes as well. did you do anything for
> those?
> >>>>
> >>>> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <
> antoineb@haltondc.com <ma...@haltondc.com>> <
> antoineb@haltondc.com <ma...@haltondc.com>>
> >>>> wrote:
> >>>>
> >>>>
> >>>> Hello,
> >>>>
> >>>> I'm having a networking issue on SSVMs, I have the following networks
> >>>> defined in “Zone 1”.
> >>>>
> >>>> Management: 10.101.0.0/22
> >>>> Storage: 10.101.6.0/23
> >>>>
> >>>> All worked well until we decided to configure new storage devices on
> >>>> 10.101.7.x, the hosts and management server have no issue but the
> SSVM is
> >>>> not able to reach it. Here are the defined interfaces of the SSVM
> and the
> >>>> routing table of the SSVM:
> >>>>
> >>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
> group
> >>>> default qlen 1000
> >>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> >>>> inet 127.0.0.1/8 scope host lo
> >>>> valid_lft forever preferred_lft forever
> >>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state
> >>>> UP group default qlen 1000
> >>>> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
> >>>> altname enp0s3
> >>>> altname ens3
> >>>> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
> >>>> valid_lft forever preferred_lft forever
> >>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state
> >>>> UP group default qlen 1000
> >>>> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
> >>>> altname enp0s4
> >>>> altname ens4
> >>>> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
> >>>> valid_lft forever preferred_lft forever
> >>>> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state
> >>>> UP group default qlen 1000
> >>>> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
> >>>> altname enp0s5
> >>>> altname ens5
> >>>> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
> >>>> valid_lft forever preferred_lft forever
> >>>> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast
> state
> >>>> UP group default qlen 1000
> >>>> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
> >>>> altname enp0s6
> >>>> altname ens6
> >>>> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
> >>>> valid_lft forever preferred_lft forever
> >>>>
> >>>> default via 148.59.36.49 dev eth2
> >>>> 10.0.0.0/8 via 10.101.0.1 dev eth1
> >>>> 10.91.0.0/23 via 10.101.0.1 dev eth1
> >>>> 10.91.6.0/24 via 10.101.0.1 dev eth1
> >>>> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
> >>>> 10.101.6.0/23 via 10.101.0.1 dev eth1
> >>>> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
> >>>> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
> >>>> 172.16.0.0/12 via 10.101.0.1 dev eth1
> >>>> 192.168.0.0/16 via 10.101.0.1 dev eth1
> >>>>
> >>>> Why is the route for 10.101.6.0/23 going via eth1; shouldn't it be
> >>>> using eth3? The router seems to be bypassing the routing rules for
> >>>> 10.101.6.x, since I see no traffic going through the gateway, but I do
> >>>> see traffic going through the gateway when the destination is 10.101.7.x.
> >>>>
> >>>> If I modify the routing for 10.101.6.0/23 to eth3 all is well.
> >>>>
> >>>> Is this by design?
> >>>>
> >>>> Regards,
> >>>> Antoine Boucher
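Daan's network calculation above can be reproduced with a small longest-prefix-match sketch (a simplified model of the kernel's route lookup, not the actual FIB code, using routes copied from the SSVM table): both the existing 10.101.6.x targets and the new 10.101.7.x devices fall under the 10.101.6.0/23 route bound to eth1, so no storage traffic leaves via eth3.

```python
import ipaddress

# Routes copied from the SSVM table above, as (destination, device).
# The kernel prefers the most specific (longest) matching prefix.
routes = [
    ("0.0.0.0/0",       "eth2"),
    ("10.0.0.0/8",      "eth1"),
    ("10.91.0.0/23",    "eth1"),
    ("10.91.6.0/24",    "eth1"),
    ("10.101.0.0/22",   "eth1"),
    ("10.101.6.0/23",   "eth1"),  # the route in question
    ("148.59.36.48/28", "eth2"),
    ("169.254.0.0/16",  "eth0"),
]

def egress(dst: str) -> str:
    """Return the device of the longest-prefix route matching dst."""
    addr = ipaddress.ip_address(dst)
    best = max((net for net, _ in routes
                if addr in ipaddress.ip_network(net)),
               key=lambda n: ipaddress.ip_network(n).prefixlen)
    return dict(routes)[best]

print(egress("10.101.6.40"))   # eth1 -- secondary storage NFS server
print(egress("10.101.7.40"))   # eth1 -- new storage device, same /23
```

Both lookups land on the 10.101.6.0/23 route via eth1, matching the behaviour reported above.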
> >>>>
> >>>>
> >>>> --
> >>>> Regards / Groete
> >>>>
> >>>> <https://www.namhost.com <https://www.namhost.com/>> Granwille
> Strauss // Senior Systems Admin
> >>>>
> >>>> *e:* granwille@namhost.com <ma...@namhost.com>
> >>>> *m:* +264 81 323 1260 <+264813231260>
> >>>> *w:* www.namhost.com <http://www.namhost.com/>
> >>>>
> >>>> <https://www.facebook.com/namhost> <https://twitter.com/namhost>
> >>>> <https://www.instagram.com/namhostinternetservices/>
> >>>> <https://www.linkedin.com/company/namhos>
> >>>> <https://www.youtube.com/channel/UCTd5v-kVPaic_dguGur15AA>
> >>>>
> >>>>
> >>>> <
> https://www.adsigner.com/v1/l/631091998d4670001fe43ec2/621c9b76c140bb001ed0f818/banner
> >
> >>>>
> >>>> Namhost Internet Services (Pty) Ltd,
> >>>>
> >>>> 24 Black Eagle Rd, Hermanus, 7210, RSA
> >>>>
> >>>>
> >>>>
> >>>> The content of this message is confidential. If you have received it
> by
> >>>> mistake, please inform us by email reply and then delete the message.
> It is
> >>>> forbidden to copy, forward, or in any way reveal the contents of this
> >>>> message to anyone without our explicit consent. The integrity and
> security
> >>>> of this email cannot be guaranteed over the Internet. Therefore, the
> sender
> >>>> will not be held liable for any damage caused by the message. For our
> full
> >>>> privacy policy and disclaimers, please go to
> >>>> https://www.namhost.com/privacy-policy
> >>>>
> >>>> [image: Powered by AdSigner]
> >>>> <
> https://www.adsigner.com/v1/c/631091998d4670001fe43ec2/621c9b76c140bb001ed0f818
> >
> >>
> >
>
>
Re: SSVM routing issue
Posted by Antoine Boucher <an...@haltondc.com>.
Hello,
Would anyone have clues on my ongoing SSVM issue below?
However, I can work around the issue by deleting my Storage Network traffic definition and recreating the SSVM.
What would be the impact of deleting the Storage Network traffic definition on other parts of the system? My Primary Storage configuration seems to be handled entirely by my hosts' static configuration.
Regards,
Antoine
> On May 11, 2023, at 10:27 AM, Antoine Boucher <an...@haltondc.com> wrote:
>
> Good morning/afternoon/evening,
>
> I am following up with my SSVM routing issue when a Storage Network is defined.
>
> I have a zone with Xen and KVM servers that has a Storage Network defined on Cloudbr53 with a storage-network-specific subnet (Cloudbr0 is also defined for Management and Cloudbr1 for Guests).
>
> The Cloudbr53 bridge is “hard coded” to VLAN 53 on all hosts, within the specific storage IP subnet range. The Storage traffic type for the Zone is defined with Cloudbr53 and the VLAN left blank.
>
> You will see that the storage network route on the SSVM points to the wrong interface: it uses eth1 when it should use eth3.
>
> 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
>
> root@s-394-VM:~# route
> Kernel IP routing table
> Destination Gateway Genmask Flags Metric Ref Use Iface
> default 148.59.36.49 0.0.0.0 UG 0 0 0 eth2
> 10.0.0.0 cloudrouter01.n 255.0.0.0 UG 0 0 0 eth1
> 10.91.0.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
> 10.91.6.0 cloudrouter01.n 255.255.255.0 UG 0 0 0 eth1
> 10.101.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
> nimbus.haltondc 10.101.6.1 255.255.255.255 UGH 0 0 0 eth3
> 10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
> 148.59.36.48 0.0.0.0 255.255.255.240 U 0 0 0 eth2
> link-local 0.0.0.0 255.255.0.0 U 0 0 0 eth0
> 172.16.0.0 cloudrouter01.n 255.240.0.0 UG 0 0 0 eth1
> 192.168.0.0 cloudrouter01.n 255.255.0.0 UG 0 0 0 eth1
>
>
> I also tried defining the storage traffic type with VLAN 53; the VLAN/VNI column still shows blank, but it does change the routing to eth3. However, I experienced the same overall communication issue: traffic to the management network leaves with a source IP on the storage network and dies on the way back, since I have no routing between the two networks.
>
> However, as a workaround, if I remove the storage traffic definition on the Zone, all traffic will be routed through the management network. All is well if I allow my secondary storage (NFS) on the management network.
>
>
>
> I’m using the host-configured “storage network” for primary storage on all my Zones without issues.
>
> What would be the potential issues of deleting the Storage Network traffic type in my zones and recreating the SSVMs, assuming I keep all my secondary storage on, or accessible from, the management network?
>
> Is the storage definition only or mainly used for the SSVM?
>
> Regards,
> Antoine
>
>
> Confidentiality Warning: This message and any attachments are intended only for the use of the intended recipient(s), are confidential, and may be privileged. If you are not the intended recipient, you are hereby notified that any review, retransmission, conversion to hard copy, copying, circulation or other use of this message and any attachments is strictly prohibited. If you are not the intended recipient, please notify the sender immediately by return e-mail, and delete this message and any attachments from your system.
>
>
>> On Feb 28, 2023, at 11:39 AM, Antoine Boucher <an...@haltondc.com> wrote:
>>
>> # root@s-340-VM:~# cat /var/cache/cloud/cmdline
>>
>> template=domP type=secstorage host=10.101.2.40 port=8250 name=s-340-VM zone=1 pod=1 guid=s-340-VM workers=5 authorized_key=****
>> resource=com.cloud.storage.resource.PremiumSecondaryStorageResource instance=SecStorage sslcopy=true role=templateProcessor mtu=1500 eth2ip=148.59.36.60 eth2mask=255.255.255.240 gateway=148.59.36.49 public.network.device=eth2 eth0ip=169.254.211.29 eth0mask=255.255.0.0 eth1ip=10.101.3.231 eth1mask=255.255.252.0 mgmtcidr=10.101.0.0/22 localgw=10.101.0.1 private.network.device=eth1 eth3ip=10.101.7.212 eth3mask=255.255.254.0 storageip=10.101.7.212 storagenetmask=255.255.254.0 storagegateway=10.101.6.1 internaldns1=10.101.0.1 dns1=1.1.1.1 dns2=8.8.8.8 nfsVersion=null keystore_password=*****
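The cmdline is a flat, space-separated key=value string, so the storage settings are easy to pull out with a few lines of Python (a throwaway parser for eyeballing, not CloudStack code). Note that the netmask 255.255.254.0 it carries is indeed the expected /23:

```python
import ipaddress

def parse_cmdline(cmdline: str) -> dict:
    """Split an SSVM boot cmdline into a key/value dict."""
    return dict(tok.split("=", 1) for tok in cmdline.split() if "=" in tok)

# Trimmed sample taken from the cmdline above (keys/secrets omitted).
sample = ("template=domP type=secstorage eth1ip=10.101.3.231 "
          "eth1mask=255.255.252.0 eth3ip=10.101.7.212 "
          "eth3mask=255.255.254.0 storageip=10.101.7.212 "
          "storagenetmask=255.255.254.0 storagegateway=10.101.6.1")

cfg = parse_cmdline(sample)
# Convert the dotted netmask to a prefix length for readability.
prefix = ipaddress.ip_network(f"0.0.0.0/{cfg['storagenetmask']}").prefixlen
print(cfg["storageip"], f"/{prefix}", "gw", cfg["storagegateway"])
# 10.101.7.212 /23 gw 10.101.6.1
```

So the boot config hands the SSVM the right storage IP, /23 mask, and gateway; the problem is only in which interface the resulting route is attached to.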
>>
>>
>> # cat /var/log/cloudstack/management/management-server.log.2023-02-*.gz | zgrep SecStorageSetupCommand
>>
>> 2023-02-18 14:35:38,699 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Seq 47-6546545008336437249: Sending { Cmd , MgmtId: 130593671224, via: 47(s-292-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-18 14:35:42,024 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-20 11:12:33,345 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq 43-719450040472436737: Sending { Cmd , MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-20 11:12:34,389 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Seq 47-4507540277044445185: Sending { Cmd , MgmtId: 130593671224, via: 47(s-292-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-20 11:12:37,406 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-20 11:59:14,400 WARN [c.c.a.m.AgentAttache] (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq 43-719450040472436737: Timed out on Seq 43-719450040472436737: { Cmd , MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 10100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-20 12:25:40,060 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq 48-8498011021871415297: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-20 12:25:43,138 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-20 12:25:43,159 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq 48-8498011021871415298: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-20 12:25:45,730 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-20 12:53:59,308 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq 48-3231051257661620225: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-20 12:54:01,842 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-20 12:54:01,871 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq 48-3231051257661620226: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-20 12:54:04,295 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-22 15:23:33,561 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Seq 50-2542563464627355649: Sending { Cmd , MgmtId: 130593671224, via: 50(s-324-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-22 15:23:36,815 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-26 14:46:39,737 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq 52-8409064929230848001: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-26 14:46:42,926 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-26 14:46:42,945 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq 52-8409064929230848002: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-26 14:46:45,435 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-26 15:07:11,934 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq 52-7356067041356283905: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-26 15:07:14,985 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-26 15:07:15,001 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq 52-7356067041356283906: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-26 15:07:17,516 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-26 16:03:33,807 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq 53-603482350067646465: Sending { Cmd , MgmtId: 130593671224, via: 53(s-340-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-26 16:03:37,126 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>> 2023-02-26 16:03:37,142 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq 53-603482350067646466: Sending { Cmd , MgmtId: 130593671224, via: 53(s-340-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
>> 2023-02-26 16:03:39,890 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>>
>>
>> Antoine Boucher
>>
>>
>>> On Feb 28, 2023, at 4:47 AM, Wei ZHOU <us...@gmail.com> wrote:
>>>
>>> The routes should use eth3 not eth1.
>>>
>>> Can you share the `/var/cache/cloud/cmdline` file in SSVM, and filter
>>> management-server.log by keyword `SecStorageSetupCommand` ?
>>>
>>>
>>> -Wei
>>>
>>> On Tue, 28 Feb 2023 at 10:42, Granwille Strauss
>>> <granwille@namhost.com.invalid <ma...@namhost.com.invalid>> wrote:
>>>
>>>> I recently had a similar issue and now that I look at my routing tables,
>>>> storage goes via eth1 and not eth3. Full details on this here:
>>>> https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
>>>> This, therefore, also explains why I randomly got this error with my SSVM.
>>>> On 2/28/23 11:35, Daan Hoogland wrote:
>>>>
>>>> does sound like a bug Antoine,
>>>> I did the network calculation and it seems you are right.
>>>> I wonder about the last two routes as well. did you do anything for those?
>>>>
>>>> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <antoineb@haltondc.com <ma...@haltondc.com>> <antoineb@haltondc.com <ma...@haltondc.com>>
>>>> wrote:
>>>>
>>>>
>>>> Hello,
>>>>
>>>> I'm having a networking issue on SSVMs, I have the following networks
>>>> defined in “Zone 1”.
>>>>
>>>> Management: 10.101.0.0/22
>>>> Storage: 10.101.6.0/23
>>>>
>>>> All worked well until we decided to configure new storage devices on
>>>> 10.101.7.x, the hosts and management server have no issue but the SSVM is
>>>> not able to reach it. Here are the defined interfaces of the SSVM and the
>>>> routing table of the SSVM:
>>>>
>>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
>>>> default qlen 1000
>>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>>> inet 127.0.0.1/8 scope host lo
>>>> valid_lft forever preferred_lft forever
>>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>>>> UP group default qlen 1000
>>>> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
>>>> altname enp0s3
>>>> altname ens3
>>>> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
>>>> valid_lft forever preferred_lft forever
>>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>>>> UP group default qlen 1000
>>>> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
>>>> altname enp0s4
>>>> altname ens4
>>>> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
>>>> valid_lft forever preferred_lft forever
>>>> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>>>> UP group default qlen 1000
>>>> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
>>>> altname enp0s5
>>>> altname ens5
>>>> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
>>>> valid_lft forever preferred_lft forever
>>>> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>>>> UP group default qlen 1000
>>>> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
>>>> altname enp0s6
>>>> altname ens6
>>>> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
>>>> valid_lft forever preferred_lft forever
>>>>
>>>> default via 148.59.36.49 dev eth2
>>>> 10.0.0.0/8 via 10.101.0.1 dev eth1
>>>> 10.91.0.0/23 via 10.101.0.1 dev eth1
>>>> 10.91.6.0/24 via 10.101.0.1 dev eth1
>>>> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
>>>> 10.101.6.0/23 via 10.101.0.1 dev eth1
>>>> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
>>>> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
>>>> 172.16.0.0/12 via 10.101.0.1 dev eth1
>>>> 192.168.0.0/16 via 10.101.0.1 dev eth1
>>>>
>>>> Why is the route for 10.101.6.0/23 going via eth1; shouldn't it be
>>>> using eth3? The router seems to be bypassing the routing rules for
>>>> 10.101.6.x, since I see no traffic going through the gateway, but I do
>>>> see traffic going through the gateway when the destination is 10.101.7.x.
>>>>
>>>> If I modify the routing for 10.101.6.0/23 to eth3 all is well.
>>>>
>>>> Is this by design?
>>>>
>>>> Regards,
>>>> Antoine Boucher
>>>>
>>>>
>>
>
Re: SSVM routing issue
Posted by Antoine Boucher <an...@haltondc.com>.
Good morning/afternoon/evening,
I am following up with my SSVM routing issue when a Storage Network is defined.
I have a zone with Xen and KVM servers that has a Storage Network defined on Cloudbr53 with a storage-network-specific subnet (Cloudbr0 is also defined for Management and Cloudbr1 for Guests).
The Cloudbr53 bridge is “hard coded” to VLAN 53 on all hosts, within the specific storage IP subnet range. The Storage traffic type for the Zone is defined with Cloudbr53 and the VLAN left blank.
You will see that the storage network route on the SSVM points to the wrong interface: it uses eth1 when it should use eth3.
10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
root@s-394-VM:~# route
Kernel IP routing table
Destination Gateway Genmask Flags Metric Ref Use Iface
default 148.59.36.49 0.0.0.0 UG 0 0 0 eth2
10.0.0.0 cloudrouter01.n 255.0.0.0 UG 0 0 0 eth1
10.91.0.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
10.91.6.0 cloudrouter01.n 255.255.255.0 UG 0 0 0 eth1
10.101.0.0 0.0.0.0 255.255.252.0 U 0 0 0 eth1
nimbus.haltondc 10.101.6.1 255.255.255.255 UGH 0 0 0 eth3
10.101.6.0 cloudrouter01.n 255.255.254.0 UG 0 0 0 eth1
148.59.36.48 0.0.0.0 255.255.255.240 U 0 0 0 eth2
link-local 0.0.0.0 255.255.0.0 U 0 0 0 eth0
172.16.0.0 cloudrouter01.n 255.240.0.0 UG 0 0 0 eth1
192.168.0.0 cloudrouter01.n 255.255.0.0 UG 0 0 0 eth1
I also tried defining the storage traffic type with VLAN 53; the VLAN/VNI column still shows blank, but it does change the routing to eth3. However, I experienced the same overall communication issue: traffic to the management network leaves with a source IP on the storage network and dies on the way back, since I have no routing between the two networks.
However, as a workaround, if I remove the storage traffic type definition from the zone, all traffic is routed through the management network, and all is well as long as my secondary storage (NFS) is reachable on the management network.
I am already using the host-configured “storage network” for primary storage in all my zones without issue.
What would be the potential issues of deleting the Storage traffic type definition in my zones and recreating the SSVMs, assuming I keep all my secondary storage on, or accessible from, the management network?
Is the storage traffic definition only, or mainly, used by the SSVM?
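For reference, a quick check (a sketch using the storageip, storagenetmask, and mgmtcidr values from the cmdline quoted further down) confirms that the storage IP handed to the SSVM sits inside the storage /23 but outside the management /22, so the /23 route really must point at the storage NIC for that address to be usable:

```python
import ipaddress

storage_ip  = ipaddress.ip_address("10.101.7.212")   # storageip from the SSVM cmdline
storage_net = ipaddress.ip_network("10.101.6.0/23")  # storagenetmask 255.255.254.0
mgmt_net    = ipaddress.ip_network("10.101.0.0/22")  # mgmtcidr

print(storage_ip in storage_net)  # True  -> belongs on the storage interface
print(storage_ip in mgmt_net)     # False -> not on-link from the management side
```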
Regards,
Antoine
> On Feb 28, 2023, at 11:39 AM, Antoine Boucher <an...@haltondc.com> wrote:
>
> # root@s-340-VM:~# cat /var/cache/cloud/cmdline
>
> template=domP type=secstorage host=10.101.2.40 port=8250 name=s-340-VM zone=1 pod=1 guid=s-340-VM workers=5 authorized_key=****
> resource=com.cloud.storage.resource.PremiumSecondaryStorageResource instance=SecStorage sslcopy=true role=templateProcessor mtu=1500 eth2ip=148.59.36.60 eth2mask=255.255.255.240 gateway=148.59.36.49 public.network.device=eth2 eth0ip=169.254.211.29 eth0mask=255.255.0.0 eth1ip=10.101.3.231 eth1mask=255.255.252.0 mgmtcidr=10.101.0.0/22 localgw=10.101.0.1 private.network.device=eth1 eth3ip=10.101.7.212 eth3mask=255.255.254.0 storageip=10.101.7.212 storagenetmask=255.255.254.0 storagegateway=10.101.6.1 internaldns1=10.101.0.1 dns1=1.1.1.1 dns2=8.8.8.8 nfsVersion=null keystore_password=*****
>
>
> # cat /var/log/cloudstack/management/management-server.log.2023-02-*.gz | zgrep SecStorageSetupCommand
>
> 2023-02-18 14:35:38,699 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Seq 47-6546545008336437249: Sending { Cmd , MgmtId: 130593671224, via: 47(s-292-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-18 14:35:42,024 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-20 11:12:33,345 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq 43-719450040472436737: Sending { Cmd , MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-20 11:12:34,389 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Seq 47-4507540277044445185: Sending { Cmd , MgmtId: 130593671224, via: 47(s-292-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-20 11:12:37,406 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-20 11:59:14,400 WARN [c.c.a.m.AgentAttache] (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq 43-719450040472436737: Timed out on Seq 43-719450040472436737: { Cmd , MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 10100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-20 12:25:40,060 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq 48-8498011021871415297: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-20 12:25:43,138 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-20 12:25:43,159 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq 48-8498011021871415298: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-20 12:25:45,730 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-20 12:53:59,308 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq 48-3231051257661620225: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-20 12:54:01,842 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-20 12:54:01,871 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq 48-3231051257661620226: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-20 12:54:04,295 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-22 15:23:33,561 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Seq 50-2542563464627355649: Sending { Cmd , MgmtId: 130593671224, via: 50(s-324-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-22 15:23:36,815 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-26 14:46:39,737 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq 52-8409064929230848001: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-26 14:46:42,926 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-26 14:46:42,945 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq 52-8409064929230848002: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-26 14:46:45,435 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-26 15:07:11,934 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq 52-7356067041356283905: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-26 15:07:14,985 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-26 15:07:15,001 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq 52-7356067041356283906: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-26 15:07:17,516 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-26 16:03:33,807 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq 53-603482350067646465: Sending { Cmd , MgmtId: 130593671224, via: 53(s-340-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-26 16:03:37,126 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
> 2023-02-26 16:03:37,142 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq 53-603482350067646466: Sending { Cmd , MgmtId: 130593671224, via: 53(s-340-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
> 2023-02-26 16:03:39,890 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
>
>
> Antoine Boucher
>
>
>> On Feb 28, 2023, at 4:47 AM, Wei ZHOU <us...@gmail.com> wrote:
>>
>> The routes should use eth3 not eth1.
>>
>> Can you share the `/var/cache/cloud/cmdline` file in SSVM, and filter
>> management-server.log by keyword `SecStorageSetupCommand` ?
>>
>>
>> -Wei
>>
>> On Tue, 28 Feb 2023 at 10:42, Granwille Strauss
>> <granwille@namhost.com.invalid> wrote:
>>
>>> I recently had a similar issue and now that I look at my routing tables,
>>> storage goes via eth1 and not eth3. Full details on this here:
>>> https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
>>> This, therefore, also explains why I randomly got this error with my SSVM.
>>> On 2/28/23 11:35, Daan Hoogland wrote:
>>>
>>> does sound like a bug Antoine,
>>> I did the network calculation and it seems you are right.
>>> I wonder about the last two routes as well. did you do anything for those?
>>>
>>> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <antoineb@haltondc.com>
>>> wrote:
>>>
>>>
>>> Hello,
>>>
>>> I'm having a networking issue on SSVMs. I have the following networks
>>> defined in “Zone 1”.
>>>
>>> Management: 10.101.0.0/22
>>> Storage: 10.101.6.0/23
>>>
>>> All worked well until we configured new storage devices on
>>> 10.101.7.x; the hosts and management server have no issue, but the SSVM
>>> cannot reach them. Here are the SSVM's defined interfaces and its
>>> routing table:
>>>
>>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
>>> default qlen 1000
>>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>>> inet 127.0.0.1/8 scope host lo
>>> valid_lft forever preferred_lft forever
>>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>>> UP group default qlen 1000
>>> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
>>> altname enp0s3
>>> altname ens3
>>> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
>>> valid_lft forever preferred_lft forever
>>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>>> UP group default qlen 1000
>>> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
>>> altname enp0s4
>>> altname ens4
>>> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
>>> valid_lft forever preferred_lft forever
>>> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>>> UP group default qlen 1000
>>> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
>>> altname enp0s5
>>> altname ens5
>>> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
>>> valid_lft forever preferred_lft forever
>>> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>>> UP group default qlen 1000
>>> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
>>> altname enp0s6
>>> altname ens6
>>> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
>>> valid_lft forever preferred_lft forever
>>>
>>> default via 148.59.36.49 dev eth2
>>> 10.0.0.0/8 via 10.101.0.1 dev eth1
>>> 10.91.0.0/23 via 10.101.0.1 dev eth1
>>> 10.91.6.0/24 via 10.101.0.1 dev eth1
>>> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
>>> 10.101.6.0/23 via 10.101.0.1 dev eth1
>>> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
>>> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
>>> 172.16.0.0/12 via 10.101.0.1 dev eth1
>>> 192.168.0.0/16 via 10.101.0.1 dev eth1
>>>
>>> Why is 10.101.6.0/23 routed via eth1? Shouldn't it be using eth3?
>>> The SSVM also seems to bypass the gateway for 10.101.6.x destinations
>>> (I see no traffic going through the gateway), yet traffic to
>>> 10.101.7.x destinations does go through the gateway.
>>>
>>> If I modify the routing for 10.101.6.0/23 to eth3 all is well.
>>>
>>> Is this by design?
>>>
>>> Regards,
>>> Antoine Boucher
>>>
>>>
>>> --
>>> Regards / Groete
>>>
>>> Granwille Strauss // Senior Systems Admin
>>>
>>> *e:* granwille@namhost.com
>>> *m:* +264 81 323 1260
>>> *w:* www.namhost.com
>>>
>>> Namhost Internet Services (Pty) Ltd,
>>>
>>> 24 Black Eagle Rd, Hermanus, 7210, RSA
>>>
>>>
>>>
>
Re: SSVM routing issue on /23 storage network
Posted by Antoine Boucher <an...@haltondc.com>.
# root@s-340-VM:~# cat /var/cache/cloud/cmdline
template=domP type=secstorage host=10.101.2.40 port=8250 name=s-340-VM zone=1 pod=1 guid=s-340-VM workers=5 authorized_key=****
resource=com.cloud.storage.resource.PremiumSecondaryStorageResource instance=SecStorage sslcopy=true role=templateProcessor mtu=1500 eth2ip=148.59.36.60 eth2mask=255.255.255.240 gateway=148.59.36.49 public.network.device=eth2 eth0ip=169.254.211.29 eth0mask=255.255.0.0 eth1ip=10.101.3.231 eth1mask=255.255.252.0 mgmtcidr=10.101.0.0/22 localgw=10.101.0.1 private.network.device=eth1 eth3ip=10.101.7.212 eth3mask=255.255.254.0 storageip=10.101.7.212 storagenetmask=255.255.254.0 storagegateway=10.101.6.1 internaldns1=10.101.0.1 dns1=1.1.1.1 dns2=8.8.8.8 nfsVersion=null keystore_password=*****
# cat /var/log/cloudstack/management/management-server.log.2023-02-*.gz | zgrep SecStorageSetupCommand
2023-02-18 14:35:38,699 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Seq 47-6546545008336437249: Sending { Cmd , MgmtId: 130593671224, via: 47(s-292-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-18 14:35:42,024 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-290:ctx-cf94f90e) (logid:6dc1b961) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-20 11:12:33,345 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq 43-719450040472436737: Sending { Cmd , MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-20 11:12:34,389 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Seq 47-4507540277044445185: Sending { Cmd , MgmtId: 130593671224, via: 47(s-292-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-20 11:12:37,406 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-5:ctx-5834440f) (logid:2d16c04c) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-20 11:59:14,400 WARN [c.c.a.m.AgentAttache] (AgentConnectTaskPool-4:ctx-3b91dfcf) (logid:50ea75a6) Seq 43-719450040472436737: Timed out on Seq 43-719450040472436737: { Cmd , MgmtId: 130593671224, via: 43(s-289-VM), Ver: v1, Flags: 10100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-20 12:25:40,060 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq 48-8498011021871415297: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-20 12:25:43,138 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-20 12:25:43,159 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Seq 48-8498011021871415298: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-20 12:25:45,730 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-16:ctx-236c5a52) (logid:e701c43f) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-20 12:53:59,308 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq 48-3231051257661620225: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-20 12:54:01,842 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-20 12:54:01,871 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Seq 48-3231051257661620226: Sending { Cmd , MgmtId: 130593671224, via: 48(s-311-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-20 12:54:04,295 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-10:ctx-bf90573f) (logid:dc87251d) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-22 15:23:33,561 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Seq 50-2542563464627355649: Sending { Cmd , MgmtId: 130593671224, via: 50(s-324-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.91.6.5/volume1/ACS_Backup06","_role":"Image"}},"secUrl":"nfs://10.91.6.5/volume1/ACS_Backup06","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-22 15:23:36,815 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-23:ctx-22a41bf8) (logid:894a50f0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-26 14:46:39,737 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq 52-8409064929230848001: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-26 14:46:42,926 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-26 14:46:42,945 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Seq 52-8409064929230848002: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-26 14:46:45,435 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-26:ctx-50d91205) (logid:4c7783a0) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-26 15:07:11,934 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq 52-7356067041356283905: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-26 15:07:14,985 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-26 15:07:15,001 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Seq 52-7356067041356283906: Sending { Cmd , MgmtId: 130593671224, via: 52(s-339-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-26 15:07:17,516 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-27:ctx-8f2e92c1) (logid:4a947f5e) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-26 16:03:33,807 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq 53-603482350067646465: Sending { Cmd , MgmtId: 130593671224, via: 53(s-340-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.40/export/secondary","_role":"Image"}},"secUrl":"nfs://10.101.6.40/export/secondary","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-26 16:03:37,126 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
2023-02-26 16:03:37,142 DEBUG [c.c.a.t.Request] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Seq 53-603482350067646466: Sending { Cmd , MgmtId: 130593671224, via: 53(s-340-VM), Ver: v1, Flags: 100111, [{"com.cloud.agent.api.SecStorageSetupCommand":{"store":{"com.cloud.agent.api.to.NfsTO":{"_url":"nfs://10.101.6.23/mnt/Store08/CSBackup08","_role":"Image"}},"secUrl":"nfs://10.101.6.23/mnt/Store08/CSBackup08","certs":{},"postUploadKey":"89HVaWPMPwbWI-QGJrI5jzoGULt3lyZzHN4pnc-kn36Le5Hy_Hh3l6ZABLMgJXeBlA4vspDa-NyrxtBbJGj20A","wait":"0","bypassHostMaintenance":"false"}}] }
2023-02-26 16:03:39,890 DEBUG [c.c.a.m.AgentManagerImpl] (AgentConnectTaskPool-28:ctx-8b4b3cb8) (logid:6ae260b3) Details from executing class com.cloud.agent.api.SecStorageSetupCommand: success
Antoine Boucher
> On Feb 28, 2023, at 4:47 AM, Wei ZHOU <us...@gmail.com> wrote:
>
> The routes should use eth3 not eth1.
>
> Can you share the `/var/cache/cloud/cmdline` file in SSVM, and filter
> management-server.log by keyword `SecStorageSetupCommand` ?
>
>
> -Wei
>
> On Tue, 28 Feb 2023 at 10:42, Granwille Strauss
> <granwille@namhost.com.invalid <ma...@namhost.com.invalid>> wrote:
>
>> I recently had a similar issue and now that I look at my routing tables,
>> storage goes via eth1 and not eth3. Full details on this here:
>> https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
>> This, therefore, also explains why I randomly got this error with my SSVM.
>> On 2/28/23 11:35, Daan Hoogland wrote:
>>
>> does sound like a bug Antoine,
>> I did the network calculation and it seems you are right.
>> I wonder about the last two routes as well. did you do anything for those?
>>
>> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <antoineb@haltondc.com <ma...@haltondc.com>> <antoineb@haltondc.com <ma...@haltondc.com>>
>> wrote:
>>
>>
>> Hello,
>>
>> I'm having a networking issue on SSVMs, I have the following networks
>> defined in “Zone 1”.
>>
>> Management: 10.101.0.0/22
>> Storage: 10.101.6.0/23
>>
>> All worked well until we configured new storage devices on
>> 10.101.7.x; the hosts and the management server have no issue, but the
>> SSVM is not able to reach them. Here are the defined interfaces and the
>> routing table of the SSVM:
>>
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
>> default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>> valid_lft forever preferred_lft forever
>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
>> altname enp0s3
>> altname ens3
>> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
>> valid_lft forever preferred_lft forever
>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
>> altname enp0s4
>> altname ens4
>> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
>> valid_lft forever preferred_lft forever
>> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
>> altname enp0s5
>> altname ens5
>> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
>> valid_lft forever preferred_lft forever
>> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
>> altname enp0s6
>> altname ens6
>> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
>> valid_lft forever preferred_lft forever
>>
>> default via 148.59.36.49 dev eth2
>> 10.0.0.0/8 via 10.101.0.1 dev eth1
>> 10.91.0.0/23 via 10.101.0.1 dev eth1
>> 10.91.6.0/24 via 10.101.0.1 dev eth1
>> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
>> 10.101.6.0/23 via 10.101.0.1 dev eth1
>> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
>> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
>> 172.16.0.0/12 via 10.101.0.1 dev eth1
>> 192.168.0.0/16 via 10.101.0.1 dev eth1
>>
>> Why is the routing for 10.101.6.0/23 going via eth1; shouldn't it be
>> using eth3? The router seems to be bypassing the routing rules for
>> 10.101.6.x, since I see no traffic going through the gateway, but I do see
>> traffic going through the gateway when the destination is 10.101.7.x.
>>
>> If I modify the routing for 10.101.6.0/23 to eth3 all is well.
>>
>> Is this by design?
>>
>> Regards,
>> Antoine Boucher
>>
>>
>> --
>> Regards / Groete
>>
>> <https://www.namhost.com> Granwille Strauss // Senior Systems Admin
>>
>> *e:* granwille@namhost.com
>> *m:* +264 81 323 1260
>> *w:* www.namhost.com
>>
>> <https://www.facebook.com/namhost> <https://twitter.com/namhost>
>> <https://www.instagram.com/namhostinternetservices/>
>> <https://www.linkedin.com/company/namhos>
>> <https://www.youtube.com/channel/UCTd5v-kVPaic_dguGur15AA>
>>
>>
>> <https://www.adsigner.com/v1/l/631091998d4670001fe43ec2/621c9b76c140bb001ed0f818/banner>
>>
>> Namhost Internet Services (Pty) Ltd,
>>
>> 24 Black Eagle Rd, Hermanus, 7210, RSA
>>
>>
>>
>> The content of this message is confidential. If you have received it by
>> mistake, please inform us by email reply and then delete the message. It is
>> forbidden to copy, forward, or in any way reveal the contents of this
>> message to anyone without our explicit consent. The integrity and security
>> of this email cannot be guaranteed over the Internet. Therefore, the sender
>> will not be held liable for any damage caused by the message. For our full
>> privacy policy and disclaimers, please go to
>> https://www.namhost.com/privacy-policy
>>
>> [image: Powered by AdSigner]
>> <https://www.adsigner.com/v1/c/631091998d4670001fe43ec2/621c9b76c140bb001ed0f818>
Re: SSVM routing issue on /23 storage network
Posted by Wei ZHOU <us...@gmail.com>.
The routes should use eth3 not eth1.
Can you share the `/var/cache/cloud/cmdline` file in SSVM, and filter
management-server.log by keyword `SecStorageSetupCommand` ?
-Wei
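Wei's point can be illustrated with a toy longest-prefix-match sketch (Python, my own simplification of the kernel's route selection, not CloudStack code): as long as the table carries `10.101.6.0/23 via 10.101.0.1 dev eth1`, that entry is the most specific match for every 10.101.6.x and 10.101.7.x destination, so storage traffic leaves via eth1 instead of the dedicated eth3 NIC.

```python
import ipaddress

# Simplified view of the SSVM routing table from the thread: destination -> device
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "eth2",       # default via public NIC
    ipaddress.ip_network("10.0.0.0/8"): "eth1",
    ipaddress.ip_network("10.101.0.0/22"): "eth1",   # management, connected
    ipaddress.ip_network("10.101.6.0/23"): "eth1",   # the suspect route
}

def pick_route(dst: str) -> str:
    """Longest-prefix match: the most specific network containing dst wins."""
    addr = ipaddress.ip_address(dst)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)
    return routes[best]

print(pick_route("10.101.6.23"))  # eth1 -- storage traffic wrongly leaves via eth1
print(pick_route("10.101.7.50"))  # eth1 -- same for the new 10.101.7.x devices
```

Changing the `10.101.6.0/23` entry to `dev eth3` (Antoine's manual workaround) makes the same lookup resolve to the storage NIC.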
On Tue, 28 Feb 2023 at 10:42, Granwille Strauss
<gr...@namhost.com.invalid> wrote:
> I recently had a similar issue and now that I look at my routing tables,
> storage goes via eth1 and not eth3. Full details on this here:
> https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
> This, therefore, also explains why I randomly got this error with my SSVM.
> On 2/28/23 11:35, Daan Hoogland wrote:
>
> does sound like a bug Antoine,
> I did the network calculation and it seems you are right.
> I wonder about the last two routes as well. did you do anything for those?
>
> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <an...@haltondc.com>
> wrote:
>
>
> Hello,
>
> I'm having a networking issue on SSVMs; I have the following networks
> defined in “Zone 1”:
>
> Management: 10.101.0.0/22
> Storage: 10.101.6.0/23
>
> All worked well until we configured new storage devices on
> 10.101.7.x; the hosts and the management server have no issue, but the
> SSVM is not able to reach them. Here are the defined interfaces and the
> routing table of the SSVM:
>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
> default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
> altname enp0s3
> altname ens3
> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
> valid_lft forever preferred_lft forever
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
> altname enp0s4
> altname ens4
> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
> valid_lft forever preferred_lft forever
> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
> altname enp0s5
> altname ens5
> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
> valid_lft forever preferred_lft forever
> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
> altname enp0s6
> altname ens6
> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
> valid_lft forever preferred_lft forever
>
> default via 148.59.36.49 dev eth2
> 10.0.0.0/8 via 10.101.0.1 dev eth1
> 10.91.0.0/23 via 10.101.0.1 dev eth1
> 10.91.6.0/24 via 10.101.0.1 dev eth1
> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
> 10.101.6.0/23 via 10.101.0.1 dev eth1
> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
> 172.16.0.0/12 via 10.101.0.1 dev eth1
> 192.168.0.0/16 via 10.101.0.1 dev eth1
>
> Why is the routing for 10.101.6.0/23 going via eth1; shouldn't it be
> using eth3? The router seems to be bypassing the routing rules for
> 10.101.6.x, since I see no traffic going through the gateway, but I do see
> traffic going through the gateway when the destination is 10.101.7.x.
>
> If I modify the routing for 10.101.6.0/23 to eth3 all is well.
>
> Is this by design?
>
> Regards,
> Antoine Boucher
>
>
Re: SSVM routing issue on /23 storage network
Posted by Granwille Strauss <gr...@namhost.com.INVALID>.
I recently had a similar issue and now that I look at my routing tables,
storage goes via eth1 and not eth3. Full details on this here:
https://github.com/apache/cloudstack/issues/7244#issuecomment-1434755523
This, therefore, also explains why I randomly got this error with my SSVM.
On 2/28/23 11:35, Daan Hoogland wrote:
> does sound like a bug Antoine,
> I did the network calculation and it seems you are right.
> I wonder about the last two routes as well. did you do anything for those?
>
> On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <an...@haltondc.com>
> wrote:
>
>> Hello,
>>
>> I'm having a networking issue on SSVMs; I have the following networks
>> defined in “Zone 1”:
>>
>> Management: 10.101.0.0/22
>> Storage: 10.101.6.0/23
>>
>> All worked well until we configured new storage devices on
>> 10.101.7.x; the hosts and the management server have no issue, but the
>> SSVM is not able to reach them. Here are the defined interfaces and the
>> routing table of the SSVM:
>>
>> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
>> default qlen 1000
>> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>> inet 127.0.0.1/8 scope host lo
>> valid_lft forever preferred_lft forever
>> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
>> altname enp0s3
>> altname ens3
>> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
>> valid_lft forever preferred_lft forever
>> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
>> altname enp0s4
>> altname ens4
>> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
>> valid_lft forever preferred_lft forever
>> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
>> altname enp0s5
>> altname ens5
>> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
>> valid_lft forever preferred_lft forever
>> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
>> UP group default qlen 1000
>> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
>> altname enp0s6
>> altname ens6
>> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
>> valid_lft forever preferred_lft forever
>>
>> default via 148.59.36.49 dev eth2
>> 10.0.0.0/8 via 10.101.0.1 dev eth1
>> 10.91.0.0/23 via 10.101.0.1 dev eth1
>> 10.91.6.0/24 via 10.101.0.1 dev eth1
>> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
>> 10.101.6.0/23 via 10.101.0.1 dev eth1
>> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
>> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
>> 172.16.0.0/12 via 10.101.0.1 dev eth1
>> 192.168.0.0/16 via 10.101.0.1 dev eth1
>>
>> Why is the routing for 10.101.6.0/23 going via eth1; shouldn't it be
>> using eth3? The router seems to be bypassing the routing rules for
>> 10.101.6.x, since I see no traffic going through the gateway, but I do see
>> traffic going through the gateway when the destination is 10.101.7.x.
>>
>> If I modify the routing for 10.101.6.0/23 to eth3 all is well.
>>
>> Is this by design?
>>
>> Regards,
>> Antoine Boucher
>>
>
Re: SSVM routing issue on /23 storage network
Posted by Daan Hoogland <da...@gmail.com>.
does sound like a bug Antoine,
I did the network calculation and it seems you are right.
I wonder about the last two routes as well. did you do anything for those?
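Daan's network calculation can be checked mechanically; a small illustration with Python's stdlib `ipaddress` module (my own sketch): eth3 carries 10.101.7.226/23, so its connected network is exactly the 10.101.6.0/23 storage CIDR, and both 10.101.6.x and 10.101.7.x should be on-link for eth3.

```python
import ipaddress

# eth3 carries 10.101.7.226/23 per the SSVM interface listing
eth3_net = ipaddress.ip_interface("10.101.7.226/23").network
storage = ipaddress.ip_network("10.101.6.0/23")

print(eth3_net)             # 10.101.6.0/23
print(storage == eth3_net)  # True: storage addresses are directly reachable on eth3
```

Which is why a `10.101.6.0/23 via 10.101.0.1 dev eth1` route looks wrong: it sends on-link storage traffic through the management gateway instead.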
On Sun, Feb 26, 2023 at 9:47 PM Antoine Boucher <an...@haltondc.com>
wrote:
> Hello,
>
> I'm having a networking issue on SSVMs; I have the following networks
> defined in “Zone 1”:
>
> Management: 10.101.0.0/22
> Storage: 10.101.6.0/23
>
> All worked well until we configured new storage devices on
> 10.101.7.x; the hosts and the management server have no issue, but the
> SSVM is not able to reach them. Here are the defined interfaces and the
> routing table of the SSVM:
>
> 1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group
> default qlen 1000
> link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
> inet 127.0.0.1/8 scope host lo
> valid_lft forever preferred_lft forever
> 2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether 0e:00:a9:fe:72:ec brd ff:ff:ff:ff:ff:ff
> altname enp0s3
> altname ens3
> inet 169.254.114.236/16 brd 169.254.255.255 scope global eth0
> valid_lft forever preferred_lft forever
> 3: eth1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether 1e:00:a1:00:00:06 brd ff:ff:ff:ff:ff:ff
> altname enp0s4
> altname ens4
> inet 10.101.3.205/22 brd 10.101.3.255 scope global eth1
> valid_lft forever preferred_lft forever
> 4: eth2: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether 1e:00:6b:00:00:3d brd ff:ff:ff:ff:ff:ff
> altname enp0s5
> altname ens5
> inet 148.59.36.61/28 brd 148.59.36.63 scope global eth2
> valid_lft forever preferred_lft forever
> 5: eth3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc pfifo_fast state
> UP group default qlen 1000
> link/ether 1e:00:09:00:00:6b brd ff:ff:ff:ff:ff:ff
> altname enp0s6
> altname ens6
> inet 10.101.7.226/23 brd 10.101.7.255 scope global eth3
> valid_lft forever preferred_lft forever
>
> default via 148.59.36.49 dev eth2
> 10.0.0.0/8 via 10.101.0.1 dev eth1
> 10.91.0.0/23 via 10.101.0.1 dev eth1
> 10.91.6.0/24 via 10.101.0.1 dev eth1
> 10.101.0.0/22 dev eth1 proto kernel scope link src 10.101.3.253
> 10.101.6.0/23 via 10.101.0.1 dev eth1
> 148.59.36.48/28 dev eth2 proto kernel scope link src 148.59.36.61
> 169.254.0.0/16 dev eth0 proto kernel scope link src 169.254.232.208
> 172.16.0.0/12 via 10.101.0.1 dev eth1
> 192.168.0.0/16 via 10.101.0.1 dev eth1
>
> Why is the routing for 10.101.6.0/23 going via eth1; shouldn't it be
> using eth3? The router seems to be bypassing the routing rules for
> 10.101.6.x, since I see no traffic going through the gateway, but I do see
> traffic going through the gateway when the destination is 10.101.7.x.
>
> If I modify the routing for 10.101.6.0/23 to eth3 all is well.
>
> Is this by design?
>
> Regards,
> Antoine Boucher
>
--
Daan