Posted to users@cloudstack.apache.org by Alessandro Caviglione <c....@gmail.com> on 2019/06/24 23:15:19 UTC

Very BIG mess with networking

Hi guys,
I'm experiencing a big issue with networking.
First of all, we're running CS 4.11.2 on XS 6.5 with Advanced Networking.
Our XS Pool network conf was:
- eth0 Management
- eth1 empty
- eth2 Public
- eth3 empty
- eth4 + eth5: bond LACP for GuestVM

During our last maintenance window last week, we decided to create a new
active-passive bond for Management (eth0 + eth1) and a new bond for Public
(eth2 + eth3).
In addition, we wanted to change the GuestVM bond from LACP to active-active.
So, from the pool master, we created a new bond and put the eth0 + eth1 interfaces in it.

MGT_NET_UUID=$(xe network-create name-label=Management)
PMI_PIF_UUID=$(xe pif-list host-uuid=xxx management=true params=uuid | awk '{ print $5 }')
MGT_PIF0_UUID=$(xe pif-list host-uuid=xxx device=eth0 params=uuid | awk '{ print $5 }')
MGT_PIF1_UUID=$(xe pif-list host-uuid=xxx device=eth1 params=uuid | awk '{ print $5 }')
xe bond-create network-uuid=$MGT_NET_UUID \
  pif-uuids=$PMI_PIF_UUID,$MGT_PIF1_UUID mode=active-backup
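
For completeness, something like the following should list the new bond and its
member PIFs afterwards (just a sketch - I'm quoting the xe field names from memory,
so verify them on 6.5):

xe bond-list params=uuid,master,slaves,mode
xe pif-list network-uuid=$MGT_NET_UUID params=uuid,device,host-uuid,currently-attached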

I used the same method to create the new bond for the Public network (obviously
changing the NICs).
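
In other words, something along these lines (a sketch - the host-uuid placeholder
and the --minimal flag are shorthand for the awk pipeline above; note that
network-create produces a second network carrying the Public name-label while the
original one still exists, which is what CloudStack complains about further down):

PUB_NET_UUID=$(xe network-create name-label=Public)
PUB_PIF2_UUID=$(xe pif-list host-uuid=xxx device=eth2 params=uuid --minimal)
PUB_PIF3_UUID=$(xe pif-list host-uuid=xxx device=eth3 params=uuid --minimal)
# bond mode assumed here; the post does not say which mode was used for Public
xe bond-create network-uuid=$PUB_NET_UUID \
  pif-uuids=$PUB_PIF2_UUID,$PUB_PIF3_UUID mode=active-backup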

To change the bond mode for the GuestVM network I used:

xe pif-list device=bond0 VLAN=-1
xe pif-param-set uuid=<Bond0 UUID> other-config:bond-mode=balance-slb
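
For what it's worth, XenServer 6.x also has a dedicated command that changes the
mode directly (a sketch - double-check that bond-set-mode behaves the same on 6.5;
as far as I recall, the other-config approach above only takes effect once the bond
PIF is re-plugged or the host rebooted):

# the bond object behind the bond0 PIF
BOND_UUID=$(xe pif-list device=bond0 VLAN=-1 host-uuid=xxx params=bond-master-of --minimal)
xe bond-set-mode uuid=$BOND_UUID mode=balance-slb
xe bond-param-get uuid=$BOND_UUID param-name=mode   # should now report balance-slb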

I repeated the Public and GuestVM commands on each of the three hosts in the
pool; for Management I did it only on the pool master.
After that I restarted the toolstack and (after the issue I'll explain below) also
rebooted every host.
However, this is the result:
- Existing VMs run fine and I can stop, start, and migrate them
- New VMs that require a new network fail to start
- A "restart network with cleanup" fails and leaves the network and its instances
unavailable
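
A quick way to see whether the pool has ended up with two networks sharing a
name-label - which is what the CS log below then trips over - is something like
this (field list from memory):

xe network-list name-label=Public params=uuid,name-label,bridge,PIF-uuids
xe network-list name-label=GuestVM params=uuid,name-label,bridge,PIF-uuids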

This is the CS log:

2019-06-25 00:14:35,751 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created VM
dbb045ef-5072-96ef-fbb0-1d7e3af0a0ea for r-899-VM
2019-06-25 00:14:35,756 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) PV args are
%template=domP%name=r-899-VM%eth2ip=31.44.38.53%eth2mask=255.255.255.240%gateway=31.44.38.49%eth0ip=10.122.12.1%eth0mask=255.255.255.0%domain=
tet.com
%cidrsize=24%dhcprange=10.122.12.1%eth1ip=169.254.3.241%eth1mask=255.255.0.0%type=router%disable_rp_filter=true%dns1=8.8.8.8%dns2=8.8.4.4
2019-06-25 00:14:35,757 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) HVM args are  template=domP
name=r-899-VM eth2ip=31.44.38.53 eth2mask=255.255.255.240
gateway=31.44.38.49 eth0ip=10.122.12.1 eth0mask=255.255.255.0 domain=tet.com
cidrsize=24 dhcprange=10.122.12.1 eth1ip=169.254.3.241 eth1mask=255.255.0.0
type=router disable_rp_filter=true dns1=8.8.8.8 dns2=8.8.4.4
2019-06-25 00:14:35,790 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) VBD
ec5e1e54-5902-cbcb-a6a8-abec0479d27c created for
com.cloud.agent.api.to.DiskTO@36117847
2019-06-25 00:14:35,790 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM on
nic [Nic:Public-31.44.38.53-vlan://untagged]
2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
Public
2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than one network
with the name Public
2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
Public on host=192.168.200.39;
 Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c
2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM on
nic [Nic:Guest-10.122.12.1-vlan://384]
2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
GuestVM
2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
GuestVM on host=192.168.200.39;
 Network=300e55f0-88ff-a460-e498-e75424bc292a;
pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc
2019-06-25 00:14:35,826 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VLAN 384 on host
192.168.200.39 on device bond0
2019-06-25 00:14:36,467 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) VLAN is created for 384.
The uuid is e34dc684-8a87-7ef6-5a49-8214011f8c3c
2019-06-25 00:14:36,480 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
94558211-6bc6-ce64-9535-6d424b2b072c on 0
2019-06-25 00:14:36,481 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM on
nic [Nic:Control-169.254.3.241-null]
2019-06-25 00:14:36,531 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) already have a vif on dom0
for link local network
2019-06-25 00:14:36,675 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
1ee58631-8524-ffa8-7ef7-e0acde48449f on 1
2019-06-25 00:14:37,688 WARN  [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Task failed! Task record:
              uuid: 3177dcfc-ade9-3079-00f3-191efa1f90b2
           nameLabel: Async.VM.start_on
     nameDescription:
   allowedOperations: []
   currentOperations: {}
             created: Tue Jun 25 00:14:42 CEST 2019
            finished: Tue Jun 25 00:14:42 CEST 2019
              status: failure
          residentOn: com.xensource.xenapi.Host@85c62ee8
            progress: 1.0
                type: <none/>
              result:
           errorInfo: [HOST_CANNOT_ATTACH_NETWORK,
OpaqueRef:a323ff49-04bb-3e0f-50e3-b1bc7ef31630,
OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a]
         otherConfig: {}
           subtaskOf: com.xensource.xenapi.Task@aaf13f6f
            subtasks: []

 And this is the xensource.log:

 INET 0.0.0.0:80|VBD.create R:47cd5359e955|audit] VBD.create: VM =
'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; VDI =
'f80241ff-2a00-4565-bfa4-a980f1462f3e'
 INET 0.0.0.0:80|VBD.create R:47cd5359e955|xapi] Checking whether there's a
migrate in progress...
 INET 0.0.0.0:80|VBD.create R:47cd5359e955|xapi] VBD.create (device = 0;
uuid = f2b9e26f-66d2-d939-e2a1-04af3ae41ac9; ref =
OpaqueRef:48deb7cb-2d3b-8370-107a-68ba25213c3e)
 INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|audit] VIF.create: VM =
'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
'9fa48b75-d68e-feaf-2eb4-8a7340f8c89b'
 INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] VIF.create running
 INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] Found mac_seed on VM:
supplied MAC parameter = '1e:00:19:00:00:4e'
 INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] VIF
ref='OpaqueRef:96cfcfaa-cf0f-e8a2-6ff5-4029815999b1' created (VM =
'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
'1e:00:19:00:00:4e')
 INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|audit] VLAN.create: network =
'2ce5f897-376f-4562-46c5-6f161106584b'; VLAN tag = 363
 INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xapi] Session.create
trackid=70355f146f1dd9d4873b9fdde8d1b8eb pool=true uname= originator=
is_local_superuser=true auth_user_sid=
parent=trackid=ab7ab58a3d7585f75b89abdab8725787
 INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|mscgen] xapi=>xapi
[label="(XML)"];
 UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:session.get_uuid
D:9eca0ccc2d01 created by task R:6046e28d22a8
 INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|mscgen] xapi=>dst_xapi
[label="(XML)"];
 INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client] stunnel pid:
10191 (cached = true) connected to 192.168.200.36:443
 INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client]
with_recorded_stunnelpid
task_opt=OpaqueRef:6046e28d-22a8-72de-9c3e-b08b87bb1ca6 s_pid=10191
 INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client] stunnel pid:
10191 (cached = true) returned stunnel to cache
 INET 0.0.0.0:80|local logout in message forwarder D:457fa1a34193|xapi]
Session.destroy trackid=70355f146f1dd9d4873b9fdde8d1b8eb
 INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|taskhelper] the status of
R:6046e28d22a8 is: success; cannot set it to `success
 INET 0.0.0.0:80|VIF.create R:320e6896912a|audit] VIF.create: VM =
'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
'2ce5f897-376f-4562-46c5-6f161106584b'
 INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] VIF.create running
 INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] Found mac_seed on VM:
supplied MAC parameter = '02:00:19:9e:00:02'
 INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] VIF
ref='OpaqueRef:d4db98cf-6c5c-bb9f-999a-291c33e5c0cd' created (VM =
'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
'02:00:19:9e:00:02')
 INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|audit] Host.call_plugin
host = '18d712b3-f64a-4178-9404-144f4f8fce2f (LIONARCH)'; plugin = 'vmops';
fn = 'setLinkLocalIP'; args = [ brName: xapi4 ]
 INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xapi] Session.create
trackid=f3e9f44a74e7dad26498e75c6de5eeba pool=true uname= originator=
is_local_superuser=true auth_user_sid=
parent=trackid=ab7ab58a3d7585f75b89abdab8725787
 INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|mscgen] xapi=>xapi
[label="(XML)"];
 UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:session.get_uuid
D:30eddea1c3fd created by task R:620d4a6a391b
 INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|mscgen] xapi=>dst_xapi
[label="(XML)"];
 INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client] stunnel
pid: 10224 (cached = true) connected to 192.168.200.36:443
 INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client]
with_recorded_stunnelpid
task_opt=OpaqueRef:620d4a6a-391b-e414-e4f3-40ea69e89d7a s_pid=10224
 INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client] stunnel
pid: 10224 (cached = true) returned stunnel to cache
 INET 0.0.0.0:80|local logout in message forwarder D:f89f813a5123|xapi]
Session.destroy trackid=f3e9f44a74e7dad26498e75c6de5eeba
 INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|taskhelper] the status of
R:620d4a6a391b is: success; cannot set it to `success
 INET 0.0.0.0:80|VIF.create R:1685ff9e765e|audit] VIF.create: VM =
'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
'e667320f-0e48-4cc6-6329-d723846cf8be'
 INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] VIF.create running
 INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] Found mac_seed on VM:
supplied MAC parameter = '0e:00:a9:fe:01:73'
 INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] VIF
ref='OpaqueRef:36a3163e-2a8e-fda1-7e13-0d6802b8ba12' created (VM =
'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
'0e:00:a9:fe:01:73')
|Async.VM.start_on R:96fa354a6a0d|dispatcher] spawning a new thread to
handle the current task (trackid=ab7ab58a3d7585f75b89abdab8725787)
|Async.VM.start_on R:96fa354a6a0d|audit] VM.start_on: VM =
'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; host
'18d712b3-f64a-4178-9404-144f4f8fce2f (LIONARCH)'
|Async.VM.start_on R:96fa354a6a0d|xapi] No operations are valid because
current-operations = [ OpaqueRef:96fa354a-6a0d-da55-f1d4-15eaa51d6640 ->
attach ]
es
|Async.VM.start_on R:96fa354a6a0d|xapi] The VM's BIOS strings were not yet
filled in. The VM is now made BIOS-generic.
|Async.VM.start_on R:96fa354a6a0d|xapi] Checking whether VM
OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e can run on host
OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e
|Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at
xapi_network_attach_helpers.ml:50.8-90 -> list.ml:69.12-15 ->
xapi_vm_helpers.ml:379.4-111
|Async.VM.start_on R:96fa354a6a0d|xapi] Caught exception while checking if
network OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a could be attached on
host OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e:CANNOT_PLUG_BOND_SLAVE:
[ OpaqueRef:b9238dea-802c-d8da-a7b7-ee5bc642a615 ]
|Async.VM.start_on R:96fa354a6a0d|xapi] Raised at xapi_vm_helpers.ml:390.10-134
-> list.ml:69.12-15 -> xapi_vm_helpers.ml:507.1-47 ->
message_forwarding.ml:932.5-85 -> threadext.ml:20.20-24 ->
threadext.ml:20.62-65
-> message_forwarding.ml:40.25-57 -> message_forwarding.ml:1262.9-276 ->
pervasiveext.ml:22.2-9
|Async.VM.start_on R:96fa354a6a0d|xapi] Raised at pervasiveext.ml:26.22-25
-> pervasiveext.ml:22.2-9
|Async.VM.start_on R:96fa354a6a0d|xapi] Raised at pervasiveext.ml:26.22-25
-> pervasiveext.ml:22.2-9
|Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at pervasiveext.ml:26.22-25
-> message_forwarding.ml:1248.3-1023 -> rbac.ml:229.16-23
|Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at rbac.ml:238.10-15 ->
server_helpers.ml:79.11-41
|Async.VM.start_on R:96fa354a6a0d|dispatcher] Server_helpers.exec
exception_handler: Got exception HOST_CANNOT_ATTACH_NETWORK: [
OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e;
OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a ]

Please, PLEASE, let me know if anyone knows how to fix this!

Re: Very BIG mess with networking

Posted by Andrija Panic <an...@gmail.com>.
I'm fine with just a beer :P

Glad you solved that one!

On Wed, 26 Jun 2019 at 11:53, Alessandro Caviglione <c....@gmail.com>
wrote:

> Hi Andrija,
> I want to say a big THANK YOU for your suggestions, I changed bond mode and
> Network name on XenSerevr Pool and updated network name on CS... and it
> works now!!
> Thank you again!!
> A big hug! :)
>
> On Tue, Jun 25, 2019 at 12:45 PM Andrija Panic <an...@gmail.com>
> wrote:
>
> > I would say, to make sure you have identical network/bond setup as before
> > (irrelevant of which chils pifs are in the bond/network) - so from
> > CloudStack point of view, you did zero changes (same networks/bind).
> >
> > Limited reading capabilities from myside on mobile...but I would say that
> > pluging in the vif for a guest Network is the problem - in XenServer
> logs,
> > you can clearly see error while joining slave interface (vif) to the
> > network - but again, not sure which network is it (public or guest
> Network)
> >
> > On Tue, Jun 25, 2019, 10:50 Alessandro Caviglione <
> c.alessandro@gmail.com>
> > wrote:
> >
> > > No, new bond have the same name...
> > > In the log there is:
> > >
> > >  Found more than one network with the name Public
> > > 2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) *Found a network called
> > > Public on host=192.168.200.39;
> > >  Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
> > > pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c*
> > > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> > > dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
> > > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for
> r-899-VM
> > on
> > > nic [Nic:Guest-10.122.12.1-vlan://384]
> > > 2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network
> named
> > > GuestVM
> > > 2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> > > GuestVM on host=192.168.200.39;
> > >  Network=300e55f0-88ff-a460-e498-e75424bc292a;
> > > pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc
> > >
> > > So it seems that, however, it get the right Network and continue to
> > looking
> > > for other network.
> > > Do you think this is the issue also if CS continue with other tasks
> > instead
> > > stops founding more than one network Public?
> > > Could I simply change network name in Xen Pool and update in CS?
> > >
> > > On Tue, Jun 25, 2019 at 9:52 AM Andrija Panic <andrija.panic@gmail.com
> >
> > > wrote:
> > >
> > > > If your new bond have changes name, have you changed also XenServer
> > > Traffic
> > > > Label in CloudStack ?
> > > > Active-active is known to be sometimes very problematic, switch back
> to
> > > > active-passive until you solve your issues. Experiment later with
> > > > active-active.
> > > >
> > > >
> > > >
> > > > On Tue, Jun 25, 2019, 09:34 Michael Kesper <mk...@schokokeks.org>
> > > wrote:
> > > >
> > > > > Hi Alessandro,
> > > > >
> > > > > On 25.06.19 08:43, Alessandro Caviglione wrote:
> > > > > > complains on more than one network with name Publci...  ???
> > > > >
> > > > > [...]
> > > > >
> > > > > >> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for
> network
> > > > named
> > > > > >> Public
> > > > > >> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than
> one
> > > > > > network
> > > > > >> with the name Public
> > > > >
> > > > > Bye
> > > > > Michael
> > > > >
> > > > >
> > > > >
> > > >
> > >
> >
>


-- 

Andrija Panić

Re: Very BIG mess with networking

Posted by Alessandro Caviglione <c....@gmail.com>.
Hi Andrija,
I want to say a big THANK YOU for your suggestions. I changed the bond mode and
the network name on the XenServer pool and updated the network name in CS... and it
works now!!
Thank you again!!
A big hug! :)

On Tue, Jun 25, 2019 at 12:45 PM Andrija Panic <an...@gmail.com>
wrote:

> I would say, to make sure you have identical network/bond setup as before
> (irrelevant of which chils pifs are in the bond/network) - so from
> CloudStack point of view, you did zero changes (same networks/bind).
>
> Limited reading capabilities from myside on mobile...but I would say that
> pluging in the vif for a guest Network is the problem - in XenServer logs,
> you can clearly see error while joining slave interface (vif) to the
> network - but again, not sure which network is it (public or guest Network)
>
> On Tue, Jun 25, 2019, 10:50 Alessandro Caviglione <c....@gmail.com>
> wrote:
>
> > No, new bond have the same name...
> > In the log there is:
> >
> >  Found more than one network with the name Public
> > 2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) *Found a network called
> > Public on host=192.168.200.39;
> >  Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
> > pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c*
> > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> > dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
> > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM
> on
> > nic [Nic:Guest-10.122.12.1-vlan://384]
> > 2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> > GuestVM
> > 2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> > GuestVM on host=192.168.200.39;
> >  Network=300e55f0-88ff-a460-e498-e75424bc292a;
> > pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc
> >
> > So it seems that, however, it get the right Network and continue to
> looking
> > for other network.
> > Do you think this is the issue also if CS continue with other tasks
> instead
> > stops founding more than one network Public?
> > Could I simply change network name in Xen Pool and update in CS?
> >
> > On Tue, Jun 25, 2019 at 9:52 AM Andrija Panic <an...@gmail.com>
> > wrote:
> >
> > > If your new bond have changes name, have you changed also XenServer
> > Traffic
> > > Label in CloudStack ?
> > > Active-active is known to be sometimes very problematic, switch back to
> > > active-passive until you solve your issues. Experiment later with
> > > active-active.
> > >
> > >
> > >
> > > On Tue, Jun 25, 2019, 09:34 Michael Kesper <mk...@schokokeks.org>
> > wrote:
> > >
> > > > Hi Alessandro,
> > > >
> > > > On 25.06.19 08:43, Alessandro Caviglione wrote:
> > > > > complains on more than one network with name Publci...  ???
> > > >
> > > > [...]
> > > >
> > > > >> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network
> > > named
> > > > >> Public
> > > > >> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than one
> > > > > network
> > > > >> with the name Public
> > > >
> > > > Bye
> > > > Michael
> > > >
> > > >
> > > >
> > >
> >
>

Re: Very BIG mess with networking

Posted by Andrija Panic <an...@gmail.com>.
I would say, make sure you have an identical network/bond setup as before
(irrespective of which child PIFs are in the bond/network) - so that, from
CloudStack's point of view, you made zero changes (same networks/bonds).

Limited reading capabilities from my side on mobile... but I would say that
plugging in the VIF for a guest network is the problem - in the XenServer logs
you can clearly see an error while joining a slave interface (VIF) to the
network - but again, I'm not sure which network it is (Public or guest network)
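
If it helps, a rough way to check that from dom0 (typing from memory, so verify
the parameter names):

# for each network named Public, see which PIF backs it
xe network-list name-label=Public params=uuid,bridge,PIF-uuids
# then check whether a suspect PIF has become a bond slave
# (that would explain the CANNOT_PLUG_BOND_SLAVE in the log)
xe pif-param-list uuid=<pif uuid> | egrep 'device|bond-slave-of|currently-attached'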

On Tue, Jun 25, 2019, 10:50 Alessandro Caviglione <c....@gmail.com>
wrote:

> No, new bond have the same name...
> In the log there is:
>
>  Found more than one network with the name Public
> 2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) *Found a network called
> Public on host=192.168.200.39;
>  Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
> pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c*
> 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
> 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM on
> nic [Nic:Guest-10.122.12.1-vlan://384]
> 2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> GuestVM
> 2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> GuestVM on host=192.168.200.39;
>  Network=300e55f0-88ff-a460-e498-e75424bc292a;
> pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc
>
> So it seems that, however, it get the right Network and continue to looking
> for other network.
> Do you think this is the issue also if CS continue with other tasks instead
> stops founding more than one network Public?
> Could I simply change network name in Xen Pool and update in CS?
>
> On Tue, Jun 25, 2019 at 9:52 AM Andrija Panic <an...@gmail.com>
> wrote:
>
> > If your new bond have changes name, have you changed also XenServer
> Traffic
> > Label in CloudStack ?
> > Active-active is known to be sometimes very problematic, switch back to
> > active-passive until you solve your issues. Experiment later with
> > active-active.
> >
> >
> >
> > On Tue, Jun 25, 2019, 09:34 Michael Kesper <mk...@schokokeks.org>
> wrote:
> >
> > > Hi Alessandro,
> > >
> > > On 25.06.19 08:43, Alessandro Caviglione wrote:
> > > > complains on more than one network with name Publci...  ???
> > >
> > > [...]
> > >
> > > >> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network
> > named
> > > >> Public
> > > >> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than one
> > > > network
> > > >> with the name Public
> > >
> > > Bye
> > > Michael
> > >
> > >
> > >
> >
>

Re: Very BIG mess with networking

Posted by Alessandro Caviglione <c....@gmail.com>.
No, the new bonds have the same names as before...
In the log there is:

 Found more than one network with the name Public
2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) *Found a network called
Public on host=192.168.200.39;
 Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c*
2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM on
nic [Nic:Guest-10.122.12.1-vlan://384]
2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
GuestVM
2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
(DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
GuestVM on host=192.168.200.39;
 Network=300e55f0-88ff-a460-e498-e75424bc292a;
pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc

So it seems that it does get the right network in the end, but it keeps looking
for other networks.
Do you think this is the issue, even though CS continues with the other tasks
instead of stopping when it finds more than one network named Public?
Could I simply change the network name in the Xen pool and update it in CS?
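
Something like this is what I have in mind (just a sketch - the CloudStack side
would mean updating the XenServer traffic label on the physical network, via the
UI or the updateTrafficType API):

# rename the newly created Public network on the XenServer pool so the labels are unambiguous
xe network-param-set uuid=<new Public network uuid> name-label=Public-bond
# then point the CloudStack public traffic type at that label
# (Infrastructure -> Zone -> Physical Network -> Public -> XenServer traffic label,
#  or via cloudmonkey: update traffictype id=<traffic type id> xennetworklabel=Public-bond)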

On Tue, Jun 25, 2019 at 9:52 AM Andrija Panic <an...@gmail.com>
wrote:

> If your new bond have changes name, have you changed also XenServer Traffic
> Label in CloudStack ?
> Active-active is known to be sometimes very problematic, switch back to
> active-passive until you solve your issues. Experiment later with
> active-active.
>
>
>
> On Tue, Jun 25, 2019, 09:34 Michael Kesper <mk...@schokokeks.org> wrote:
>
> > Hi Alessandro,
> >
> > On 25.06.19 08:43, Alessandro Caviglione wrote:
> > > complains on more than one network with name Publci...  ???
> >
> > [...]
> >
> > >> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network
> named
> > >> Public
> > >> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than one
> > > network
> > >> with the name Public
> >
> > Bye
> > Michael
> >
> >
> >
>

Re: Very BIG mess with networking

Posted by Andrija Panic <an...@gmail.com>.
If your new bond has a changed name, have you also changed the XenServer Traffic
Label in CloudStack?
Active-active is known to be sometimes very problematic; switch back to
active-passive until you solve your issues. Experiment later with
active-active.
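
Roughly, per host (from memory - verify the exact commands on 6.5):

# see which labels CloudStack expects for each traffic type on the physical network (cloudmonkey)
list traffictypes physicalnetworkid=<physical network id>
# and put the guest bond back into active-passive for now
xe bond-set-mode uuid=<bond uuid> mode=active-backup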



On Tue, Jun 25, 2019, 09:34 Michael Kesper <mk...@schokokeks.org> wrote:

> Hi Alessandro,
>
> On 25.06.19 08:43, Alessandro Caviglione wrote:
> > complains on more than one network with name Publci...  ???
>
> [...]
>
> >> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> >> Public
> >> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> >> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than one
> > network
> >> with the name Public
>
> Bye
> Michael
>
>
>

Re: Very BIG mess with networking

Posted by Michael Kesper <mk...@schokokeks.org>.
Hi Alessandro,

On 25.06.19 08:43, Alessandro Caviglione wrote:
> complains on more than one network with name Publci...  ???

[...]

>> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
>> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
>> Public
>> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
>> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than one
> network
>> with the name Public

Bye
Michael



Re: Very BIG mess with networking

Posted by Alessandro Caviglione <c....@gmail.com>.
It complains about more than one network with the name Public...  ???

On Tue, Jun 25, 2019 at 6:41 AM Andrija Panic <an...@gmail.com>
wrote:

> Quick one...
>
>
> Issues creating slave interface are reported by XenS.
> Active-active ia VERY problematic on XenS, revert to lacp or something.acs
> complains on more than one network with name Publci...
>
>
> Best...
>
> On Tue, Jun 25, 2019, 01:15 Alessandro Caviglione <c....@gmail.com>
> wrote:
>
> > Hi guys,
> > I'm experiencing a big issue with networking.
> > First of all, we're running CS 4.11.2 on XS 6.5 with Advanced Networking.
> > Our XS Pool network conf was:
> > - eth0 Management
> > - eth1 empty
> > - eth2 Public
> > - eth3 empty
> > - eth4 + eth5: bond LACP for GuestVM
> >
> > During our last maintenance last week, we decided to create a new bond
> > active-passive for Management (eth0 + eth1) and a new bond for Public
> > (eth2 + eth3)
> > In addition, we would change GuestVM bond from LACP to active-active.
> > So, we created from pool master a new bond and put eth0 + eth1 interface
> > in.
> >
> > MGT_NET_UUID=$(xe network-create name-label=Management)
> > PMI_PIF_UUID=$(xe pif-list host-uuid=xxx management=true params=uuid |
> awk
> > '{ print $5 }')
> > MGT_PIF0_UUID=$(xe pif-list host-uuid=xxx device=eth0 params=uuid | awk
> '{
> > print $5 }')
> > MGT_PIF1_UUID=$(xe pif-list host-uuid=xxx device=eth1 params=uuid | awk
> '{
> > print $5 }')
> > xe bond-create network-uuid=$MGT_NET_UUID
> > pif-uuids=$PMI_PIF_UUID,$MGT_PIF1_UUID mode=active-backup
> >
> > I used the same method to create new bond for Public network (obviously
> > changing nics).
> >
> > To change bond mode for GuestVM network I've used:
> >
> > xe pif-list device=bond0 VLAN=-1
> > xe pif-param-set uuid=<Bond0 UUID> other-config:bond-mode=balance-slb
> >
> > I've repeated Public and GuestVM commands on each of the three hosts in
> the
> > pool, for Management I've done it only on Pool Master.
> > After that I've restarted toolstack and (after the issue I'll explain)
> also
> > reboot every host.
> > However, this is the result:
> > - Existing VMs runs fine and I can stop, start, migrate
> > - New VM that require new network will fail
> > - restart network with clean will fail and makes network and instance
> > unavailable
> >
> > This is the CS log:
> >
> > 2019-06-25 00:14:35,751 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created VM
> > dbb045ef-5072-96ef-fbb0-1d7e3af0a0ea for r-899-VM
> > 2019-06-25 00:14:35,756 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) PV args are
> >
> >
> %template=domP%name=r-899-VM%eth2ip=31.44.38.53%eth2mask=255.255.255.240%gateway=31.44.38.49%eth0ip=10.122.12.1%eth0mask=255.255.255.0%domain=
> > tet.com
> >
> >
> %cidrsize=24%dhcprange=10.122.12.1%eth1ip=169.254.3.241%eth1mask=255.255.0.0%type=router%disable_rp_filter=true%dns1=8.8.8.8%dns2=8.8.4.4
> > 2019-06-25 00:14:35,757 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) HVM args are
> template=domP
> > name=r-899-VM eth2ip=31.44.38.53 eth2mask=255.255.255.240
> > gateway=31.44.38.49 eth0ip=10.122.12.1 eth0mask=255.255.255.0 domain=
> > tet.com
> > cidrsize=24 dhcprange=10.122.12.1 eth1ip=169.254.3.241
> eth1mask=255.255.0.0
> > type=router disable_rp_filter=true dns1=8.8.8.8 dns2=8.8.4.4
> > 2019-06-25 00:14:35,790 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) VBD
> > ec5e1e54-5902-cbcb-a6a8-abec0479d27c created for
> > com.cloud.agent.api.to.DiskTO@36117847
> > 2019-06-25 00:14:35,790 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM
> on
> > nic [Nic:Public-31.44.38.53-vlan://untagged]
> > 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> > Public
> > 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than one
> network
> > with the name Public
> > 2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> > Public on host=192.168.200.39;
> >  Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
> > pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c
> > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> > dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
> > 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM
> on
> > nic [Nic:Guest-10.122.12.1-vlan://384]
> > 2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> > GuestVM
> > 2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> > GuestVM on host=192.168.200.39;
> >  Network=300e55f0-88ff-a460-e498-e75424bc292a;
> > pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc
> > 2019-06-25 00:14:35,826 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VLAN 384 on host
> > 192.168.200.39 on device bond0
> > 2019-06-25 00:14:36,467 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) VLAN is created for 384.
> > The uuid is e34dc684-8a87-7ef6-5a49-8214011f8c3c
> > 2019-06-25 00:14:36,480 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> > 94558211-6bc6-ce64-9535-6d424b2b072c on 0
> > 2019-06-25 00:14:36,481 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM
> on
> > nic [Nic:Control-169.254.3.241-null]
> > 2019-06-25 00:14:36,531 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) already have a vif on dom0
> > for link local network
> > 2019-06-25 00:14:36,675 DEBUG [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> > 1ee58631-8524-ffa8-7ef7-e0acde48449f on 1
> > 2019-06-25 00:14:37,688 WARN  [c.c.h.x.r.CitrixResourceBase]
> > (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Task failed! Task record:
> >               uuid: 3177dcfc-ade9-3079-00f3-191efa1f90b2
> >            nameLabel: Async.VM.start_on
> >      nameDescription:
> >    allowedOperations: []
> >    currentOperations: {}
> >              created: Tue Jun 25 00:14:42 CEST 2019
> >             finished: Tue Jun 25 00:14:42 CEST 2019
> >               status: failure
> >           residentOn: com.xensource.xenapi.Host@85c62ee8
> >             progress: 1.0
> >                 type: <none/>
> >               result:
> >            errorInfo: [HOST_CANNOT_ATTACH_NETWORK,
> > OpaqueRef:a323ff49-04bb-3e0f-50e3-b1bc7ef31630,
> > OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a]
> >          otherConfig: {}
> >            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
> >             subtasks: []
> >
> >  And this is the xensource.log
> >
> >  INET 0.0.0.0:80|VBD.create R:47cd5359e955|audit] VBD.create: VM =
> > 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; VDI =
> > 'f80241ff-2a00-4565-bfa4-a980f1462f3e'
> >  INET 0.0.0.0:80|VBD.create R:47cd5359e955|xapi] Checking whether
> there's
> > a
> > migrate in progress...
> >  INET 0.0.0.0:80|VBD.create R:47cd5359e955|xapi] VBD.create (device = 0;
> > uuid = f2b9e26f-66d2-d939-e2a1-04af3ae41ac9; ref =
> > OpaqueRef:48deb7cb-2d3b-8370-107a-68ba25213c3e)
> >  INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|audit] VIF.create: VM =
> > 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
> > '9fa48b75-d68e-feaf-2eb4-8a7340f8c89b'
> >  INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] VIF.create running
> >  INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] Found mac_seed on VM:
> > supplied MAC parameter = '1e:00:19:00:00:4e'
> >  INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] VIF
> > ref='OpaqueRef:96cfcfaa-cf0f-e8a2-6ff5-4029815999b1' created (VM =
> > 'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
> > '1e:00:19:00:00:4e')
> >  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|audit] VLAN.create: network
> =
> > '2ce5f897-376f-4562-46c5-6f161106584b'; VLAN tag = 363
> >  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xapi] Session.create
> > trackid=70355f146f1dd9d4873b9fdde8d1b8eb pool=true uname= originator=
> > is_local_superuser=true auth_user_sid=
> > parent=trackid=ab7ab58a3d7585f75b89abdab8725787
> >  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|mscgen] xapi=>xapi
> > [label="(XML)"];
> >  UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:session.get_uuid
> > D:9eca0ccc2d01 created by task R:6046e28d22a8
> >  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|mscgen] xapi=>dst_xapi
> > [label="(XML)"];
> >  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client] stunnel pid:
> > 10191 (cached = true) connected to 192.168.200.36:443
> >  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client]
> > with_recorded_stunnelpid
> > task_opt=OpaqueRef:6046e28d-22a8-72de-9c3e-b08b87bb1ca6 s_pid=10191
> >  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client] stunnel pid:
> > 10191 (cached = true) returned stunnel to cache
> >  INET 0.0.0.0:80|local logout in message forwarder D:457fa1a34193|xapi]
> > Session.destroy trackid=70355f146f1dd9d4873b9fdde8d1b8eb
> >  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|taskhelper] the status of
> > R:6046e28d22a8 is: success; cannot set it to `success
> >  INET 0.0.0.0:80|VIF.create R:320e6896912a|audit] VIF.create: VM =
> > 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
> > '2ce5f897-376f-4562-46c5-6f161106584b'
> >  INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] VIF.create running
> >  INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] Found mac_seed on VM:
> > supplied MAC parameter = '02:00:19:9e:00:02'
> >  INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] VIF
> > ref='OpaqueRef:d4db98cf-6c5c-bb9f-999a-291c33e5c0cd' created (VM =
> > 'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
> > '02:00:19:9e:00:02')
> >  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|audit] Host.call_plugin
> > host = '18d712b3-f64a-4178-9404-144f4f8fce2f (LIONARCH)'; plugin =
> 'vmops';
> > fn = 'setLinkLocalIP'; args = [ brName: xapi4 ]
> >  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xapi] Session.create
> > trackid=f3e9f44a74e7dad26498e75c6de5eeba pool=true uname= originator=
> > is_local_superuser=true auth_user_sid=
> > parent=trackid=ab7ab58a3d7585f75b89abdab8725787
> >  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|mscgen] xapi=>xapi
> > [label="(XML)"];
> >  UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:session.get_uuid
> > D:30eddea1c3fd created by task R:620d4a6a391b
> >  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|mscgen] xapi=>dst_xapi
> > [label="(XML)"];
> >  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client] stunnel
> > pid: 10224 (cached = true) connected to 192.168.200.36:443
> >  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client]
> > with_recorded_stunnelpid
> > task_opt=OpaqueRef:620d4a6a-391b-e414-e4f3-40ea69e89d7a s_pid=10224
> >  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client] stunnel
> > pid: 10224 (cached = true) returned stunnel to cache
> >  INET 0.0.0.0:80|local logout in message forwarder D:f89f813a5123|xapi]
> > Session.destroy trackid=f3e9f44a74e7dad26498e75c6de5eeba
> >  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|taskhelper] the status
> of
> > R:620d4a6a391b is: success; cannot set it to `success
> >  INET 0.0.0.0:80|VIF.create R:1685ff9e765e|audit] VIF.create: VM =
> > 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
> > 'e667320f-0e48-4cc6-6329-d723846cf8be'
> >  INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] VIF.create running
> >  INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] Found mac_seed on VM:
> > supplied MAC parameter = '0e:00:a9:fe:01:73'
> >  INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] VIF
> > ref='OpaqueRef:36a3163e-2a8e-fda1-7e13-0d6802b8ba12' created (VM =
> > 'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
> > '0e:00:a9:fe:01:73')
> > |Async.VM.start_on R:96fa354a6a0d|dispatcher] spawning a new thread to
> > handle the current task (trackid=ab7ab58a3d7585f75b89abdab8725787)
> > |Async.VM.start_on R:96fa354a6a0d|audit] VM.start_on: VM =
> > 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; host
> > '18d712b3-f64a-4178-9404-144f4f8fce2f (LIONARCH)'
> > |Async.VM.start_on R:96fa354a6a0d|xapi] No operations are valid because
> > current-operations = [ OpaqueRef:96fa354a-6a0d-da55-f1d4-15eaa51d6640 ->
> > attach ]
> > es
> > |Async.VM.start_on R:96fa354a6a0d|xapi] The VM's BIOS strings were not
> yet
> > filled in. The VM is now made BIOS-generic.
> > |Async.VM.start_on R:96fa354a6a0d|xapi] Checking whether VM
> > OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e can run on host
> > OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e
> > |Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at
> > xapi_network_attach_helpers.ml:50.8-90 -> list.ml:69.12-15 ->
> > xapi_vm_helpers.ml:379.4-111
> > |Async.VM.start_on R:96fa354a6a0d|xapi] Caught exception while checking
> if
> > network OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a could be attached
> on
> > host
> OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e:CANNOT_PLUG_BOND_SLAVE:
> > [ OpaqueRef:b9238dea-802c-d8da-a7b7-ee5bc642a615 ]
> > |Async.VM.start_on R:96fa354a6a0d|xapi] Raised at xapi_vm_helpers.ml:390
> > .10-134
> > -> list.ml:69.12-15 -> xapi_vm_helpers.ml:507.1-47 ->
> > message_forwarding.ml:932.5-85 -> threadext.ml:20.20-24 ->
> > threadext.ml:20.62-65
> > -> message_forwarding.ml:40.25-57 -> message_forwarding.ml:1262.9-276 ->
> > pervasiveext.ml:22.2-9
> > |Async.VM.start_on R:96fa354a6a0d|xapi] Raised at pervasiveext.ml:26
> .22-25
> > -> pervasiveext.ml:22.2-9
> > |Async.VM.start_on R:96fa354a6a0d|xapi] Raised at pervasiveext.ml:26
> .22-25
> > -> pervasiveext.ml:22.2-9
> > |Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at
> pervasiveext.ml:26
> > .22-25
> > -> message_forwarding.ml:1248.3-1023 -> rbac.ml:229.16-23
> > |Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at rbac.ml:238.10-15
> > ->
> > server_helpers.ml:79.11-41
> > |Async.VM.start_on R:96fa354a6a0d|dispatcher] Server_helpers.exec
> > exception_handler: Got exception HOST_CANNOT_ATTACH_NETWORK: [
> > OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e;
> > OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a ]
> >
> > Please, PLEASE, let me know that someone knows how to fix it!
> >
>

Re: Very BIG mess with networking

Posted by Andrija Panic <an...@gmail.com>.
Quick one...


Issues creating the slave interface are reported by XenServer.
Active-active is VERY problematic on XenServer; revert to LACP or something. ACS
complains about more than one network with the name Public...
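
To revert to LACP that would be roughly, per host (assuming the switch ports still
run LACP and the vSwitch backend is in use, which LACP requires):

xe bond-set-mode uuid=<bond uuid> mode=lacp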


Best...

On Tue, Jun 25, 2019, 01:15 Alessandro Caviglione <c....@gmail.com>
wrote:

> Hi guys,
> I'm experiencing a big issue with networking.
> First of all, we're running CS 4.11.2 on XS 6.5 with Advanced Networking.
> Our XS Pool network conf was:
> - eth0 Management
> - eth1 empty
> - eth2 Public
> - eth3 empty
> - eth4 + eth5: bond LACP for GuestVM
>
> During our last maintenance last week, we decided to create a new bond
> active-passive for Management (eth0 + eth1) and a new bond for Public
> (eth2 + eth3)
> In addition, we would change GuestVM bond from LACP to active-active.
> So, we created from pool master a new bond and put eth0 + eth1 interface
> in.
>
> MGT_NET_UUID=$(xe network-create name-label=Management)
> PMI_PIF_UUID=$(xe pif-list host-uuid=xxx management=true params=uuid | awk
> '{ print $5 }')
> MGT_PIF0_UUID=$(xe pif-list host-uuid=xxx device=eth0 params=uuid | awk '{
> print $5 }')
> MGT_PIF1_UUID=$(xe pif-list host-uuid=xxx device=eth1 params=uuid | awk '{
> print $5 }')
> xe bond-create network-uuid=$MGT_NET_UUID
> pif-uuids=$PMI_PIF_UUID,$MGT_PIF1_UUID mode=active-backup
>
> I used the same method to create new bond for Public network (obviously
> changing nics).
>
> To change bond mode for GuestVM network I've used:
>
> xe pif-list device=bond0 VLAN=-1
> xe pif-param-set uuid=<Bond0 UUID> other-config:bond-mode=balance-slb
>
> I've repeated Public and GuestVM commands on each of the three hosts in the
> pool, for Management I've done it only on Pool Master.
> After that I've restarted toolstack and (after the issue I'll explain) also
> reboot every host.
> However, this is the result:
> - Existing VMs runs fine and I can stop, start, migrate
> - New VM that require new network will fail
> - restart network with clean will fail and makes network and instance
> unavailable
>
> This is the CS log:
>
> 2019-06-25 00:14:35,751 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created VM
> dbb045ef-5072-96ef-fbb0-1d7e3af0a0ea for r-899-VM
> 2019-06-25 00:14:35,756 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) PV args are
>
> %template=domP%name=r-899-VM%eth2ip=31.44.38.53%eth2mask=255.255.255.240%gateway=31.44.38.49%eth0ip=10.122.12.1%eth0mask=255.255.255.0%domain=
> tet.com
>
> %cidrsize=24%dhcprange=10.122.12.1%eth1ip=169.254.3.241%eth1mask=255.255.0.0%type=router%disable_rp_filter=true%dns1=8.8.8.8%dns2=8.8.4.4
> 2019-06-25 00:14:35,757 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) HVM args are  template=domP
> name=r-899-VM eth2ip=31.44.38.53 eth2mask=255.255.255.240
> gateway=31.44.38.49 eth0ip=10.122.12.1 eth0mask=255.255.255.0 domain=
> tet.com
> cidrsize=24 dhcprange=10.122.12.1 eth1ip=169.254.3.241 eth1mask=255.255.0.0
> type=router disable_rp_filter=true dns1=8.8.8.8 dns2=8.8.4.4
> 2019-06-25 00:14:35,790 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) VBD
> ec5e1e54-5902-cbcb-a6a8-abec0479d27c created for
> com.cloud.agent.api.to.DiskTO@36117847
> 2019-06-25 00:14:35,790 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM on
> nic [Nic:Public-31.44.38.53-vlan://untagged]
> 2019-06-25 00:14:35,792 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> Public
> 2019-06-25 00:14:35,793 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found more than one network
> with the name Public
> 2019-06-25 00:14:35,802 DEBUG [c.c.h.x.r.XsLocalNetwork]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> Public on host=192.168.200.39;
>  Network=9fa48b75-d68e-feaf-2eb4-8a7340f8c89b;
> pif=ca4c1679-fa36-bc93-37de-28a74ddc4f2c
> 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> dfaab3d7-7921-e4d5-ba27-537e8d549a5c on 2
> 2019-06-25 00:14:35,807 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM on
> nic [Nic:Guest-10.122.12.1-vlan://384]
> 2019-06-25 00:14:35,809 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Looking for network named
> GuestVM
> 2019-06-25 00:14:35,825 DEBUG [c.c.h.x.r.XsLocalNetwork]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Found a network called
> GuestVM on host=192.168.200.39;
>  Network=300e55f0-88ff-a460-e498-e75424bc292a;
> pif=b67841c5-6361-0dbf-a63d-a3e9c1b9f2fc
> 2019-06-25 00:14:35,826 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VLAN 384 on host
> 192.168.200.39 on device bond0
> 2019-06-25 00:14:36,467 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) VLAN is created for 384.
> The uuid is e34dc684-8a87-7ef6-5a49-8214011f8c3c
> 2019-06-25 00:14:36,480 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> 94558211-6bc6-ce64-9535-6d424b2b072c on 0
> 2019-06-25 00:14:36,481 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Creating VIF for r-899-VM on
> nic [Nic:Control-169.254.3.241-null]
> 2019-06-25 00:14:36,531 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) already have a vif on dom0
> for link local network
> 2019-06-25 00:14:36,675 DEBUG [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Created a vif
> 1ee58631-8524-ffa8-7ef7-e0acde48449f on 1
> 2019-06-25 00:14:37,688 WARN  [c.c.h.x.r.CitrixResourceBase]
> (DirectAgent-35:ctx-c5073156) (logid:0d9e7907) Task failed! Task record:
>               uuid: 3177dcfc-ade9-3079-00f3-191efa1f90b2
>            nameLabel: Async.VM.start_on
>      nameDescription:
>    allowedOperations: []
>    currentOperations: {}
>              created: Tue Jun 25 00:14:42 CEST 2019
>             finished: Tue Jun 25 00:14:42 CEST 2019
>               status: failure
>           residentOn: com.xensource.xenapi.Host@85c62ee8
>             progress: 1.0
>                 type: <none/>
>               result:
>            errorInfo: [HOST_CANNOT_ATTACH_NETWORK,
> OpaqueRef:a323ff49-04bb-3e0f-50e3-b1bc7ef31630,
> OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a]
>          otherConfig: {}
>            subtaskOf: com.xensource.xenapi.Task@aaf13f6f
>             subtasks: []
>
>  And this is the xensource.log
>
>  INET 0.0.0.0:80|VBD.create R:47cd5359e955|audit] VBD.create: VM =
> 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; VDI =
> 'f80241ff-2a00-4565-bfa4-a980f1462f3e'
>  INET 0.0.0.0:80|VBD.create R:47cd5359e955|xapi] Checking whether there's
> a
> migrate in progress...
>  INET 0.0.0.0:80|VBD.create R:47cd5359e955|xapi] VBD.create (device = 0;
> uuid = f2b9e26f-66d2-d939-e2a1-04af3ae41ac9; ref =
> OpaqueRef:48deb7cb-2d3b-8370-107a-68ba25213c3e)
>  INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|audit] VIF.create: VM =
> 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
> '9fa48b75-d68e-feaf-2eb4-8a7340f8c89b'
>  INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] VIF.create running
>  INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] Found mac_seed on VM:
> supplied MAC parameter = '1e:00:19:00:00:4e'
>  INET 0.0.0.0:80|VIF.create R:ac85e8d39cd8|xapi] VIF
> ref='OpaqueRef:96cfcfaa-cf0f-e8a2-6ff5-4029815999b1' created (VM =
> 'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
> '1e:00:19:00:00:4e')
>  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|audit] VLAN.create: network =
> '2ce5f897-376f-4562-46c5-6f161106584b'; VLAN tag = 363
>  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xapi] Session.create
> trackid=70355f146f1dd9d4873b9fdde8d1b8eb pool=true uname= originator=
> is_local_superuser=true auth_user_sid=
> parent=trackid=ab7ab58a3d7585f75b89abdab8725787
>  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|mscgen] xapi=>xapi
> [label="(XML)"];
>  UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:session.get_uuid
> D:9eca0ccc2d01 created by task R:6046e28d22a8
>  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|mscgen] xapi=>dst_xapi
> [label="(XML)"];
>  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client] stunnel pid:
> 10191 (cached = true) connected to 192.168.200.36:443
>  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client]
> with_recorded_stunnelpid
> task_opt=OpaqueRef:6046e28d-22a8-72de-9c3e-b08b87bb1ca6 s_pid=10191
>  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|xmlrpc_client] stunnel pid:
> 10191 (cached = true) returned stunnel to cache
>  INET 0.0.0.0:80|local logout in message forwarder D:457fa1a34193|xapi]
> Session.destroy trackid=70355f146f1dd9d4873b9fdde8d1b8eb
>  INET 0.0.0.0:80|VLAN.create R:6046e28d22a8|taskhelper] the status of
> R:6046e28d22a8 is: success; cannot set it to `success
>  INET 0.0.0.0:80|VIF.create R:320e6896912a|audit] VIF.create: VM =
> 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
> '2ce5f897-376f-4562-46c5-6f161106584b'
>  INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] VIF.create running
>  INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] Found mac_seed on VM:
> supplied MAC parameter = '02:00:19:9e:00:02'
>  INET 0.0.0.0:80|VIF.create R:320e6896912a|xapi] VIF
> ref='OpaqueRef:d4db98cf-6c5c-bb9f-999a-291c33e5c0cd' created (VM =
> 'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
> '02:00:19:9e:00:02')
>  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|audit] Host.call_plugin
> host = '18d712b3-f64a-4178-9404-144f4f8fce2f (LIONARCH)'; plugin = 'vmops';
> fn = 'setLinkLocalIP'; args = [ brName: xapi4 ]
>  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xapi] Session.create
> trackid=f3e9f44a74e7dad26498e75c6de5eeba pool=true uname= originator=
> is_local_superuser=true auth_user_sid=
> parent=trackid=ab7ab58a3d7585f75b89abdab8725787
>  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|mscgen] xapi=>xapi
> [label="(XML)"];
>  UNIX /var/xapi/xapi||dummytaskhelper] task dispatch:session.get_uuid
> D:30eddea1c3fd created by task R:620d4a6a391b
>  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|mscgen] xapi=>dst_xapi
> [label="(XML)"];
>  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client] stunnel
> pid: 10224 (cached = true) connected to 192.168.200.36:443
>  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client]
> with_recorded_stunnelpid
> task_opt=OpaqueRef:620d4a6a-391b-e414-e4f3-40ea69e89d7a s_pid=10224
>  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|xmlrpc_client] stunnel
> pid: 10224 (cached = true) returned stunnel to cache
>  INET 0.0.0.0:80|local logout in message forwarder D:f89f813a5123|xapi]
> Session.destroy trackid=f3e9f44a74e7dad26498e75c6de5eeba
>  INET 0.0.0.0:80|host.call_plugin R:620d4a6a391b|taskhelper] the status of
> R:620d4a6a391b is: success; cannot set it to `success
>  INET 0.0.0.0:80|VIF.create R:1685ff9e765e|audit] VIF.create: VM =
> 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; network =
> 'e667320f-0e48-4cc6-6329-d723846cf8be'
>  INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] VIF.create running
>  INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] Found mac_seed on VM:
> supplied MAC parameter = '0e:00:a9:fe:01:73'
>  INET 0.0.0.0:80|VIF.create R:1685ff9e765e|xapi] VIF
> ref='OpaqueRef:36a3163e-2a8e-fda1-7e13-0d6802b8ba12' created (VM =
> 'OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e'; MAC address =
> '0e:00:a9:fe:01:73')
> |Async.VM.start_on R:96fa354a6a0d|dispatcher] spawning a new thread to
> handle the current task (trackid=ab7ab58a3d7585f75b89abdab8725787)
> |Async.VM.start_on R:96fa354a6a0d|audit] VM.start_on: VM =
> 'dc9b4f1a-960c-8f02-a34c-32ad20e053f6 (r-899-VM)'; host
> '18d712b3-f64a-4178-9404-144f4f8fce2f (LIONARCH)'
> |Async.VM.start_on R:96fa354a6a0d|xapi] No operations are valid because
> current-operations = [ OpaqueRef:96fa354a-6a0d-da55-f1d4-15eaa51d6640 ->
> attach ]
> es
> |Async.VM.start_on R:96fa354a6a0d|xapi] The VM's BIOS strings were not yet
> filled in. The VM is now made BIOS-generic.
> |Async.VM.start_on R:96fa354a6a0d|xapi] Checking whether VM
> OpaqueRef:d3de28ef-ef26-02f7-46aa-ec01fe5c035e can run on host
> OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e
> |Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at
> xapi_network_attach_helpers.ml:50.8-90 -> list.ml:69.12-15 ->
> xapi_vm_helpers.ml:379.4-111
> |Async.VM.start_on R:96fa354a6a0d|xapi] Caught exception while checking if
> network OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a could be attached on
> host OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e:CANNOT_PLUG_BOND_SLAVE:
> [ OpaqueRef:b9238dea-802c-d8da-a7b7-ee5bc642a615 ]
> |Async.VM.start_on R:96fa354a6a0d|xapi] Raised at xapi_vm_helpers.ml:390
> .10-134
> -> list.ml:69.12-15 -> xapi_vm_helpers.ml:507.1-47 ->
> message_forwarding.ml:932.5-85 -> threadext.ml:20.20-24 ->
> threadext.ml:20.62-65
> -> message_forwarding.ml:40.25-57 -> message_forwarding.ml:1262.9-276 ->
> pervasiveext.ml:22.2-9
> |Async.VM.start_on R:96fa354a6a0d|xapi] Raised at pervasiveext.ml:26.22-25
> -> pervasiveext.ml:22.2-9
> |Async.VM.start_on R:96fa354a6a0d|xapi] Raised at pervasiveext.ml:26.22-25
> -> pervasiveext.ml:22.2-9
> |Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at pervasiveext.ml:26
> .22-25
> -> message_forwarding.ml:1248.3-1023 -> rbac.ml:229.16-23
> |Async.VM.start_on R:96fa354a6a0d|backtrace] Raised at rbac.ml:238.10-15
> ->
> server_helpers.ml:79.11-41
> |Async.VM.start_on R:96fa354a6a0d|dispatcher] Server_helpers.exec
> exception_handler: Got exception HOST_CANNOT_ATTACH_NETWORK: [
> OpaqueRef:f0e56fe4-004c-31af-4767-531e22622c1e;
> OpaqueRef:f9e9b47a-28c3-5031-a8fb-5e103c970a8a ]
>
> Please, PLEASE, let me know that someone knows how to fix it!
>