Posted to users@cloudstack.apache.org by "S.Fuller" <st...@gmail.com> on 2022/10/28 15:35:34 UTC

Live Migration fails - Cannot get interface MTU - No such device

I'm working on migrating an existing cluster to new servers. I have two
new servers, which I tested by adding them to their own cluster within an
existing pod, and I was able to successfully complete live migrations
between these two servers. I then removed the servers from this test
cluster and added them to the cluster that contains the nodes I want to
retire. While I can move VMs off of the new servers to the nodes I want
to retire, when I attempt to live migrate TO the new nodes, I receive the
error in the subject of this post.

When the VM is running on a "new" node, the source bridge for its network
interface is "brbond1-<VLAN_ID>". When it's running on an "old" node,
it's "breth2-<VLAN_ID>".

The new servers have a slightly different network configuration than the
old servers. The old servers had a single interface directly assigned to
each of the bridge networks, plus one for storage. The new servers have
two bonded interfaces (each with two physical NICs assigned), which are
then assigned to the bridge networks.
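(In case it's useful, a quick sysfs walk shows which ports sit under each
bridge and which NICs are enslaved to each bond on a given host; this is
just a diagnostic sketch:)

    # List bridge ports and bond slaves straight from /sys/class/net.
    # Diagnostic sketch; Linux-specific.
    import os

    NET = "/sys/class/net"
    for dev in sorted(os.listdir(NET)):
        brif = os.path.join(NET, dev, "brif")                 # bridges
        slaves = os.path.join(NET, dev, "bonding", "slaves")  # bonds
        if os.path.isdir(brif):
            print(f"bridge {dev}: ports={sorted(os.listdir(brif))}")
        elif os.path.isfile(slaves):
            with open(slaves) as f:
                print(f"bond {dev}: slaves={f.read().split()}")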

Seeing as the live migration worked between the new nodes in their own
cluster, as well as from the new nodes to the old nodes, I assumed
migration from the old nodes to the new ones would work as well. I'm not
sure how to start troubleshooting this, and I'm hoping the answer isn't
"the network configurations need to be identical".

-- 
Steve Fuller
stevefuller@gmail.com

Re: Live Migration fails - Cannot get interface MTU - No such device

Posted by Wei ZHOU <us...@gmail.com>.
Hi Steve,

CloudStack is able to handle VM migration between hosts with different
device names. There is a libvirt hook present on KVM hosts at
/etc/libvirt/hooks/qemu which rewrites the bridge names in the domain
XML during an incoming migration.
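A minimal sketch of the idea (the hook CloudStack actually installs is
more involved; the bridge prefixes below are just the examples from your
setup):

    #!/usr/bin/env python3
    # Sketch of a libvirt qemu hook (/etc/libvirt/hooks/qemu). For the
    # "migrate" operation libvirt pipes the incoming domain XML to
    # stdin and uses whatever the hook writes to stdout, so bridge
    # names can be rewritten to match the destination host.
    # Illustrative only.
    import sys

    operation = sys.argv[2] if len(sys.argv) > 2 else ""
    xml = sys.stdin.read()
    if operation == "migrate":
        # e.g. breth2-100 -> brbond1-100
        xml = xml.replace("breth2-", "brbond1-")
    sys.stdout.write(xml)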

-Wei

Re: Live Migration fails - Cannot get interface MTU - No such device

Posted by "S.Fuller" <st...@gmail.com>.
Alex - Thanks for the reply. As I dug into this a bit more, I noticed
that the network my instance's NIC was associated with was in a different
cluster. Now that I've associated that VM's NIC with a network within the
same cluster, things are working as expected.

-- 
Steve Fuller
stevefuller@gmail.com

RE: Live Migration fails - Cannot get interface MTU - No such device

Posted by Alex Mattioli <Al...@shapeblue.com>.
Hi Steve,
I'd assume you have "brbond1" set as the guest traffic label for that
zone; that being the case, the other servers need to match it. ACS uses
the traffic labels to map the virtual networks to the physical NICs/bonds.
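If you want to double-check what's configured, something like this
against the API lists the label per traffic type (a sketch using the
third-party "cs" Python client; endpoint and keys are placeholders):

    # Print the KVM traffic label for each traffic type on each
    # physical network. Uses the third-party "cs" client
    # (pip install cs); the endpoint and keys are placeholders.
    from cs import CloudStack

    api = CloudStack(endpoint="http://mgmt.example.com:8080/client/api",
                     key="API_KEY", secret="SECRET_KEY")
    for net in api.listPhysicalNetworks().get("physicalnetwork", []):
        types = api.listTrafficTypes(physicalnetworkid=net["id"])
        for tt in types.get("traffictype", []):
            print(net["name"], tt["traffictype"],
                  tt.get("kvmnetworklabel"))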

Can you deploy VMs to the new nodes?

Regards,
Alex
