Posted to users@cloudstack.apache.org by Nixon Varghese K S <ni...@netstratum.com> on 2024/04/27 01:34:20 UTC

Guest VM connecting DC NAS

Hi All,

We are trying to determine the optimal way to give an ACS guest VM access
to NFS storage. This test environment is set up for advanced networking on
ACS (4.19.1).

ACS Portal: 10.10.40.252
NFS server: 10.10.40.250
KVM host: 172.16.0.100 (has two network interface cards, one for private
use (cloudbr0) and the other for public use (cloudbr1))

ACS Management Range: 172.16.0.10–172.16.0.50 (cloudbr0)
ACS Public Range: Public IP RANGE (cloudbr1)

I have trunked the KVM private NIC so that it can reach both the ACS and
NFS subnets. So from the 172.16.0.0 network, I can communicate with the
10.10.40.0 network.
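
For reference, from the host itself the path over cloudbr0 works; a quick
check along these lines confirms it (just a sketch; it assumes showmount
from the nfs-common/nfs-utils package is installed):

    # from the KVM host, over the private/trunked side (cloudbr0)
    ping -c 3 10.10.40.250        # NFS server answers
    showmount -e 10.10.40.250     # its exports are visible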

When I launch a VM (10.1.1.5) on an isolated network, ACS creates a VR
with 3 NICs (eth0: 10.1.1.1 on the guest network, eth1: control, and eth2:
public). I need to mount an NFS export on this guest VM. Checking the VR's
routing table, I can see the default route points to the public NIC.
Through that NIC I cannot reach 10.10.40.250, because the traffic leaves
the KVM host via cloudbr1.
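
Concretely, this is what fails (the export path /export/share is just a
placeholder):

    # from the guest VM (10.1.1.5)
    mount -t nfs 10.10.40.250:/export/share /mnt/nas
    # times out: the VR's default route sends this out eth2 (public),
    # which exits the host via cloudbr1 and never reaches 10.10.40.0/24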

Trunking the cloudbr1 NIC on the KVM host and letting traffic for
10.10.40.250 route through the public network is not advisable. What would
be the best course of action in this situation? I'm eager to hear your
thoughts.

With Regards,
Nixon Varghese

Re: Guest VM connecting DC NAS

Posted by Jithin Raju <ji...@shapeblue.com>.
Hi Nixon,

In an isolated guest network, you could create a guest instance and use it as a NAS.
In a shared network, you will have more flexibility to use the existing NAS from CloudStack-provisioned guest instances.
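
For the isolated-network option, something along these lines should work
(a rough sketch with Debian/Ubuntu package names; the export path and
subnet are examples):

    # on a guest instance inside the isolated network (10.1.1.0/24 here)
    apt-get install -y nfs-kernel-server
    mkdir -p /export/share
    echo "/export/share 10.1.1.0/24(rw,sync,no_subtree_check)" >> /etc/exports
    exportfs -ra
    # other guests on the same network can then mount 10.1.1.x:/export/share

In a shared network, the guests sit directly on a network that can be
routed to the NAS, so a plain NFS mount from the guest should suffice.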

-Jithin
