Posted to users@cloudstack.apache.org by Axel Irriger <ir...@web.de> on 2013/04/13 20:09:38 UTC
Cloudstack 4.0.1 single host installation -> no networking?
Hi everybody,
I'm trying to install CloudStack on Ubuntu 12.04 on a single host (as a test
installation) and I'm a bit stuck on networking.
Here's my setup:
HP n40l
1 NIC, DHCP'ed to 192.168.2.199
Gateway and DNS 192.168.2.1 (my router)
A basic zone with the following IP ranges configured:
Guest IP ranges 192.168.2.60-192.168.2.70
Management IP range 192.168.2.50 - 192.168.2.59
Virtual router config is empty
Security groups setup is:
Ingress TCP 1-1024, UDP 1-1026, ICMP -1 -1. All with CIDR 0/0
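Both of those ranges sit in the same 192.168.2.0/24 as the LAN and its router, so it is worth checking that they don't collide with the router's DHCP pool. A minimal shell sketch for checking whether two inclusive IPv4 ranges overlap; the DHCP pool bounds used below are hypothetical placeholders, not taken from this setup:

```shell
#!/bin/sh
# Convert a dotted-quad IPv4 address to an integer.
ip2int() {
  set -- $(echo "$1" | tr '.' ' ')
  echo $(( ($1 << 24) + ($2 << 16) + ($3 << 8) + $4 ))
}

# overlap START1 END1 START2 END2
# Prints "yes" if the two inclusive ranges share at least one address.
overlap() {
  s1=$(ip2int "$1"); e1=$(ip2int "$2")
  s2=$(ip2int "$3"); e2=$(ip2int "$4")
  if [ "$s1" -le "$e2" ] && [ "$s2" -le "$e1" ]; then echo yes; else echo no; fi
}

# Guest range vs. a hypothetical router DHCP pool .100-.200:
overlap 192.168.2.60 192.168.2.70 192.168.2.100 192.168.2.200   # no
# Guest range vs. a hypothetical pool covering most of the subnet:
overlap 192.168.2.60 192.168.2.70 192.168.2.2 192.168.2.254     # yes
```

If the second case applies on a real network, the guest/management ranges should be carved out of the router's DHCP pool.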
I configured networking like this:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).
# The loopback network interface
auto lo
iface lo inet loopback
# The primary network interface
auto eth0
iface eth0 inet dhcp
# Public network
auto cloudbr0
iface cloudbr0 inet manual
bridge_ports eth0.200
bridge_fd 5
bridge_stp off
bridge_maxwait 1
# Private network
auto cloudbr1
iface cloudbr1 inet manual
bridge_ports eth0.300
bridge_fd 5
bridge_stp off
bridge_maxwait 1
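Note that eth0 itself carries the host address untagged, while both bridges enslave VLAN subinterfaces (eth0.200 and eth0.300), so traffic on the bridges can only flow if the upstream switch actually trunks VLANs 200 and 300. On a flat home LAN, a single-NIC basic zone is often built on one untagged bridge instead; a rough sketch of that variant, as an assumption rather than a tested config:

```
# Sketch: one untagged bridge carrying the host address; the
# guest/private/public devices in the agent config would all point here.
auto eth0
iface eth0 inet manual

auto cloudbr0
iface cloudbr0 inet dhcp
    bridge_ports eth0
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1
```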
My cloud agent configuration looks like this:
#Storage
#Wed Apr 10 18:18:19 CEST 2013
guest.network.device=cloudbr0
workers=5
private.network.device=cloudbr1
port=8250
resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
pod=1
zone=1
guid=b06aff50-b93c-3479-8f5c-16c2e621e197
public.network.device=cloudbr0
cluster=1
local.storage.uuid=98afc039-4cd8-4be1-b1eb-1d8a2d747753
domr.scripts.dir=scripts/network/domr/kvm
LibvirtComputingResource.id=5
host=192.168.2.199
Initially, with only the management server running, my iptables looks
like this:
Chain INPUT (policy ACCEPT 13259 packets, 1942K bytes)
pkts bytes target prot opt in out source
destination
0 0 ACCEPT udp -- virbr0 any anywhere anywhere
udp dpt:domain
0 0 ACCEPT tcp -- virbr0 any anywhere anywhere
tcp dpt:domain
0 0 ACCEPT udp -- virbr0 any anywhere anywhere
udp dpt:bootps
0 0 ACCEPT tcp -- virbr0 any anywhere anywhere
tcp dpt:bootps
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source
destination
0 0 ACCEPT all -- any virbr0 anywhere
192.168.122.0/24 state RELATED,ESTABLISHED
0 0 ACCEPT all -- virbr0 any 192.168.122.0/24 anywhere
0 0 ACCEPT all -- virbr0 virbr0 anywhere anywhere
0 0 REJECT all -- any virbr0 anywhere anywhere
reject-with icmp-port-unreachable
0 0 REJECT all -- virbr0 any anywhere anywhere
reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 13141 packets, 1962K bytes)
pkts bytes target prot opt in out source
destination
My ebtables config:
Bridge table: filter
Bridge chain: INPUT, entries: 0, policy: ACCEPT
Bridge chain: FORWARD, entries: 0, policy: ACCEPT
Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
Then I start the cloud-agent. This leads to the zone getting enabled and two
system VMs being started. At this point, ebtables is still completely empty,
but iptables now looks like this:
Chain INPUT (policy ACCEPT 23083 packets, 72M bytes)
pkts bytes target prot opt in out source
destination
0 0 ACCEPT udp -- virbr0 any anywhere anywhere
udp dpt:domain
0 0 ACCEPT tcp -- virbr0 any anywhere anywhere
tcp dpt:domain
0 0 ACCEPT udp -- virbr0 any anywhere anywhere
udp dpt:bootps
0 0 ACCEPT tcp -- virbr0 any anywhere anywhere
tcp dpt:bootps
Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
pkts bytes target prot opt in out source
destination
0 0 BF-cloudbr0 all -- any cloudbr0 anywhere
anywhere PHYSDEV match --physdev-is-bridged
0 0 BF-cloudbr0 all -- cloudbr0 any anywhere
anywhere PHYSDEV match --physdev-is-bridged
0 0 DROP all -- any cloudbr0 anywhere
anywhere
0 0 DROP all -- cloudbr0 any anywhere
anywhere
0 0 BF-cloudbr1 all -- any cloudbr1 anywhere
anywhere PHYSDEV match --physdev-is-bridged
0 0 BF-cloudbr1 all -- cloudbr1 any anywhere
anywhere PHYSDEV match --physdev-is-bridged
0 0 DROP all -- any cloudbr1 anywhere
anywhere
0 0 DROP all -- cloudbr1 any anywhere
anywhere
0 0 ACCEPT all -- any virbr0 anywhere
192.168.122.0/24 state RELATED,ESTABLISHED
0 0 ACCEPT all -- virbr0 any 192.168.122.0/24 anywhere
0 0 ACCEPT all -- virbr0 virbr0 anywhere anywhere
0 0 REJECT all -- any virbr0 anywhere anywhere
reject-with icmp-port-unreachable
0 0 REJECT all -- virbr0 any anywhere anywhere
reject-with icmp-port-unreachable
Chain OUTPUT (policy ACCEPT 22646 packets, 75M bytes)
pkts bytes target prot opt in out source
destination
Chain BF-cloudbr0 (2 references)
pkts bytes target prot opt in out source
destination
0 0 ACCEPT all -- any any anywhere anywhere
state RELATED,ESTABLISHED
0 0 BF-cloudbr0-IN all -- any any anywhere
anywhere PHYSDEV match --physdev-is-in --physdev-is-bridged
0 0 BF-cloudbr0-OUT all -- any any anywhere
anywhere PHYSDEV match --physdev-is-out --physdev-is-bridged
0 0 ACCEPT all -- any any anywhere anywhere
PHYSDEV match --physdev-out eth0.200 --physdev-is-bridged
Chain BF-cloudbr0-IN (1 references)
pkts bytes target prot opt in out source
destination
0 0 v-2-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet2 --physdev-is-bridged
0 0 s-1-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet5 --physdev-is-bridged
Chain BF-cloudbr0-OUT (1 references)
pkts bytes target prot opt in out source
destination
0 0 v-2-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-out vnet2 --physdev-is-bridged
0 0 s-1-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-out vnet5 --physdev-is-bridged
Chain BF-cloudbr1 (2 references)
pkts bytes target prot opt in out source
destination
0 0 ACCEPT all -- any any anywhere anywhere
state RELATED,ESTABLISHED
0 0 BF-cloudbr1-IN all -- any any anywhere
anywhere PHYSDEV match --physdev-is-in --physdev-is-bridged
0 0 BF-cloudbr1-OUT all -- any any anywhere
anywhere PHYSDEV match --physdev-is-out --physdev-is-bridged
0 0 ACCEPT all -- any any anywhere anywhere
PHYSDEV match --physdev-out eth0.300 --physdev-is-bridged
Chain BF-cloudbr1-IN (1 references)
pkts bytes target prot opt in out source
destination
0 0 v-2-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet1 --physdev-is-bridged
0 0 s-1-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet4 --physdev-is-bridged
0 0 s-1-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet6 --physdev-is-bridged
Chain BF-cloudbr1-OUT (1 references)
pkts bytes target prot opt in out source
destination
0 0 v-2-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-out vnet1 --physdev-is-bridged
0 0 s-1-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-out vnet4 --physdev-is-bridged
0 0 s-1-VM all -- any any anywhere anywhere
PHYSDEV match --physdev-out vnet6 --physdev-is-bridged
Chain s-1-VM (6 references)
pkts bytes target prot opt in out source
destination
0 0 RETURN all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet4 --physdev-is-bridged
0 0 RETURN all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet6 --physdev-is-bridged
0 0 RETURN all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet5 --physdev-is-bridged
0 0 ACCEPT all -- any any anywhere anywhere
Chain v-2-VM (4 references)
pkts bytes target prot opt in out source
destination
0 0 RETURN all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet1 --physdev-is-bridged
0 0 RETURN all -- any any anywhere anywhere
PHYSDEV match --physdev-in vnet2 --physdev-is-bridged
0 0 ACCEPT all -- any any anywhere anywhere
If I check the system VMs in the dashboard, the secondary storage VM is
configured like this:
Public IP Address
192.168.2.60
Private IP Address
192.168.2.50
Link Local IP Address
169.254.0.234
Host
n40l
Gateway
192.168.2.1
The console proxy VM is configured like this:
Public IP Address
192.168.2.61
Private IP Address
192.168.2.56
Link Local IP Address
169.254.1.46
Host
n40l
Gateway
192.168.2.1
I can reach both VMs via their link-local IP addresses, but apart from that
the VMs are completely isolated and can't talk to anything on the network or
the host.
What am I doing wrong?
Best regards and thanks for your help,
Axel
Re: Cloudstack 4.0.1 single host installation -> no networking?
Posted by Ahmad Emneina <ae...@gmail.com>.
On the hypervisor, does it look like the VMs' NICs are being bridged to the
proper interface? I don't know much about KVM, but I believe that's all that
CloudStack is doing.
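One way to answer that question on the host: `brctl show` lists which vnetX interfaces are enslaved to which bridge, and `virsh domiflist v-2-VM` maps a VM's NICs to those vnets. A small sketch that extracts the bridge for a given interface from `brctl show`-style output; the sample output below is hypothetical, not taken from this host:

```shell
#!/bin/sh
# bridge_of IFACE: given `brctl show` output on stdin, print the bridge
# IFACE is attached to. brctl prints the bridge name only on the first
# interface line of each bridge, so we carry it forward.
bridge_of() {
  awk -v want="$1" '
    NR == 1 { next }                  # skip the header line
    NF >= 4 { br = $1; iface = $4 }   # bridge line: name + first iface
    NF == 1 { iface = $1 }            # continuation line: iface only
    iface == want { print br; exit }
  '
}

# Hypothetical sample of `brctl show` output:
sample='bridge name     bridge id           STP enabled   interfaces
cloudbr0        8000.001122334455   no            eth0.200
                                                  vnet2
cloudbr1        8000.001122334455   no            eth0.300
                                                  vnet1'

echo "$sample" | bridge_of vnet2   # prints: cloudbr0
```

On a live host you would pipe the real `brctl show` output in instead of the sample and compare against the devices named in the agent config.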
On Sat, Apr 13, 2013 at 12:33 PM, Axel Irriger <ir...@web.de> wrote:
> Hey,
>
> thanks for answering! My DHCP server does not grant addresses to
> CloudStack, though; CloudStack assigns them itself.
>
> From what I see (or think I understand), traffic does not get forwarded
> to or from the VMs. Also, if I ssh into one of the system VMs using the
> link-local IP address, I can't ping anything in the 192.168.2.0 subnet,
> even though the config inside the system VMs looks correct.
>
> Any other ideas or information which may help?
>
> Best regards,
>
> Axel
Re: Cloudstack 4.0.1 single host installation -> no networking?
Posted by Axel Irriger <ir...@web.de>.
Hey,
thanks for answering! My DHCP server does not grant addresses to CloudStack,
though; CloudStack assigns them itself.
From what I see (or think I understand), traffic does not get forwarded to
or from the VMs. Also, if I ssh into one of the system VMs using the
link-local IP address, I can't ping anything in the 192.168.2.0 subnet, even
though the config inside the system VMs looks correct.
Any other ideas or information which may help?
Best regards,
Axel
-----Original Message-----
From: Ahmad Emneina [mailto:aemneina@gmail.com]
Sent: Saturday, April 13, 2013 20:22
To: Cloudstack users mailing list
Subject: Re: Cloudstack 4.0.1 single host installation -> no networking?
The issue might be that you have a DHCP server in the 192.168.2.0/x subnet.
You might want to try disabling it and statically assigning an IP to your
host, or get your DHCP server to ignore the MAC addresses CloudStack uses to
create the VMs. I believe they start with 06.
Re: Cloudstack 4.0.1 single host installation -> no networking?
Posted by Ahmad Emneina <ae...@gmail.com>.
The issue might be that you have a DHCP server in the 192.168.2.0/x subnet.
You might want to try disabling it and statically assigning an IP to your
host, or get your DHCP server to ignore the MAC addresses CloudStack uses to
create the VMs. I believe they start with 06.
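On Ubuntu 12.04, statically assigning the host address per this suggestion would look roughly like this in /etc/network/interfaces; the addresses are taken from the original post, and this is a sketch rather than a tested config:

```
# Static host address instead of DHCP, per the suggestion above.
auto eth0
iface eth0 inet static
    address 192.168.2.199
    netmask 255.255.255.0
    gateway 192.168.2.1
    dns-nameservers 192.168.2.1
```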
On Sat, Apr 13, 2013 at 11:09 AM, Axel Irriger <ir...@web.de> wrote:
> Hi everybody
>
>
>
> I try to install CloudStack on Ubuntu 12.04 on a single host (as a test
> installation) and I'm a bit stuck on networking.
>
>
>
> Here's my setup:
>
> HP n40l
>
> 1 NIC, DHCP'ed to 192.168.2.199
>
> Gateway and DNS 192.168.2.1 (my router)
>
> A basic zone with the following IP ranges configured:
>
> Guest IP ranges 192.168.2.60-192.168.2.70
>
> Management IP range 192.168.2.50 - 192.168.2.59
>
> Virtual router config is empty
>
> Security groups setup is:
> Ingress TCP 1-1024, UDP 1-1026, ICMP -1 -1. All with CIDR 0/0
>
>
>
> I configured networking like this:
>
> # This file describes the network interfaces available on your system
>
> # and how to activate them. For more information, see interfaces(5).
>
> # The loopback network interface
>
> auto lo
>
> iface lo inet loopback
>
> # The primary network interface
>
> auto eth0
>
> iface eth0 inet dhcp
>
> # Public network
>
> auto cloudbr0
>
> iface cloudbr0 inet manual
>
> bridge_ports eth0.200
>
> bridge_fd 5
>
> bridge_stp off
>
> bridge_maxwait 1
>
> # Private network
>
> auto cloudbr1
>
> iface cloudbr1 inet manual
>
> bridge_ports eth0.300
>
> bridge_fd 5
>
> bridge_stp off
>
> bridge_maxwait 1
>
>
>
> My cloud agent configuration does look like this:
>
> #Storage
>
> #Wed Apr 10 18:18:19 CEST 2013
>
> guest.network.device=cloudbr0
>
> workers=5
>
> private.network.device=cloudbr1
>
> port=8250
>
> resource=com.cloud.hypervisor.kvm.resource.LibvirtComputingResource
>
> pod=1
>
> zone=1
>
> guid=b06aff50-b93c-3479-8f5c-16c2e621e197
>
> public.network.device=cloudbr0
>
> cluster=1
>
> local.storage.uuid=98afc039-4cd8-4be1-b1eb-1d8a2d747753
>
> domr.scripts.dir=scripts/network/domr/kvm
>
> LibvirtComputingResource.id=5
>
> host=192.168.2.199
>
>
>
> Initially, with only the management server running, my iptables looks like
> this:
>
> Chain INPUT (policy ACCEPT 13259 packets, 1942K bytes)
>  pkts bytes target prot opt in     out source    destination
>     0     0 ACCEPT udp  --  virbr0 any anywhere  anywhere  udp dpt:domain
>     0     0 ACCEPT tcp  --  virbr0 any anywhere  anywhere  tcp dpt:domain
>     0     0 ACCEPT udp  --  virbr0 any anywhere  anywhere  udp dpt:bootps
>     0     0 ACCEPT tcp  --  virbr0 any anywhere  anywhere  tcp dpt:bootps
>
>
>
> Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>  pkts bytes target prot opt in     out    source           destination
>     0     0 ACCEPT all  --  any    virbr0 anywhere         192.168.122.0/24  state RELATED,ESTABLISHED
>     0     0 ACCEPT all  --  virbr0 any    192.168.122.0/24 anywhere
>     0     0 ACCEPT all  --  virbr0 virbr0 anywhere         anywhere
>     0     0 REJECT all  --  any    virbr0 anywhere         anywhere  reject-with icmp-port-unreachable
>     0     0 REJECT all  --  virbr0 any    anywhere         anywhere  reject-with icmp-port-unreachable
>
> Chain OUTPUT (policy ACCEPT 13141 packets, 1962K bytes)
>  pkts bytes target prot opt in out source destination
>
>
>
> My ebtables config:
>
> Bridge table: filter
>
> Bridge chain: INPUT, entries: 0, policy: ACCEPT
> Bridge chain: FORWARD, entries: 0, policy: ACCEPT
> Bridge chain: OUTPUT, entries: 0, policy: ACCEPT
>
>
>
> Then I start the cloud-agent. This leads to the zone being enabled and two
> system VMs being started. ebtables is still completely empty, but iptables
> now looks like this:
>
>
>
> Chain INPUT (policy ACCEPT 23083 packets, 72M bytes)
>  pkts bytes target prot opt in     out source    destination
>     0     0 ACCEPT udp  --  virbr0 any anywhere  anywhere  udp dpt:domain
>     0     0 ACCEPT tcp  --  virbr0 any anywhere  anywhere  tcp dpt:domain
>     0     0 ACCEPT udp  --  virbr0 any anywhere  anywhere  udp dpt:bootps
>     0     0 ACCEPT tcp  --  virbr0 any anywhere  anywhere  tcp dpt:bootps
>
> Chain FORWARD (policy ACCEPT 0 packets, 0 bytes)
>  pkts bytes target      prot opt in       out      source           destination
>     0     0 BF-cloudbr0 all  --  any      cloudbr0 anywhere         anywhere  PHYSDEV match --physdev-is-bridged
>     0     0 BF-cloudbr0 all  --  cloudbr0 any      anywhere         anywhere  PHYSDEV match --physdev-is-bridged
>     0     0 DROP        all  --  any      cloudbr0 anywhere         anywhere
>     0     0 DROP        all  --  cloudbr0 any      anywhere         anywhere
>     0     0 BF-cloudbr1 all  --  any      cloudbr1 anywhere         anywhere  PHYSDEV match --physdev-is-bridged
>     0     0 BF-cloudbr1 all  --  cloudbr1 any      anywhere         anywhere  PHYSDEV match --physdev-is-bridged
>     0     0 DROP        all  --  any      cloudbr1 anywhere         anywhere
>     0     0 DROP        all  --  cloudbr1 any      anywhere         anywhere
>     0     0 ACCEPT      all  --  any      virbr0   anywhere         192.168.122.0/24  state RELATED,ESTABLISHED
>     0     0 ACCEPT      all  --  virbr0   any      192.168.122.0/24 anywhere
>     0     0 ACCEPT      all  --  virbr0   virbr0   anywhere         anywhere
>     0     0 REJECT      all  --  any      virbr0   anywhere         anywhere  reject-with icmp-port-unreachable
>     0     0 REJECT      all  --  virbr0   any      anywhere         anywhere  reject-with icmp-port-unreachable
>
> Chain OUTPUT (policy ACCEPT 22646 packets, 75M bytes)
>  pkts bytes target prot opt in out source destination
>
> Chain BF-cloudbr0 (2 references)
>  pkts bytes target          prot opt in  out source   destination
>     0     0 ACCEPT          all  --  any any anywhere anywhere  state RELATED,ESTABLISHED
>     0     0 BF-cloudbr0-IN  all  --  any any anywhere anywhere  PHYSDEV match --physdev-is-in --physdev-is-bridged
>     0     0 BF-cloudbr0-OUT all  --  any any anywhere anywhere  PHYSDEV match --physdev-is-out --physdev-is-bridged
>     0     0 ACCEPT          all  --  any any anywhere anywhere  PHYSDEV match --physdev-out eth0.200 --physdev-is-bridged
>
> Chain BF-cloudbr0-IN (1 references)
>  pkts bytes target prot opt in  out source   destination
>     0     0 v-2-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet2 --physdev-is-bridged
>     0     0 s-1-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet5 --physdev-is-bridged
>
> Chain BF-cloudbr0-OUT (1 references)
>  pkts bytes target prot opt in  out source   destination
>     0     0 v-2-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-out vnet2 --physdev-is-bridged
>     0     0 s-1-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-out vnet5 --physdev-is-bridged
>
> Chain BF-cloudbr1 (2 references)
>  pkts bytes target          prot opt in  out source   destination
>     0     0 ACCEPT          all  --  any any anywhere anywhere  state RELATED,ESTABLISHED
>     0     0 BF-cloudbr1-IN  all  --  any any anywhere anywhere  PHYSDEV match --physdev-is-in --physdev-is-bridged
>     0     0 BF-cloudbr1-OUT all  --  any any anywhere anywhere  PHYSDEV match --physdev-is-out --physdev-is-bridged
>     0     0 ACCEPT          all  --  any any anywhere anywhere  PHYSDEV match --physdev-out eth0.300 --physdev-is-bridged
>
> Chain BF-cloudbr1-IN (1 references)
>  pkts bytes target prot opt in  out source   destination
>     0     0 v-2-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet1 --physdev-is-bridged
>     0     0 s-1-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet4 --physdev-is-bridged
>     0     0 s-1-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet6 --physdev-is-bridged
>
> Chain BF-cloudbr1-OUT (1 references)
>  pkts bytes target prot opt in  out source   destination
>     0     0 v-2-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-out vnet1 --physdev-is-bridged
>     0     0 s-1-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-out vnet4 --physdev-is-bridged
>     0     0 s-1-VM all  --  any any anywhere anywhere  PHYSDEV match --physdev-out vnet6 --physdev-is-bridged
>
> Chain s-1-VM (6 references)
>  pkts bytes target prot opt in  out source   destination
>     0     0 RETURN all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet4 --physdev-is-bridged
>     0     0 RETURN all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet6 --physdev-is-bridged
>     0     0 RETURN all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet5 --physdev-is-bridged
>     0     0 ACCEPT all  --  any any anywhere anywhere
>
> Chain v-2-VM (4 references)
>  pkts bytes target prot opt in  out source   destination
>     0     0 RETURN all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet1 --physdev-is-bridged
>     0     0 RETURN all  --  any any anywhere anywhere  PHYSDEV match --physdev-in vnet2 --physdev-is-bridged
>     0     0 ACCEPT all  --  any any anywhere anywhere
>
>
>
> If I check the system VMs in the dashboard, the secondary storage VM is
> configured like this:
>
> Public IP Address:     192.168.2.60
> Private IP Address:    192.168.2.50
> Link Local IP Address: 169.254.0.234
> Host:                  n40l
> Gateway:               192.168.2.1
>
>
> The console proxy VM is configured like this:
>
> Public IP Address:     192.168.2.61
> Private IP Address:    192.168.2.56
> Link Local IP Address: 169.254.1.46
> Host:                  n40l
> Gateway:               192.168.2.1
>
>
>
> I can reach both VMs via their link-local IP addresses, but beyond that the
> VMs are completely isolated and can't talk to anything on the network or the
> host.
>
>
>
> What am I doing wrong?
>
>
>
> Best regards and thanks for your help,
>
>
>
> Axel
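For comparison, the single-NIC KVM setups in the installation guides usually
use an untagged bridge with the host's IP configured statically on the bridge
itself, rather than VLAN sub-interfaces (eth0.200/eth0.300) hanging off a
DHCP'd NIC. A sketch of /etc/network/interfaces under that assumption, reusing
the addresses from the setup above (untested, adapt to your LAN):

```
# /etc/network/interfaces -- sketch: untagged single-NIC, single-bridge setup
# Assumption: no VLAN trunking on the switch; management and guest traffic
# can share one bridge in a basic zone.
auto lo
iface lo inet loopback

# The physical NIC carries no IP of its own; it is enslaved to the bridge.
auto eth0
iface eth0 inet manual

# The bridge holds the host's (now static) IP.
auto cloudbr0
iface cloudbr0 inet static
    address 192.168.2.199
    netmask 255.255.255.0
    gateway 192.168.2.1
    dns-nameservers 192.168.2.1
    bridge_ports eth0
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1
```

With eth0.200/eth0.300 as bridge ports, traffic leaves the host 802.1Q-tagged,
which only works if the switch/router port is configured for those VLANs.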