Posted to users@cloudstack.apache.org by John Adams <ad...@gmail.com> on 2017/02/15 06:37:23 UTC

Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

Hi all,

Still learning the ropes in a test environment. Hitting a little snag with
networking. The physical network has 2 VLANs (192.168.10.0/24 and
192.168.30.0/24).

This is my current ACS testing environment:

1 management server (Ubuntu 14.04): 192.168.30.14
2 KVM hosts (Ubuntu 14.04): 192.168.10.12 and 192.168.30.12

With that, I created 2 different zones, each with 1 pod, 1 cluster and 1
host.

*The good:*
I can create VMs on either of the hosts. I'm able to ping the VMs and even
ssh into them, but only from the hosts, the management server, or the ACS
console itself (within the network).

*The Issue:*
I can't ssh to or even ping the VMs from other machines on the same
network, outside the host environment. What could be the problem?

A. Management Server network config is as below:
-------------------------
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet static
    address 192.168.30.14
    netmask 255.255.255.0
    gateway 192.168.30.254
    dns-nameservers 192.168.30.254 4.2.2.2
    #dns-domain cloudstack.et.test.local
---------------------------------------------

B. The KVM host network configuration is as below:

Host 1: .10
-----------------------------------------

# interfaces(5) file used by ifup(8) and ifdown(8)

auto lo
iface lo inet loopback

# The primary network interface
auto em1
iface em1 inet manual

# Public network
auto cloudbr0
iface cloudbr0 inet static
    address 192.168.10.12
    network 192.168.10.0
    netmask 255.255.255.0
    gateway 192.168.10.254
    broadcast 192.168.10.255
    dns-nameservers 192.168.10.254 4.2.2.2
    #dns-domain cloudstack.et.test.local
    bridge_ports em1
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Private network (not in use for now. Just using 1 bridge)
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports none
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1
-----------------------------------


Host 2: .30
-----------------------------------

# interfaces(5) file used by ifup(8) and ifdown(8)

auto lo
iface lo inet loopback

# The primary network interface
auto em1
iface em1 inet manual

# Public network
auto cloudbr0
iface cloudbr0 inet static
    address 192.168.30.12
    network 192.168.30.0
    netmask 255.255.255.0
    gateway 192.168.30.254
    broadcast 192.168.30.255
    dns-nameservers 192.168.30.254 4.2.2.2
    #dns-domain cloudstack.et.test.local
    bridge_ports em1
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Private network (not in use for now. Just using 1 bridge)
auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports none
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

-----------------------------------


--John O. Adams

Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

Posted by Nux! <nu...@li.nux.ro>.
BTW, you should stop using Level3's public DNS, such as 4.2.2.2.
A while ago they started "randomly" redirecting requests for certain domains to advertising pages; I noticed something like this a year or two ago.

Run your own, it's simple.
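
For example, a minimal dnsmasq setup does the job (a sketch only; the
interface, local domain and upstream forwarders below are placeholders to
adapt to your LAN):

# /etc/dnsmasq.conf
interface=eth0                      # LAN-facing interface to listen on
domain=cloudstack.et.test.local     # local domain from your interfaces files
local=/cloudstack.et.test.local/    # answer queries for it locally
server=8.8.8.8                      # upstream forwarders of your choice
server=8.8.4.4
cache-size=1000

Then point the dns-nameservers lines on your hosts at that box instead of
4.2.2.2.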

--
Sent from the Delta quadrant using Borg technology!

Nux!
www.nux.ro


Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

Posted by Sanjeev N <sa...@apache.org>.
Make sure you have security groups configured to allow ssh (and ICMP, if
you want ping) to the VMs.
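
For example, with CloudMonkey (a sketch; the security group name and CIDR
are placeholders to adapt to your LAN):

# allow ssh into VMs in the 'default' security group
authorize securitygroupingress securitygroupname=default protocol=TCP \
    startport=22 endport=22 cidrlist=192.168.0.0/16

# allow ping (ICMP echo request)
authorize securitygroupingress securitygroupname=default protocol=ICMP \
    icmptype=8 icmpcode=0 cidrlist=192.168.0.0/16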



-- 
Best Regards,
Sanjeev N
Chief Product Engineer@Accelerite

Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

Posted by Dag Sonstebo <Da...@shapeblue.com>.
Hi John,

Thanks for clarifying. Got a few more questions regarding your design:

First of all – were you planning on using two zones, or one zone with two hypervisors?

Secondly – you’ve mentioned two subnets (rather than two VLANs) – 192.168.30.0/24 and 192.168.10.0/24 – how were you planning on using these? Which of these is your management network (where your hypervisors, management and storage live) and which one is your guest network? Do you have L3 routing between these? How did you map the traffic types (management, guest) to your cloud bridges on your KVM hosts?

With regards to your questions:

> *The good:*
> I can create VMs on either of the hosts. I'm able to ping the VMs and even
> ssh into them, but only from the hosts, the management server, or the ACS
> console itself (within the network).

This doesn’t quite make sense – did you configure security groups to allow ICMP and ssh? If not, your networking is not right – you should not be able to do this unless you allow the traffic through security groups.

> *The Issue:*
> I can't ssh to or even ping the VMs from other machines on the same
> network, outside the host environment. What could be the problem?

As above – this all depends on your security groups.
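
A quick way to sanity-check this on the KVM host carrying the VM (standard
Linux tools; the exact iptables chain names depend on your instance IDs):

brctl show cloudbr0           # the VM's vnetX interface should be enslaved here
iptables -S | grep -- '-VM'   # per-instance chains programmed by the security group agent
ebtables -t nat -L            # the L2/ARP rules for security groups also land here
tcpdump -ni cloudbr0 icmp     # ping from the LAN and see if the requests arrive at all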

> A. Management Server network config is as below:
> B. The KVM host network configuration is as below:

OK, I'm not sure what you are trying to achieve. With the resources you have listed I would probably do something like the following (see the host-side sketch after this list):

- Configure one zone with one pod, one cluster and two hypervisors.
- Configure 192.168.10.0/24 as your management network: put all three hosts on this and map the traffic to cloudbr0.
- Configure 192.168.30.0/24 as your guest network, and map its traffic to cloudbr1.
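
As a rough host-side sketch (a sketch only; it assumes a single NIC em1
with the guest subnet delivered as tagged VLAN 30, so match the VLAN ID and
addresses to your own switch config):

-----------------------------------
auto em1
iface em1 inet manual

# Management network (192.168.10.0/24) -> cloudbr0
auto cloudbr0
iface cloudbr0 inet static
    address 192.168.10.12
    netmask 255.255.255.0
    gateway 192.168.10.254
    bridge_ports em1
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1

# Guest network (192.168.30.0/24) -> cloudbr1 (the host needs no IP here)
auto em1.30
iface em1.30 inet manual

auto cloudbr1
iface cloudbr1 inet manual
    bridge_ports em1.30
    bridge_fd 5
    bridge_stp off
    bridge_maxwait 1
-----------------------------------

In the zone wizard, set the KVM traffic labels to match (management traffic
on cloudbr0, guest traffic on cloudbr1).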

Hope this makes sense.

Regards,
Dag Sonstebo
Cloud Architect
ShapeBlue

Dag.Sonstebo@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue


Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

Posted by John Adams <ad...@gmail.com>.
Hi Boris,

Thanks for your response. Yes, I'm building a basic zone, just for starters.


--John O. Adams


Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

Posted by Boris Stoyanov <bo...@shapeblue.com>.
Hi John,

Maybe I misunderstood – are you building an advanced or a basic zone?

Thanks,
Boris Stoyanov

boris.stoyanov@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue


Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

Posted by John Adams <ad...@gmail.com>.
Hi Boris,

I think I'm actually using the Shared network offering. The VMs being
created are in the same physical network subnet. Isolation is an option but
I'm not using that at this point.

Thanks.


--John O. Adams


Re: Basic Networking (ACS 4.9) --Allow VMs access from Local Area Network

Posted by Boris Stoyanov <bo...@shapeblue.com>.
Hi John,

In isolated networks, VMs should be accessed only through the virtual router IP.

To access a VM over ssh, go to the network settings and enable a port on the Virtual Router IP, then create a port forwarding rule from that enabled port to port 22 on the specific VM within that network. After that, ssh to the enabled port on the VR and you should end up in the VM.
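
For example (placeholder values; substitute your network's public IP and
the real UUIDs), via CloudMonkey and then ssh:

create portforwardingrule ipaddressid=<public-ip-uuid> protocol=TCP \
    publicport=2222 privateport=22 virtualmachineid=<vm-uuid>

ssh -p 2222 user@203.0.113.10    # 203.0.113.10 stands in for the VR's public IP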

PS: in isolated networks you shouldn’t be able to ping the VM; all the traffic goes through the VR.

Thanks,
Boris Stoyanov

boris.stoyanov@shapeblue.com
www.shapeblue.com
53 Chandos Place, Covent Garden, London WC2N 4HS, UK
@shapeblue
