Posted to users@cloudstack.apache.org by Marek Wieckowski <ma...@o2.pl> on 2014/09/10 01:41:47 UTC

Management server and openvswitch setup

Hi guys

Below is my current setup and configuration:
- I am renting 2 servers, both of which have 2 network cards
	- eth0 is the external interface, with a statically assigned external IP
	- eth1 is the internal interface, with a statically assigned internal IP
	- both servers have CentOS 6.5 installed

I want the 1st server to be the management server as well as a KVM
hypervisor. This server already has cloudstack management and the gluster
packages installed. Gluster is set up as well: I have 2 volumes, one for
primary and one for secondary storage, each containing bricks from server1
and server2.
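In case it helps, the volumes were created along these lines (the hostnames,
brick paths and replica count below are only placeholders, not my exact
commands):

    # minimal sketch - assumes the peer is already probed and brick dirs exist
    gluster peer probe server2
    gluster volume create primary replica 2 \
        server1:/export/bricks/primary server2:/export/bricks/primary
    gluster volume create secondary replica 2 \
        server1:/export/bricks/secondary server2:/export/bricks/secondary
    gluster volume start primary
    gluster volume start secondary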

Server2 has the gluster agent and the cloudstack agent installed. This one
will be purely a KVM hypervisor.
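From what I have read, the KVM agent also needs to be told to use
openvswitch instead of Linux bridges; something like this in agent.properties
(a sketch based on the 4.x docs, I haven't verified it yet):

    # /etc/cloudstack/agent/agent.properties - switch the agent to openvswitch
    network.bridge.type=openvswitch
    libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver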
My provider gives me the possibility to buy additional external IP
addresses, and of course I will buy a few.
Since the eth1 cards can communicate only on the local network, I am
thinking of a setup like this:

eth0 - will serve management and public traffic
eth1 - will serve guest and storage traffic
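On the hypervisor side I am picturing one OVS bridge per NIC, roughly like
this (the bridge names are just my working assumption):

    # one openvswitch bridge per physical NIC
    ovs-vsctl add-br cloudbr0          # management + public
    ovs-vsctl add-port cloudbr0 eth0
    ovs-vsctl add-br cloudbr1          # guest + storage
    ovs-vsctl add-port cloudbr1 eth1

The zone's physical network traffic labels would then point at cloudbr0 and
cloudbr1 so CloudStack knows which bridge carries which traffic type.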

While I don't hesitate to start installing openvswitch on server2, I have
some doubts about setting up openvswitch on server1, which will be a KVM
host as well as the management server.
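My main worry is that server1's own management IP has to move from eth0 onto
the bridge without me losing access. If I understand the openvswitch
initscripts correctly, the persistent CentOS config would look something
like this (addresses are placeholders):

    # /etc/sysconfig/network-scripts/ifcfg-eth0 - eth0 becomes a port of cloudbr0
    DEVICE=eth0
    DEVICETYPE=ovs
    TYPE=OVSPort
    OVS_BRIDGE=cloudbr0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-cloudbr0 - the static external IP moves here
    DEVICE=cloudbr0
    DEVICETYPE=ovs
    TYPE=OVSBridge
    ONBOOT=yes
    BOOTPROTO=static
    IPADDR=203.0.113.10
    NETMASK=255.255.255.0
    GATEWAY=203.0.113.1

What I am unsure about is whether restarting the network like this, on the
box that is also running cloudstack-management, can be done safely over a
remote connection.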

Could you recommend an optimal setup for this particular scenario? I don't
mind removing the management role from server1, as later on I want to make
the management server a virtual machine anyway.
I am also considering renting a 3rd, lower-spec server just for management;
however, that one would have only one network card, with an external IP
address.

Regards
Marek