Posted to users-cn@cloudstack.apache.org by "linuxbqj@gmail.com" <li...@gmail.com> on 2014/07/14 09:27:47 UTC

Setting up a test environment with Apache CloudStack and Gluster

This is an example of how to configure an environment where you can
test CloudStack and Gluster. It uses two machines on the same LAN: one
acts as a KVM hypervisor and the other as storage and management
server. Because the (virtual) networking in the hypervisor is a little
more complex than the networking on the management server, the
hypervisor will be set up with an OpenVPN connection so that the local
LAN is not affected by 'foreign' network traffic.

I am not a CloudStack specialist, so this configuration may not be
optimal for real-world usage. The intention is to be able to test
CloudStack and its Gluster integration in existing networks. The
CloudStack installation and configuration described here are suitable
for testing and development systems; for production environments it is
highly recommended to follow the CloudStack documentation instead.


 .----------------.                       .-------------------.
 |                |                       |                   |
 | KVM Hypervisor | <------- LAN -------> | Management Server |
 |                |    ^-- OpenVPN --^    |                   |
 '----------------'                       '-------------------'
agent.cloudstack.tld                      storage.cloudstack.tld

Both systems have one network interface with a static IP-address. No
other IP-addresses can be used in the LAN. This makes it difficult
to access the virtual machines, but that does not matter too much for
this testing.

Both systems need a basic installation:

Red Hat Enterprise Linux 6.5 (CentOS 6.5 should work too)
Fedora EPEL enabled (see how to install epel-release)
SSH access enabled
SELinux in permissive mode (or disabled)
firewall enabled, but not restricting anything
Java 1.7 from the standard java-1.7.0-openjdk packages (not Java 1.6)

On the hypervisor, an additional (internal only) bridge needs to be
set up. This bridge will be used for providing IP-addresses to the
virtual machines. Each virtual machine seems to need at least 3
IP-addresses; this is a default in CloudStack. This example uses the
virtual networks 192.168.N.0/24, where N is 0 to 4.

Configuration for the main cloudbr0 device:


#file: /etc/sysconfig/network-scripts/ifcfg-cloudbr0
DEVICE=cloudbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.0.1
NETMASK=255.255.255.0
NM_CONTROLLED=no

And the additional IP-addresses on the cloudbr0 bridge (create 4
files, replace N by 1, 2, 3 and 4):


#file: /etc/sysconfig/network-scripts/ifcfg-cloudbr0:N
DEVICE=cloudbr0:N
ONBOOT=yes
BOOTPROTO=static
IPADDR=192.168.N.1
NETMASK=255.255.255.0
NM_CONTROLLED=no

Enable the new cloudbr0 bridge with all its IP-addresses:


# ifup cloudbr0
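
As a quick sanity check (not part of the original steps), the bridge
and all of its alias addresses can be listed; every 192.168.N.1
address should show up:


# ip -4 addr show cloudbr0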

Any of the VMs that have a 192.168.*.* address should be able to
reach the real LAN, and ultimately also the internet. Enabling NAT for
the internal virtual networks is the easiest way to achieve this:


# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.0.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.1.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.2.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.3.0/24 -j MASQUERADE
# iptables -t nat -A POSTROUTING -o eth0 -s 192.168.4.0/24 -j MASQUERADE
# service iptables save
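
Masquerading only works when the hypervisor actually forwards packets
between cloudbr0 and the LAN. The steps above do not show this
explicitly; on a stock RHEL 6 installation IP forwarding is disabled,
so it may need to be enabled by hand (a sketch, adjust to your
sysctl.conf):


# sysctl -w net.ipv4.ip_forward=1
# sed -i 's/^net.ipv4.ip_forward = 0$/net.ipv4.ip_forward = 1/' /etc/sysctl.conf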

The hypervisor will need to be set up to act as a gateway to the
virtual machines on the cloudbr0 bridge. In order to do so, a very
basic OpenVPN service does the trick:


# yum install openvpn
# openvpn --genkey --secret /etc/openvpn/static.key
# cat << EOF > /etc/openvpn/server.conf
dev tun
ifconfig 192.168.200.1 192.168.200.2
secret static.key
EOF
# chkconfig openvpn on
# service openvpn start

On the management server, OpenVPN needs to be configured as a client
so that routing to the virtual networks is possible:


# yum install openvpn
# cat << EOF > /etc/openvpn/client.conf
remote real-hostname-of-hypervisor.example.net
dev tun
ifconfig 192.168.200.2 192.168.200.1
secret static.key
EOF
# scp real-hostname-of-hypervisor.example.net:/etc/openvpn/static.key \
  /etc/openvpn
# chkconfig openvpn on
# service openvpn start
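
This check is not part of the original instructions, but a simple way
to confirm that the tunnel works is to ping the tunnel address of the
hypervisor from the management server:


# ping -c 3 192.168.200.1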

In /etc/hosts (on both the hypervisor and management server) the
internal hostnames for the environment should be added:


#file: /etc/hosts
192.168.200.1 agent.cloudstack.tld
192.168.200.2 storage.cloudstack.tld

The hypervisor will also function as a DNS-server for the virtual
machines. The easiest option is to use dnsmasq, which uses /etc/hosts
and /etc/resolv.conf for resolving:


# yum install dnsmasq
# chkconfig dnsmasq on
# service dnsmasq start
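
As an optional check (assuming bind-utils is installed, which is not
listed in the requirements above), the names from /etc/hosts should
now be resolvable through dnsmasq; the query below should return
192.168.200.2:


# dig @127.0.0.1 storage.cloudstack.tld +short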

The management server is also used as a Gluster Storage Server.
Therefore it needs to have some Gluster packages installed:


# wget -O /etc/yum.repos.d/glusterfs-epel.repo \
http://download.gluster.org/pub/gluster/glusterfs/3.4/LATEST/RHEL/glusterfs-epel.repo
# yum install glusterfs-server
# vim /etc/glusterfs/glusterd.vol

# service glusterd restart
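
The glusterd.vol change is not spelled out above. Because QEMU's
libgfapi client connects from an unprivileged port, the edit at this
point is most likely to allow insecure ports in the management volume,
along these lines (an assumption, check the Gluster documentation for
your version):


#file: /etc/glusterfs/glusterd.vol (inside the "volume management" block)
    option rpc-auth-allow-insecure on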

Create two volumes where CloudStack will store disk images. Before
starting the volumes, apply the required settings as well. Note that
the hostname that holds the bricks should be resolvable by the
hypervisor and the Secondary Storage VMs. This example does not show
how to create volumes for production usage; do not create volumes like
this for anything other than testing and scratch data.


# mkdir -p /bricks/primary/data
# mkdir -p /bricks/secondary/data
# gluster volume create primary storage.cloudstack.tld:/bricks/primary/data
# gluster volume set primary storage.owner-uid 36
# gluster volume set primary storage.owner-gid 36
# gluster volume set primary server.allow-insecure on
# gluster volume set primary nfs.disable true
# gluster volume start primary
# gluster volume create secondary storage.cloudstack.tld:/bricks/secondary/data
# gluster volume set secondary storage.owner-uid 36
# gluster volume set secondary storage.owner-gid 36
# gluster volume start secondary
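
To confirm that both volumes are started and carry the expected
options, the volume information can be inspected (an extra check,
output not shown here):


# gluster volume info primary
# gluster volume info secondary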

When the preparation is all done, it is time to install Apache
CloudStack. Support for Gluster is planned for CloudStack 4.4. At the
moment not all required changes are included in the CloudStack git
repository, therefore the RPM packages need to be built from the
Gluster Forge repository where the development is happening. On a
system running RHEL-6.5, check out the sources and build the packages
(this needs a standard CloudStack development environment, including
java-1.7.0-openjdk-devel, Apache Maven and others):


$ git clone git://forge.gluster.org/cloudstack-gluster/cloudstack.git
$ cd cloudstack
$ git checkout -t origin/wip/master/gluster
$ cd packaging/centos63
$ ./package.sh

In the end, these packages should have been built:

cloudstack-management-4.4.0-SNAPSHOT.el6.x86_64.rpm
cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm
cloudstack-agent-4.4.0-SNAPSHOT.el6.x86_64.rpm
cloudstack-usage-4.4.0-SNAPSHOT.el6.x86_64.rpm
cloudstack-cli-4.4.0-SNAPSHOT.el6.x86_64.rpm
cloudstack-awsapi-4.4.0-SNAPSHOT.el6.x86_64.rpm

On the management server, install the following packages:


# yum localinstall cloudstack-management-4.4.0-SNAPSHOT.el6.x86_64.rpm \
cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm \
cloudstack-awsapi-4.4.0-SNAPSHOT.el6.x86_64.rpm

Install and configure the database:


# yum install mysql-server
# chkconfig mysqld on
# service mysqld start
# vim /etc/cloudstack/management/classpath.conf

# cloudstack-setup-databases cloud:secret --deploy-as=root:

Install the systemvm templates:


# mount -t nfs storage.cloudstack.tld:/secondary /mnt
# /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt \
  -m /mnt \
  -h kvm \
  -u http://jenkins.buildacloud.org/view/master/job/build-systemvm-master/lastSuccessfulBuild/artifact/tools/appliance/dist/systemvmtemplate-master-kvm.qcow2.bz2
# umount /mnt

The management server is now prepared, and the web UI can be configured:


# cloudstack-setup-management

On the hypervisor, install the following additional packages:


# yum install qemu-kvm libvirt glusterfs-fuse
# yum localinstall cloudstack-common-4.4.0-SNAPSHOT.el6.x86_64.rpm \
cloudstack-agent-4.4.0-SNAPSHOT.el6.x86_64.rpm
# cloudstack-setup-agent

Make sure that in /etc/cloudstack/agent/agent.properties the right
NICs are being used:


guest.network.device=cloudbr0
private.bridge.name=cloudbr0
private.network.device=cloudbr0
network.direct.device=cloudbr0
public.network.device=cloudbr0

Go to the CloudStack web interface, which should be running on the
management server:
http://real-hostname-of-mgmt.example.net:8080/client The default
username/password is: admin / password

It is easiest to skip the configuration wizard (it is not clear
whether that supports Gluster already). When the normal interface is
shown, a new 'Zone' can be added under 'Infrastructure'. The Zone
wizard will need the following input:

DNS 1: 192.168.0.1
Internal DNS 1: 192.168.0.1
Hypervisor: KVM

Under POD, use these options:

Reserved system gateway: 192.168.0.1
Reserved system netmask: 255.255.255.0
Start reserved system IP: 192.168.0.10
End reserved system IP: 192.168.0.250

Next the network config for the virtual machines:

Guest gateway: 192.168.1.1
Guest system netmask: 255.255.255.0
Guest start IP: 192.168.1.10
Guest end IP: 192.168.1.250

Primary storage:

Type: Gluster
Server: storage.cloudstack.tld
Volume: primary

Secondary Storage:

Type: nfs
Server: storage.cloudstack.tld
path: /secondary

Hypervisor agent:

hostname: agent.cloudstack.tld
username: root
password: password

If this all succeeded, the newly created Zone can be enabled. After a
while, there should be two system VMs listed under Infrastructure. It
is possible to log in on these system VMs and check if everything is
working. To do so, log in over SSH on the hypervisor and connect to
the VMs through libvirt:


# virsh list
 Id    Name                           State
----------------------------------------------------
 1     s-1-VM                         running
 2     v-2-VM                         running

# virsh console 1
Connected to domain s-1-VM
Escape character is ^]

Debian GNU/Linux 7 s-1-VM ttyS0

s-1-VM login: root
Password: password
...
root@s-1-VM:~#

Log out from the shell, and press CTRL+] to disconnect from the console.

To verify that this VM indeed runs with the QEMU+libgfapi integration,
check the log file that libvirt writes and confirm that there is a
-drive with a gluster+tcp:// URL in
/var/log/libvirt/qemu/s-1-VM.log:


... /usr/libexec/qemu-kvm -name s-1-VM ... -drive
file=gluster+tcp://storage.cloudstack.tld:24007/primary/d691ac19-4ec1-47c1-b765-55f804b78bec,
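
The QEMU command line in that log is rather long; a simple way to pick
out just the Gluster URL is, for example:


# grep -o 'gluster+tcp://[^,]*' /var/log/libvirt/qemu/s-1-VM.log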




-- 
白清杰 (Born Bai)

Mail: linuxbqj@gmail.com

The fear of the LORD is the beginning of wisdom