Posted to commits@cloudstack.apache.org by se...@apache.org on 2014/02/20 18:15:41 UTC

[2/3] Create RTD docs

http://git-wip-us.apache.org/repos/asf/cloudstack-docs/blob/5fddad01/rtd/source/ansible.rst
----------------------------------------------------------------------
diff --git a/rtd/source/ansible.rst b/rtd/source/ansible.rst
new file mode 100644
index 0000000..7f0f9bd
--- /dev/null
+++ b/rtd/source/ansible.rst
@@ -0,0 +1,412 @@
+Deploying CloudStack with Ansible
+=================================
+
+In this article, `Paul Angus <https://twitter.com/CloudyAngus>`__, Cloud
+Architect at ShapeBlue, takes a look at using Ansible to deploy an
+Apache CloudStack cloud.
+
+What is Ansible?
+----------------
+
+Ansible is a deployment and configuration management tool similar in
+intent to Chef and Puppet. It allows (usually) DevOps teams to
+orchestrate the deployment and configuration of their environments
+without having to re-write custom scripts to make changes.
+
+Like Chef and Puppet, Ansible is designed to be idempotent: you declare
+the state you want a host to be in, and Ansible decides whether it needs
+to act in order to achieve that state.
+
+There’s already Chef and Puppet, so what’s the fuss about Ansible?
+------------------------------------------------------------------
+
+Let’s take it as a given that configuration management makes life much
+easier (and is quite cool). Ansible only needs an SSH connection to the
+hosts that you’re going to manage to get started. Ansible does require
+Python 2.4 or greater on a managed host in order to leverage the vast
+majority of its functionality, but it is able to connect to hosts which
+don’t have Python installed in order to then install Python, so that’s
+not really a problem. This greatly simplifies the deployment procedure
+for hosts, avoiding the need to pre-install agents onto the clients
+before the configuration management can take over.
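+
+For example, the ``raw`` module doesn’t need Python on the target, so it
+can be used to bootstrap it. A minimal sketch, assuming the host is
+already listed in your inventory and reachable as root (the host name
+and package name here are illustrative):
+
+::
+
+    # ansible newhost -m raw -a "yum -y install python" -u root -k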
+
+Ansible allows you to connect to a managed host as any user (with that
+user’s privileges), using either passwords or public/private keys –
+allowing fully automated management.
+
+There also doesn’t need to be a central server to run everything, as
+long as your playbooks and inventories are in-sync you can create as
+many Ansible servers as you need (generally a bit of Git pushing and
+pulling will do the trick).
+
+Finally – its structure and language are pretty simple and clean. I’ve
+found it a bit tricky to get the syntax correct for variables in some
+circumstances, but otherwise I’ve found it one of the easier tools to
+get my head around.
+
+So let’s see something
+----------------------
+
+For this example we’re going to create an Ansible server which will then
+deploy a CloudStack server. Both of these servers will be CentOS 6.4
+virtual machines.
+
+Installing Ansible
+------------------
+
+Installing Ansible is blessedly easy. We generally prefer to use CentOS,
+so to install Ansible you run the following commands on the Ansible
+server:
+
+::
+ 
+    # rpm -ivh http://www.mirrorservice.org/sites/dl.fedoraproject.org/pub/epel/6/i386/epel-release-6-8.noarch.rpm
+    # yum install -y ansible
+
+And that’s it.
+
+*(There is a commercial version which has more features, such as
+callbacks to request configurations and a RESTful API, and also comes
+with support. Its installation is different.)*
+
+By default Ansible uses /etc/ansible to store your playbooks; I tend to
+move it, but there’s no real problem with using the default location.
+Create yourself a little directory structure to get started with. The
+documentation recommends a roles-based layout, something like this:
+
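+::
+
+    production        # inventory file for production hosts
+    group_vars/
+        acs-manager   # variables applied to a group of hosts
+    host_vars/
+    roles/
+        common/       # a role
+            tasks/
+                main.yml
+            handlers/
+            templates/
+            files/
+            vars/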
+
+Playbooks
+---------
+
+Ansible uses playbooks to specify the state you wish the target host to
+be in so that it can fulfil its role. Ansible playbooks are written in
+YAML format.
+
+Modules
+-------
+
+To get Ansible to do things you specify the hosts a playbook will act
+upon and then call modules and supply arguments which determine what
+Ansible will do to those hosts.
+
+To keep things simple, this example is a cut-down version of a full
+deployment. This example creates a single management server with a local
+MySQL server and assumes you have your secondary storage already
+provisioned somewhere. For this example I’m also not going to include
+securing the MySQL server, configuring NTP or using Ansible to configure
+the networking on the hosts, although normally we’d use Ansible to do
+exactly that.
+
+The prerequisites for this CloudStack build are:
+
+-  A CentOS 6.4 host to install CloudStack on
+-  An IP address already assigned on the ACS management host
+-  The ACS management host should have a resolvable FQDN (either through
+   DNS or the host file on the ACS management host)
+-  Internet connectivity on the ACS management host
+
+Planning
+--------
+
+The first step I use is to list all of the tasks I think I’ll need and
+group them or split them into logical blocks. So for this deployment of
+CloudStack I’d start with:
+
+-  Configure selinux
+-  (libselinux-python required for Ansible to work with selinux enabled
+   hosts)
+-  Install and configure MySQL
+-  (Python MySQL-DB required for Ansible MySQL module)
+-  Install cloud-client
+-  Seed secondary storage
+
+Ansible is built around the idea of hosts having roles, so generally you
+would group or manage your hosts by their roles. So now to create some
+roles for these tasks.
+
+I’ve created:
+
+-  cloudstack-manager
+-  mysql
+
+First up we need to tell Ansible where to find our CloudStack management
+host. In the root Ansible directory there is a file called ‘hosts’
+(/etc/ansible/hosts); add a section like this:
+
+::
+
+    [acs-manager]
+    xxx.xxx.xxx.xxx
+
+where xxx.xxx.xxx.xxx is the IP address of your ACS management host.
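+
+To check that Ansible can reach the host, you can run a quick ad-hoc
+ping against the new group (a sketch, assuming you connect as root with
+a password):
+
+::
+
+    # ansible acs-manager -m ping -u root -k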
+
+MySQL
+-----
+
+So let’s start with the MySQL server. We’ll need to create a task
+within the mysql role directory called main.yml. The ‘task’ in this case
+is to have MySQL running and configured on the target host. The contents
+of the file will look like this:
+
+::
+
+    - name: Ensure mysql server is installed
+      yum: name=mysql-server state=present
+
+    - name: Ensure mysql python is installed
+      yum: name=MySQL-python state=present
+
+    - name: Ensure selinux python bindings are installed
+      yum: name=libselinux-python state=present
+
+    - name: Ensure cloudstack specific my.cnf lines are present
+      lineinfile: dest=/etc/my.cnf regexp='$item' insertafter="symbolic-links=0" line='$item'
+      with_items:
+        - skip-name-resolve
+        - default-time-zone='+00:00'
+        - innodb_rollback_on_timeout=1
+        - innodb_lock_wait_timeout=600
+        - max_connections=350
+        - log-bin=mysql-bin
+        - binlog-format = 'ROW'
+
+    - name: Ensure MySQL service is started
+      service: name=mysqld state=started
+
+    - name: Ensure MySQL service is enabled at boot
+      service: name=mysqld enabled=yes
+
+    - name: Ensure root password is set
+      mysql_user: user=root password=$mysql_root_password host=localhost
+      ignore_errors: true
+
+    - name: Ensure root has sufficient privileges
+      mysql_user: login_user=root login_password=$mysql_root_password user=root host=% password=$mysql_root_password priv=*.*:GRANT,ALL state=present
+
+This needs to be saved as `/etc/ansible/roles/mysql/tasks/main.yml`
+
+As explained earlier, this playbook in fact describes the state of the
+host rather than setting out commands to be run. For instance, we
+specify certain lines which must be in the my.cnf file and allow Ansible
+to decide whether or not it needs to add them.
+
+Most of the modules are self-explanatory once you see them, but to run
+through them briefly:
+
+The ‘yum’ module is used to specify which packages are required, the
+‘service’ module controls the running of services, while the
+‘mysql_user’ module controls mysql user configuration. The ‘lineinfile’
+module controls the contents of a file.
+
+We have a couple of variables which need declaring. You could do that
+within this playbook or its ‘parent’ playbook, or as a higher level
+variable. I’m going to declare them in a higher level playbook. More on
+this later.
+
+That’s enough to provision a MySQL server. Now for the management
+server.
+
+CloudStack Management server service
+------------------------------------
+
+For the management server role we create a main.yml task like this:
+
+::
+
+    - name: Ensure selinux python bindings are installed
+      yum: name=libselinux-python state=present
+
+    - name: Ensure the Apache CloudStack repo file exists as per template
+      template: src=cloudstack.repo.j2 dest=/etc/yum.repos.d/cloudstack.repo
+
+    - name: Ensure selinux is in permissive mode
+      command: setenforce permissive
+
+    - name: Ensure selinux is set permanently
+      selinux: policy=targeted state=permissive
+
+    - name: Ensure CloudStack packages are installed
+      yum: name=cloud-client state=present
+
+    - name: Ensure vhdutil is in correct location
+      get_url: url=http://download.cloud.com.s3.amazonaws.com/tools/vhd-util dest=/usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/vhd-util mode=0755
+
+
+Save this as `/etc/ansible/roles/cloudstack-manager/tasks/main.yml`
+
+Now we have some new elements to deal with. The Ansible template module
+uses Jinja2 based templating.  As we’re doing a simplified example here,
+the Jinja template for the cloudstack.repo won’t have any variables in
+it, so it would simply look like this:
+
+::
+
+    [cloudstack]
+    name=cloudstack
+    baseurl=http://cloudstack.apt-get.eu/rhel/4.2/
+    enabled=1
+    gpgcheck=0
+
+This is saved in `/etc/ansible/roles/cloudstack-manager/templates/cloudstack.repo.j2`
+
+That gives us the packages installed; now we need to set up the
+database. To do this I’ve created a separate task called setupdb.yml:
+
+::
+
+    - name: cloudstack-setup-databases
+      command: /usr/bin/cloudstack-setup-databases cloud:{{ mysql_cloud_password }}@localhost --deploy-as=root:{{ mysql_root_password }}
+
+    - name: Setup CloudStack manager
+      command: /usr/bin/cloudstack-setup-management
+
+
+Save this as: `/etc/ansible/roles/cloudstack-manager/tasks/setupdb.yml`
+
+As there isn’t (as yet) a CloudStack module, Ansible doesn’t inherently
+know whether or not the databases have already been provisioned;
+therefore this step is not currently idempotent and will overwrite any
+previously provisioned databases.
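+
+One crude way to guard against that, when the database runs locally, is
+the ``command`` module’s ``creates=`` argument, which skips the task if
+the named path already exists. A sketch, assuming MySQL’s default
+datadir (the marker path is an assumption, not part of the original
+playbook):
+
+::
+
+    - name: cloudstack-setup-databases
+      command: /usr/bin/cloudstack-setup-databases cloud:{{ mysql_cloud_password }}@localhost --deploy-as=root:{{ mysql_root_password }} creates=/var/lib/mysql/cloud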
+
+There are some more variables here for us to declare later.
+
+ 
+System VM Templates
+--------------------
+
+
+Finally we would want to seed the system VM templates into the secondary
+storage.  The playbook for this would look as follows:
+
+::
+
+    - name: Ensure secondary storage mount exists
+      file: path={{ tmp_nfs_path }} state=directory
+
+    - name: Ensure NFS storage is mounted
+      mount: name={{ tmp_nfs_path }} src={{ sec_nfs_ip }}:{{ sec_nfs_path }} fstype=nfs state=mounted opts=nolock
+
+    - name: Seed the KVM system VM template
+      command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-06-12-master-kvm.qcow2.bz2 -h kvm -F
+
+    - name: Seed the XenServer system VM template
+      command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-2013-07-12-master-xen.vhd.bz2 -h xenserver -F
+
+    - name: Seed the VMware system VM template
+      command: /usr/share/cloudstack-common/scripts/storage/secondary/cloud-install-sys-tmplt -m {{ tmp_nfs_path }} -u http://download.cloud.com/templates/4.2/systemvmtemplate-4.2-vh7.ov -h vmware -F
+
+
+Save this as `/etc/ansible/roles/cloudstack-manager/tasks/seedstorage.yml`
+
+Again, there isn’t a CloudStack module, so Ansible will always run this
+even if the secondary storage already has the templates in it.
+
+ 
+Bringing it all together
+------------------------
+
+Ansible can use playbooks which run other playbooks; this allows us to
+group these playbooks together and declare variables across all of the
+individual playbooks. So in the Ansible playbook directory create a file
+called deploy-cloudstack.yml, which would look like this:
+
+::
+
+    - hosts: acs-manager
+
+      vars:
+        mysql_root_password: Cl0ud5tack
+        mysql_cloud_password: Cl0ud5tack
+        tmp_nfs_path: /mnt/secondary
+        sec_nfs_ip: IP_OF_YOUR_SECONDARY_STORAGE
+        sec_nfs_path: PATH_TO_YOUR_SECONDARY_STORAGE_MOUNT
+
+      roles:
+        - mysql
+        - cloudstack-manager
+
+      tasks:
+        - include: /etc/ansible/roles/cloudstack-manager/tasks/setupdb.yml
+        - include: /etc/ansible/roles/cloudstack-manager/tasks/seedstorage.yml
+
+
+Save this as `/etc/ansible/deploy-cloudstack.yml`, inserting the IP
+address and path for your secondary storage and changing the passwords
+if you wish to.
+
+ 
+
+To run this go to the Ansible directory (cd /etc/ansible) and run:
+
+::
+
+    # ansible-playbook deploy-cloudstack.yml -k
+
+‘-k’ tells Ansible to ask you for the root password with which to
+connect to the remote host.
+
+Now log in to the CloudStack UI on the new management server.
+
+ 
+
+How is this example different from a production deployment?
+-----------------------------------------------------------
+
+In a production deployment, the Ansible playbooks would configure
+multiple management servers connected to master/slave replicating MySQL
+databases along with any other infrastructure components required and
+deploy and configure the hypervisor hosts. We would also have a
+dedicated file describing the hosts in the environment and a dedicated
+file containing variables which describe the environment.
+
+The advantage of using a configuration management tool such as Ansible
+is that we can specify components like the MySQL database VIP once and
+use it multiple times when configuring the MySQL server itself and other
+components which need to use that information.
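+
+For example, a value like that VIP can be declared once in a group_vars
+file and then referenced from any role (a sketch; the file path follows
+Ansible convention and the variable name is illustrative):
+
+::
+
+    # /etc/ansible/group_vars/all
+    mysql_vip: 192.168.100.10
+
+Any playbook or template can then refer to it as {{ mysql_vip }}.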
+
+
+Acknowledgements
+----------------
+
+Thanks to Shanker Balan for introducing me to Ansible and a load of
+handy hints along the way.

http://git-wip-us.apache.org/repos/asf/cloudstack-docs/blob/5fddad01/rtd/source/conf.py
----------------------------------------------------------------------
diff --git a/rtd/source/conf.py b/rtd/source/conf.py
new file mode 100644
index 0000000..eb253ef
--- /dev/null
+++ b/rtd/source/conf.py
@@ -0,0 +1,344 @@
+# Licensed to the Apache Software Foundation (ASF) under one
+# or more contributor license agreements.  See the NOTICE file
+# distributed with this work for additional information
+# regarding copyright ownership.  The ASF licenses this file
+# to you under the Apache License, Version 2.0 (the
+# "License"); you may not use this file except in compliance
+# with the License.  You may obtain a copy of the License at
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing,
+# software distributed under the License is distributed on an
+# "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+# KIND, either express or implied.  See the License for the
+# specific language governing permissions and limitations
+# under the License.
+
+# -*- coding: utf-8 -*-
+#
+# CloudStack Release Notes documentation build configuration file, created by
+# sphinx-quickstart on Fri Feb  7 16:00:59 2014.
+#
+# This file is execfile()d with the current directory set to its
+# containing dir.
+#
+# Note that not all possible configuration values are present in this
+# autogenerated file.
+#
+# All configuration values have a default; values that are commented out
+# serve to show the default.
+
+import sys
+import os
+
+# If extensions (or modules to document with autodoc) are in another directory,
+# add these directories to sys.path here. If the directory is relative to the
+# documentation root, use os.path.abspath to make it absolute, like shown here.
+#sys.path.insert(0, os.path.abspath('.'))
+
+# -- General configuration ------------------------------------------------
+
+# If your documentation needs a minimal Sphinx version, state it here.
+#needs_sphinx = '1.0'
+
+# Add any Sphinx extension module names here, as strings. They can be
+# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
+# ones.
+extensions = []
+
+# Add any paths that contain templates here, relative to this directory.
+templates_path = ['_templates']
+
+# The suffix of source filenames.
+source_suffix = '.rst'
+
+# The encoding of source files.
+#source_encoding = 'utf-8-sig'
+
+# The master toctree document.
+master_doc = 'index'
+
+# General information about the project.
+project = u'Apache CloudStack'
+copyright = u'2014, Apache CloudStack'
+
+# The version info for the project you're documenting, acts as replacement for
+# |version| and |release|, also used in various other places throughout the
+# built documents.
+#
+# The short X.Y version.
+version = '4.3'
+# The full version, including alpha/beta/rc tags.
+release = '4.3.0'
+
+# The language for content autogenerated by Sphinx. Refer to documentation
+# for a list of supported languages.
+#language = None
+
+# There are two options for replacing |today|: either, you set today to some
+# non-false value, then it is used:
+#today = ''
+# Else, today_fmt is used as the format for a strftime call.
+#today_fmt = '%B %d, %Y'
+
+# List of patterns, relative to source directory, that match files and
+# directories to ignore when looking for source files.
+exclude_patterns = []
+
+# The reST default role (used for this markup: `text`) to use for all
+# documents.
+#default_role = None
+
+# If true, '()' will be appended to :func: etc. cross-reference text.
+#add_function_parentheses = True
+
+# If true, the current module name will be prepended to all description
+# unit titles (such as .. function::).
+#add_module_names = True
+
+# If true, sectionauthor and moduleauthor directives will be shown in the
+# output. They are ignored by default.
+#show_authors = False
+
+# The name of the Pygments (syntax highlighting) style to use.
+pygments_style = 'sphinx'
+
+# A list of ignored prefixes for module index sorting.
+#modindex_common_prefix = []
+
+# If true, keep warnings as "system message" paragraphs in the built documents.
+#keep_warnings = False
+
+
+# -- Options for HTML output ----------------------------------------------
+
+# The theme to use for HTML and HTML Help pages.  See the documentation for
+# a list of builtin themes.
+html_theme = 'default'
+
+# Theme options are theme-specific and customize the look and feel of a theme
+# further.  For a list of options available for each theme, see the
+# documentation.
+#html_theme_options = {}
+
+# Add any paths that contain custom themes here, relative to this directory.
+#html_theme_path = []
+
+# The name for this set of Sphinx documents.  If None, it defaults to
+# "<project> v<release> documentation".
+#html_title = None
+
+# A shorter title for the navigation bar.  Default is the same as html_title.
+#html_short_title = None
+
+# The name of an image file (relative to this directory) to place at the top
+# of the sidebar.
+#html_logo = None
+
+# The name of an image file (within the static path) to use as favicon of the
+# docs.  This file should be a Windows icon file (.ico) being 16x16 or 32x32
+# pixels large.
+#html_favicon = None
+
+# Add any paths that contain custom static files (such as style sheets) here,
+# relative to this directory. They are copied after the builtin static files,
+# so a file named "default.css" will overwrite the builtin "default.css".
+html_static_path = ['_static']
+
+# Add any extra paths that contain custom files (such as robots.txt or
+# .htaccess) here, relative to this directory. These files are copied
+# directly to the root of the documentation.
+#html_extra_path = []
+
+# If not '', a 'Last updated on:' timestamp is inserted at every page bottom,
+# using the given strftime format.
+#html_last_updated_fmt = '%b %d, %Y'
+
+# If true, SmartyPants will be used to convert quotes and dashes to
+# typographically correct entities.
+#html_use_smartypants = True
+
+# Custom sidebar templates, maps document names to template names.
+#html_sidebars = {}
+
+# Additional templates that should be rendered to pages, maps page names to
+# template names.
+#html_additional_pages = {}
+
+# If false, no module index is generated.
+#html_domain_indices = True
+
+# If false, no index is generated.
+#html_use_index = True
+
+# If true, the index is split into individual pages for each letter.
+#html_split_index = False
+
+# If true, links to the reST sources are added to the pages.
+#html_show_sourcelink = True
+
+# If true, "Created using Sphinx" is shown in the HTML footer. Default is True.
+#html_show_sphinx = True
+
+# If true, "(C) Copyright ..." is shown in the HTML footer. Default is True.
+#html_show_copyright = True
+
+# If true, an OpenSearch description file will be output, and all pages will
+# contain a <link> tag referring to it.  The value of this option must be the
+# base URL from which the finished HTML is served.
+#html_use_opensearch = ''
+
+# This is the file name suffix for HTML files (e.g. ".xhtml").
+#html_file_suffix = None
+
+# Output file base name for HTML help builder.
+htmlhelp_basename = 'CloudStackReleaseNotesdoc'
+
+
+# -- Options for LaTeX output ---------------------------------------------
+
+latex_elements = {
+# The paper size ('letterpaper' or 'a4paper').
+#'papersize': 'letterpaper',
+
+# The font size ('10pt', '11pt' or '12pt').
+#'pointsize': '10pt',
+
+# Additional stuff for the LaTeX preamble.
+#'preamble': '',
+}
+
+# Grouping the document tree into LaTeX files. List of tuples
+# (source start file, target name, title,
+#  author, documentclass [howto, manual, or own class]).
+latex_documents = [
+  ('index', 'CloudStackReleaseNotes.tex', u'CloudStack Release Notes Documentation',
+   u'Apache CloudStack', 'manual'),
+]
+
+# The name of an image file (relative to this directory) to place at the top of
+# the title page.
+#latex_logo = None
+
+# For "manual" documents, if this is true, then toplevel headings are parts,
+# not chapters.
+#latex_use_parts = False
+
+# If true, show page references after internal links.
+#latex_show_pagerefs = False
+
+# If true, show URL addresses after external links.
+#latex_show_urls = False
+
+# Documents to append as an appendix to all manuals.
+#latex_appendices = []
+
+# If false, no module index is generated.
+#latex_domain_indices = True
+
+
+# -- Options for manual page output ---------------------------------------
+
+# One entry per manual page. List of tuples
+# (source start file, name, description, authors, manual section).
+man_pages = [
+    ('index', 'cloudstackreleasenotes', u'CloudStack Release Notes Documentation',
+     [u'Apache CloudStack'], 1)
+]
+
+# If true, show URL addresses after external links.
+#man_show_urls = False
+
+
+# -- Options for Texinfo output -------------------------------------------
+
+# Grouping the document tree into Texinfo files. List of tuples
+# (source start file, target name, title, author,
+#  dir menu entry, description, category)
+texinfo_documents = [
+  ('index', 'CloudStackReleaseNotes', u'CloudStack Release Notes Documentation',
+   u'Apache CloudStack', 'CloudStackReleaseNotes', 'One line description of project.',
+   'Miscellaneous'),
+]
+
+# Documents to append as an appendix to all manuals.
+#texinfo_appendices = []
+
+# If false, no module index is generated.
+#texinfo_domain_indices = True
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#texinfo_show_urls = 'footnote'
+
+# If true, do not generate a @detailmenu in the "Top" node's menu.
+#texinfo_no_detailmenu = False
+
+
+# -- Options for Epub output ----------------------------------------------
+
+# Bibliographic Dublin Core info.
+epub_title = u'CloudStack Release Notes'
+epub_author = u'Apache CloudStack'
+epub_publisher = u'Apache CloudStack'
+epub_copyright = u'2014, Apache CloudStack'
+
+# The basename for the epub file. It defaults to the project name.
+#epub_basename = u'CloudStack Release Notes'
+
+# The HTML theme for the epub output. Since the default themes are not optimized
+# for small screen space, using the same theme for HTML and epub output is
+# usually not wise. This defaults to 'epub', a theme designed to save visual
+# space.
+#epub_theme = 'epub'
+
+# The language of the text. It defaults to the language option
+# or en if the language is not set.
+#epub_language = ''
+
+# The scheme of the identifier. Typical schemes are ISBN or URL.
+#epub_scheme = ''
+
+# The unique identifier of the text. This can be a ISBN number
+# or the project homepage.
+#epub_identifier = ''
+
+# A unique identification for the text.
+#epub_uid = ''
+
+# A tuple containing the cover image and cover page html template filenames.
+#epub_cover = ()
+
+# A sequence of (type, uri, title) tuples for the guide element of content.opf.
+#epub_guide = ()
+
+# HTML files that should be inserted before the pages created by sphinx.
+# The format is a list of tuples containing the path and title.
+#epub_pre_files = []
+
+# HTML files shat should be inserted after the pages created by sphinx.
+# The format is a list of tuples containing the path and title.
+#epub_post_files = []
+
+# A list of files that should not be packed into the epub file.
+#epub_exclude_files = []
+
+# The depth of the table of contents in toc.ncx.
+#epub_tocdepth = 3
+
+# Allow duplicate toc entries.
+#epub_tocdup = True
+
+# Choose between 'default' and 'includehidden'.
+#epub_tocscope = 'default'
+
+# Fix unsupported image types using the PIL.
+#epub_fix_images = False
+
+# Scale large images.
+#epub_max_image_width = 0
+
+# How to display URL addresses: 'footnote', 'no', or 'inline'.
+#epub_show_urls = 'inline'
+
+# If false, no index is generated.
+#epub_use_index = True

http://git-wip-us.apache.org/repos/asf/cloudstack-docs/blob/5fddad01/rtd/source/developer_guide.rst
----------------------------------------------------------------------
diff --git a/rtd/source/developer_guide.rst b/rtd/source/developer_guide.rst
new file mode 100644
index 0000000..7ccb6be
--- /dev/null
+++ b/rtd/source/developer_guide.rst
@@ -0,0 +1,653 @@
+CloudStack Installation from Source for Developers
+==================================================
+
+This book is aimed at CloudStack developers who need to build the code.
+These instructions are valid on Ubuntu 12.04 and CentOS 6.4 systems
+and were tested with the 4.2 release of Apache CloudStack; please adapt
+them if you are on a different operating system or using a newer/older
+version of CloudStack. This book is composed of the following sections:
+
+1. Installation of the prerequisites
+2. Compiling and installation from source
+3. Using the CloudStack simulator
+4. Installation with DevCloud the CloudStack sandbox
+5. Building your own packages
+6. The CloudStack API
+7. Testing the AWS API interface
+
+
+Prerequisites
+-------------
+
+In this section we'll look at installing the dependencies you'll need
+for Apache CloudStack development.
+
+On Ubuntu 12.04
+~~~~~~~~~~~~~~~
+
+First update and upgrade your system:
+
+::
+
+    apt-get update 
+    apt-get upgrade
+
+NTP might already be installed; check it with ``service ntp status``. If
+it's not, then install NTP to synchronize the clocks:
+
+::
+
+    apt-get install openntpd
+
+Install ``openjdk``. As we're using Linux, OpenJDK is our first choice.
+
+::
+
+    apt-get install openjdk-6-jdk
+
+Install ``tomcat6``, note that the new version of tomcat on
+`Ubuntu <http://packages.ubuntu.com/precise/all/tomcat6>`__ is the
+6.0.35 version.
+
+::
+
+    apt-get install tomcat6
+
+Next, we'll install MySQL if it's not already present on the system.
+
+::
+
+    apt-get install mysql-server
+
+Remember to set the correct ``mysql`` password in the CloudStack
+properties file. MySQL should be running, but you can check its status
+with:
+
+::
+
+    service mysql status
+
+Developers wanting to build CloudStack from source will want to install
+the following additional packages. If you don't want to build from
+source, just jump to the next section.
+
+Install ``git`` to later clone the CloudStack source code:
+
+::
+
+    apt-get install git
+
+Install ``Maven`` to later build CloudStack
+
+::
+
+    apt-get install maven
+
+This should have installed Maven 3.0, check the version number with
+``mvn --version``
+
+A little bit of Python can be used (e.g. for the simulator), so install
+the Python package management tools:
+
+::
+
+    apt-get install python-pip python-setuptools
+
+Finally install ``mkisofs`` with:
+
+::
+
+    apt-get install genisoimage
+
+On CentOS 6.4
+~~~~~~~~~~~~~
+
+First update and upgrade your system:
+
+::
+
+    yum -y update
+    yum -y upgrade
+
+If not already installed, install NTP for clock synchronization:
+
+::
+
+    yum -y install ntp
+
+Install ``openjdk``. As we're using Linux, OpenJDK is our first choice.
+
+::
+
+    yum -y install java-1.6.0-openjdk
+
+Install ``tomcat6``. Note that the version of tomcat6 in the default
+CentOS 6.4 repo is 6.0.24, so we will grab the 6.0.35 version. The
+6.0.24 version will be installed anyway as a dependency of cloudstack.
+
+::
+
+    wget https://archive.apache.org/dist/tomcat/tomcat-6/v6.0.35/bin/apache-tomcat-6.0.35.tar.gz
+    tar xzvf apache-tomcat-6.0.35.tar.gz -C /usr/local
+
+Setup tomcat6 system wide by creating a file
+``/etc/profile.d/tomcat.sh`` with the following content:
+
+::
+
+    export CATALINA_BASE=/usr/local/apache-tomcat-6.0.35
+    export CATALINA_HOME=/usr/local/apache-tomcat-6.0.35
+
+Next, we'll install MySQL if it's not already present on the system.
+
+::
+
+    yum -y install mysql mysql-server
+
+Remember to set the correct ``mysql`` password in the CloudStack
+properties file. MySQL should be running, but you can check its status
+with:
+
+::
+
+    service mysqld status
+
+Install ``git`` to later clone the CloudStack source code:
+
+::
+
+    yum -y install git
+
+Install ``Maven`` to later build CloudStack. Grab the 3.0.5 release from
+the Maven `website <http://maven.apache.org/download.cgi>`__
+
+::
+
+    wget http://mirror.cc.columbia.edu/pub/software/apache/maven/maven-3/3.0.5/binaries/apache-maven-3.0.5-bin.tar.gz
+    tar xzf apache-maven-3.0.5-bin.tar.gz -C /usr/local
+    cd /usr/local
+    ln -s apache-maven-3.0.5 maven
+
+Setup Maven system wide by creating a ``/etc/profile.d/maven.sh`` file
+with the following content:
+
+::
+
+    export M2_HOME=/usr/local/maven
+    export PATH=${M2_HOME}/bin:${PATH}
+
+Log out and log in again and you will have maven in your PATH:
+
+::
+
+    mvn --version
+
+This should have installed Maven 3.0, check the version number with
+``mvn --version``
+
+A little bit of Python can be used (e.g. for the simulator), so install
+the Python package management tools:
+
+::
+
+    yum -y install python-setuptools
+
+To install python-pip, you might want to set up the Extra Packages for
+Enterprise Linux (EPEL) repo:
+
+::
+
+    cd /tmp
+    wget http://mirror-fpt-telecom.fpt.net/fedora/epel/6/i386/epel-release-6-8.noarch.rpm
+    rpm -ivh epel-release-6-8.noarch.rpm
+
+Then update your repository cache with ``yum update`` and install pip
+with ``yum -y install python-pip``.
+
+Finally install ``mkisofs`` with:
+
+::
+
+    yum -y install genisoimage
+
+
+Installing from Source
+----------------------
+
+CloudStack uses git for source version control; if you know little about
+git, `this book <http://book.git-scm.com/>`__ is a good start. Once you
+have git set up on your machine, pull the source with:
+
+::
+
+    git clone https://git-wip-us.apache.org/repos/asf/cloudstack.git
+
+To build the latest stable release:
+
+::
+
+    git checkout 4.2
+
+To compile Apache CloudStack, go to the cloudstack source folder and
+run:
+
+::
+
+    mvn -Pdeveloper,systemvm clean install
+
+If you want to skip the tests, add ``-DskipTests`` to the command above.
+
+Make sure you have set the proper db password in
+``utils/conf/db.properties``.
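+
+The relevant entries look something like this (a sketch; the key names
+are quoted from memory of the 4.2 tree, so double-check against your
+checkout):
+
+::
+
+    db.cloud.username=cloud
+    db.cloud.password=cloud
+    db.root.password=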
+
+Deploy the database next:
+
+::
+
+    mvn -P developer -pl developer -Ddeploydb
+
+Run Apache CloudStack with jetty for testing. Note that ``tomcat`` may
+be running on port 8080; stop it before you use ``jetty``:
+
+::
+
+    mvn -pl :cloud-client-ui jetty:run
+
+Log Into Apache CloudStack:
+
+Open your Web browser and use this URL to connect to CloudStack:
+
+::
+
+    http://localhost:8080/client/
+
+Replace ``localhost`` with the IP of your management server if need be.
+
+.. note:: If you have iptables enabled, you may have to open the ports used by CloudStack. Specifically, ports 8080, 8250, and 9090.
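+
+On CentOS, for instance, something along these lines would open them
+(``service iptables save`` is CentOS-specific; adjust for your
+distribution):
+
+::
+
+    iptables -I INPUT -p tcp --dport 8080 -j ACCEPT
+    iptables -I INPUT -p tcp --dport 8250 -j ACCEPT
+    iptables -I INPUT -p tcp --dport 9090 -j ACCEPT
+    service iptables save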
+
+You can now start configuring a Zone and playing with the API. Of course
+we did not set up any infrastructure: there is no storage, no
+hypervisors, etc. However you can run tests using the simulator. The
+following section shows you how to use the simulator so that you don't
+have to set up a physical infrastructure.
+
+Using the Simulator
+-------------------
+
+CloudStack comes with a simulator based on Python bindings called
+*Marvin*. Marvin is available in the CloudStack source code or on Pypi.
+With Marvin you can simulate your data center infrastructure by
+providing CloudStack with a configuration file that defines the number
+of zones/pods/clusters/hosts, types of storage etc. You can then develop
+and test the CloudStack management server *as if* it was managing your
+production infrastructure.
+
+Do a clean build:
+
+::
+
+    mvn -Pdeveloper -Dsimulator -DskipTests clean install
+
+Deploy the database:
+
+::
+
+    mvn -Pdeveloper -pl developer -Ddeploydb
+    mvn -Pdeveloper -pl developer -Ddeploydb-simulator
+
+Install marvin. Note that you will need to have installed ``pip``
+properly in the prerequisites step.
+
+::
+
+    pip install tools/marvin/dist/Marvin-0.1.0.tar.gz
+
+Stop jetty (from any previous runs)
+
+::
+
+    mvn -pl :cloud-client-ui jetty:stop
+
+Start jetty
+
+::
+
+    mvn -pl client jetty:run
+
+Setup a basic zone with Marvin. In a separate shell:
+
+::
+
+    mvn -Pdeveloper,marvin.setup -Dmarvin.config=setup/dev/basic.cfg -pl :cloud-marvin integration-test
+
+At this stage, log in to the CloudStack management server at
+http://localhost:8080/client with the credentials admin/password; you
+should see a fully configured basic zone infrastructure. To simulate an
+advanced zone replace ``basic.cfg`` with ``advanced.cfg``.
+
+You can now run integration tests, use the API etc...
+
+Using DevCloud
+--------------
+
+The Installing from source section will only get you to the point of
+running the management server; it does not get you any hypervisors. The
+simulator section gets you a simulated datacenter for testing. With
+DevCloud you can run at least one hypervisor and add it to your
+management server the way you would a real physical machine.
+
+`DevCloud <https://cwiki.apache.org/confluence/display/CLOUDSTACK/DevCloud>`__
+is the CloudStack sandbox, the standard version is a VirtualBox based
+image. There is also a KVM based image for it. Here we only show steps
+with the VirtualBox image. For KVM see the
+`wiki <https://cwiki.apache.org/confluence/display/CLOUDSTACK/devcloud-kvm>`__.
+
+DevCloud Prerequisites
+~~~~~~~~~~~~~~~~~~~~~~
+
+1. Install `VirtualBox <http://www.virtualbox.org>`__ on your machine
+
+2. Run VirtualBox and under >Preferences create a *host-only interface*
+   on which you disable the DHCP server
+
+3. Download the DevCloud
+   `image <http://people.apache.org/~bhaisaab/cloudstack/devcloud/devcloud2.ova>`__
+
+4. In VirtualBox, under File > Import Appliance import the DevCloud
+   image.
+
+5. Verify the settings under > Settings and check the ``enable PAE``
+   option in the processor menu
+
+6. Once the VM has booted, try to ``ssh`` to it with the credentials
+   ``root/password``:
+
+   ::
+
+       ssh root@192.168.56.10
+
+Adding DevCloud as a Hypervisor
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Picking up from a clean build:
+
+::
+
+    mvn -Pdeveloper,systemvm clean install
+    mvn -P developer -pl developer,tools/devcloud -Ddeploydb
+
+At this stage, install marvin just as you did for the simulator:
+
+::
+
+    pip install tools/marvin/dist/Marvin-0.1.0.tar.gz
+
+Start the management server
+
+::
+
+    mvn -pl client jetty:run
+
+Then you are going to configure CloudStack to use the running DevCloud
+instance:
+
+::
+
+    cd tools/devcloud
+    python ../marvin/marvin/deployDataCenter.py -i devcloud.cfg
+
+If you are curious, check the ``devcloud.cfg`` file and see how the data
+center is defined: 1 Zone, 1 Pod, 1 Cluster, 1 Host, 1 primary Storage,
+1 Secondary Storage, all provided by DevCloud.
+
+You can now log in to the management server at
+``http://localhost:8080/client`` and start experimenting with the UI or
+the API.
+
+Do note that the management server is running on your local machine and
+that DevCloud is used only as a hypervisor. You could potentially run
+the management server within DevCloud as well, or, memory permitting,
+run multiple DevClouds.
+
+Building Packages
+-----------------
+
+Working from source is necessary when developing CloudStack. As
+mentioned earlier this is not primarily intended for users. However some
+may want to modify the code for their own use and specific
+infrastructure. They may also need to build their own packages for
+security reasons and due to network connectivity constraints. This
+section shows you the gist of how to build packages. We assume that the
+reader will know how to create a repository to serve these packages. The
+complete documentation is available on the
+`website <http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.2.0/html/Installation_Guide/sect-source-builddebs.html>`__
+
+To build Debian packages you will need a couple of extra packages that
+we did not need to install for source compilation:
+
+::
+
+    apt-get install python-mysqldb
+    apt-get install debhelper
+
+Then build the packages with:
+
+::
+
+    dpkg-buildpackage -uc -us
+
+One directory up from the CloudStack root dir you will find:
+
+::
+
+    cloudstack_4.2.0_amd64.changes
+    cloudstack_4.2.0.dsc
+    cloudstack_4.2.0.tar.gz
+    cloudstack-agent_4.2.0_all.deb
+    cloudstack-awsapi_4.2.0_all.deb
+    cloudstack-cli_4.2.0_all.deb
+    cloudstack-common_4.2.0_all.deb
+    cloudstack-docs_4.2.0_all.deb
+    cloudstack-management_4.2.0_all.deb
+    cloudstack-usage_4.2.0_all.deb
+
+Of course the community provides a repository for these packages and you
+can use it instead of building your own packages and putting them in
+your own repo. Instructions on how to use this community repository are
+available in the installation book.
+
+The CloudStack API
+------------------
+
+The CloudStack API is a query based API using http that returns results
+in XML or JSON. It is used to implement the default web UI. This API is
+not a standard like `OGF
+OCCI <http://www.ogf.org/gf/group_info/view.php?group=occi-wg>`__ or
+`DMTF CIMI <http://dmtf.org/standards/cloud>`__ but is easy to learn.
+Mapping exists between the AWS API and the CloudStack API as will be
+seen in the next section. Recently a Google Compute Engine interface was
+also developed that maps the GCE REST API to the CloudStack API
+described here. The API
+`docs <http://cloudstack.apache.org/docs/api/>`__ are a good start to
+learn the extent of the API. Multiple clients exist on
+`github <https://github.com/search?q=cloudstack+client&ref=cmdform>`__
+to use this API, you should be able to find one in your favorite
+language. The reference documentation for the API and changes that might
+occur from version to version is available
+`on-line <http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.1.1/html/Developers_Guide/index.html>`__.
+This short section is aimed at providing a quick summary to give you a
+base understanding of how to use this API. As a quick start, a good way
+to explore the API is to navigate the dashboard with a firebug console
+(or similar developer console) to study the queries.
+
+In a succinct statement, the CloudStack query API can be used via http
+GET requests made against your cloud endpoint (e.g.
+http://localhost:8080/client/api). The API name is passed using the
+``command`` key and the various parameters for this API call are passed
+as key value pairs. The request is signed using the access key and
+secret key of the user making the call. Some calls are synchronous while
+some are asynchronous, this is documented in the API
+`docs <http://cloudstack.apache.org/docs/api/>`__. Asynchronous calls
+return a ``jobid``, the status and result of a job can be queried with
+the ``queryAsyncJobResult`` call. Let's get started and give an example
+of calling the ``listUsers`` API in Python.
+
+First you will need to generate keys to make requests. In the
+dashboard, go under ``Accounts``, select the appropriate account, then
+click on ``Show Users``, select the intended user and generate keys
+using the ``Generate Keys`` icon. You will see an ``API Key`` and
+``Secret Key`` field being generated. The keys will be of the form:
+
+::
+
+    API Key : XzAz0uC0t888gOzPs3HchY72qwDc7pUPIO8LxC-VkIHo4C3fvbEBY_Ccj8fo3mBapN5qRDg_0_EbGdbxi8oy1A
+    Secret Key: zmBOXAXPlfb-LIygOxUVblAbz7E47eukDS_0JYUxP3JAmknOYo56T0R-AcM7rK7SMyo11Y6XW22gyuXzOdiybQ
+
+Open a Python shell and import the basic modules necessary to make the
+request. Do note that this request could be made many different ways;
+this is just a low level example. The ``urllib*`` modules are used to
+make the http request and do url encoding. The ``hashlib`` module gives
+us the sha1 hash function, which is used to generate the ``hmac`` (Keyed
+Hashing for Message Authentication) using the secretkey. The result is
+encoded using the ``base64`` module.
+
+::
+
+    $python
+    Python 2.7.3 (default, Nov 17 2012, 19:54:34) 
+    [GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
+    Type "help", "copyright", "credits" or "license" for more information.
+    >>> import urllib2
+    >>> import urllib
+    >>> import hashlib
+    >>> import hmac
+    >>> import base64
+
+Define the endpoint of the Cloud, the command that you want to execute,
+the type of the response (i.e. XML or JSON) and the keys of the user.
+Note that we do not put the secretkey in our request dictionary because
+it is only used to compute the hmac.
+
+::
+
+    >>> baseurl='http://localhost:8080/client/api?'
+    >>> request={}
+    >>> request['command']='listUsers'
+    >>> request['response']='json'
+    >>> request['apikey']='plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg'
+    >>> secretkey='VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ'
+
+Build the base request string: the combination of all the key/value
+pairs of the request, url encoded and joined with ampersand.
+
+::
+
+    >>> request_str='&'.join(['='.join([k,urllib.quote_plus(request[k])]) for k in request.keys()])
+    >>> request_str
+    'apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json'
+
+Compute the signature with hmac, do a base64 encoding and a url
+encoding; the string used for the signature is similar to the base
+request string shown above, but the keys/values are lower cased and
+joined in a sorted order.
+
+::
+
+    >>> sig_str='&'.join(['='.join([k.lower(),urllib.quote_plus(request[k].lower().replace('+','%20'))])for k in sorted(request.iterkeys())]) 
+    >>> sig_str
+    'apikey=plgwjfzk4gys3momtvmjuvg-x-jlwlnfauj9gabbbf9edm-kaymmailqzzq1elzlyq_u38zcm0bewzgudp66mg&command=listusers&response=json'
+    >>> sig=hmac.new(secretkey,sig_str,hashlib.sha1).digest()
+    >>> sig
+    'M:]\x0e\xaf\xfb\x8f\xf2y\xf1p\x91\x1e\x89\x8a\xa1\x05\xc4A\xdb'
+    >>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest())
+    >>> sig
+    'TTpdDq/7j/J58XCRHomKoQXEQds=\n'
+    >>> sig=base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip()
+    >>> sig
+    'TTpdDq/7j/J58XCRHomKoQXEQds='
+    >>> sig=urllib.quote_plus(base64.encodestring(hmac.new(secretkey,sig_str,hashlib.sha1).digest()).strip())
+
+Finally, build the entire string by joining the baseurl, the request
+string and the signature. Then do an http GET:
+
+::
+
+    >>> req=baseurl+request_str+'&signature='+sig
+    >>> req
+    'http://localhost:8080/client/api?apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json&signature=TTpdDq%2F7j%2FJ58XCRHomKoQXEQds%3D'
+    >>> res=urllib2.urlopen(req)
+    >>> res.read()
+    '{ "listusersresponse" : { "count":1 ,"user" : [  {"id":"7ed6d5da-93b2-4545-a502-23d20b48ef2a","username":"admin","firstname":"admin",
+                                                       "lastname":"cloud","created":"2012-07-05T12:18:27-0700","state":"enabled","account":"admin",
+                                                       "accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT",
+                                                       "apikey":"plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg",
+                                                       "secretkey":"VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ",
+                                                       "accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"}]}}
+                                                       
+
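+A minimal sketch collecting those steps into one reusable helper (same
+Python 2 modules as above; the function name is just for illustration):
+
+::
+
+    import base64
+    import hashlib
+    import hmac
+    import urllib
+    import urllib2
+
+    def make_request(baseurl, command, params, apikey, secretkey):
+        request = dict(params)
+        request['command'] = command
+        request['response'] = 'json'
+        request['apikey'] = apikey
+        # Build the url-encoded request string from the key/value pairs.
+        request_str = '&'.join(['='.join([k, urllib.quote_plus(request[k])])
+                                for k in request.keys()])
+        # The signature string is lower-cased and sorted by key.
+        sig_str = '&'.join(['='.join([k.lower(),
+                            urllib.quote_plus(request[k].lower().replace('+', '%20'))])
+                            for k in sorted(request.iterkeys())])
+        sig = urllib.quote_plus(base64.encodestring(
+            hmac.new(secretkey, sig_str, hashlib.sha1).digest()).strip())
+        return urllib2.urlopen(baseurl + request_str + '&signature=' + sig).read()
+
+    # Example, reusing the keys defined earlier:
+    # print make_request('http://localhost:8080/client/api?', 'listUsers', {},
+    #                    apikey, secretkey)
+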
+All the clients that you will find on github will implement this
+signature technique; you should not have to do it by hand. Now that you
+have explored the API through the UI and understand how to make
+low level calls, pick your favorite client or use
+`CloudMonkey <https://pypi.python.org/pypi/cloudmonkey/>`__. CloudMonkey
+is a sub-project of Apache CloudStack and gives operators/developers the
+ability to use any of the API methods. It has nice auto-completion and
+help features as well as an API discovery mechanism since 4.2.
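+
+A quick taste of CloudMonkey (a sketch; the exact ``set`` keys have
+varied between CloudMonkey releases, so consult its built-in help):
+
+::
+
+    $ pip install cloudmonkey
+    $ cloudmonkey
+    > set host localhost
+    > set port 8080
+    > set apikey <your API key>
+    > set secretkey <your secret key>
+    > list users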
+
+Testing the AWS API interface
+-----------------------------
+
+While the native CloudStack API is not a standard, CloudStack provides
+an AWS EC2 compatible interface. It has the great advantage that
+existing tools written with EC2 libraries can be re-used against a
+CloudStack based cloud. In the installation books we described how to
+run this interface by installing packages. In this section we show you
+how to compile the interface with ``maven`` and test it with the Python
+boto module.
+
+Starting from a running management server (with DevCloud for instance),
+start the AWS API interface in a separate shell with:
+
+::
+
+    mvn -Pawsapi -pl :cloud-awsapi jetty:run
+
+Log into the CloudStack UI ``http://localhost:8080/client``, go to
+*Service Offerings* and edit one of the compute offerings to have the
+name ``m1.small`` or any of the other AWS EC2 instance types.
+
+With access and secret keys generated for a user you should now be able
+to use Python `Boto <http://docs.pythonboto.org/en/latest/>`__ module:
+
+::
+
+    import boto
+    import boto.ec2
+
+    accesskey="2IUSA5xylbsPSnBQFoWXKg3RvjHgsufcKhC1SeiCbeEc0obKwUlwJamB_gFmMJkFHYHTIafpUx0pHcfLvt-dzw"
+    secretkey="oxV5Dhhk5ufNowey7OVHgWxCBVS4deTl9qL0EqMthfPBuy3ScHPo2fifDxw1aXeL5cyH10hnLOKjyKphcXGeDA"
+
+    region = boto.ec2.regioninfo.RegionInfo(name="ROOT", endpoint="localhost")
+    conn = boto.connect_ec2(aws_access_key_id=accesskey, aws_secret_access_key=secretkey, is_secure=False, region=region, port=7080, path="/awsapi", api_version="2012-08-15")
+
+    images=conn.get_all_images()
+    print images
+
+    res = images[0].run(instance_type='m1.small',security_groups=['default'])
+
+Note the new ``api_version`` number in the connection object, and also
+note that there is no user registration step to perform as there was in
+previous CloudStack releases.
+
+Conclusions
+-----------
+
+CloudStack is a mostly Java application running with Tomcat and MySQL.
+It consists of a management server and, depending on the hypervisors
+being used, an agent installed on the hypervisor farm. To complete a
+Cloud infrastructure, however, you will also need some Zone wide storage
+a.k.a Secondary Storage and some Cluster wide storage a.k.a Primary
+storage. The choice of hypervisor, storage solution and type of Zone
+(i.e. Basic vs. Advanced) will dictate how complex your installation can
+be. As a quick start, you might want to consider KVM+NFS and a Basic
+Zone.
+
+If you've run into any problems with this, please ask on the
+cloudstack-dev `mailing list </mailing-lists.html>`__.

http://git-wip-us.apache.org/repos/asf/cloudstack-docs/blob/5fddad01/rtd/source/index.rst
----------------------------------------------------------------------
diff --git a/rtd/source/index.rst b/rtd/source/index.rst
new file mode 100644
index 0000000..8e61e62
--- /dev/null
+++ b/rtd/source/index.rst
@@ -0,0 +1,48 @@
+.. CloudStack Documentation documentation master file, created by
+   sphinx-quickstart on Sat Nov  2 11:17:30 2013.
+   You can adapt this file completely to your liking, but it should at least
+   contain the root `toctree` directive.
+
+Welcome to CloudStack Documentation!
+====================================
+
+.. figure:: /_static/images/acslogo.png
+    :align: center
+
+Networking Guides
+------------------
+
+.. toctree::
+    :maxdepth: 2
+
+    networking/nicira-plugin
+    networking/midonet
+    networking/ovs-plugin
+    networking/autoscale_without_netscaler
+    networking/troubleshoot_internet_traffic
+    networking/vxlan
+
+Allocator Guide
+---------------
+
+.. toctree::
+    :maxdepth: 2
+
+    alloc
+
+Developer's Guide
+------------------
+
+.. toctree::
+    :maxdepth: 2
+
+    developer_guide
+    ansible
+
+Indices and tables
+==================
+
+* :ref:`genindex`
+* :ref:`modindex`
+* :ref:`search`
+

http://git-wip-us.apache.org/repos/asf/cloudstack-docs/blob/5fddad01/rtd/source/networking/autoscale_without_netscaler.rst
----------------------------------------------------------------------
diff --git a/rtd/source/networking/autoscale_without_netscaler.rst b/rtd/source/networking/autoscale_without_netscaler.rst
new file mode 100644
index 0000000..2f8a1e3
--- /dev/null
+++ b/rtd/source/networking/autoscale_without_netscaler.rst
@@ -0,0 +1,85 @@
+Configuring AutoScale without using NetScaler
+=============================================
+
+What is AutoScaling?
+~~~~~~~~~~~~~~~~~~~~
+
+AutoScaling allows you to scale your back-end services or application VMs up or down seamlessly and automatically according to the conditions you define. With AutoScaling enabled, you can ensure that the number of VMs you are using seamlessly scale up when demand increases, and automatically decreases when demand subsides. Thus it helps you save compute costs by terminating underused VMs automatically and launching new VMs when you need them, without the need for manual intervention.
+
+Hypervisor support
+~~~~~~~~~~~~~~~~~~
+
+At this time, AutoScaling without NetScaler is only supported on XenServer. We are working to support KVM as well.
+
+Prerequisites
+~~~~~~~~~~~~~
+
+Before you configure an AutoScale rule, consider the following:
+
+* Ensure that the necessary template is prepared before configuring AutoScale. First you must install the PV driver, which helps XenServer collect performance parameters (CPU and memory) from VMs. Besides, when a VM is deployed by using a template and comes up, the application should be up and running.
+
+Configuration
+~~~~~~~~~~~~~
+
+Specify the following:
+
+.. image:: ../_static/images/autoscale-config.png
+
+* Template: A template consists of a base OS image and application. A template is used to provision the new instance of an application on a scaleup action. When a VM is deployed from a template, the VM can start taking the traffic from the load balancer without any admin intervention. For example, if the VM is deployed for a Web service, it should have the Web server running, the database connected, and so on.
+
+* Compute offering: A predefined set of virtual hardware attributes, including CPU speed, number of CPUs, and RAM size, that the user can select when creating a new virtual machine instance. Choose one of the compute offerings to be used while provisioning a VM instance as part of scaleup action.
+
+* Min Instance: The minimum number of active VM instances that is assigned to a load balancing rule. The active VM instances are the application instances that are up and serving the traffic, and are being load balanced. This parameter ensures that a load balancing rule has at least the configured number of active VM instances available to serve the traffic.
+
+* Max Instance: Maximum number of active VM instances that should be assigned to a load balancing rule. This parameter defines the upper limit of active VM instances that can be assigned to a load balancing rule.
+
+Specifying a large value for the maximum instance parameter might result in provisioning large number of VM instances, which in turn leads to a single load balancing rule exhausting the VM instances limit specified at the account or domain level.
+
+Specify the following scale-up and scale-down policies:
+
+* Duration: The duration, in seconds, for which the conditions you specify must be true to trigger a scaleup action. The conditions defined should hold true for the entire duration you specify for an AutoScale action to be invoked.
+
+* Counter: The performance counters expose the state of the monitored instances. We added two new counters to work with this feature:
+
+- Linux User CPU [native] - percentage
+- Linux User RAM [native] - percentage
+
+Remember to choose one of them. If you choose anything else, AutoScaling will not work.
+
+* Operator: The following five relational operators are supported in AutoScale feature: Greater than, Less than, Less than or equal to, Greater than or equal to, and Equal to.
+
+* Threshold: Threshold value to be used for the counter. Once the counter defined above breaches the threshold value, the AutoScale feature initiates a scaleup or scaledown action.
+
+* Add: Click Add to add the condition.
+
+Additionally, if you want to configure the advanced settings, click Show advanced settings, and specify the following:
+
+* Polling interval: Frequency in which the conditions, combination of counter, operator and threshold, are to be evaluated before taking a scale up or down action. The default polling interval is 30 seconds.
+
+* Quiet Time: This is the cool down period after an AutoScale action is initiated. The time includes the time taken to complete provisioning a VM instance from its template and the time taken by an application to be ready to serve traffic. This quiet time allows the fleet to come up to a stable state before any action can take place. The default is 300 seconds.
+
+* Destroy VM Grace Period: The duration in seconds, after a scaledown action is initiated, to wait before the VM is destroyed as part of scaledown action. This is to ensure graceful close of any pending sessions or transactions being served by the VM marked for destroy. The default is 120 seconds.
+
+* Apply: Click Apply to create the AutoScale configuration.
+
+Disabling and Enabling an AutoScale Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+If you want to perform any maintenance operation on the AutoScale VM instances, disable the AutoScale configuration. When the AutoScale configuration is disabled, no scaleup or scaledown action is performed. You can use this downtime for the maintenance activities. To disable the AutoScale configuration, click the Disable AutoScale button.
+
+The button toggles between enable and disable, depending on whether AutoScale is currently enabled or not. After the maintenance operations are done, you can enable the AutoScale configuration back. To enable, open the AutoScale configuration page again, then click the Enable AutoScale button.
+
+Updating an AutoScale Configuration
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+You can update the various parameters and add or delete the conditions in a scaleup or scaledown rule. Before you update an AutoScale configuration, ensure that you disable the AutoScale load balancer rule by clicking the Disable AutoScale button.
+After you modify the required AutoScale parameters, click Apply. To apply the new AutoScale policies, open the AutoScale configuration page again, then click the Enable AutoScale button.
+
+Runtime Considerations
+~~~~~~~~~~~~~~~~~~~~~~
+
+An administrator should not assign a VM to a load balancing rule which is configured for AutoScale.
+
+Making API calls outside the context of AutoScale, such as destroyVM, on an autoscaled VM leaves the load balancing configuration in an inconsistent state. Though the VM is destroyed from the load balancer rule, it continues to be shown as a service assigned to the rule inside the context of AutoScale.
+
+

http://git-wip-us.apache.org/repos/asf/cloudstack-docs/blob/5fddad01/rtd/source/networking/midonet.rst
----------------------------------------------------------------------
diff --git a/rtd/source/networking/midonet.rst b/rtd/source/networking/midonet.rst
new file mode 100644
index 0000000..73d7ab2
--- /dev/null
+++ b/rtd/source/networking/midonet.rst
@@ -0,0 +1,143 @@
+The MidoNet Plugin
+==================
+
+Introduction to the MidoNet Plugin
+----------------------------------
+
+The MidoNet plugin allows CloudStack to use the MidoNet virtualized
+networking solution as a provider for CloudStack networks and services. For
+more information on MidoNet and how it works, see
+http://www.midokura.com/midonet/.
+
+Features of the MidoNet Plugin
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. note::    In CloudStack 4.2.0 only the KVM hypervisor is supported for use in
+    combination with MidoNet.
+
+In CloudStack release 4.2.0 this plugin supports several services in the
+Advanced Isolated network mode.
+
+When tenants create new isolated layer 3 networks, instead of spinning
+up extra Virtual Router VMs, the relevant L3 elements (routers etc) are
+created in the MidoNet virtual topology by making the appropriate calls
+to the MidoNet API. Instead of using VLANs, isolation is provided by
+MidoNet.
+
+Aside from the above service (Connectivity), several extra features are
+supported in the 4.2.0 release:
+
+-  DHCP
+
+-  Firewall (ingress)
+
+-  Source NAT
+
+-  Static NAT
+
+-  Port Forwarding
+
+The plugin has been tested with MidoNet version 12.12 (Caddo).
+
+Using the MidoNet Plugin
+------------------------
+
+Prerequisites
+~~~~~~~~~~~~~
+
+In order to use the MidoNet plugin, the compute hosts must be running
+the MidoNet Agent, and the MidoNet API server must be available. Please
+consult the MidoNet User Guide for more information. The following
+section describes the CloudStack-side setup.
+
+1. CloudStack needs to have at least one physical network with the
+   isolation method set to "MIDO". This network should be enabled for
+   the Guest and Public traffic types.
+
+2. Next, we need to set the following CloudStack settings under "Global
+   Settings" in the UI:
+
+   +-----------------------------+------------------------------------------------------------------------+--------------------------------------------+
+   | Setting Name                | Description                                                            | Example                                    |
+   +=============================+========================================================================+============================================+
+   | midonet.apiserver.address   | Specify the address at which the Midonet API server can be contacted   | http://192.168.1.144:8081/midolmanj-mgmt   |
+   +-----------------------------+------------------------------------------------------------------------+--------------------------------------------+
+   | midonet.providerrouter.id   | Specifies the UUID of the Midonet provider router                      | d7c5e6a3-e2f4-426b-b728-b7ce6a0448e5       |
+   +-----------------------------+------------------------------------------------------------------------+--------------------------------------------+
+
+   Table: CloudStack settings
+
+3. We also want MidoNet to take care of public traffic, so in
+   *componentContext.xml* we need to replace this line:
+
+   ::
+
+       <bean id="PublicNetworkGuru" class="com.cloud.network.guru.PublicNetworkGuru">
+         
+
+   With this:
+
+   ::
+
+       <bean id="PublicNetworkGuru" class="com.cloud.network.guru.MidoNetPublicNetworkGuru">
+         
+
+.. note::    On the compute host, MidoNet takes advantage of per-traffic type VIF
+    driver support in CloudStack KVM.
+
+    In agent.properties, we set the following to make MidoNet take care
+    of Guest and Public traffic:
+
+    ::
+
+        libvirt.vif.driver.Guest=com.cloud.network.resource.MidoNetVifDriver
+        libvirt.vif.driver.Public=com.cloud.network.resource.MidoNetVifDriver
+
+    This is explained further in the MidoNet User Guide.
+
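+The two global settings from step 2 can also be set programmatically; a
+minimal sketch, assuming the CloudMonkey CLI and the example values from
+the table above:
+
+::
+
+    update configuration name=midonet.apiserver.address value=http://192.168.1.144:8081/midolmanj-mgmt
+    update configuration name=midonet.providerrouter.id value=d7c5e6a3-e2f4-426b-b728-b7ce6a0448e5
+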
+Enabling the MidoNet service provider via the UI
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To allow CloudStack to use the MidoNet Plugin the network service provider
+needs to be enabled on the physical network.
+
+The steps to enable via the UI are as follows:
+
+1. In the left navbar, click Infrastructure
+
+2. In Zones, click View All
+
+3. Click the name of the Zone on which you are setting up MidoNet
+
+4. Click the Physical Network tab
+
+5. Click the Name of the Network on which you are setting up MidoNet
+
+6. Click Configure on the Network Service Providers box
+
+7. Click on the name MidoNet
+
+8. Click the Enable Provider button in the Network tab
+
+Enabling the MidoNet service provider via the API
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To enable via the API, use the following API calls:
+
+*addNetworkServiceProvider*
+
+-  name = "MidoNet"
+
+-  physicalnetworkid = <the uuid of the physical network>
+
+*updateNetworkServiceProvider*
+
+-  id = <the provider uuid returned by the previous call>
+
+-  state = "Enabled"
+
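+A minimal sketch of these two calls, assuming the CloudMonkey CLI and
+placeholder UUIDs:
+
+::
+
+    add networkserviceprovider name=MidoNet physicalnetworkid=<physical-network-uuid>
+    update networkserviceprovider id=<provider-uuid> state=Enabled
+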
+Revision History
+----------------
+
+0-0 Wed Mar 13 2013 Dave Cahill dcahill@midokura.com Documentation
+created for 4.2.0 version of the MidoNet Plugin

http://git-wip-us.apache.org/repos/asf/cloudstack-docs/blob/5fddad01/rtd/source/networking/nicira-plugin.rst
----------------------------------------------------------------------
diff --git a/rtd/source/networking/nicira-plugin.rst b/rtd/source/networking/nicira-plugin.rst
new file mode 100644
index 0000000..b644f16
--- /dev/null
+++ b/rtd/source/networking/nicira-plugin.rst
@@ -0,0 +1,348 @@
+The Nicira NVP Plugin
+=====================
+
+Introduction to the Nicira NVP Plugin
+-------------------------------------
+
+The Nicira NVP plugin adds Nicira NVP as one of the available SDN
+implementations in CloudStack. With the plugin an existing Nicira NVP
+setup can be used by CloudStack to implement isolated guest networks and
+to provide additional services like routing and NAT.
+
+Features of the Nicira NVP Plugin
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following table lists the CloudStack network services provided by
+the Nicira NVP Plugin.
+
++----------------------+----------------------+---------------+
+| Network Service      | CloudStack version   | NVP version   |
++======================+======================+===============+
+| Virtual Networking   | >= 4.0               | >= 2.2.1      |
++----------------------+----------------------+---------------+
+| Source NAT           | >= 4.1               | >= 3.0.1      |
++----------------------+----------------------+---------------+
+| Static NAT           | >= 4.1               | >= 3.0.1      |
++----------------------+----------------------+---------------+
+| Port Forwarding      | >= 4.1               | >= 3.0.1      |
++----------------------+----------------------+---------------+
+
+Table: Supported Services
+
+.. note::   The Virtual Networking service was originally called 'Connectivity'
+    in CloudStack 4.0.
+
+The following hypervisors are supported by the Nicira NVP Plugin.
+
++--------------+----------------------+
+| Hypervisor   | CloudStack version   |
++==============+======================+
+| XenServer    | >= 4.0               |
++--------------+----------------------+
+| KVM          | >= 4.1               |
++--------------+----------------------+
+
+Table: Supported Hypervisors
+
+.. note::    Please refer to the Nicira NVP configuration guide on how to prepare
+    the hypervisors for Nicira NVP integration.
+
+Configuring the Nicira NVP Plugin
+---------------------------------
+
+Prerequisites
+~~~~~~~~~~~~~
+
+Before enabling the Nicira NVP plugin the NVP Controller needs to be
+configured. Please review the NVP User Guide on how to do that.
+
+Make sure you have the following information ready:
+
+-  The IP address of the NVP Controller
+
+-  The username to access the API
+
+-  The password to access the API
+
+-  The UUID of the Transport Zone that contains the hypervisors in this
+   Zone
+
+-  The UUID of the Gateway Service used to provide router and NAT
+   services.
+
+
+.. note::    The gateway service uuid is optional and is used for Layer 3
+    services only (SourceNat, StaticNat and PortForwarding).
+
+Zone Configuration
+~~~~~~~~~~~~~~~~~~
+
+CloudStack needs to have at least one physical network with the isolation
+method set to "STT". This network should be enabled for the Guest
+traffic type.
+
+.. note::    The Guest traffic type should be configured with the traffic label
+    that matches the name of the Integration Bridge on the hypervisor.
+    See the Nicira NVP User Guide for more details on how to set this up
+    in XenServer or KVM.
+
+.. figure:: /_static/images/nvp-physical-network-stt.png
+    :align: center
+    :alt: a screenshot of a physical network with the STT isolation type
+
+Enabling the service provider
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The Nicira NVP provider is disabled by default. Navigate to the "Network
+Service Providers" configuration of the physical network with the STT
+isolation type. Navigate to the Nicira NVP provider and press the
+"Enable Provider" button.
+
+.. note::    CloudStack 4.0 does not have the UI interface to configure the
+    Nicira NVP plugin. Configuration needs to be done using the API
+    directly.
+
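+On CloudStack 4.0 the provider must therefore be enabled through the
+API; a minimal sketch, assuming the CloudMonkey CLI and a placeholder
+physical network UUID:
+
+::
+
+    add networkserviceprovider name=NiciraNvp physicalnetworkid=<physical-network-uuid>
+    update networkserviceprovider id=<provider-uuid> state=Enabled
+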
+.. figure:: /_static/images/nvp-enable-provider.png
+    :align: center
+    :alt: a screenshot of an enabled Nicira NVP provider
+
+Device Management
+~~~~~~~~~~~~~~~~~
+
+In CloudStack a Nicira NVP setup is considered a "device" that can be added
+and removed from a physical network. To complete the configuration of
+the Nicira NVP plugin a device needs to be added to the physical
+network. Press the "Add NVP Controller" button on the provider panel and
+enter the configuration details.
+
+.. figure:: /_static/images/nvp-add-controller.png
+    :align: center
+    :alt: a screenshot of the device configuration popup.
+
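+Adding the device can also be scripted; a minimal sketch using the
+addNiciraNvpDevice API call, assuming the CloudMonkey CLI and that the
+prerequisite UUIDs listed earlier have been collected:
+
+::
+
+    add niciranvpdevice physicalnetworkid=<physical-network-uuid> hostname=192.168.1.10 username=admin password=<password> transportzoneuuid=<transport-zone-uuid> l3gatewayserviceuuid=<gateway-service-uuid>
+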
+Network Offerings
+~~~~~~~~~~~~~~~~~
+
+Using the Nicira NVP plugin requires a network offering with Virtual
+Networking enabled and configured to use the NiciraNvp element. Typical
+use cases combine services from the Virtual Router appliance and the
+Nicira NVP plugin.
+
++----------------------+-----------------+
+| Service              | Provider        |
++======================+=================+
+| VPN                  | VirtualRouter   |
++----------------------+-----------------+
+| DHCP                 | VirtualRouter   |
++----------------------+-----------------+
+| DNS                  | VirtualRouter   |
++----------------------+-----------------+
+| Firewall             | VirtualRouter   |
++----------------------+-----------------+
+| Load Balancer        | VirtualRouter   |
++----------------------+-----------------+
+| User Data            | VirtualRouter   |
++----------------------+-----------------+
+| Source NAT           | VirtualRouter   |
++----------------------+-----------------+
+| Static NAT           | VirtualRouter   |
++----------------------+-----------------+
+| Port Forwarding      | VirtualRouter   |
++----------------------+-----------------+
+| Virtual Networking   | NiciraNVP       |
++----------------------+-----------------+
+
+Table: Isolated network offering with regular services from the Virtual
+Router.
+
+.. figure:: /_static/images/nvp-network-offering.png
+    :align: center
+    :alt: a screenshot of a network offering.
+
+
+.. note::    The tag in the network offering should be set to the name of the
+    physical network with the NVP provider.
+
+Isolated network with network services. The virtual router is still
+required to provide network services like DNS and DHCP.
+
++----------------------+-----------------+
+| Service              | Provider        |
++======================+=================+
+| DHCP                 | VirtualRouter   |
++----------------------+-----------------+
+| DNS                  | VirtualRouter   |
++----------------------+-----------------+
+| User Data            | VirtualRouter   |
++----------------------+-----------------+
+| Source NAT           | NiciraNVP       |
++----------------------+-----------------+
+| Static NAT           | NiciraNVP       |
++----------------------+-----------------+
+| Port Forwarding      | NiciraNVP       |
++----------------------+-----------------+
+| Virtual Networking   | NiciraNVP       |
++----------------------+-----------------+
+
+Table: Isolated network offering with network services
+
+Using the Nicira NVP plugin with VPC
+------------------------------------
+
+Supported VPC features
+~~~~~~~~~~~~~~~~~~~~~~
+
+The Nicira NVP plugin supports CloudStack VPC to a certain extent. Starting
+with CloudStack version 4.1 VPCs can be deployed using NVP isolated
+networks.
+
+It is not possible to use a Nicira NVP Logical Router as a VPC
+router.
+
+It is not possible to connect a private gateway using a Nicira NVP
+Logical Switch.
+
+VPC Offering with Nicira NVP
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+To allow a VPC to use the Nicira NVP plugin to provision networks, a new
+VPC offering needs to be created which allows the Virtual Networking
+service to be implemented by NiciraNVP.
+
+This is not currently possible with the UI. The API does provide the
+proper calls to create a VPC offering with Virtual Networking enabled.
+However, due to a limitation in the 4.1 API, it is not possible to select
+the provider for this network service. To configure the VPC offering
+with the NiciraNVP provider, edit the database table
+'vpc\_offering\_service\_map' and change the provider to NiciraNvp for
+the service 'Connectivity'.
+
+It is also possible to update the default VPC offering by adding a row
+to the 'vpc\_offering\_service\_map' table with service 'Connectivity' and
+provider 'NiciraNvp'.
+
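+A minimal SQL sketch of this edit, run against the cloud database; the
+offering id is a placeholder and the column names should be verified
+against your schema version:
+
+::
+
+    UPDATE vpc_offering_service_map
+       SET provider = 'NiciraNvp'
+     WHERE vpc_offering_id = <vpc-offering-id>
+       AND service = 'Connectivity';
+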
+.. figure:: /_static/images/nvp-vpc-offering-edit.png
+    :align: center
+    :alt: a screenshot of the mysql table.
+
+
+.. note::    When creating a new VPC offering please note that the UI does not
+    allow you to select a VPC offering yet. The VPC needs to be created
+    using the API with the offering UUID.
+
+VPC Network Offerings
+~~~~~~~~~~~~~~~~~~~~~
+
+The VPC needs specific network offerings with the VPC flag enabled.
+Otherwise these network offerings are identical to regular network
+offerings. To allow VPC networks with a Nicira NVP isolated network the
+offerings need to support the Virtual Networking service with the
+NiciraNVP provider.
+
+In a typical configuration two network offerings need to be created: one
+with the load balancing service enabled and one without it.
+
++----------------------+--------------------+
+| Service              | Provider           |
++======================+====================+
+| VPN                  | VpcVirtualRouter   |
++----------------------+--------------------+
+| DHCP                 | VpcVirtualRouter   |
++----------------------+--------------------+
+| DNS                  | VpcVirtualRouter   |
++----------------------+--------------------+
+| Load Balancer        | VpcVirtualRouter   |
++----------------------+--------------------+
+| User Data            | VpcVirtualRouter   |
++----------------------+--------------------+
+| Source NAT           | VpcVirtualRouter   |
++----------------------+--------------------+
+| Static NAT           | VpcVirtualRouter   |
++----------------------+--------------------+
+| Port Forwarding      | VpcVirtualRouter   |
++----------------------+--------------------+
+| NetworkACL           | VpcVirtualRouter   |
++----------------------+--------------------+
+| Virtual Networking   | NiciraNVP          |
++----------------------+--------------------+
+
+Table: VPC Network Offering with Loadbalancing
+
+Troubleshooting the Nicira NVP Plugin
+-------------------------------------
+
+UUID References
+~~~~~~~~~~~~~~~
+
+The plugin maintains several references in the CloudStack database to items
+created on the NVP Controller.
+
+Every guest network that is created will have its broadcast type set to
+Lswitch, and if the network is in state "Implemented", the broadcast URI
+will contain the UUID of the Logical Switch that was created for this
+network on the NVP Controller.
+
+The Nics that are connected to one of the Logical Switches will have
+their Logical Switch Port UUID listed in the nicira\_nvp\_nic\_map table.
+
+.. note::    All devices created on the NVP Controller will have a tag set to
+    the domain-account of the owner of the network. This string can be used
+    to search for items in the NVP Controller.
+
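+A minimal sketch for listing these references from the cloud database
+(the table layout is described in the next section):
+
+::
+
+    SELECT nic, logicalswitch, logicalswitchport
+    FROM nicira_nvp_nic_map;
+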
+Database tables
+~~~~~~~~~~~~~~~
+
+The following tables are added to the cloud database for the Nicira NVP
+Plugin:
+
++---------------------+--------------------------------------------------------------+
+| id                  | auto incrementing id                                         |
++---------------------+--------------------------------------------------------------+
+| logicalswitch       | uuid of the logical switch this port is connected to         |
++---------------------+--------------------------------------------------------------+
+| logicalswitchport   | uuid of the logical switch port for this nic                 |
++---------------------+--------------------------------------------------------------+
+| nic                 | the CloudStack uuid for this nic, reference to the nics table|
++---------------------+--------------------------------------------------------------+
+
+Table: nicira\_nvp\_nic\_map
+
++-------------------------+-------------------------------------------------------------+
+| id                      | auto incrementing id                                        |
++-------------------------+-------------------------------------------------------------+
+| uuid                    | UUID identifying this device                                |
++-------------------------+-------------------------------------------------------------+
+| physical\_network\_id   | the physical network this device is configured on           |
++-------------------------+-------------------------------------------------------------+
+| provider\_name          | NiciraNVP                                                   |
++-------------------------+-------------------------------------------------------------+
+| device\_name            | display name for this device                                |
++-------------------------+-------------------------------------------------------------+
+| host\_id                | reference to the host table with the device configuration   |
++-------------------------+-------------------------------------------------------------+
+
+Table: external\_nicira\_nvp\_devices
+
++-----------------------+----------------------------------------------+
+| id                    | auto incrementing id                         |
++-----------------------+----------------------------------------------+
+| logicalrouter\_uuid   | uuid of the logical router                   |
++-----------------------+----------------------------------------------+
+| network\_id           | id of the network this router is linked to   |
++-----------------------+----------------------------------------------+
+
+Table: nicira\_nvp\_router\_map
+
+.. note::    nicira\_nvp\_router\_map is only available in CloudStack 4.1 and above
+
+Revision History
+----------------
+
+0-0 Wed Oct 03 2012 Hugo Trippaers hugo@apache.org Documentation created
+for 4.0.0-incubating version of the NVP Plugin
+
+1-0 Wed May 22 2013 Hugo Trippaers hugo@apache.org Documentation updated
+for CloudStack 4.1.0
+

http://git-wip-us.apache.org/repos/asf/cloudstack-docs/blob/5fddad01/rtd/source/networking/ovs-plugin.rst
----------------------------------------------------------------------
diff --git a/rtd/source/networking/ovs-plugin.rst b/rtd/source/networking/ovs-plugin.rst
new file mode 100644
index 0000000..495b304
--- /dev/null
+++ b/rtd/source/networking/ovs-plugin.rst
@@ -0,0 +1,229 @@
+The OVS Plugin
+==============
+
+Introduction to the OVS Plugin
+------------------------------
+
+The OVS plugin is the native SDN implementation in CloudStack and uses
+the GRE isolation method. CloudStack can use the plugin to implement
+isolated guest networks and to provide additional services like NAT,
+port forwarding and load balancing.
+
+Features of the OVS Plugin
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The following table lists the CloudStack network services provided by
+the OVS Plugin.
+
++----------------------+----------------------+
+| Network Service      | CloudStack version   |
++======================+======================+
+| Virtual Networking   | >= 4.0               |
++----------------------+----------------------+
+| Static NAT           | >= 4.3               |
++----------------------+----------------------+
+| Port Forwarding      | >= 4.3               |
++----------------------+----------------------+
+| Load Balancing       | >= 4.3               |
++----------------------+----------------------+
+
+Table: Supported Services
+
+.. note::   The Virtual Networking service was originally called 'Connectivity'
+    in CloudStack 4.0.
+
+The following hypervisors are supported by the OVS Plugin.
+
++--------------+----------------------+
+| Hypervisor   | CloudStack version   |
++==============+======================+
+| XenServer    | >= 4.0               |
++--------------+----------------------+
+| KVM          | >= 4.3               |
++--------------+----------------------+
+
+Table: Supported Hypervisors
+
+
+Configuring the OVS Plugin
+--------------------------
+
+Prerequisites
+~~~~~~~~~~~~~
+
+Before enabling the OVS plugin, Open vSwitch needs to be installed on
+the hypervisor. XenServer ships with Open vSwitch by default; on KVM you
+must install it manually. CentOS 6.4 and Open vSwitch 1.10 are
+recommended.
+
+KVM hypervisor:
+
+- CentOS 6.4 is recommended.
+- To make sure that the native bridge module will not interfere with
+  Open vSwitch, the bridge module should be added to the blacklist. See
+  the modprobe documentation for your distribution on where to find the
+  blacklist. Make sure the module is not loaded, either by rebooting or
+  by executing ``rmmod bridge``, before executing the next steps (see
+  the sketch below).
+
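+A minimal sketch of this preparation on a CentOS 6.x KVM host; the
+blacklist file name is an example and the Open vSwitch service name
+assumes your 1.10 package installs an init script:
+
+::
+
+    # keep the native Linux bridge module out of the way
+    echo "blacklist bridge" >> /etc/modprobe.d/blacklist.conf
+    rmmod bridge                  # or reboot so the module is never loaded
+
+    # start Open vSwitch and enable it at boot
+    service openvswitch start
+    chkconfig openvswitch on
+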
+
+Zone Configuration
+~~~~~~~~~~~~~~~~~~
+
+CloudStack needs to have at least one physical network with the isolation
+method set to “GRE”. This network should be enabled for the Guest
+traffic type.
+
+.. note::
+    With KVM, the traffic type should be configured with the traffic label
+    that matches the name of the Integration Bridge on the hypervisor. For
+    example, you should set the traffic labels as follows:
+
+    - Management & Storage traffic: cloudbr0
+    - Guest & Public traffic: cloudbr1
+
+    See the KVM networking configuration guide for more detail.
+
+
+.. figure:: /_static/images/ovs-physical-network-gre.png
+    :align: center
+    :alt: a screenshot of a physical network with the GRE isolation type
+
+Agent Configuration
+~~~~~~~~~~~~~~~~~~~
+
+.. note::   Only for KVM hypervisor
+
+* Configure network interfaces:
+
+::
+
+    /etc/sysconfig/network-scripts/ifcfg-eth0
+    DEVICE=eth0
+    BOOTPROTO=none
+    IPV6INIT=no
+    NM_CONTROLLED=no
+    ONBOOT=yes
+    TYPE=OVSPort
+    DEVICETYPE=ovs
+    OVS_BRIDGE=cloudbr0
+
+    /etc/sysconfig/network-scripts/ifcfg-eth1
+    DEVICE=eth1
+    BOOTPROTO=none
+    IPV6INIT=no
+    NM_CONTROLLED=no
+    ONBOOT=yes
+    TYPE=OVSPort
+    DEVICETYPE=ovs
+    OVS_BRIDGE=cloudbr1
+
+    /etc/sysconfig/network-scripts/ifcfg-cloudbr0
+    DEVICE=cloudbr0
+    ONBOOT=yes
+    DEVICETYPE=ovs
+    TYPE=OVSBridge
+    BOOTPROTO=static
+    IPADDR=172.16.10.10
+    GATEWAY=172.16.10.1
+    NETMASK=255.255.255.0
+    HOTPLUG=no
+
+    /etc/sysconfig/network-scripts/ifcfg-cloudbr1
+    DEVICE=cloudbr1
+    ONBOOT=yes
+    DEVICETYPE=ovs
+    TYPE=OVSBridge
+    BOOTPROTO=none
+    HOTPLUG=no
+
+    /etc/sysconfig/network
+    NETWORKING=yes
+    HOSTNAME=testkvm1
+    GATEWAY=172.16.10.1
+
+* Edit /etc/cloudstack/agent/agent.properties
+
+::
+
+    network.bridge.type=openvswitch
+    libvirt.vif.driver=com.cloud.hypervisor.kvm.resource.OvsVifDriver
+
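+Before restarting the agent it is worth verifying the Open vSwitch side;
+``ovs-vsctl`` ships with Open vSwitch and the agent service name assumes
+the standard packaging:
+
+::
+
+    ovs-vsctl list-br                 # should list cloudbr0 and cloudbr1
+    service cloudstack-agent restart
+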
+Enabling the service provider
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+The OVS provider is disabled by default. Navigate to the "Network
+Service Providers" configuration of the physical network with the GRE
+isolation type. Navigate to the OVS provider and press the
+"Enable Provider" button.
+
+.. figure:: /_static/images/ovs-physical-network-gre-enable.png
+    :align: center
+    :alt: a screenshot of an enabled OVS provider
+
+Network Offerings
+~~~~~~~~~~~~~~~~~
+
+Using the OVS plugin requires a network offering with Virtual
+Networking enabled and configured to use the OVS element. Typical
+use cases combine services from the Virtual Router appliance and the
+OVS plugin.
+
++----------------------+-----------------+
+| Service              | Provider        |
++======================+=================+
+| VPN                  | VirtualRouter   |
++----------------------+-----------------+
+| DHCP                 | VirtualRouter   |
++----------------------+-----------------+
+| DNS                  | VirtualRouter   |
++----------------------+-----------------+
+| Firewall             | VirtualRouter   |
++----------------------+-----------------+
+| Load Balancer        | OVS             |
++----------------------+-----------------+
+| User Data            | VirtualRouter   |
++----------------------+-----------------+
+| Source NAT           | VirtualRouter   |
++----------------------+-----------------+
+| Static NAT           | OVS             |
++----------------------+-----------------+
+| Port Forwarding      | OVS             |
++----------------------+-----------------+
+| Virtual Networking   | OVS             |
++----------------------+-----------------+
+
+Table: Isolated network offering with regular services from the Virtual
+Router.
+
+.. figure:: /_static/images/ovs-network-offering.png
+    :align: center
+    :alt: a screenshot of a network offering.
+
+
+.. note::    The tag in the network offering should be set to the name of the
+    physical network with the OVS provider.
+
+Isolated network with network services. The virtual router is still
+required to provide network services like DNS and DHCP.
+
++----------------------+-----------------+
+| Service              | Provider        |
++======================+=================+
+| DHCP                 | VirtualRouter   |
++----------------------+-----------------+
+| DNS                  | VirtualRouter   |
++----------------------+-----------------+
+| User Data            | VirtualRouter   |
++----------------------+-----------------+
+| Source NAT           | VirtualRouter   |
++----------------------+-----------------+
+| Static NAT           | OVS             |
++----------------------+-----------------+
+| Port Forwarding      | OVS             |
++----------------------+-----------------+
+| Load Balancing       | OVS             |
++----------------------+-----------------+
+| Virtual Networking   | OVS             |
++----------------------+-----------------+
+
+Table: Isolated network offering with network services
+
+Using the OVS plugin with VPC
+-----------------------------
+
+The OVS plugin does not work with VPC at this time.
+
+Revision History
+----------------
+
+0-0 Mon Dec 2 2013 Nguyen Anh Tu tuna@apache.org Documentation
+created for 4.3.0 version of the OVS Plugin