Posted to commits@cloudstack.apache.org by se...@apache.org on 2014/05/09 12:36:15 UTC

[3/4] Split hypervisor installation in multiple files

http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/blob/dac80e2b/source/hypervisor/vsphere.rst
----------------------------------------------------------------------
diff --git a/source/hypervisor/vsphere.rst b/source/hypervisor/vsphere.rst
new file mode 100644
index 0000000..186f9e5
--- /dev/null
+++ b/source/hypervisor/vsphere.rst
@@ -0,0 +1,1140 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+   http://www.apache.org/licenses/LICENSE-2.0
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+VMware vSphere Installation and Configuration
+---------------------------------------------
+
+If you want to use the VMware vSphere hypervisor to run guest virtual
+machines, install vSphere on the host(s) in your cloud.
+
+System Requirements for vSphere Hosts
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Software requirements:
+^^^^^^^^^^^^^^^^^^^^^^
+
+-  
+
+   vSphere and vCenter, both version 4.1 or 5.0.
+
+   vSphere Standard is recommended. Note however that customers need to
+   consider the CPU constraints in place with vSphere licensing. See
+   `http://www.vmware.com/files/pdf/vsphere\_pricing.pdf <http://www.vmware.com/files/pdf/vsphere_pricing.pdf>`_
+   and discuss with your VMware sales representative.
+
+   vCenter Server Standard is recommended.
+
+-  
+
+   Be sure all the hotfixes provided by the hypervisor vendor are
+   applied. Track the release of hypervisor patches through your
+   hypervisor vendor's support channel, and apply patches as soon as
+   possible after they are released. CloudStack will not track or notify
+   you of required hypervisor patches. It is essential that your hosts
+   are completely up to date with the provided hypervisor patches. The
+   hypervisor vendor is likely to refuse to support any system that is
+   not up to date with patches.
+
+.. warning:: Apply All Necessary Hotfixes. The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
+
+Hardware requirements:
+^^^^^^^^^^^^^^^^^^^^^^
+
+-  
+
+   The host must be certified as compatible with vSphere. See the VMware
+   Hardware Compatibility Guide at
+   `http://www.vmware.com/resources/compatibility/search.php <http://www.vmware.com/resources/compatibility/search.php>`_.
+
+-  
+
+   All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V
+   enabled).
+
+-  
+
+   All hosts within a cluster must be homogeneous. That means the CPUs
+   must be of the same type, count, and feature flags.
+
+-  
+
+   64-bit x86 CPU (more cores results in better performance)
+
+-  
+
+   Hardware virtualization support required
+
+-  
+
+   4 GB of memory
+
+-  
+
+   36 GB of local disk
+
+-  
+
+   At least 1 NIC
+
+-  
+
+   Statically allocated IP Address
+
+vCenter Server requirements:
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  
+
+   Processor - 2 CPUs, 2.0GHz or higher, Intel or AMD x86 processors.
+   Processor requirements may be higher if the database runs on the same
+   machine.
+
+-  
+
+   Memory - 3GB RAM. RAM requirements may be higher if your database
+   runs on the same machine.
+
+-  
+
+   Disk storage - 2GB. Disk requirements may be higher if your database
+   runs on the same machine.
+
+-  
+
+   Microsoft SQL Server 2005 Express disk requirements. The bundled
+   database requires up to 2GB free disk space to decompress the
+   installation archive.
+
+-  
+
+   Networking - 1Gbit or 10Gbit.
+
+For more information, see `"vCenter Server and the vSphere Client Hardware Requirements" <http://pubs.vmware.com/vsp40/wwhelp/wwhimpl/js/html/wwhelp.htm#href=install/c_vc_hw.html>`_.
+
+Other requirements:
+^^^^^^^^^^^^^^^^^^^
+
+-  
+
+   VMware vCenter Standard Edition 4.1 or 5.0 must be installed and
+   available to manage the vSphere hosts.
+
+-  
+
+   vCenter must be configured to use the standard port 443 so that it
+   can communicate with the CloudStack Management Server.
+
+-  
+
+   You must re-install VMware ESXi if you are going to re-use a host
+   from a previous install.
+
+-  
+
+   CloudStack requires VMware vSphere 4.1 or 5.0. VMware vSphere 4.0 is
+   not supported.
+
+-  
+
+   All hosts must be 64-bit and must support HVM (Intel-VT or AMD-V
+   enabled). All hosts within a cluster must be homogeneous. That means
+   the CPUs must be of the same type, count, and feature flags.
+
+-  
+
+   The CloudStack management network must not be configured as a
+   separate virtual network. The CloudStack management network is the
+   same as the vCenter management network, and will inherit its
+   configuration. See :ref:`configure-vcenter-management-network`.
+
+-  
+
+   CloudStack requires ESXi. ESX is not supported.
+
+-  
+
+   All resources used for CloudStack must be used for CloudStack only.
+   CloudStack cannot share an instance of ESXi or storage with other
+   management consoles. Do not share the same storage volumes that will
+   be used by CloudStack with a different set of ESXi servers that are
+   not managed by CloudStack.
+
+-  
+
+   Put all target ESXi hypervisors in a cluster in a separate Datacenter
+   in vCenter.
+
+-  
+
+   The cluster that will be managed by CloudStack should not contain any
+   VMs. Do not run the management server, vCenter or any other VMs on
+   the cluster that is designated for CloudStack use. Create a separate
+   cluster for CloudStack's use and make sure that there are no VMs in
+   this cluster.
+
+-  
+
+   All the required VLANs must be trunked into all network switches that
+   are connected to the ESXi hypervisor hosts. These would include the
+   VLANs for Management, Storage, vMotion, and guest VLANs. The guest
+   VLAN (used in Advanced Networking; see Network Setup) is a contiguous
+   range of VLANs that will be managed by CloudStack.
+
+Preparation Checklist for VMware
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+For a smoother installation, gather the following information before you
+start:
+
+-  
+
+   Information listed in :ref:`vcenter-checklist`
+
+-  
+
+   Information listed in :ref:`networking-checklist-for-vmware`
+
+.. _vcenter-checklist:
+
+vCenter Checklist
+^^^^^^^^^^^^^^^^^
+
+You will need the following information about vCenter.
+
+========================  =====================================
+vCenter Requirement       Notes
+========================  =====================================
+vCenter User              This user must have admin privileges.
+vCenter User Password     Password for the above user.
+vCenter Datacenter Name   Name of the datacenter.
+vCenter Cluster Name      Name of the cluster.
+========================  =====================================
+
+.. _networking-checklist-for-vmware:
+
+Networking Checklist for VMware
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You will need the following VLAN information.
+
+============================  ==========================================================================================
+VLAN Information              Notes
+============================  ==========================================================================================
+ESXi VLAN                     VLAN on which all your ESXi hypervisors reside.
+ESXi VLAN IP Address          IP Address Range in the ESXi VLAN. One address per Virtual Router is used from this range.
+ESXi VLAN IP Gateway
+ESXi VLAN Netmask
+Management Server VLAN        VLAN on which the CloudStack Management server is installed.
+Public VLAN                   VLAN for the Public Network.
+Public VLAN Gateway
+Public VLAN Netmask
+Public VLAN IP Address Range  Range of Public IP Addresses available for CloudStack use. These
+                              addresses will be used for virtual router on CloudStack to route private
+                              traffic to external networks.
+VLAN Range for Customer use   A contiguous range of non-routable VLANs. One VLAN will be assigned for
+                              each customer.
+============================  ==========================================================================================
+
+
+vSphere Installation Steps
+~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. 
+
+   If you haven't already, you'll need to download and purchase vSphere
+   from the VMware Website
+   (`https://www.vmware.com/tryvmware/index.php?p=vmware-vsphere&lp=1 <https://www.vmware.com/tryvmware/index.php?p=vmware-vsphere&lp=1>`_)
+   and install it by following the VMware vSphere Installation Guide.
+
+#. 
+
+   Following installation, perform the following configuration steps,
+   which are described in the next few sections:
+
+   ====================================================================================================== ===================
+   Required                                                                                                Optional
+   ====================================================================================================== ===================
+   ESXi host setup                                                                                         NIC bonding
+   Configure host physical networking,virtual switch, vCenter Management Network, and extended port range  Multipath storage
+   Prepare storage for iSCSI
+   Configure clusters in vCenter and add hosts to them, or add hosts without clusters to vCenter
+   ====================================================================================================== ===================
+
+ESXi Host setup
+~~~~~~~~~~~~~~~
+
+All ESXi hosts should have CPU hardware virtualization support enabled
+in the BIOS. Please note that hardware virtualization support is not
+enabled by default on most servers.
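+
+One way to confirm that hardware virtualization is actually available to
+an ESXi host is from the ESXi shell, assuming local or remote shell
+access is enabled. This is only a quick sanity check, not a required
+step:
+
+.. sourcecode:: bash
+
+    # esxcfg-info | grep "HV Support"
+
+A reported value of 3 generally means hardware virtualization is enabled
+and in use; refer to VMware's documentation for the exact interpretation
+on your ESXi version.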
+
+Physical Host Networking
+~~~~~~~~~~~~~~~~~~~~~~~~
+
+You should have a plan for cabling the vSphere hosts. Proper network
+configuration is required before adding a vSphere host to CloudStack. To
+configure an ESXi host, you can use the vSphere Client to add it as a
+standalone host to vCenter first. Once you see the host appear in the
+vCenter inventory tree, click the host node in the tree and navigate
+to the Configuration tab.
+
+|vspherephysicalnetwork.png: vSphere client|
+
+In the host configuration tab, click the "Hardware/Networking" link to
+bring up the networking configuration page shown above.
+
+Configure Virtual Switch
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+A default virtual switch vSwitch0 is created. CloudStack requires all
+ESXi hosts in the cloud to use the same set of virtual switch names. If
+you change the default virtual switch name, you will need to configure
+one or more CloudStack configuration variables as well.
+
+Separating Traffic
+''''''''''''''''''
+
+CloudStack allows you to use vCenter to configure three separate
+networks per ESXi host. These networks are identified by the name of the
+vSwitch they are connected to. The allowed networks for configuration
+are public (for traffic to/from the public internet), guest (for
+guest-guest traffic), and private (for management and usually storage
+traffic). You can use the default virtual switch for all three, or
+create one or two other vSwitches for those traffic types.
+
+If you want to separate traffic in this way you should first create and
+configure vSwitches in vCenter according to the vCenter instructions.
+Take note of the vSwitch names you have used for each traffic type. You
+will configure CloudStack to use these vSwitches.
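+
+If you prefer the ESXi shell to the vCenter UI, additional vSwitches can
+also be created and given an uplink from the command line, assuming shell
+access to the host. A minimal sketch; the vSwitch and vmnic names below
+are examples, not requirements:
+
+.. sourcecode:: bash
+
+    # esxcfg-vswitch -a vSwitch1
+    # esxcfg-vswitch -L vmnic1 vSwitch1
+    # esxcfg-vswitch -l
+
+Whichever way you create them, record the vSwitch name used for each
+traffic type so it can be entered in CloudStack later.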
+
+Increasing Ports
+''''''''''''''''
+
+By default a virtual switch on ESXi hosts is created with 56 ports. We
+recommend setting it to 4088, the maximum number of ports allowed. To do
+that, click the "Properties..." link for the virtual switch (note this is
+not the Properties link for Networking).
+
+|vsphereincreaseports.png: vSphere client|
+
+In the vSwitch properties dialog, select the vSwitch and click Edit. You
+should see the following dialog:
+
+|vspherevswitchproperties.png: vSphere client|
+
+In this dialog, you can change the number of switch ports. After you've
+done that, the ESXi host must be rebooted in order for the setting to
+take effect.
+
+.. _configure-vcenter-management-network:
+
+Configure vCenter Management Network
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+In the vSwitch properties dialog box, you may see a vCenter management
+network. This same network will also be used as the CloudStack
+management network. CloudStack requires the vCenter management network
+to be configured properly. Select the management network item in the
+dialog, then click Edit.
+
+|vspheremgtnetwork.png: vSphere client|
+
+Make sure the following values are set:
+
+-  
+
+   VLAN ID set to the desired ID
+
+-  
+
+   vMotion enabled.
+
+-  
+
+   Management traffic enabled.
+
+If the ESXi hosts have multiple VMKernel ports, and ESXi is not using
+the default value "Management Network" as the management network name,
+you must follow these guidelines to configure the management network
+port group so that CloudStack can find it:
+
+-  
+
+   Use one label for the management network port across all ESXi hosts.
+
+-  
+
+   In the CloudStack UI, go to Configuration - Global Settings and set
+   vmware.management.portgroup to the management network label from the
+   ESXi hosts.
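+
+If you prefer not to use the UI, the same global setting can also be
+changed through the CloudStack API, for example with CloudMonkey. A
+sketch; the port group label shown is only an example, so substitute the
+label configured on your ESXi hosts:
+
+.. sourcecode:: bash
+
+    update configuration name=vmware.management.portgroup value="Management Network"
+
+As with most global settings, restart the Management Server afterwards so
+the new value takes effect.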
+
+Extend Port Range for CloudStack Console Proxy
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+(Applies only to VMware vSphere version 4.x)
+
+You need to extend the range of firewall ports that the console proxy
+works with on the hosts. This is to enable the console proxy to work
+with VMware-based VMs. The default additional port range is 59000-60000.
+To extend the port range, log in to the VMware ESX service console on
+each host and run the following commands:
+
+.. sourcecode:: bash
+
+    esxcfg-firewall -o 59000-60000,tcp,in,vncextras
+    esxcfg-firewall -o 59000-60000,tcp,out,vncextras
+
+Configure NIC Bonding for vSphere
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+NIC bonding on vSphere hosts may be done according to the vSphere
+installation guide.
+
+Configuring a vSphere Cluster with Nexus 1000v Virtual Switch
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+CloudStack supports Cisco Nexus 1000v dvSwitch (Distributed Virtual
+Switch) for virtual network configuration in a VMware vSphere
+environment. This section helps you configure a vSphere cluster with
+Nexus 1000v virtual switch in a VMware vCenter environment. For
+information on creating a vSphere cluster, see 
+`"VMware vSphere Installation and Configuration" <#vmware-vsphere-installation-and-configuration>`_
+
+About Cisco Nexus 1000v Distributed Virtual Switch
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The Cisco Nexus 1000V virtual switch is a software-based virtual machine
+access switch for VMware vSphere environments. It can span multiple
+hosts running VMware ESXi 4.0 and later. A Nexus virtual switch consists
+of two components: the Virtual Supervisor Module (VSM) and the Virtual
+Ethernet Module (VEM). The VSM is a virtual appliance that acts as the
+switch's supervisor. It controls multiple VEMs as a single network
+device. The VSM is installed independent of the VEM and is deployed in
+redundancy mode as pairs or as a standalone appliance. The VEM is
+installed on each VMware ESXi server to provide packet-forwarding
+capability. It provides each virtual machine with dedicated switch
+ports. This VSM-VEM architecture is analogous to a physical Cisco
+switch's supervisor (standalone or configured in high-availability mode)
+and multiple linecards architecture.
+
+The Nexus 1000v switch uses vEthernet port profiles to simplify network
+provisioning for virtual machines. There are two types of port profiles:
+Ethernet port profile and vEthernet port profile. The Ethernet port
+profile is applied to the physical uplink ports (the NIC ports of the
+physical NIC adapter on an ESXi server). The vEthernet port profile is
+associated with the virtual NIC (vNIC) that is plumbed on a guest VM on
+the ESXi server. The port profiles help the network administrators
+define network policies which can be reused for new virtual machines.
+The Ethernet port profiles are created on the VSM and are represented as
+port groups on the vCenter server.
+
+Prerequisites and Guidelines
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+This section discusses prerequisites and guidelines for using Nexus
+virtual switch in CloudStack. Before configuring Nexus virtual switch,
+ensure that your system meets the following requirements:
+
+-  
+
+   A cluster of servers (ESXi 4.1 or later) is configured in the
+   vCenter.
+
+-  
+
+   Each cluster managed by CloudStack is the only cluster in its vCenter
+   datacenter.
+
+-  
+
+   A Cisco Nexus 1000v virtual switch is installed to serve the
+   datacenter that contains the vCenter cluster. This ensures that
+   CloudStack doesn't have to deal with dynamic migration of virtual
+   adapters or networks across other existing virtual switches. See
+   `Cisco Nexus 1000V Installation and Upgrade
+   Guide <http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_5_1/install_upgrade/vsm_vem/guide/n1000v_installupgrade.html>`_
+   for guidelines on how to install the Nexus 1000v VSM and VEM modules.
+
+-  
+
+   The Nexus 1000v VSM is not deployed on a vSphere host that is managed
+   by CloudStack.
+
+-  
+
+   When the maximum number of VEM modules per VSM instance is reached,
+   an additional VSM instance is created before introducing any more
+   ESXi hosts. The limit is 64 VEM modules for each VSM instance.
+
+-  
+
+   CloudStack expects that the Management Network of the ESXi host is
+   configured on the standard vSwitch and searches for it in the
+   standard vSwitch. Therefore, ensure that you do not migrate the
+   management network to Nexus 1000v virtual switch during
+   configuration.
+
+-  
+
+   All information given in :ref:`nexus-vswift-preconf`
+
+.. _nexus-vswift-preconf:
+
+Nexus 1000v Virtual Switch Preconfiguration
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Preparation Checklist
+'''''''''''''''''''''
+
+For a smoother configuration of Nexus 1000v switch, gather the following
+information before you start:
+
+-  
+
+   vCenter credentials
+
+-  
+
+   Nexus 1000v VSM IP address
+
+-  
+
+   Nexus 1000v VSM Credentials
+
+-  
+
+   Ethernet port profile names
+
+vCenter Credentials Checklist
+'''''''''''''''''''''''''''''                                          
+
+You will need the following information about vCenter:
+
+=============================  =========  =============================================================================
+Nexus vSwitch Requirements     Value      Notes
+=============================  =========  =============================================================================
+vCenter IP                                The IP address of the vCenter.
+Secure HTTP Port Number        443        Port 443 is configured by default; however, you can change the port if needed.
+vCenter User ID                           The vCenter user with administrator-level privileges. The vCenter User ID is 
+                                          required when you configure the virtual switch in CloudStack.
+vCenter Password                          The password for the vCenter user specified above. The password for this
+                                          vCenter user is required when you configure the switch in CloudStack.
+=============================  =========  =============================================================================
+
+
+Network Configuration Checklist
+'''''''''''''''''''''''''''''''                                            
+
+The following information specified in the Nexus Configure Networking
+screen is displayed in the Details tab of the Nexus dvSwitch in the
+CloudStack UI:
+
+**Control Port Group VLAN ID**
+                        The VLAN ID of the Control Port Group. The control VLAN is used for communication between the VSM and the VEMs.
+
+**Management Port Group VLAN ID**
+                        The VLAN ID of the Management Port Group. The management VLAN corresponds to the mgmt0 interface that is used to establish and maintain the connection between the VSM and VMware vCenter Server.
+
+**Packet Port Group VLAN ID**
+                        The VLAN ID of the Packet Port Group. The packet VLAN forwards relevant data packets from the VEMs to the VSM.
+
+.. note:: The VLANs used for control, packet, and management port groups can be the same.
+
+For more information, see `Cisco Nexus 1000V Getting Started Guide <http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_b/getting_started/configuration/guide/n1000v_gsg.pdf>`_.
+
+VSM Configuration Checklist
+'''''''''''''''''''''''''''                                        
+
+You will need the following VSM configuration parameters:
+
+**Admin Name and Password**
+                       The admin name and password to connect to the VSM appliance. You must specify these credentials while configuring Nexus virtual switch.
+**Management IP Address**
+                       This is the IP address of the VSM appliance. This is the IP address you specify in the virtual switch IP Address field while configuring Nexus virtual switch.
+**SSL**
+                       Should always be set to Enable. SSH is usually enabled by default during the VSM
+                       installation. However, check whether the SSH connection to the VSM is
+                       working; without it, CloudStack fails to connect to the VSM.
+
+Creating a Port Profile
+'''''''''''''''''''''''
+
+-  
+
+   Whether you create a Basic or Advanced zone configuration, ensure
+   that you always create an Ethernet port profile on the VSM after you
+   install it and before you create the zone.
+
+   -  
+
+      The Ethernet port profile created to represent the physical
+      network or networks used by an Advanced zone configuration must
+      trunk all the VLANs, including guest VLANs, the VLANs that serve
+      the native VLAN, and the packet/control/data/management VLANs of
+      the VSM.
+
+   -  
+
+      The Ethernet port profile created for a Basic zone configuration
+      does not trunk the guest VLANs because the guest VMs do not get
+      their own VLANs provisioned on their network interfaces in a Basic
+      zone.
+
+-  
+
+   An Ethernet port profile configured on the Nexus 1000v virtual switch
+   should not include, in its set of system VLANs, any of the VLANs
+   configured or intended to be configured for use towards VMs or VM
+   resources in the CloudStack environment.
+
+-  
+
+   You do not have to create any vEthernet port profiles – CloudStack
+   does that during VM deployment.
+
+-  
+
+   Ensure that you create the required port profiles to be used by
+   CloudStack for the different CloudStack traffic types, such as
+   Management traffic, Guest traffic, Storage traffic, and Public
+   traffic. The physical networks configured during zone creation should
+   have a one-to-one relation with the Ethernet port profiles.
+
+|vmwarenexusportprofile.png: vSphere client|
+
+For information on creating a port profile, see `Cisco Nexus 1000V Port
+Profile Configuration
+Guide <http://www.cisco.com/en/US/docs/switches/datacenter/nexus1000/sw/4_2_1_s_v_1_4_a/port_profile/configuration/guide/n1000v_port_profile.html>`_.
+
+Assigning Physical NIC Adapters
+'''''''''''''''''''''''''''''''
+
+Assign each ESXi host's physical NIC adapters, which correspond to each
+physical network, to the port profiles. In each ESXi host that is part
+of the vCenter cluster, observe the physical networks assigned to each
+port profile and note down the names of the port profiles for future use.
+This mapping information helps you when configuring physical networks
+during the zone configuration on CloudStack. These Ethernet port profile
+names are later specified as VMware Traffic Labels for different traffic
+types when configuring physical networks during the zone configuration.
+For more information on configuring physical networks, see
+`"Configuring a vSphere Cluster with Nexus 1000v Virtual Switch" <#configuring-a-vsphere-cluster-with-nexus-1000v-virtual-switch>`_.
+
+Adding VLAN Ranges
+''''''''''''''''''
+
+Determine the public VLAN, System VLAN, and Guest VLANs to be used by
+CloudStack. Ensure that you add them to the port profile database.
+Corresponding to each physical network, add the VLAN range to port
+profiles. In the VSM command prompt, run the switchport trunk allowed
+vlan <range> command to add the VLAN ranges to the port profile.
+
+For example:
+
+.. sourcecode:: bash
+
+    switchport trunk allowed vlan 1,140-147,196-203
+
+In this example, the allowed VLANs added are 1, 140-147, and 196-203.
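+
+The command is issued from within the port profile configuration context
+on the VSM. A minimal sketch of the full sequence; the port profile name
+cloud-uplink is hypothetical:
+
+.. sourcecode:: bash
+
+    n1000v# configure terminal
+    n1000v(config)# port-profile type ethernet cloud-uplink
+    n1000v(config-port-prof)# switchport trunk allowed vlan 1,140-147,196-203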
+
+You must also add all the public and private VLANs or VLAN ranges to the
+switch. This range is the VLAN range you specify in your zone.
+
+.. note:: Before you run the vlan command, ensure that the configuration mode is enabled in Nexus 1000v virtual switch.
+
+For example:
+
+If you want the VLAN 200 to be used on the switch, run the following
+command:
+
+.. sourcecode:: bash
+
+    vlan 200
+
+If you want the VLAN range 1350-1750 to be used on the switch, run the
+following command:
+
+.. sourcecode:: bash
+
+    vlan 1350-1750
+
+Refer to the Cisco Nexus 1000V Command Reference for your specific
+product version.
+
+Enabling Nexus Virtual Switch in CloudStack
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To make a CloudStack deployment Nexus enabled, you must set the
+vmware.use.nexus.vswitch parameter to true by using the Global Settings
+page in the CloudStack UI and restart the Management Server. Unless this
+parameter is set to "true", you cannot see any UI options specific to
+Nexus virtual switch, and CloudStack ignores the Nexus virtual switch
+specific parameters specified in the AddTrafficTypeCmd,
+UpdateTrafficTypeCmd, and AddClusterCmd API calls.
+
+Unless the CloudStack global parameter "vmware.use.nexus.vswitch" is set
+to "true", CloudStack by default uses VMware standard vSwitch for
+virtual network infrastructure. In this release, CloudStack doesn’t
+support configuring virtual networks in a deployment with a mix of
+standard vSwitch and Nexus 1000v virtual switch. The deployment can have
+either standard vSwitch or Nexus 1000v virtual switch.
+
+Configuring Nexus 1000v Virtual Switch in CloudStack
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can configure Nexus dvSwitch by adding the necessary resources while
+the zone is being created.
+
+|vmwarenexusaddcluster.png: vmware nexus add cluster|
+
+After the zone is created, if you want to create an additional cluster
+along with Nexus 1000v virtual switch in the existing zone, use the Add
+Cluster option. For information on creating a cluster, see
+`"Add Cluster: vSphere" <configuration.html#add-cluster-vsphere>`_.
+
+In both these cases, you must specify the following parameters to
+configure Nexus virtual switch:
+
+=========================  =======================================================================================================================
+Parameters                 Description
+=========================  =======================================================================================================================
+Cluster Name               Enter the name of the cluster you created in vCenter. For example, "cloud.cluster".
+vCenter Host               Enter the host name or the IP address of the vCenter host where you have deployed the Nexus virtual switch.
+vCenter User name          Enter the username that CloudStack should use to connect to vCenter. This user must have all administrative privileges.
+vCenter Password           Enter the password for the user named above.
+vCenter Datacenter         Enter the vCenter datacenter that the cluster is in. For example, "cloud.dc.VM".
+Nexus dvSwitch IP Address  The IP address of the VSM component of the Nexus 1000v virtual switch.
+Nexus dvSwitch Username    The admin name to connect to the VSM appliance.
+Nexus dvSwitch Password    The corresponding password for the admin user specified above.
+=========================  =======================================================================================================================
+
+
+Removing Nexus Virtual Switch
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+#. 
+
+   In the vCenter datacenter that is served by the Nexus virtual switch,
+   ensure that you delete all the hosts in the corresponding cluster.
+
+#. 
+
+   Log in with Admin permissions to the CloudStack administrator UI.
+
+#. 
+
+   In the left navigation bar, select Infrastructure.
+
+#. 
+
+   In the Infrastructure page, click View all under Clusters.
+
+#. 
+
+   Select the cluster where you want to remove the virtual switch.
+
+#. 
+
+   In the dvSwitch tab, click the name of the virtual switch.
+
+#. 
+
+   In the Details page, click the Delete Nexus dvSwitch icon.
+   |DeleteButton.png: button to delete dvSwitch|
+
+   Click Yes in the confirmation dialog box.
+
+Configuring a VMware Datacenter with VMware Distributed Virtual Switch
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+CloudStack supports VMware vNetwork Distributed Switch (VDS) for virtual
+network configuration in a VMware vSphere environment. This section
+helps you configure VMware VDS in a CloudStack deployment. Each vCenter
+server instance can support up to 128 VDS instances and each VDS
+instance can manage up to 500 VMware hosts.
+
+About VMware Distributed Virtual Switch
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+VMware VDS is an aggregation of host-level virtual switches on a VMware
+vCenter server. VDS abstracts the configuration of individual virtual
+switches that span across a large number of hosts, and enables
+centralized provisioning, administration, and monitoring for your entire
+datacenter from a centralized interface. In effect, a VDS acts as a
+single virtual switch at the datacenter level and manages networking for
+a number of hosts in a datacenter from a centralized VMware vCenter
+server. Each VDS maintains network runtime state for VMs as they move
+across multiple hosts, enabling inline monitoring and centralized
+firewall services. A VDS can be deployed with or without Virtual
+Standard Switch and a Nexus 1000V virtual switch.
+
+Prerequisites and Guidelines
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+-  
+
+   VMware VDS is supported only on Public and Guest traffic in
+   CloudStack.
+
+-  
+
+   VMware VDS does not support multiple VDS per traffic type. If a user
+   has many VDS switches, only one can be used for Guest traffic and
+   another one for Public traffic.
+
+-  
+
+   Additional switches of any type can be added for each cluster in the
+   same zone. When adding clusters with a different switch type, traffic
+   labels are overridden at the cluster level.
+
+-  
+
+   The Management and Storage networks do not support VDS. Therefore,
+   use the Standard Switch for these networks.
+
+-  
+
+   When you remove a guest network, the corresponding dvportgroup will
+   not be removed on the vCenter. You must manually delete it on the
+   vCenter.
+
+Preparation Checklist
+^^^^^^^^^^^^^^^^^^^^^
+
+For a smoother configuration of VMware VDS, note down the VDS name you
+have added in the datacenter before you start:
+
+|vds-name.png: Name of the dvSwitch as specified in the vCenter.|
+
+Use this VDS name in the following:
+
+-  
+
+   The switch name in the Edit traffic label dialog while configuring
+   public and guest traffic during zone creation.
+
+   During a zone creation, ensure that you select VMware vNetwork
+   Distributed Virtual Switch when you configure guest and public
+   traffic type.
+
+   |traffic-type.png|
+
+-  
+
+   The Public Traffic vSwitch Type field when you add a VMware
+   VDS-enabled cluster.
+
+-  
+
+   The switch name in the traffic label while updating the switch type
+   in a zone.
+
+Traffic label format in the last case is [["Name of vSwitch/dvSwitch/EthernetPortProfile"][,"VLAN ID"[,"vSwitch Type"]]]
+
+The possible values for traffic labels are:
+
+-  
+
+   empty string
+
+-  
+
+   dvSwitch0
+
+-  
+
+   dvSwitch0,200
+
+-  
+
+   dvSwitch1,300,vmwaredvs
+
+-  
+
+   myEthernetPortProfile,,nexusdvs
+
+-  
+
+   dvSwitch0,,vmwaredvs
+
+
+The three fields to fill in are:
+
+- 
+
+   Name of the virtual / distributed virtual switch at vCenter.
+
+   The default value depends on the type of virtual switch:
+
+   **vSwitch0**: If type of virtual switch is VMware vNetwork Standard virtual switch
+
+   **dvSwitch0**: If type of virtual switch is VMware vNetwork Distributed virtual switch
+
+   **epp0**: If type of virtual switch is Cisco Nexus 1000v Distributed virtual switch
+
+-
+
+   VLAN ID to be used for this traffic wherever applicable.
+
+   Currently, this field is used only for public traffic. For guest traffic it is ignored and can be left empty. By default an empty string is assumed, which translates to an untagged VLAN for that specific traffic type.
+
+-
+
+   Type of virtual switch. Specified as string.
+
+   Possible valid values are vmwaredvs, vmwaresvs, nexusdvs.
+
+   **vmwaresvs**: Represents VMware vNetwork Standard virtual switch
+
+   **vmwaredvs**: Represents VMware vNetwork distributed virtual switch
+
+   **nexusdvs**: Represents Cisco Nexus 1000v distributed virtual switch.
+
+   If nothing is specified (left empty), the zone-level default virtual switch is used, based on the value of the global parameter you specify.
+
+   Following are the global configuration parameters:
+
+   **vmware.use.dvswitch**: Set to true to enable any kind (VMware DVS and Cisco Nexus 1000v) of distributed virtual switch in a CloudStack deployment. If set to false, the virtual switch that can be used in that CloudStack deployment is Standard virtual switch.
+
+   **vmware.use.nexus.vswitch**: This parameter is ignored if vmware.use.dvswitch is set to false. Set to true to enable Cisco Nexus 1000v distributed virtual switch in a CloudStack deployment.
+
+Enabling Virtual Distributed Switch in CloudStack
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+To make a CloudStack deployment VDS enabled, set the vmware.use.dvswitch
+parameter to true by using the Global Settings page in the CloudStack UI
+and restart the Management Server. Unless you enable the
+vmware.use.dvswitch parameter, you cannot see any UI options specific to
+VDS, and CloudStack ignores the VDS-specific parameters that you
+specify. Additionally, CloudStack uses VDS for virtual network
+infrastructure if the value of the vmware.use.dvswitch parameter is true
+and the value of the vmware.use.nexus.vswitch parameter is false. Another
+global parameter that defines VDS configuration is
+vmware.ports.per.dvportgroup. This is the default number of ports per
+VMware dvPortGroup in a VMware environment. The default value is 256.
+This number is directly associated with the number of guest networks you
+can create.
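+
+As with other global settings, these parameters can be changed either on
+the Global Settings page or through the API, for example with
+CloudMonkey. A sketch; the values shown are examples:
+
+.. sourcecode:: bash
+
+    update configuration name=vmware.use.dvswitch value=true
+    update configuration name=vmware.ports.per.dvportgroup value=256
+
+As noted above, restart the Management Server after changing these
+values.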
+
+CloudStack supports orchestration of virtual networks in a deployment
+with a mix of Virtual Distributed Switch, Standard Virtual Switch and
+Nexus 1000v Virtual Switch.
+
+Configuring Distributed Virtual Switch in CloudStack
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can configure VDS by adding the necessary resources while a zone is
+created.
+
+Alternatively, at the cluster level, you can create an additional
+cluster with VDS enabled in the existing zone. Use the Add Cluster
+option. For information, see `“Add Cluster: vSphere” <configuration.html#add-cluster-vsphere>`_.
+
+In both these cases, you must specify the following parameters to
+configure VDS:
+
+|dvSwitchConfig.png: Configuring dvSwitch|
+
+=================================   ===================================================================================================================
+Parameters                            Description
+=================================   ===================================================================================================================
+Cluster Name                        Enter the name of the cluster you created in vCenter. For example, "cloudcluster".
+vCenter Host                        Enter the name or the IP address of the vCenter host where you have deployed the VMware VDS.
+vCenter User name                   Enter the username that CloudStack should use to connect to vCenter. This user must have all administrative privileges.
+vCenter Password                    Enter the password for the user named above.
+vCenter Datacenter                  Enter the vCenter datacenter that the cluster is in. For example, "clouddcVM".
+Override Public Traffic             Enable this option to override the zone-wide public traffic for the cluster you are creating.
+Public Traffic vSwitch Type         This option is displayed only if you enable the Override Public Traffic option. Select VMware vNetwork Distributed Virtual Switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.
+Public Traffic vSwitch Name         Name of virtual switch to be used for the public traffic.
+Override Guest Traffic              Enable the option to override the zone-wide guest traffic for the cluster you are creating.
+Guest Traffic vSwitch Type          This option is displayed only if you enable the Override Guest Traffic option. Select VMware vNetwork Distributed Virtual Switch. If the vmware.use.dvswitch global parameter is true, the default option will be VMware vNetwork Distributed Virtual Switch.
+Guest Traffic vSwitch Name          Name of virtual switch to be used for guest traffic.
+=================================   ===================================================================================================================
+
+
+Storage Preparation for vSphere (iSCSI only)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Use of iSCSI requires preparatory work in vCenter. You must add an iSCSI
+target and create an iSCSI datastore.
+
+If you are using NFS, skip this section.
+
+Enable iSCSI initiator for ESXi hosts
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+#. 
+
+   In vCenter, go to Hosts and Clusters/Configuration, and click the
+   Storage Adapters link. You will see:
+
+   |vmwareiscsiinitiator.png: iscsi initiator|
+
+#. 
+
+   Select iSCSI software adapter and click Properties.
+
+   |vmwareiscsiinitiatorproperties.png: iscsi initiator properties|
+
+#. 
+
+   Click the Configure... button.
+
+   |vmwareiscsigeneral.png: iscsi general|
+
+#. 
+
+   Check Enabled to enable the initiator.
+
+#. 
+
+   Click OK to save.
+
+Add iSCSI target
+^^^^^^^^^^^^^^^^
+
+Under the properties dialog, add the iSCSI target info:
+
+|vmwareiscsitargetadd.png: iscsi target add|
+   
+Repeat these steps for all ESXi hosts in the cluster.
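+
+On ESXi 5.x hosts the same preparation can usually be scripted from the
+ESXi shell with esxcli; on vSphere 4.1 the command namespaces differ, so
+follow the vSphere Client procedure above. A sketch in which the adapter
+name and target address are placeholders to replace with your own values:
+
+.. sourcecode:: bash
+
+    # esxcli iscsi software set --enabled=true
+    # esxcli iscsi adapter discovery sendtarget add --adapter=vmhba33 --address=10.10.10.10:3260
+    # esxcli storage core adapter rescan --adapter=vmhba33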
+
+Create an iSCSI datastore
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You should now create a VMFS datastore. Follow these steps to do so:
+
+#. 
+
+   Select Home/Inventory/Datastores.
+
+#. 
+
+   Right click on the datacenter node.
+
+#. 
+
+   Choose Add Datastore... command.
+
+#. 
+
+   Follow the wizard to create an iSCSI datastore.
+
+This procedure should be done on one host in the cluster. It is not
+necessary to do this on all hosts.
+
+|vmwareiscsidatastore.png: iscsi datastore|
+
+Multipathing for vSphere (Optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Storage multipathing on vSphere nodes may be done according to the
+vSphere installation guide.
+
+Add Hosts or Configure Clusters (vSphere)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Use vCenter to create a vCenter cluster and add your desired hosts to
+the cluster. You will later add the entire cluster to CloudStack. (see
+`“Add Cluster: vSphere” <configuration.html#add-cluster-vsphere>`_).
+
+Applying Hotfixes to a VMware vSphere Host
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. 
+
+   Disconnect the VMware vSphere cluster from CloudStack. It should
+   remain disconnected long enough to apply the hotfix on the host.
+
+   #. 
+
+      Log in to the CloudStack UI as root.
+
+      See `“Log In to the UI” <http://docs.cloudstack.apache.org/projects/cloudstack-administration/en/latest/ui.html#log-in-to-the-ui>`_.
+
+   #. 
+
+      Navigate to the VMware cluster, click Actions, and select
+      Unmanage.
+
+   #. 
+
+      Watch the cluster status until it shows Unmanaged.
+
+#. 
+
+   Perform the following on each of the ESXi hosts in the cluster:
+
+   #. 
+
+      Move each of the ESXi hosts in the cluster to maintenance mode.
+
+   #. 
+
+      Ensure that all the VMs are migrated to other hosts in that
+      cluster.
+
+   #. 
+
+      If there is only one host in that cluster, shut down all the VMs
+      and move the host into maintenance mode.
+
+   #. 
+
+      Apply the patch on the ESXi host (see the command-line sketch
+      after this procedure).
+
+   #. 
+
+      Restart the host if prompted.
+
+   #. 
+
+      Cancel the maintenance mode on the host.
+
+#. 
+
+   Reconnect the cluster to CloudStack:
+
+   #. 
+
+      Log in to the CloudStack UI as root.
+
+   #. 
+
+      Navigate to the VMware cluster, click Actions, and select Manage.
+
+   #. 
+
+      Watch the status to see that all the hosts come up. It might take
+      several minutes for the hosts to come up.
+
+      Alternatively, verify the host state is properly synchronized and
+      updated in the CloudStack database.
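+
+For the patch-application step referenced above, ESXi 5.x patch bundles
+can typically be applied from the ESXi shell while the host is in
+maintenance mode. A sketch; the depot path is a placeholder for your
+downloaded patch bundle:
+
+.. sourcecode:: bash
+
+    # esxcli software vib update --depot=/vmfs/volumes/datastore1/patches/ESXi-patch-bundle.zip
+    # reboot
+
+On vSphere 4.x hosts the older vihostupdate/esxupdate tooling applies
+instead; follow the procedure in the VMware patch release notes.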
+
+.. |DeleteButton.png: button to delete dvSwitch| image:: ../_static/images/delete-button.png
+.. |vds-name.png: Name of the dvSwitch as specified in the vCenter.| image:: ../_static/images/vds-name.png
+.. |traffic-type.png| image:: ../_static/images/traffic-type.png
+.. |dvSwitchConfig.png: Configuring dvSwitch| image:: ../_static/images/dvswitchconfig.png
+.. |vsphereclient.png: vSphere client| image:: ../_static/images/vsphere-client.png
+.. |vspherephysicalnetwork.png: vSphere client| image:: ../_static/images/vmware-physical-network.png
+.. |vsphereincreaseports.png: vSphere client| image:: ../_static/images/vmware-increase-ports.png
+.. |vspherevswitchproperties.png: vSphere client| image:: ../_static/images/vmware-vswitch-properties.png
+.. |vspheremgtnetwork.png: vSphere client| image:: ../_static/images/vmware-mgt-network-properties.png
+.. |vmwarenexusportprofile.png: vSphere client| image:: ../_static/images/vmware-nexus-port-profile.png
+.. |vmwarenexusaddcluster.png: vmware nexus add cluster| image:: ../_static/images/vmware-nexus-add-cluster.png
+.. |vmwareiscsiinitiator.png: iscsi initiator| image:: ../_static/images/vmware-iscsi-initiator.png
+.. |vmwareiscsiinitiatorproperties.png: iscsi initiator properties| image:: ../_static/images/vmware-iscsi-initiator-properties.png
+.. |vmwareiscsigeneral.png: iscsi general| image:: ../_static/images/vmware-iscsi-general.png
+.. |vmwareiscsitargetadd.png: iscsi target add| image:: ../_static/images/vmware-iscsi-target-add.png
+.. |vmwareiscsidatastore.png: iscsi datastore| image:: ../_static/images/vmware-iscsi-datastore.png

http://git-wip-us.apache.org/repos/asf/cloudstack-docs-install/blob/dac80e2b/source/hypervisor/xenserver.rst
----------------------------------------------------------------------
diff --git a/source/hypervisor/xenserver.rst b/source/hypervisor/xenserver.rst
new file mode 100644
index 0000000..2c28599
--- /dev/null
+++ b/source/hypervisor/xenserver.rst
@@ -0,0 +1,934 @@
+.. Licensed to the Apache Software Foundation (ASF) under one
+   or more contributor license agreements.  See the NOTICE file
+   distributed with this work for additional information
+   regarding copyright ownership.  The ASF licenses this file
+   to you under the Apache License, Version 2.0 (the
+   "License"); you may not use this file except in compliance
+   with the License.  You may obtain a copy of the License at
+   http://www.apache.org/licenses/LICENSE-2.0
+   Unless required by applicable law or agreed to in writing,
+   software distributed under the License is distributed on an
+   "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+   KIND, either express or implied.  See the License for the
+   specific language governing permissions and limitations
+   under the License.
+
+Citrix XenServer Installation for CloudStack
+--------------------------------------------
+
+If you want to use the Citrix XenServer hypervisor to run guest virtual
+machines, install XenServer 6.0 or XenServer 6.0.2 on the host(s) in
+your cloud. For an initial installation, follow the steps below. If you
+have previously installed XenServer and want to upgrade to another
+version, see :ref:`upgrading-xenserver-version`.
+
+System Requirements for XenServer Hosts
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+-  
+
+   The host must be certified as compatible with one of the following.
+   See the Citrix Hardware Compatibility Guide:
+   `http://hcl.xensource.com <http://hcl.xensource.com>`_
+
+    -  XenServer 5.6 SP2
+    -  XenServer 6.0
+    -  XenServer 6.0.2
+    -  XenServer 6.1.0
+    -  XenServer 6.2.0
+
+-  
+
+   You must re-install Citrix XenServer if you are going to re-use a
+   host from a previous install.
+
+-  
+
+   Must support HVM (Intel-VT or AMD-V enabled)
+
+-  
+
+   Be sure all the hotfixes provided by the hypervisor vendor are
+   applied. Track the release of hypervisor patches through your
+   hypervisor vendor’s support channel, and apply patches as soon as
+   possible after they are released. CloudStack will not track or notify
+   you of required hypervisor patches. It is essential that your hosts
+   are completely up to date with the provided hypervisor patches. The
+   hypervisor vendor is likely to refuse to support any system that is
+   not up to date with patches.
+
+-  
+
+   All hosts within a cluster must be homogeneous. The CPUs must be of
+   the same type, count, and feature flags.
+
+-  
+
+   Must support HVM (Intel-VT or AMD-V enabled in BIOS)
+
+-  
+
+   64-bit x86 CPU (more cores results in better performance)
+
+-  
+
+   Hardware virtualization support required
+
+-  
+
+   4 GB of memory
+
+-  
+
+   36 GB of local disk
+
+-  
+
+   At least 1 NIC
+
+-  
+
+   Statically allocated IP Address
+
+-  
+
+   When you deploy CloudStack, the hypervisor host must not have any VMs
+   already running
+
+.. warning:: The lack of up-to-date hotfixes can lead to data corruption and lost VMs.
+
+XenServer Installation Steps
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+#. 
+
+   From
+   `https://www.citrix.com/English/ss/downloads/ <https://www.citrix.com/English/ss/downloads/>`_,
+   download the appropriate version of XenServer for your CloudStack
+   version (see `"System Requirements for XenServer Hosts" <#system-requirements-for-xenserver-hosts>`_). Install it using
+   the Citrix XenServer Installation Guide.
+
+   Older Versions of XenServer:
+
+   Note that you can download the most recent release of XenServer
+   without having a Citrix account. If you wish to download older
+   versions, you will need to create an account and look through the
+   download archives.
+
+Configure XenServer dom0 Memory
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Configure the XenServer dom0 settings to allocate more memory to dom0.
+This can enable XenServer to handle larger numbers of virtual machines.
+We recommend 2940 MB of RAM for XenServer dom0. For instructions on how
+to do this, see
+`http://support.citrix.com/article/CTX126531 <http://support.citrix.com/article/CTX126531>`_.
+The article refers to XenServer 5.6, but the same information applies to
+XenServer 6.0.
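+
+On XenServer 6.x the dom0 memory allocation can typically be set from the
+control domain command line and takes effect after a reboot. A sketch;
+verify the exact syntax for your XenServer version against the Citrix
+article above:
+
+.. sourcecode:: bash
+
+    # /opt/xensource/libexec/xen-cmdline --set-xen dom0_mem=2940M,max:2940M
+    # reboot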
+
+Username and Password
+~~~~~~~~~~~~~~~~~~~~~
+
+All XenServers in a cluster must have the same username and password as
+configured in CloudStack.
+
+Time Synchronization
+~~~~~~~~~~~~~~~~~~~~
+
+The host must be set to use NTP. All hosts in a pod must have the same
+time.
+
+#. 
+
+   Install NTP.
+
+   .. sourcecode:: bash
+
+       # yum install ntp
+
+#. 
+
+   Edit the NTP configuration file to point to your NTP server.
+
+   .. sourcecode:: bash
+
+       # vi /etc/ntp.conf
+
+   Add one or more server lines in this file with the names of the NTP
+   servers you want to use. For example:
+
+   .. sourcecode:: bash
+
+       server 0.xenserver.pool.ntp.org
+       server 1.xenserver.pool.ntp.org
+       server 2.xenserver.pool.ntp.org
+       server 3.xenserver.pool.ntp.org
+
+#. 
+
+   Restart the NTP client.
+
+   .. sourcecode:: bash
+
+       # service ntpd restart
+
+#. 
+
+   Make sure NTP will start again upon reboot.
+
+   .. sourcecode:: bash
+
+       # chkconfig ntpd on
+
+
+Install CloudStack XenServer Support Package (CSP)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+(Optional)
+
+To enable security groups, elastic load balancing, and elastic IP on
+XenServer, download and install the CloudStack XenServer Support Package
+(CSP). After installing XenServer, perform the following additional
+steps on each XenServer host.
+
+**For XenServer 6.1:**
+
+CSP functionality is already present in XenServer 6.1; however, the following steps are still required.
+
+#. Run the following command:
+   
+   .. sourcecode:: bash
+
+      xe-switch-network-backend bridge
+
+#. Update /etc/sysctl.conf with the following settings, then apply them:
+
+   .. sourcecode:: bash
+   
+      net.bridge.bridge-nf-call-iptables = 1
+      net.bridge.bridge-nf-call-ip6tables = 0
+      net.bridge.bridge-nf-call-arptables = 1
+      
+      $ sysctl -p /etc/sysctl.conf
+
+
+**For XenServer 6.0.2, 6.0, 5.6 SP2:**
+
+#.
+
+   Download the CSP software onto the XenServer host from one of the
+   following links:
+
+   For XenServer 6.0.2:
+
+   `http://download.cloud.com/releases/3.0.1/XS-6.0.2/xenserver-cloud-supp.tgz <http://download.cloud.com/releases/3.0.1/XS-6.0.2/xenserver-cloud-supp.tgz>`_
+
+   For XenServer 5.6 SP2:
+
+   `http://download.cloud.com/releases/2.2.0/xenserver-cloud-supp.tgz <http://download.cloud.com/releases/2.2.0/xenserver-cloud-supp.tgz>`_
+
+   For XenServer 6.0:
+
+   `http://download.cloud.com/releases/3.0/xenserver-cloud-supp.tgz <http://download.cloud.com/releases/3.0/xenserver-cloud-supp.tgz>`_
+
+ 
+#.
+
+   Extract the file:
+
+   .. sourcecode:: bash
+
+       # tar xf xenserver-cloud-supp.tgz
+
+#. 
+
+   Run the following script:
+
+   .. sourcecode:: bash
+
+       # xe-install-supplemental-pack xenserver-cloud-supp.iso
+
+#. 
+
+   If the XenServer host is part of a zone that uses basic networking,
+   disable Open vSwitch (OVS):
+
+   .. sourcecode:: bash
+
+       # xe-switch-network-backend  bridge
+
+   Restart the host machine when prompted.
+
+The XenServer host is now ready to be added to CloudStack.
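+
+To confirm which network backend a host is using, you can inspect the
+XenServer network configuration file (a quick check; the path follows the
+standard XenServer layout):
+
+.. sourcecode:: bash
+
+    # cat /etc/xensource/network.conf
+
+The output reads "bridge" for hosts prepared as described above, or
+"openvswitch" if OVS is still in use.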
+
+Primary Storage Setup for XenServer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+CloudStack natively supports NFS, iSCSI and local storage. If you are
+using one of these storage types, there is no need to create the
+XenServer Storage Repository ("SR").
+
+If, however, you would like to use storage connected via some other
+technology, such as FiberChannel, you must set up the SR yourself. To do
+so, perform the following steps. If you have your hosts in a XenServer
+pool, perform the steps on the master node. If you are working with a
+single XenServer which is not part of a cluster, perform the steps on
+that XenServer.
+
+#. 
+
+   Connect FiberChannel cable to all hosts in the cluster and to the
+   FiberChannel storage host.
+
+#. 
+
+   Rescan the SCSI bus. Either use the following command or use
+   XenCenter to perform an HBA rescan.
+
+   .. sourcecode:: bash
+
+       # scsi-rescan
+
+#. 
+
+   Repeat step 2 on every host.
+
+#. 
+
+   Check to be sure you see the new SCSI disk.
+
+   .. sourcecode:: bash
+
+       # ls /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -l
+
+   The output should look like this, although the specific file name
+   will be different (scsi-<scsiID>):
+
+   .. sourcecode:: bash
+
+       lrwxrwxrwx 1 root root 9 Mar 16 13:47
+       /dev/disk/by-id/scsi-360a98000503365344e6f6177615a516b -> ../../sdc
+
+#. 
+
+   Repeat step 4 on every host.
+
+#. 
+
+   On the storage server, run this command to get a unique ID for the
+   new SR.
+
+   .. sourcecode:: bash
+
+       # uuidgen
+
+   The output should look like this, although the specific ID will be
+   different:
+
+   .. sourcecode:: bash
+
+       e6849e96-86c3-4f2c-8fcc-350cc711be3d
+
+#. 
+
+   Create the FiberChannel SR. In name-label, use the unique ID you just
+   generated.
+
+   .. sourcecode:: bash
+
+       # xe sr-create type=lvmohba shared=true \
+       device-config:SCSIid=360a98000503365344e6f6177615a516b \
+       name-label="e6849e96-86c3-4f2c-8fcc-350cc711be3d"
+
+   This command returns a unique ID for the SR, like the following
+   example (your ID will be different):
+
+   .. sourcecode:: bash
+
+       7a143820-e893-6c6a-236e-472da6ee66bf
+
+#. 
+
+   To create a human-readable description for the SR, use the following
+   command. In uuid, use the SR ID returned by the previous command. In
+   name-description, set whatever friendly text you prefer.
+
+   .. sourcecode:: bash
+
+       # xe sr-param-set uuid=7a143820-e893-6c6a-236e-472da6ee66bf name-description="Fiber Channel storage repository"
+
+   Make note of the values you will need when you add this storage to
+   CloudStack later (see `"Add Primary Storage" <configuration.html#add-primary-storage>`_). In the Add Primary Storage
+   dialog, in Protocol, you will choose PreSetup. In SR Name-Label, you
+   will enter the name-label you set earlier (in this example,
+   e6849e96-86c3-4f2c-8fcc-350cc711be3d).
+
+#. 
+
+   (Optional) If you want to enable multipath I/O on a FiberChannel SAN,
+   refer to the documentation provided by the SAN vendor.
+
+iSCSI Multipath Setup for XenServer (Optional)
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+When setting up the storage repository on a Citrix XenServer, you can
+enable multipath I/O, which uses redundant physical components to
+provide greater reliability in the connection between the server and the
+SAN. To enable multipathing, use a SAN solution that is supported for
+Citrix servers and follow the procedures in Citrix documentation. The
+following links provide a starting point:
+
+-  
+
+   `http://support.citrix.com/article/CTX118791 <http://support.citrix.com/article/CTX118791>`_
+
+-  
+
+   `http://support.citrix.com/article/CTX125403 <http://support.citrix.com/article/CTX125403>`_
+
+You can also ask your SAN vendor for advice about setting up your Citrix
+repository for multipathing.
+
+Make note of the values you will need when you add this storage to
+CloudStack later (see `"Add Primary Storage" <configuration.html#add-primary-storage>`_). In the Add Primary Storage dialog,
+in Protocol, you will choose PreSetup. In SR Name-Label, you will enter
+the same name used to create the SR.
+
+If you encounter difficulty, contact the support team for the SAN
+provided by your vendor. If they are not able to solve your issue, see
+Contacting Support.
+
+Physical Networking Setup for XenServer
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+Once XenServer has been installed, you may need to do some additional
+network configuration. At this point in the installation, you should
+have a plan for what NICs the host will have and what traffic each NIC
+will carry. The NICs should be cabled as necessary to implement your
+plan.
+
+If you plan on using NIC bonding, the NICs on all hosts in the cluster
+must be cabled exactly the same. For example, if eth0 is in the private
+bond on one host in a cluster, then eth0 must be in the private bond on
+all hosts in the cluster.
+
+The IP address assigned for the management network interface must be
+static. It can be set on the host itself or obtained via static DHCP.
+
+CloudStack configures network traffic of various types to use different
+NICs or bonds on the XenServer host. You can control this process and
+provide input to the Management Server through the use of XenServer
+network name labels. The name labels are placed on physical interfaces
+or bonds and configured in CloudStack. In some simple cases the name
+labels are not required.
+
+When configuring networks in a XenServer environment, network traffic
+labels must be properly configured to ensure that the virtual interfaces
+created by CloudStack are bound to the correct physical device. The
+name-label of the XenServer network must match the XenServer traffic
+label specified while creating the CloudStack network. This is set by
+running the following command:
+
+.. sourcecode:: bash
+
+    xe network-param-set uuid=<network id> name-label=<CloudStack traffic label>
+
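+For example, assuming eth1 carries the public traffic and "cloud-public"
+is the traffic label configured in CloudStack (both values are
+illustrative), the network UUID can be looked up from the PIF and then
+labeled:
+
+.. sourcecode:: bash
+
+    # Look up the UUID of the network behind a physical device (eth1 is an example)
+    # xe pif-list device=eth1 params=network-uuid
+
+    # Apply the CloudStack traffic label to that network
+    # xe network-param-set uuid=<network-uuid returned above> name-label=cloud-public
+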
+Configuring Public Network with a Dedicated NIC for XenServer (Optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+CloudStack supports the use of a second NIC (or bonded pair of NICs,
+described in :ref:`nic-bonding-for-xenserver`) for the public network. If
+bonding is not used, the public network can be on any NIC and can be on
+different NICs on the hosts in a cluster. For example, the public
+network can be on eth0 on node A and eth1 on node B. However, the
+XenServer name-label for the public network must be identical across all
+hosts. The following examples set the network label to "cloud-public".
+After the management server is installed and running, you must configure
+it with the name of the chosen network label (e.g. "cloud-public"); this
+is discussed in `"Management Server Installation" <installation.html#management-server-installation>`_.
+
+If you are using two NICs bonded together to create a public network,
+see :ref:`nic-bonding-for-xenserver`.
+
+If you are using a single dedicated NIC to provide public network
+access, follow this procedure on each new host that is added to
+CloudStack before adding the host.
+
+#. 
+
+   Run xe network-list and find the public network. It is usually
+   attached to the public NIC. Once you find the network, make note of
+   its UUID. Call this <UUID-Public>.
+
+#. 
+
+   Run the following command.
+
+   .. sourcecode:: bash
+
+       # xe network-param-set name-label=cloud-public uuid=<UUID-Public>
+
+Configuring Multiple Guest Networks for XenServer (Optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+CloudStack supports the use of multiple guest networks with the
+XenServer hypervisor. Each network is assigned a name-label in
+XenServer. For example, you might have two networks with the labels
+"cloud-guest" and "cloud-guest2". After the management server is
+installed and running, you must add the networks and use these labels so
+that CloudStack is aware of the networks.
+
+Follow this procedure on each new host before adding the host to
+CloudStack:
+
+#. 
+
+   Run xe network-list and find one of the guest networks. Once you find
+   the network, make note of its UUID. Call this <UUID-Guest>.
+
+#. 
+
+   Run the following command, substituting your own name-label and uuid
+   values.
+
+   .. sourcecode:: bash
+
+       # xe network-param-set name-label=<cloud-guestN> uuid=<UUID-Guest>
+
+#. 
+
+   Repeat these steps for each additional guest network, using a
+   different name-label and uuid each time.
+
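+After labeling the guest networks, you can list the labels to confirm
+they match what you plan to enter in CloudStack. For example (the output
+will differ in your environment):
+
+.. sourcecode:: bash
+
+    # Show the uuid and name-label of every network on the host
+    # xe network-list params=uuid,name-label
+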
+Separate Storage Network for XenServer (Optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can optionally set up a separate storage network. This should be
+done first on the host, before implementing the bonding steps below.
+This can be done using one or two available NICs. With two NICs, bonding
+may be done as described above. It is the administrator's responsibility
+to set up a separate storage network.
+
+Give the storage network a name-label that is different from the labels
+used for the other networks.
+
+For the separate storage network to work correctly, it must be the only
+interface that can ping the primary storage device's IP address. For
+example, if eth0 is the management network NIC, ping -I eth0 <primary
+storage device IP> must fail. In all deployments, secondary storage
+devices must be pingable from the management network NIC or bond. If a
+secondary storage device has been placed on the storage network, it must
+also be pingable via the storage network NIC or bond on the hosts as
+well.
+
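+As a quick check, the following behavior would be expected on a
+correctly configured host. The interface names and the storage IP
+address are illustrative only:
+
+.. sourcecode:: bash
+
+    # Assuming eth0 is the management NIC, eth5 is the storage NIC,
+    # and 172.16.0.10 is the primary storage device (example values):
+
+    # This ping from the management NIC should FAIL
+    # ping -I eth0 -c 3 172.16.0.10
+
+    # This ping from the storage NIC should succeed
+    # ping -I eth5 -c 3 172.16.0.10
+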
+You can set up two separate storage networks as well. For example, if
+you intend to implement iSCSI multipath, dedicate two non-bonded NICs to
+multipath. Each of the two networks needs a unique name-label.
+
+If no bonding is done, the administrator must set up and name-label the
+separate storage network on all hosts (masters and slaves).
+
+Here is an example to set up eth5 to access a storage network on
+172.16.0.0/24.
+
+.. sourcecode:: bash
+
+    # xe pif-list host-name-label='hostname' device=eth5
+    uuid ( RO): ab0d3dd4-5744-8fae-9693-a022c7a3471d
+    device ( RO): eth5
+    # xe pif-reconfigure-ip DNS=172.16.3.3 gateway=172.16.0.1 IP=172.16.0.55 mode=static netmask=255.255.255.0 uuid=ab0d3dd4-5744-8fae-9693-a022c7a3471d
+
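+The storage network found on eth5 should then be given its own
+name-label, distinct from the labels used for the management, public,
+and guest networks. The label "cloud-storage" below is only an example:
+
+.. sourcecode:: bash
+
+    # Find the network attached to the storage NIC (eth5 is an example)
+    # xe pif-list device=eth5 params=network-uuid
+
+    # Give that network its own label
+    # xe network-param-set uuid=<network-uuid returned above> name-label=cloud-storage
+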
+.. _nic-bonding-for-xenserver:
+
+NIC Bonding for XenServer (Optional)
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+XenServer supports Source Level Balancing (SLB) NIC bonding. Two NICs
+can be bonded together to carry public, private, and guest traffic, or
+some combination of these. Separate storage networks are also possible.
+Here are some example supported configurations:
+
+-  
+
+   2 NICs on private, 2 NICs on public, 2 NICs on storage
+
+-  
+
+   2 NICs on private, 1 NIC on public, storage uses management network
+
+-  
+
+   2 NICs on private, 2 NICs on public, storage uses management network
+
+-  
+
+   1 NIC for private, public, and storage
+
+All NIC bonding is optional.
+
+XenServer expects that all nodes in a cluster have the same network
+cabling and the same bonds implemented. In an installation, the master
+is the first host added to the cluster, and the slave hosts are all
+hosts added subsequently. The bonds present on the master set the
+expectation for hosts added to the cluster later. The procedures to set
+up bonds on the master and the slaves are different, and are described
+below. There are several important implications of this:
+
+-  
+
+   You must set bonds on the first host added to a cluster. Then you
+   must use xe commands as below to establish the same bonds in the
+   second and subsequent hosts added to a cluster.
+
+-  
+
+   Slave hosts in a cluster must be cabled exactly the same as the
+   master. For example, if eth0 is in the private bond on the master, it
+   must be in the management network for added slave hosts.
+
+Management Network Bonding
+''''''''''''''''''''''''''
+
+The administrator must bond the management network NICs prior to adding
+the host to CloudStack.
+
+Creating a Private Bond on the First Host in the Cluster
+''''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+Use the following steps to create a bond in XenServer. These steps
+should be run on only the first host in a cluster. This example creates
+the cloud-private network with two physical NICs (eth0 and eth1) bonded
+into it.
+
+#. 
+
+   Find the physical NICs that you want to bond together.
+
+   .. sourcecode:: bash
+
+       # xe pif-list host-name-label='hostname' device=eth0
+       # xe pif-list host-name-label='hostname' device=eth1
+
+   These commands show the eth0 and eth1 NICs and their UUIDs.
+   Substitute the ethX devices of your choice. Call the UUIDs returned
+   by the above commands slave1-UUID and slave2-UUID.
+
+#. 
+
+   Create a new network for the bond. For example, a new network with
+   name "cloud-private".
+
+   **This label is important. CloudStack looks for a network by a name
+   you configure. You must use the same name-label for all hosts in the
+   cloud for the management network.**
+
+   .. sourcecode:: bash
+
+       # xe network-create name-label=cloud-private
+       # xe bond-create network-uuid=[uuid of cloud-private created above] \
+       pif-uuids=[slave1-uuid],[slave2-uuid]
+
+Now you have a bonded pair that can be recognized by CloudStack as the
+management network.
+
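+You can verify the result before moving on. For example (the exact
+output depends on your installation):
+
+.. sourcecode:: bash
+
+    # List the bonds and the PIFs they are made of
+    # xe bond-list
+
+    # Confirm the bonded network carries the expected label
+    # xe network-list name-label=cloud-private params=uuid,name-label
+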
+Public Network Bonding
+''''''''''''''''''''''
+
+Bonding can be implemented on a separate, public network. The
+administrator is responsible for creating a bond for the public network
+if that network will be bonded and will be separate from the management
+network.
+
+Creating a Public Bond on the First Host in the Cluster
+'''''''''''''''''''''''''''''''''''''''''''''''''''''''
+
+These steps should be run on only the first host in a cluster. This
+example creates the cloud-public network with two physical NICs (eth2
+and eth3) bonded into it.
+
+#. 
+
+   Find the physical NICs that you want to bond together.
+
+   .. sourcecode:: bash
+
+       # xe pif-list host-name-label='hostname' device=eth2
+       # xe pif-list host-name-label='hostname' device=eth3
+
+   These commands show the eth2 and eth3 NICs and their UUIDs.
+   Substitute the ethX devices of your choice. Call the UUIDs returned
+   by the above commands slave1-UUID and slave2-UUID.
+
+#. 
+
+   Create a new network for the bond. For example, a new network with
+   name "cloud-public".
+
+   **This label is important. CloudStack looks for a network by a name
+   you configure. You must use the same name-label for all hosts in the
+   cloud for the public network.**
+
+   .. sourcecode:: bash
+
+       # xe network-create name-label=cloud-public
+       # xe bond-create network-uuid=[uuid of cloud-public created above] \
+       pif-uuids=[slave1-uuid],[slave2-uuid]
+
+Now you have a bonded pair that can be recognized by CloudStack as the
+public network.
+
+Adding More Hosts to the Cluster
+''''''''''''''''''''''''''''''''
+
+With the bonds (if any) established on the master, you can add the
+additional slave hosts. Run the following command on each additional
+host to be added to the cluster. This causes the host to join the
+master in a single XenServer pool.
+
+.. sourcecode:: bash
+
+    # xe pool-join master-address=[master IP] master-username=root \
+    master-password=[your password]
+
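+Afterwards, you can confirm that the new host has joined the pool by
+listing the hosts from the master. The host names shown will be your
+own:
+
+.. sourcecode:: bash
+
+    # Run on the master; every pool member should appear in the list
+    # xe host-list params=uuid,name-label
+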
+Complete the Bonding Setup Across the Cluster
+'''''''''''''''''''''''''''''''''''''''''''''
+
+With all hosts added to the pool, run the cloud-setup-bonding script.
+This script completes the configuration and setup of the bonds across
+all hosts in the cluster.
+
+#. 
+
+   Copy the script from the Management Server in
+   /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh
+   to the master host and ensure it is executable (an example copy is
+   shown after these steps).
+
+#. 
+
+   Run the script:
+
+   .. sourcecode:: bash
+
+       # ./cloud-setup-bonding.sh
+
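+For reference, the copy in step 1 might be done with scp. The host name
+"xenserver-master" below is purely illustrative:
+
+.. sourcecode:: bash
+
+    # Run on the Management Server; copy the script to the pool master
+    # scp /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-setup-bonding.sh root@xenserver-master:/root/
+
+    # Then, on the master host, make the script executable
+    # chmod +x /root/cloud-setup-bonding.sh
+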
+Now the bonds are set up and configured properly across the cluster.
+
+.. _upgrading-xenserver-version:
+
+Upgrading XenServer Versions
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+This section describes how to upgrade the XenServer software on
+CloudStack hosts. The actual upgrade procedure is described in the
+XenServer documentation, but there are some additional steps you must
+perform before and after the upgrade.
+
+.. note:: Be sure the hardware is certified compatible with the new version of XenServer.
+
+To upgrade XenServer:
+
+#. 
+
+   Upgrade the database. On the Management Server node:
+
+   #. 
+
+      Back up the database:
+
+      .. sourcecode:: bash
+
+          # mysqldump --user=root --databases cloud > cloud.backup.sql
+          # mysqldump --user=root --databases cloud_usage > cloud_usage.backup.sql
+
+   #. 
+
+      You might need to change the OS type settings for VMs running on
+      the upgraded hosts.
+
+      -  
+
+         If you upgraded from XenServer 5.6 GA to XenServer 5.6 SP2,
+         change any VMs that have the OS type CentOS 5.5 (32-bit),
+         Oracle Enterprise Linux 5.5 (32-bit), or Red Hat Enterprise
+         Linux 5.5 (32-bit) to Other Linux (32-bit). Change any VMs that
+         have the 64-bit versions of these same OS types to Other Linux
+         (64-bit).
+
+      -  
+
+         If you upgraded from XenServer 5.6 SP2 to XenServer 6.0.2,
+         change any VMs that have the OS type CentOS 5.6 (32-bit),
+         CentOS 5.7 (32-bit), Oracle Enterprise Linux 5.6 (32-bit),
+         Oracle Enterprise Linux 5.7 (32-bit), Red Hat Enterprise Linux
+         5.6 (32-bit), or Red Hat Enterprise Linux 5.7 (32-bit) to
+         Other Linux (32-bit). Change any VMs that have the 64-bit
+         versions of these same OS types to Other Linux (64-bit).
+
+      -  
+
+         If you upgraded from XenServer 5.6 to XenServer 6.0.2, do all
+         of the above.
+
+   #. 
+
+      Restart the Management Server and Usage Server. You only need to
+      do this once for all clusters.
+
+      .. sourcecode:: bash
+
+          # service cloudstack-management start
+          # service cloudstack-usage start
+
+#. 
+
+   Disconnect the XenServer cluster from CloudStack.
+
+   #. 
+
+      Log in to the CloudStack UI as root.
+
+   #. 
+
+      Navigate to the XenServer cluster, and click Actions – Unmanage.
+
+   #. 
+
+      Watch the cluster status until it shows Unmanaged.
+
+#. 
+
+   Log in to one of the hosts in the cluster, and run this command to
+   clean up the VLAN:
+
+   .. sourcecode:: bash
+
+       # . /opt/xensource/bin/cloud-clean-vlan.sh
+
+#. 
+
+   Still logged in to the host, run the upgrade preparation script:
+
+   .. sourcecode:: bash
+
+       # /opt/xensource/bin/cloud-prepare-upgrade.sh
+
+   Troubleshooting: If you see the error "can't eject CD," log in to the
+   VM, unmount the CD, and then run the script again.
+
+#. 
+
+   Upgrade the XenServer software on all hosts in the cluster. Upgrade
+   the master first.
+
+   #. 
+
+      Live migrate all VMs on this host to other hosts. See the
+      instructions for live migration in the Administrator's Guide.
+
+      Troubleshooting: You might see the following error when you
+      migrate a VM:
+
+      .. sourcecode:: bash
+
+          [root@xenserver-qa-2-49-4 ~]# xe vm-migrate live=true host=xenserver-qa-2-49-5 vm=i-2-8-VM
+          You attempted an operation on a VM which requires PV drivers to be installed but the drivers were not detected.
+          vm: b6cf79c8-02ee-050b-922f-49583d9f1a14 (i-2-8-VM)
+
+      To solve this issue, run the following:
+
+      .. sourcecode:: bash
+
+          # /opt/xensource/bin/make_migratable.sh  b6cf79c8-02ee-050b-922f-49583d9f1a14
+
+   #. 
+
+      Reboot the host.
+
+   #. 
+
+      Upgrade to the newer version of XenServer. Use the steps in the
+      XenServer documentation.
+
+   #. 
+
+      After the upgrade is complete, copy the following files from the
+      management server to this host, in the directory locations shown
+      below:
+
+      =================================================================================   =======================================
+      Copy this Management Server file                                                    To this location on the XenServer host
+      =================================================================================   =======================================
+      /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/xenserver60/NFSSR.py   /opt/xensource/sm/NFSSR.py
+      /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/setupxenserver.sh      /opt/xensource/bin/setupxenserver.sh
+      /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/make\_migratable.sh    /opt/xensource/bin/make\_migratable.sh
+      /usr/share/cloudstack-common/scripts/vm/hypervisor/xenserver/cloud-clean-vlan.sh    /opt/xensource/bin/cloud-clean-vlan.sh
+      =================================================================================   =======================================
+
+   #. 
+
+      Run the following script:
+
+      .. sourcecode:: bash
+
+          # /opt/xensource/bin/setupxenserver.sh
+
+      Troubleshooting: If you see the following error message, you can
+      safely ignore it.
+
+      .. sourcecode:: bash
+
+          mv: cannot stat `/etc/cron.daily/logrotate`: No such file or directory
+
+   #. 
+
+      Plug in the storage repositories (physical block devices) to the
+      XenServer host:
+
+      .. sourcecode:: bash
+
+          # for pbd in `xe pbd-list currently-attached=false| grep ^uuid | awk '{print $NF}'`; do xe pbd-plug uuid=$pbd ; done
+
+      .. note:: If you add a host to this XenServer pool, you need to migrate all VMs on this host to other hosts, and eject this host from the XenServer pool.
+
+#. 
+
+   Repeat these steps to upgrade every host in the cluster to the same
+   version of XenServer.
+
+#. 
+
+   Run the following command on one host in the XenServer cluster to
+   clean up the host tags:
+
+   .. sourcecode:: bash
+
+       # for host in $(xe host-list | grep ^uuid | awk '{print $NF}') ; do xe host-param-clear uuid=$host param-name=tags; done;
+
+   .. note:: 
+      When copying and pasting a command, be sure the command has pasted as
+      a single line before executing. Some document viewers may introduce
+      unwanted line breaks in copied text.
+
+#. 
+
+   Reconnect the XenServer cluster to CloudStack.
+
+   #. 
+
+      Log in to the CloudStack UI as root.
+
+   #. 
+
+      Navigate to the XenServer cluster, and click Actions – Manage.
+
+   #. 
+
+      Watch the status to see that all the hosts come up.
+
+#. 
+
+   After all hosts are up, run the following on one host in the cluster:
+
+   .. sourcecode:: bash
+
+       # /opt/xensource/bin/cloud-clean-vlan.sh
\ No newline at end of file