Posted to commits@cloudstack.apache.org by ed...@apache.org on 2013/10/11 03:01:38 UTC

[35/67] [abbrv] [partial] Removing docs from master

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/5586a221/docs/en-US/gslb.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/gslb.xml b/docs/en-US/gslb.xml
deleted file mode 100644
index 968e8e2..0000000
--- a/docs/en-US/gslb.xml
+++ /dev/null
@@ -1,487 +0,0 @@
-<?xml version='1.0' encoding='utf-8' ?>
-<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY % BOOK_ENTITIES SYSTEM "cloudstack.ent">
-%BOOK_ENTITIES;
-]>
-
-<!-- Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-    http://www.apache.org/licenses/LICENSE-2.0
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-<section id="gslb">
-  <title>Global Server Load Balancing Support</title>
-  <para>&PRODUCT; supports Global Server Load Balancing (GSLB) functionality to provide business
-    continuity and enable seamless resource movement within a &PRODUCT; environment. &PRODUCT;
-    achieves this by extending its integration with the NetScaler Application Delivery
-    Controller (ADC), which also provides various GSLB capabilities, such as disaster recovery and
-    load balancing. The DNS redirection technique is used to achieve GSLB in &PRODUCT;. </para>
-  <para>To support this functionality, region-level services and service providers are
-    introduced. A new service, 'GSLB', is introduced as a region-level service, along with a GSLB
-    service provider that provides the GSLB service. Currently, NetScaler is the only
-    supported GSLB provider in &PRODUCT;. GSLB functionality works in an Active-Active data center
-    environment. </para>
-  <section id="about-gslb">
-    <title>About Global Server Load Balancing</title>
-    <para>Global Server Load Balancing (GSLB) is an extension of load balancing functionality, which
-      is highly efficient in avoiding downtime. Based on the nature of deployment, GSLB represents a
-      set of technologies that is used for various purposes, such as load sharing, disaster
-      recovery, performance, and legal obligations. With GSLB, workloads can be distributed across
-      multiple data centers situated at geographically separated locations. GSLB can also provide an
-      alternate location for accessing a resource in the event of a failure, or to provide a means
-      of shifting traffic easily to simplify maintenance, or both.</para>
-    <section id="gslb-comp">
-      <title>Components of GSLB</title>
-      <para>A typical GSLB environment comprises the following components:</para>
-      <itemizedlist>
-        <listitem>
-          <para><emphasis role="bold">GSLB Site</emphasis>: In &PRODUCT; terminology, GSLB sites are
-            represented by zones that are mapped to data centers, each of which has various network
-            appliances. Each GSLB site is managed by a NetScaler appliance that is local to that
-            site. Each of these appliances treats its own site as the local site and all other
-            sites, managed by other appliances, as remote sites. The GSLB site is the central
-            entity in a GSLB deployment, and is represented by a name and an IP address.</para>
-        </listitem>
-        <listitem>
-          <para><emphasis role="bold">GSLB Services</emphasis>: A GSLB service is typically
-            represented by a load balancing or content switching virtual server. In a GSLB
-            environment, you can have local as well as remote GSLB services. A local GSLB service
-            represents a local load balancing or content switching virtual server. A remote GSLB
-            service is the one configured at one of the other sites in the GSLB setup. At each site
-            in the GSLB setup, you can create one local GSLB service and any number of remote GSLB
-            services.</para>
-        </listitem>
-        <listitem>
-          <para><emphasis role="bold">GSLB Virtual Servers</emphasis>: A GSLB virtual server refers
-            to one or more GSLB services and balances traffic across the VMs in
-            multiple zones by using the &PRODUCT; functionality. It evaluates the configured GSLB
-            methods or algorithms to select a GSLB service to which to send the client requests. One
-            or more virtual servers from different zones are bound to the GSLB virtual server. A
-            GSLB virtual server does not have a public IP address associated with it; instead, it
-            has an FQDN.</para>
-        </listitem>
-        <listitem>
-          <para><emphasis role="bold">Load Balancing or Content Switching Virtual
-            Servers</emphasis>: According to Citrix NetScaler terminology, a load balancing or
-            content switching virtual server represents one or many servers on the local network.
-            Clients send their requests to the load balancing or content switching virtual server’s
-            virtual IP (VIP) address, and the virtual server balances the load across the local
-            servers. After a GSLB virtual server selects a GSLB service representing either a local
-            or a remote load balancing or content switching virtual server, the client sends the
-            request to that virtual server’s VIP address.</para>
-        </listitem>
-        <listitem>
-          <para><emphasis role="bold">DNS VIPs</emphasis>: DNS virtual IP represents a load
-            balancing DNS virtual server on the GSLB service provider. The DNS requests for domains
-            for which the GSLB service provider is authoritative can be sent to a DNS VIP.</para>
-        </listitem>
-        <listitem>
-          <para><emphasis role="bold">Authoritative DNS</emphasis>: ADNS (Authoritative Domain Name
-            Server) is a service that provides the actual answers to DNS queries, such as a web
-            site's IP address. In a GSLB environment, an ADNS service responds only to DNS requests for
-            domains for which the GSLB service provider is authoritative. When an ADNS service is
-            configured, the service provider owns that IP address and advertises it. When you create
-            an ADNS service, the NetScaler responds to DNS queries on the configured ADNS service IP
-            and port.</para>
-        </listitem>
-      </itemizedlist>
-    </section>
-    <section id="concept-gslb">
-      <title>How Does GSLB Work in &PRODUCT;?</title>
-      <para>Global server load balancing is used to manage the traffic flow to a web site hosted on
-        two separate zones that ideally are in different geographic locations. The following is an
-        illustration of how GSLB functionality is provided in &PRODUCT;: An organization, xyztelco,
-        has set up a public cloud that spans two zones, Zone-1 and Zone-2, across geographically
-        separated data centers that are managed by &PRODUCT;. Tenant-A of the cloud launches a
-        highly available solution by using xyztelco cloud. For that purpose, they launch two
-        instances each in both the zones: VM1 and VM2 in Zone-1 and VM5 and VM6 in Zone-2. Tenant-A
-        acquires a public IP, IP-1 in Zone-1, and configures a load balancer rule to load balance
-        the traffic between VM1 and VM2 instances. &PRODUCT; orchestrates setting up a virtual
-        server on the LB service provider in Zone-1. Virtual server 1 that is set up on the LB
-        service provider in Zone-1 represents a publicly accessible virtual server that clients
-        reach at IP-1. The client traffic to virtual server 1 at IP-1 will be load balanced across
-        VM1 and VM2 instances. </para>
-      <para>Tenant-A acquires another public IP, IP-2 in Zone-2 and sets up a load balancer rule to
-        load balance the traffic between VM5 and VM6 instances. Similarly in Zone-2, &PRODUCT;
-        orchestrates setting up a virtual server on the LB service provider. Virtual server 2 that
-        is set up on the LB service provider in Zone-2 represents a publicly accessible virtual
-        server that clients reach at IP-2. The client traffic that reaches virtual server 2 at IP-2
-        is load balanced across VM5 and VM6 instances. At this point, Tenant-A has the service
-        enabled in both the zones, but has no means to set up a disaster recovery plan if one of the
-        zones fails. Additionally, there is no way for Tenant-A to load balance the traffic
-        intelligently to one of the zones based on load, proximity and so on. The cloud
-        administrator of xyztelco provisions a GSLB service provider to both the zones. A GSLB
-        provider is typically an ADC that has the ability to act as an ADNS (Authoritative Domain
-        Name Server) and has the mechanism to monitor health of virtual servers both at local and
-        remote sites. The cloud admin enables GSLB as a service to the tenants that use zones 1 and
-        2. </para>
-      <mediaobject>
-        <imageobject>
-          <imagedata fileref="./images/gslb.png"/>
-        </imageobject>
-        <textobject>
-          <phrase>gslb.png: GSLB architecture</phrase>
-        </textobject>
-      </mediaobject>
-      <para>Tenant-A wishes to leverage the GSLB service provided by the xyztelco cloud. Tenant-A
-        configures a GSLB rule to load balance traffic across virtual server 1 at Zone-1 and virtual
-        server 2 at Zone-2. The domain name is provided as A.xyztelco.com. &PRODUCT; orchestrates
-        setting up GSLB virtual server 1 on the GSLB service provider at Zone-1. &PRODUCT; binds
-        virtual server 1 of Zone-1 and virtual server 2 of Zone-2 to GSLB virtual server 1. GSLB
-        virtual server 1 is configured to start monitoring the health of virtual servers 1 and 2.
-        &PRODUCT; will also orchestrate setting up GSLB virtual server 2 on the GSLB service
-        provider at Zone-2. &PRODUCT; will bind virtual server 1 of Zone-1 and virtual server 2 of
-        Zone-2 to GSLB virtual server 2. GSLB virtual server 2 is configured to start monitoring the
-        health of virtual servers 1 and 2. &PRODUCT; will bind the domain A.xyztelco.com to both
-        GSLB virtual servers 1 and 2. At this point, the Tenant-A service will be globally reachable
-        at A.xyztelco.com. The private DNS server for the domain xyztelco.com is configured by the
-        admin out-of-band to resolve the domain A.xyztelco.com to the GSLB providers at both
-        zones, which are configured as ADNS for the domain A.xyztelco.com. When a client sends a DNS
-        request to resolve A.xyztelco.com, it will eventually get a DNS delegation to the addresses
-        of the GSLB providers at zones 1 and 2. A client DNS request will be received by the GSLB
-        provider. The GSLB provider, depending on the domain it needs to resolve, picks the GSLB
-        virtual server associated with that domain. Depending on the health of the virtual servers
-        being load balanced, the DNS request for the domain is resolved to the public IP address
-        associated with the selected virtual server.</para>
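The resolution flow described above can be sketched in a few lines of Python. This is a simplified illustrative model, not CloudStack or NetScaler code; the member list, health flags, and round-robin choice are assumptions for illustration:

```python
# Simplified model of how a GSLB provider answers a DNS query:
# each GSLB virtual server tracks the zone-level virtual servers
# bound to it plus their health, and resolves the domain to the
# public IP of a healthy one.
from itertools import cycle

class GslbVirtualServer:
    def __init__(self, domain, members):
        # members: list of (public_ip, healthy) for the bound virtual servers
        self.domain = domain
        self.members = members
        self._rr = cycle(range(len(members)))

    def resolve(self):
        """Answer a DNS query: round-robin over healthy members."""
        for _ in range(len(self.members)):
            ip, healthy = self.members[next(self._rr)]
            if healthy:
                return ip
        return None  # no healthy member; the query cannot be answered

gslb = GslbVirtualServer("A.xyztelco.com",
                         [("IP-1", True), ("IP-2", True)])
print(gslb.resolve())       # IP-1 (then IP-2 on the next query, and so on)
gslb.members[0] = ("IP-1", False)  # Zone-1 fails its health check
print(gslb.resolve())       # only IP-2 is returned from now on
```

This captures the key property of DNS-based GSLB: failover is achieved purely by changing which address the authoritative answer contains.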
-    </section>
-  </section>
-  <section id="gslb-workflow">
-    <title>Configuring GSLB</title>
-    <para>To configure a GSLB deployment, you must first configure a standard load balancing setup
-      for each zone. This enables you to balance load across the different servers in each zone in
-      the region. Then on the NetScaler side, configure both NetScaler appliances that you plan to
-      add to each zone as authoritative DNS (ADNS) servers. Next, create a GSLB site for each zone,
-      configure GSLB virtual servers for each site, create GSLB services, and bind the GSLB services
-      to the GSLB virtual servers. Finally, bind the domain to the GSLB virtual servers. The GSLB
-      configurations on the two appliances at the two different zones are identical, although each
-      site's load-balancing configuration is specific to that site.</para>
-    <para>Perform the following as a cloud administrator. As per the example given above, the
-      administrator of xyztelco is the one who sets up GSLB:</para>
-    <orderedlist>
-      <listitem>
-        <para>In the cloud.dns.name global parameter, specify the DNS name of your tenant's cloud
-          that makes use of the GSLB service.</para>
-      </listitem>
-      <listitem>
-        <para>On the NetScaler side, configure GSLB as given in <ulink
-            url="http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-gslb-config-con.html"
-            >Configuring Global Server Load Balancing (GSLB)</ulink>:</para>
-        <orderedlist>
-          <listitem>
-            <para>Configure a standard load balancing setup.</para>
-          </listitem>
-          <listitem>
-            <para>Configure Authoritative DNS, as explained in <ulink
-                url="http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-gslb-config-adns-svc-tsk.html"
-                >Configuring an Authoritative DNS Service</ulink>.</para>
-          </listitem>
-          <listitem>
-            <para>Configure a GSLB site with the site name formed from the domain name.</para>
-            <para>As per the example given above, the site names are A.xyztelco.com and
-              B.xyztelco.com.</para>
-            <para>For more information, see <ulink
-                url="http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-gslb-config-basic-site-tsk.html"
-                >Configuring a Basic GSLB Site</ulink>.</para>
-          </listitem>
-          <listitem>
-            <para>Configure a GSLB virtual server.</para>
-            <para>For more information, see <ulink
-                url="http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-gslb-config-vsvr-tsk.html"
-                >Configuring a GSLB Virtual Server</ulink>.</para>
-          </listitem>
-          <listitem>
-            <para>Configure a GSLB service for each virtual server.</para>
-            <para>For more information, see <ulink
-                url="http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-gslb-config-svc-tsk.html"
-                >Configuring a GSLB Service</ulink>.</para>
-          </listitem>
-          <listitem>
-            <para>Bind the GSLB services to the GSLB virtual server.</para>
-            <para>For more information, see <ulink
-                url="http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-gslb-bind-svc-vsvr-tsk.html"
-                >Binding GSLB Services to a GSLB Virtual Server</ulink>.</para>
-          </listitem>
-          <listitem>
-            <para>Bind the domain name to the GSLB virtual server. The domain name is obtained
-              from the domain details.</para>
-            <para>For more information, see <ulink
-                url="http://support.citrix.com/proddocs/topic/netscaler-traffic-management-10-map/ns-gslb-bind-dom-vsvr-tsk.html"
-                >Binding a Domain to a GSLB Virtual Server</ulink>.</para>
-          </listitem>
-        </orderedlist>
-      </listitem>
-      <listitem>
-        <para>In each zone that is participating in GSLB, add a GSLB-enabled NetScaler device.</para>
-        <para>For more information, see <xref linkend="enable-glsb-ns"/>.</para>
-      </listitem>
-    </orderedlist>
-    <para>As a domain administrator or user, perform the following:</para>
-    <orderedlist>
-      <listitem>
-        <para>Add a GSLB rule on both the sites.</para>
-        <para>See <xref linkend="gslb-add"/>.</para>
-      </listitem>
-      <listitem>
-        <para>Assign load balancer rules.</para>
-        <para>See <xref linkend="assign-lb-gslb"/>.</para>
-      </listitem>
-    </orderedlist>
-    <section id="prereq-gslb">
-      <title>Prerequisites and Guidelines</title>
-      <itemizedlist>
-        <listitem>
-          <para>The GSLB functionality is supported in both Basic and Advanced zones.</para>
-        </listitem>
-        <listitem>
-          <para>GSLB is added as a new network service.</para>
-        </listitem>
-        <listitem>
-          <para>A GSLB service provider can be added to a physical network in a zone.</para>
-        </listitem>
-        <listitem>
-          <para>The admin is allowed to enable or disable GSLB functionality at region level.</para>
-        </listitem>
-        <listitem>
-          <para>The admin is allowed to configure a zone as GSLB capable or enabled. </para>
-          <para>A zone is considered GSLB capable only if a GSLB service provider is
-            provisioned in the zone.</para>
-        </listitem>
-        <listitem>
-          <para>When users have VMs deployed in multiple availability zones which are GSLB enabled,
-            they can use the GSLB functionality to load balance traffic across the VMs in multiple
-            zones.</para>
-        </listitem>
-        <listitem>
-          <para>Users can use GSLB to load balance traffic across the VMs in the zones of a region
-            only if the admin has enabled GSLB in that region. </para>
-        </listitem>
-        <listitem>
-          <para>The users can load balance traffic across the availability zones in the same region
-            or different regions.</para>
-        </listitem>
-        <listitem>
-          <para>The admin can configure DNS name for the entire cloud.</para>
-        </listitem>
-        <listitem>
-          <para>The users can specify a unique name across the cloud for a globally load balanced
-            service. The provided name is used as the domain name under the DNS name associated with
-            the cloud.</para>
-          <para>The user-provided name along with the admin-provided DNS name is used to produce a
-            globally resolvable FQDN for the globally load balanced service of the user. For
-            example, if the admin has configured xyztelco.com as the DNS name for the cloud, and
-            user specifies 'foo' for the GSLB virtual service, then the FQDN name of the GSLB
-            virtual service is foo.xyztelco.com.</para>
-        </listitem>
-        <listitem>
-          <para>While setting up GSLB, users can select a load balancing method, such as round
-            robin, to use across the zones that are part of GSLB.</para>
-        </listitem>
-        <listitem>
-          <para>Users can set a weight on each zone-level virtual server. The weight is
-            considered by the load balancing method when distributing traffic.</para>
-        </listitem>
-        <listitem>
-          <para>The GSLB functionality supports session persistence, where a series of client
-            requests for a particular domain name is sent to a virtual server in the same zone. </para>
-          <para>Statistics are collected from each GSLB virtual server.</para>
-        </listitem>
-      </itemizedlist>
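The FQDN naming rule above (the user-chosen service name becomes a label under the admin-configured cloud DNS name) can be illustrated with a small sketch. The validation shown is an assumption for illustration, not CloudStack's actual checks:

```python
def gslb_fqdn(service_name, cloud_dns_name):
    """Build the FQDN for a globally load balanced service:
    the user-provided name becomes a label under the cloud DNS name."""
    label = service_name.strip().lower()
    if not label or "." in label:
        raise ValueError("service name must be a single DNS label")
    return f"{label}.{cloud_dns_name}"

# Example from the text: the admin configured xyztelco.com as the cloud
# DNS name, and the user chose 'foo' for the GSLB virtual service.
print(gslb_fqdn("foo", "xyztelco.com"))  # foo.xyztelco.com
```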
-    </section>
-    <section id="enable-glsb-ns">
-      <title>Enabling GSLB in NetScaler</title>
-      <para>In each zone, add a GSLB-enabled NetScaler device for load balancing.</para>
-      <orderedlist>
-        <listitem>
-          <para>Log in as administrator to the &PRODUCT; UI.</para>
-        </listitem>
-        <listitem>
-          <para>In the left navigation bar, click Infrastructure.</para>
-        </listitem>
-        <listitem>
-          <para>In Zones, click View More.</para>
-        </listitem>
-        <listitem>
-          <para>Choose the zone you want to work with.</para>
-        </listitem>
-        <listitem>
-          <para>Click the Physical Network tab, then click the name of the physical network. </para>
-        </listitem>
-        <listitem>
-          <para>In the Network Service Providers node of the diagram, click Configure. </para>
-          <para>You might have to scroll down to see this.</para>
-        </listitem>
-        <listitem>
-          <para>Click NetScaler.</para>
-        </listitem>
-        <listitem>
-          <para>Click Add NetScaler device and provide the following:</para>
-          <para>For NetScaler:</para>
-          <itemizedlist>
-            <listitem>
-              <para><emphasis role="bold">IP Address</emphasis>: The IP address of the NetScaler
-                appliance.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Username/Password</emphasis>: The authentication
-                credentials to access the device. &PRODUCT; uses these credentials to access the
-                device.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Type</emphasis>: The type of device that is being added.
-                It could be F5 Big Ip Load Balancer, NetScaler VPX, NetScaler MPX, or NetScaler SDX.
-                For a comparison of the NetScaler types, see the &PRODUCT; Administration
-                Guide.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Public interface</emphasis>: Interface of device that is
-                configured to be part of the public network.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Private interface</emphasis>: Interface of device that is
-                configured to be part of the private network.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">GSLB service</emphasis>: Select this option.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">GSLB service Public IP</emphasis>: The public IP address
-                of the NAT translator for a GSLB service that is on a private network.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">GSLB service Private IP</emphasis>: The private IP of the
-                GSLB service.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Number of Retries</emphasis>. Number of times to attempt a
-                command on the device before considering the operation failed. Default is 2.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Capacity</emphasis>: The number of networks the device can
-                handle.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Dedicated</emphasis>: When marked as dedicated, this
-                device will be dedicated to a single account. When Dedicated is checked, the value
-                in the Capacity field has no significance; implicitly, its value is 1.</para>
-            </listitem>
-          </itemizedlist>
-        </listitem>
-        <listitem>
-          <para>Click OK.</para>
-        </listitem>
-      </orderedlist>
-    </section>
-    <section id="gslb-add">
-      <title>Adding a GSLB Rule</title>
-      <orderedlist>
-        <listitem>
-          <para>Log in to the &PRODUCT; UI as a domain administrator or user.</para>
-        </listitem>
-        <listitem>
-          <para>In the left navigation pane, click Region.</para>
-        </listitem>
-        <listitem>
-          <para>Select the region for which you want to create a GSLB rule.</para>
-        </listitem>
-        <listitem>
-          <para>In the Details tab, click View GSLB.</para>
-        </listitem>
-        <listitem>
-          <para>Click Add GSLB.</para>
-          <para>The Add GSLB page is displayed as follows:</para>
-          <mediaobject>
-            <imageobject>
-              <imagedata fileref="./images/add-gslb.png"/>
-            </imageobject>
-            <textobject>
-              <phrase>add-gslb.png: adding a GSLB rule</phrase>
-            </textobject>
-          </mediaobject>
-        </listitem>
-        <listitem>
-          <para>Specify the following:</para>
-          <itemizedlist>
-            <listitem>
-              <para><emphasis role="bold">Name</emphasis>: Name for the GSLB rule.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Description</emphasis>: (Optional) A short description of
-                the GSLB rule that can be displayed to users.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">GSLB Domain Name</emphasis>: A preferred domain name for
-                the service.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Algorithm</emphasis>: (Optional) The algorithm to use to
-                load balance the traffic across the zones. The options are Round Robin, Least
-                Connection, and Proximity.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Service Type</emphasis>: The transport protocol to use for
-                GSLB. The options are TCP and UDP.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Domain</emphasis>: (Optional) The domain for which you
-                want to create the GSLB rule.</para>
-            </listitem>
-            <listitem>
-              <para><emphasis role="bold">Account</emphasis>: (Optional) The account on which you
-                want to apply the GSLB rule.</para>
-            </listitem>
-          </itemizedlist>
-        </listitem>
-        <listitem>
-          <para>Click OK to confirm.</para>
-        </listitem>
-      </orderedlist>
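The same rule can also be created programmatically through the CloudStack API (the `createGlobalLoadBalancerRule` command). The sketch below builds a signed request URL using CloudStack's documented signing scheme: sort the parameters, lowercase the query string, sign it with HMAC-SHA1, and Base64-encode the result. The management-server URL and keys are placeholders; check your version's API reference for the exact parameter names:

```python
# Sketch of building a signed CloudStack API request for the operation
# performed in the UI steps above. Endpoint and keys are placeholders.
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, api_key, secret_key):
    params = dict(params, apikey=api_key)
    # Canonical form: sort by key and URL-encode each value
    query = "&".join(
        f"{k}={urllib.parse.quote(str(v), safe='')}"
        for k, v in sorted(params.items())
    )
    # Sign the lowercased query string with HMAC-SHA1, then Base64-encode
    digest = hmac.new(secret_key.encode(),
                      query.lower().encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return query + "&signature=" + urllib.parse.quote(signature, safe="")

query = sign_request({
    "command": "createGlobalLoadBalancerRule",
    "name": "tenant-a-gslb",
    "regionid": 1,
    "gslbdomainname": "A.xyztelco.com",
    "gslbservicetype": "tcp",
    "gslblbmethod": "roundrobin",
}, api_key="APIKEY", secret_key="SECRETKEY")
print("http://mgmt.example.com:8080/client/api?" + query)
```

The signed URL can then be issued with any HTTP client; the management server recomputes the signature from the query string and the account's secret key to authenticate the call.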
-    </section>
-    <section id="assign-lb-gslb">
-      <title>Assigning Load Balancing Rules to GSLB</title>
-      <orderedlist>
-        <listitem>
-          <para>Log in to the &PRODUCT; UI as a domain administrator or user.</para>
-        </listitem>
-        <listitem>
-          <para>In the left navigation pane, click Region.</para>
-        </listitem>
-        <listitem>
-          <para>Select the region for which you want to create a GSLB rule.</para>
-        </listitem>
-        <listitem>
-          <para>In the Details tab, click View GSLB.</para>
-        </listitem>
-        <listitem>
-          <para>Select the desired GSLB.</para>
-        </listitem>
-        <listitem>
-          <para>Click View assigned load balancing.</para>
-        </listitem>
-        <listitem>
-          <para>Click Assign more load balancing.</para>
-        </listitem>
-        <listitem>
-          <para>Select the load balancing rule you have created for the zone.</para>
-        </listitem>
-        <listitem>
-          <para>Click OK to confirm.</para>
-        </listitem>
-      </orderedlist>
-    </section>
-  </section>
-  <section>
-    <title>Known Limitation</title>
-    <para>Currently, &PRODUCT; does not support orchestration of services across the zones. The
-      notion of region-level services and service providers is yet to be introduced.</para>
-  </section>
-</section>

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/5586a221/docs/en-US/gsoc-dharmesh.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/gsoc-dharmesh.xml b/docs/en-US/gsoc-dharmesh.xml
deleted file mode 100644
index 01a77c7..0000000
--- a/docs/en-US/gsoc-dharmesh.xml
+++ /dev/null
@@ -1,149 +0,0 @@
-<?xml version='1.0' encoding='utf-8' ?>
-<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY % BOOK_ENTITIES SYSTEM "CloudStack_GSoC_Guide.ent">
-%BOOK_ENTITIES;
-]>
-
-<!-- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements.  See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership.  The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License.  You may obtain a copy of the License at
- 
-   http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing,
- software distributed under the License is distributed on an
- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied.  See the License for the
- specific language governing permissions and limitations
- under the License.
--->
-
-<section id="gsoc-dharmesh">
-        <title>Dharmesh's 2013 GSoC Proposal</title>
-        <para>This chapter describes Dharmesh's 2013 Google Summer of Code project within the &PRODUCT; ASF project. It is a copy of the submitted proposal.</para>
-	<section id="abstract-dharmesh">
-		<title>Abstract</title>
-		<para>
-			The project aims to bring a <ulink url="http://aws.amazon.com/cloudformation/"><citetitle>CloudFormation</citetitle></ulink>-like service to CloudStack. One of the prime use-cases is cluster computing frameworks on CloudStack. A CloudFormation service will give users and administrators of CloudStack the ability to manage and control a set of resources easily. The CloudFormation service will allow booting and configuring a set of VMs that form a cluster. A simple example would be a LAMP stack. More complex clusters, such as Mesos or Hadoop clusters, require slightly more advanced configuration. Some work has already been done on this front by Chiradeep Vittal [5]. In this project, I will implement a server-side CloudFormation service for CloudStack and demonstrate how to run a Mesos cluster using it.
-		</para>
-	</section>
-
-	<section id="mesos">
-		<title>Mesos</title>
-		<para>
-			<ulink url="http://incubator.apache.org/mesos/"><citetitle>Mesos</citetitle></ulink> is a resource management platform for clusters. It aims to increase resource utilization of clusters by sharing cluster resources among multiple processing frameworks (like MapReduce, MPI, and graph processing) or multiple instances of the same framework. It provides efficient resource isolation through the use of containers. It uses ZooKeeper for state maintenance and fault tolerance.
-		</para>
-	</section>
-
-	<section id="mesos-use">
-		<title>What can run on mesos ?</title>
-		
-		<para><emphasis role="bold">Spark:</emphasis> A cluster computing framework based on the Resilient Distributed Datasets (RDDs) abstraction. RDD is more generalized than MapReduce and can support iterative and interactive computation while retaining fault tolerance, scalability, data locality etc.</para>
-			
-		<para><emphasis role="bold">Hadoop:</emphasis> Hadoop is a fault-tolerant and scalable distributed computing framework based on the MapReduce abstraction.</para>
-			
-		<para><emphasis role="bold">Bagel:</emphasis> A graph processing framework based on Pregel.</para>
-
-		<para>and other frameworks like MPI, Hypertable.</para>
-	</section>
-
-	<section id="mesos-deploy">
-		<title>How to deploy Mesos?</title>
-		
-		<para>Mesos provides cluster installation <ulink url="https://github.com/apache/mesos/blob/trunk/docs/Deploy-Scripts.textile"><citetitle>scripts</citetitle></ulink> for cluster deployment. There are also scripts available to deploy a cluster on <ulink url="https://github.com/apache/mesos/blob/trunk/docs/EC2-Scripts.textile"><citetitle>Amazon EC2</citetitle></ulink>. It would be interesting to see if these scripts can be leveraged in any way.</para>
-	</section>
-
-	<section id="deliverables-dharmesh">
-		<title>Deliverables</title>
-		<orderedlist>
-			<listitem>
-				<para>Deploy CloudStack and understand instance configuration/contextualization</para>
-			</listitem>
-			<listitem>
-				<para>Test and deploy Mesos on a set of CloudStack based VMs, manually. Design/propose an automation framework</para>
-			</listitem>
-			<listitem>
-				<para>Test stackmate and engage Chiradeep (report bugs, make suggestions, submit pull requests)</para>
-			</listitem>
-			<listitem>
-				<para>Create cloudformation template to provision a Mesos Cluster</para>
-			</listitem>
-			<listitem>
-				<para>Compare with Apache Whirr or other cluster provisioning tools for the server-side implementation of the cloudformation service.</para>
-			</listitem>
-		</orderedlist>
-	</section>
-
-	<section id="arch-and-tools">
-		<title>Architecture and Tools</title>
-		
-		<para>The high level architecture is as follows:</para>
-		
-		<para>
-			<mediaobject>
-				<imageobject>
-					<imagedata fileref="images/mesos-integration-arch.jpg"/>
-				</imageobject>
-			</mediaobject>
-		</para>
-
-
-		<para>It includes the following components:</para>
-
-		<orderedlist>
-			<listitem>
-				<para>CloudFormation Query API server:</para>
-				<para>This acts as the point of contact for clients and exposes the CloudFormation functionality as a Query API. It can be accessed directly or through the existing Amazon AWS tools for their cloudformation service. It will be easiest to start with a module that resides outside cloudstack, and I plan to use dropwizard [3] for this; later the API server may be merged into the cloudstack core. I plan to use mysql for storing the details of clusters.</para>
-			</listitem>
-
-			<listitem>
-				<para>Provisioning:</para>
-
-				<para>The provisioning module is responsible for handling the booting process of the VMs through cloudstack. It uses the cloudstack APIs for launching VMs. I plan to use preconfigured templates/images with the required dependencies installed, which will make the cluster creation process much faster, even for large clusters. Error handling is a very important part of this module. For example, what do you do if a few VMs fail to boot in the cluster?</para>
-			</listitem>
-
-			<listitem>
-				<para>Configuration:</para>
-
-				<para>This module deals with configuring the VMs to form a cluster. This can be done via manual scripts/code or via configuration management tools like chef/ironfan/knife. Workflow automation tools like rundeck [4] can potentially also be used, and Apache Whirr and Provisionr are options as well. I plan to explore these tools and select suitable ones.</para>
-			</listitem>
-
-		</orderedlist>
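The request path between the provisioning module and cloudstack can be sketched concretely. The block below is not part of the proposal's code; it only illustrates the standard CloudStack request-signing scheme such a module would use. The command shown, deployVirtualMachine, is a real CloudStack API call, while the keys and IDs are placeholders:

```python
import base64
import hashlib
import hmac
import urllib.parse

def sign_request(params, api_key, secret_key):
    """Sign a CloudStack API request using the documented scheme:
    sort the parameters, URL-encode the values, lowercase the whole
    query string, HMAC-SHA1 it with the secret key, then base64- and
    URL-encode the result."""
    params = dict(params, apiKey=api_key, response="json")
    query = "&".join(
        "%s=%s" % (k, urllib.parse.quote(str(v), safe=""))
        for k, v in sorted(params.items())
    )
    digest = hmac.new(secret_key.encode(), query.lower().encode(),
                      hashlib.sha1).digest()
    signature = urllib.parse.quote(base64.b64encode(digest), safe="")
    return query + "&signature=" + signature

# deployVirtualMachine is the call the provisioning module would repeat
# once per cluster node; all IDs and keys here are placeholders.
qs = sign_request(
    {"command": "deployVirtualMachine",
     "serviceofferingid": "OFFERING-ID",
     "templateid": "TEMPLATE-ID",
     "zoneid": "ZONE-ID"},
    api_key="APIKEY", secret_key="SECRET")
```

The signed query string would then be appended to the management server's API URL; error handling (retries, cleanup of partially booted clusters) sits on top of this.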
-	</section>
-
-	<section id="api">
-		<title>API</title>
-		
-		<para>The Query <ulink url="http://docs.aws.amazon.com/AWSCloudFormation/latest/APIReference/API_Operations.html"><citetitle>API</citetitle></ulink> will be based on the Amazon AWS cloudformation service. This will allow leveraging existing <ulink url="http://aws.amazon.com/developertools/AWS-CloudFormation"><citetitle>tools</citetitle></ulink> for AWS.</para>
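To make the compatibility goal concrete, here is a sketch of how a client could construct an AWS-CloudFormation-style Query request against such a server. The endpoint and template URL are hypothetical placeholders; only the Action-plus-parameters query shape follows the AWS convention:

```python
import urllib.parse

# Hypothetical endpoint: wherever the CloudFormation-compatible
# Query API server would be deployed.
ENDPOINT = "https://cloudformation.example.com/"

def build_query(action, **params):
    """Build an AWS-CloudFormation-style Query API URL: an Action
    name plus flat key=value parameters."""
    query = dict(params, Action=action, Version="2010-05-15")
    return ENDPOINT + "?" + urllib.parse.urlencode(sorted(query.items()))

url = build_query("CreateStack",
                  StackName="mesos-cluster",
                  TemplateURL="http://example.com/mesos-template.json")
```

Because the request shape matches AWS's, existing CloudFormation clients should only need their endpoint re-pointed.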
-	</section>
-
-	<section id="timeline">
-		<title>Timeline</title>
-		<para>1-1.5 week : project design. Architecture, tools selection, API design</para>
-		<para>1-1.5 week : getting familiar with cloudstack and stackmate codebase and architecture details</para>
-		<para>1-1.5 week : getting familiar with mesos internals</para>
-		<para>1-1.5 week : setting up the dev environment and create mesos templates</para>
-		<para>2-3 week : build provisioning and configuration module</para>
-		<para>Midterm evaluation: provisioning module, configuration module</para>
-		<para>2-3 week : develop cloudformation server side implementation</para>
-		<para>2-3 week : test and integrate</para>
-	</section>
-
-	<section id="future-work">
-		<title>Future Work</title>
-		<orderedlist>
-			<listitem>
-				<para><emphasis role="bold">Auto Scaling:</emphasis></para>
-				<para>Automatically adding or removing VMs from mesos cluster based on various conditions like utilization going above/below a static threshold. There can be more sophisticated strategies based on prediction or fine grained metric collection with tight integration with mesos framework.</para>
-			</listitem>
-			<listitem>
-				<para><emphasis role="bold">Cluster Simulator:</emphasis></para>
-				<para>Integrating with existing simulator to simulate mesos clusters. This can be useful in various scenarios, for example while developing a new scheduling algorithm, testing autoscaling etc.</para>
-			</listitem>
-		</orderedlist>
-	</section>
-</section>

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/5586a221/docs/en-US/gsoc-imduffy15.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/gsoc-imduffy15.xml b/docs/en-US/gsoc-imduffy15.xml
deleted file mode 100644
index f78cb54..0000000
--- a/docs/en-US/gsoc-imduffy15.xml
+++ /dev/null
@@ -1,395 +0,0 @@
-<?xml version='1.0' encoding='utf-8' ?>
-<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY % BOOK_ENTITIES SYSTEM "CloudStack_GSoC_Guide.ent">
-%BOOK_ENTITIES;
-]>
-
-<!-- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements.  See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership.  The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License.  You may obtain a copy of the License at
- 
-   http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing,
- software distributed under the License is distributed on an
- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied.  See the License for the
- specific language governing permissions and limitations
- under the License.
--->
-
-<section id="gsoc-imduffy15">
-        <title>Ian's 2013 GSoC Proposal</title>
-        <para>This chapter describes Ian's 2013 Google Summer of Code project within the &PRODUCT; ASF project. It is a copy of the submitted proposal.</para>
-	<section id="ldap-user-provisioning">
-		<title>LDAP user provisioning</title>
-		<para>
-			"Need to automate the way the LDAP users are provisioned into cloud stack. This will mean better
-			integration with a LDAP server, ability to import users and a way to define how the LDAP user
-			maps to the cloudstack users."
-		</para>
-	</section>
-	<section id="abstract">
-		<title>Abstract</title>
-		<para>
-			The aim of this project is to provide a more effective mechanism for provisioning users from LDAP
-			into cloudstack. Currently cloudstack enables LDAP authentication, but users must first be set up in
-			cloudstack; once a user is set up in cloudstack, they can authenticate using their LDAP username and
-			password. This project will improve cloudstack's LDAP integration by enabling users to be set up
-			automatically using their LDAP credentials.
-		</para>
-	</section>
-	<section id="deliverables">
-		<title>Deliverables</title>
-		<itemizedlist>
-			<listitem>
-				<para>Service that retrieves a list of LDAP users from a configured group</para>
-			</listitem>
-			<listitem>
-				<para>Extension of the cloudstack UI "Add User" screen to offer the user list from LDAP</para>
-			</listitem>
-			<listitem>
-				<para>A service for saving new users with their details from LDAP</para>
-			</listitem>
-			<listitem>
-				<para>BDD unit and acceptance automated testing</para>
-			</listitem>
-			<listitem>
-				<para>Document change details</para>
-			</listitem>
-		</itemizedlist>
-	</section>
-	<section id="quantifiable-results">
-		<title>Quantifiable Results</title>
-		<informaltable>
-   			<tgroup cols="2">
-				<tbody>
-					<row>
-						<entry>Given</entry>
-						<entry>An administrator wants to add a new user to cloudstack and LDAP is set up in cloudstack</entry>
-					</row>
-					<row>
-						<entry>When</entry>
-						<entry>The administrator opens the "Add User" screen</entry>					
-					</row>
-					<row>
-						<entry>Then</entry>
-						<entry>A table of users appears, listing the users from the LDAP group not already created in cloudstack and displaying their usernames, given names and email addresses. The timezone dropdown will still be available beside each user</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-		<para/>
-		<informaltable>
-   			<tgroup cols="2">
-				<tbody>
-					<row>
-						<entry>Given</entry>
-						<entry>An administrator wants to add a new user to cloudstack and LDAP is not set up in cloudstack</entry>
-					</row>
-					<row>
-						<entry>When</entry>
-						<entry>The administrator opens the "Add User" screen</entry>					
-					</row>
-					<row>
-						<entry>Then</entry>
-						<entry>The current add user screen and functionality is provided</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-		<para/>
-		<informaltable>
-   			<tgroup cols="2">
-				<tbody>
-					<row>
-						<entry>Given</entry>
-						<entry>An administrator wants to add a new user to cloudstack and LDAP is set up in cloudstack</entry>
-					</row>
-					<row>
-						<entry>When</entry>
-						<entry>The administrator opens the "Add User" screen and mandatory information is missing</entry>					
-					</row>
-					<row>
-						<entry>Then</entry>
-						<entry>These fields will be editable to enable you to populate the name or email address</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-		<para/>
-		<informaltable>
-   			<tgroup cols="2">
-				<tbody>
-					<row>
-						<entry>Given</entry>
-						<entry>An administrator wants to add a new user to cloudstack, LDAP is set up and the user being created is in the LDAP query group</entry>
-					</row>
-					<row>
-						<entry>When</entry>
-						<entry>The administrator opens the "Add User" screen</entry>					
-					</row>
-					<row>
-						<entry>Then</entry>
-						<entry>There is a list of LDAP users displayed and the user is present in the list</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-		<para/>
-		<informaltable>
-   			<tgroup cols="2">
-				<tbody>
-					<row>
-						<entry>Given</entry>
-						<entry>An administrator wants to add a new user to cloudstack, LDAP is setup and the user is not in the query group</entry>
-					</row>
-					<row>
-						<entry>When</entry>
-						<entry>The administrator opens the "Add User" screen</entry>					
-					</row>
-					<row>
-						<entry>Then</entry>
-						<entry>There is a list of LDAP users displayed but the user is not in the list</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-		<para/>
-		<informaltable>
-   			<tgroup cols="2">
-				<tbody>
-					<row>
-						<entry>Given</entry>
-						<entry>An administrator wants to add a group of new users to cloudstack</entry>
-					</row>
-					<row>
-						<entry>When</entry>
-						<entry>The administrator opens the "Add User" screen, selects the users and hits save</entry>					
-					</row>
-					<row>
-						<entry>Then</entry>
-						<entry>The list of new users are saved to the database</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-		<para/>
-		<informaltable>
-   			<tgroup cols="2">
-				<tbody>
-					<row>
-						<entry>Given</entry>
-						<entry>An administrator has created a new LDAP user on cloudstack</entry>
-					</row>
-					<row>
-						<entry>When</entry>
-						<entry>The user authenticates against cloudstack with the right credentials</entry>					
-					</row>
-					<row>
-						<entry>Then</entry>
-						<entry>They are authorised in cloudstack</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-		<para/>
-		<informaltable>
-   			<tgroup cols="2">
-				<tbody>
-					<row>
-						<entry>Given</entry>
-						<entry>A user wants to edit an LDAP user</entry>
-					</row>
-					<row>
-						<entry>When</entry>
-						<entry>They open the "Edit User" screen</entry>					
-					</row>
-					<row>
-						<entry>Then</entry>
-						<entry>The password fields are disabled and cannot be changed</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-		<para/>
-	</section>
-	<section id="the-design-document">
-		<title>The Design Document</title>
-		<para>
-			<emphasis role="bold">
-				LDAP user list service			
-			</emphasis>
-		</para>
-		<para>
-			<emphasis role="bold">name:</emphasis> ldapUserList
-		</para>
-		<para>
-			<emphasis role="bold">responseObject:</emphasis> LDAPUserResponse {username,email,name}
-		</para>
-		<para>
-			<emphasis role="bold">parameter:</emphasis> listType:enum {NEW, EXISTING,ALL} (Default to ALL if no option provided)
-		</para>
-		<para>
-			Create a new API service call for retrieving the list of users from LDAP. This will call a new
-			ConfigurationService which will retrieve the list of users using the configured search base and the query
-			filter. The list may be filtered in the ConfigurationService based on the listType parameter
-		</para>
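The listType behaviour described above can be sketched on its own, assuming the LDAP lookup itself is already done. The user dicts mirror the LDAPUserResponse fields; the names and helper are illustrative, not part of the proposal's code:

```python
def filter_ldap_users(ldap_users, existing_usernames, list_type="ALL"):
    """Apply the listType parameter: NEW = in LDAP but not yet in
    cloudstack, EXISTING = already created, ALL = everything."""
    existing = set(existing_usernames)
    if list_type == "NEW":
        return [u for u in ldap_users if u["username"] not in existing]
    if list_type == "EXISTING":
        return [u for u in ldap_users if u["username"] in existing]
    return list(ldap_users)  # ALL, the default

# Illustrative data shaped like the LDAPUserResponse fields.
users = [
    {"username": "jdoe", "email": "jdoe@example.com", "name": "J. Doe"},
    {"username": "asmith", "email": "asmith@example.com", "name": "A. Smith"},
]
new_users = filter_ldap_users(users, existing_usernames=["jdoe"],
                              list_type="NEW")
```

The "Add User" screen would typically request listType=NEW so that only not-yet-provisioned users are offered.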
-		<para>
-			<emphasis role="bold">
-				LDAP Available Service		
-			</emphasis>
-		</para>
-		<para>
-			<emphasis role="bold">name:</emphasis> ldapAvailable
-		</para>
-		<para>
-			<emphasis role="bold">responseObject:</emphasis> LDAPAvailableResponse {available:boolean}
-		</para>
-		<para>
-			Create a new API service call verifying LDAP is set up correctly by checking that the following configuration elements are all set:
-			<itemizedlist>
-				<listitem>
-					<para>ldap.hostname</para>
-				</listitem>
-				<listitem>
-					<para>ldap.port</para>
-				</listitem>
-				<listitem>
-					<para>ldap.usessl</para>
-				</listitem>
-				<listitem>
-					<para>ldap.queryfilter</para>
-				</listitem>
-				<listitem>
-					<para>ldap.searchbase</para>
-				</listitem>
-				<listitem>
-					<para>ldap.dn</para>
-				</listitem>
-				<listitem>
-					<para>ldap.password</para>
-				</listitem>
-			</itemizedlist>
-		</para>
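The availability check reduces to verifying that every element above is set. A minimal sketch, assuming the configuration is exposed as a simple key/value mapping (the function name is illustrative):

```python
# The configuration elements listed above; all must be set.
REQUIRED_LDAP_KEYS = [
    "ldap.hostname", "ldap.port", "ldap.usessl", "ldap.queryfilter",
    "ldap.searchbase", "ldap.dn", "ldap.password",
]

def ldap_available(config):
    """Return True only when every required LDAP configuration
    element has a non-empty value."""
    return all(config.get(key) for key in REQUIRED_LDAP_KEYS)
```

The UI calls this first; only when it returns true does the "Add User" screen switch to the LDAP user list.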
-		<para>
-			<emphasis role="bold">
-				LDAP Save Users Service		
-			</emphasis>
-		</para>
-		<para>
-			<emphasis role="bold">name:</emphasis> ldapSaveUsers
-		</para>
-		<para>
-			<emphasis role="bold">responseObject:</emphasis> LDAPSaveUsersResponse {list<![CDATA[<UserResponse>]]>}
-		</para>
-		<para>
-			<emphasis role="bold">parameter:</emphasis> list of users
-		</para>
-		<para>
-			Saves the supplied list of users. Following the functionality in CreateUserCmd it will
-			<itemizedlist>
-				<listitem>
-					<para>Create the user via the account service</para>
-				</listitem>
-				<listitem>
-					<para>Handle the response</para>
-				</listitem>
-			</itemizedlist>
-			It will be decided whether a transaction should span the whole save or only individual users. A list of UserResponse will be returned.
-		</para>
-		<para>
-			<emphasis role="bold">
-				Extension of cloudstack UI "Add User" screen
-			</emphasis>
-		</para>
-		<para>
-			Extend account.js to enable adding a list of users, with editable fields where required. The new "add user" screen for an LDAP setup will:
-			<itemizedlist>
-				<listitem>
-					<para>Make an ajax call to the ldapAvailable, ldapuserList and ldapSaveUsers services</para>
-				</listitem>
-				<listitem>
-					<para>Validate on username, email, firstname and lastname</para>
-				</listitem>
-			</itemizedlist>
-		</para>
-		<para>
-			<emphasis role="bold">
-				Extension of cloudstack UI "Edit User" screen
-			</emphasis>
-		</para>
-		<para>
-			Extend account.js to disable the password fields on the edit user screen if LDAP is available, specifically:
-			<itemizedlist>
-				<listitem>
-					<para>Make an ajax call to the ldapAvailable, ldapuserList and ldapSaveUsers services</para>
-				</listitem>
-				<listitem>
-					<para>Validate on username, email, firstname and lastname. Additional server-side validation will ensure the password has not changed</para>
-				</listitem>
-			</itemizedlist>
-		</para>
-	</section>
-	<section id="approach">
-		<title>Approach</title>
-		<para>
-			To get started, a development cloudstack environment will be created, with DevCloud used to verify changes. Once the schedule is agreed with the mentor, the deliverables will be broken into small user stories with expected delivery dates set. The development cycle will focus on BDD, enforcing that all unit and acceptance tests are written first.
-		</para>
-		<para>
-			A build pipeline for a continuous delivery environment around cloudstack will be implemented; the following stages will be adopted:
-		</para>
-		<informaltable>
-   			<tgroup cols="2">
-				<thead>
-					<row>
-						<entry>Stage</entry>
-						<entry>Action</entry>
-					</row>
-				</thead>
-				<tbody>
-					<row>
-						<entry>Commit</entry>
-						<entry>Run unit tests</entry>
-					</row>
-					<row>
-						<entry>Sonar</entry>
-						<entry>Runs code quality metrics</entry>					
-					</row>
-					<row>
-						<entry>Acceptance</entry>
-						<entry>Deploys the devcloud and runs all acceptance tests</entry>
-					</row>
-					<row>
-						<entry>Deployment</entry>
-						<entry>Deploy a new management server using Chef</entry>
-					</row>
-				</tbody>			
-			</tgroup>
-		</informaltable>
-	</section>
-	<section id="about-me">
-		<title>About me</title>
-		<para>
-			I am a Computer Science student at Dublin City University in Ireland. I have interests in virtualization,
-automation, information systems, networking and web development.
-		</para>	
-		<para>
-			I was involved with a project in a K-12 (educational) environment, moving their server systems over
-to a virtualized environment on ESXi. I have good knowledge of programming in Java, PHP and
-scripting languages. During the configuration of an automation system for OS deployment I gained
-some exposure to scripting in PowerShell, batch, VBS and bash, and to the configuration of PXE
-images based on WinPE and Debian.
-Additionally, I am a mentor in an open-source teaching movement called CoderDojo, where we teach
-kids from the age of 8 everything from web page and HTML 5 game development to Raspberry Pi
-projects. It's really cool.
-		</para>
-		<para>
-			I’m excited at the opportunity and learning experience that cloudstack is offering with this project.
-		</para>
-	</section>
-</section>

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/5586a221/docs/en-US/gsoc-meng.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/gsoc-meng.xml b/docs/en-US/gsoc-meng.xml
deleted file mode 100644
index 8ea2b4c..0000000
--- a/docs/en-US/gsoc-meng.xml
+++ /dev/null
@@ -1,235 +0,0 @@
-<?xml version='1.0' encoding='utf-8' ?>
-<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY % BOOK_ENTITIES SYSTEM "CloudStack_GSoC_Guide.ent">
-%BOOK_ENTITIES;
-]>
-
-<!-- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements.  See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership.  The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License.  You may obtain a copy of the License at
- 
-   http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing,
- software distributed under the License is distributed on an
- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied.  See the License for the
- specific language governing permissions and limitations
- under the License.
--->
-
-<section id="gsoc-meng">
-        <title>Meng's 2013 GSoC Proposal</title>
-        <para>This chapter describes Meng's 2013 Google Summer of Code project within the &PRODUCT; ASF project. It is a copy of the submitted proposal.</para>
-	<section id="Project-Description">
-		<title>Project Description</title>
-		<para>
-			Getting a hadoop cluster going can be challenging and painful due to the tedious configuration phase and the diverse idiosyncrasies of each cloud provider. Apache Whirr<ulink url="http://whirr.apache.org/ "><citetitle>[1]</citetitle></ulink> and Provisionr are sets of libraries for running cloud services in an automatic or semi-automatic fashion. They take advantage of a cloud-neutral library called jclouds<ulink url=" http://www.jclouds.org/documentation/gettingstarted/what-is-jclouds/"><citetitle>[2]</citetitle></ulink> to create one-click, auto-configuring hadoop clusters on multiple clouds. Since jclouds supports the CloudStack API, most of the services provided by Whirr and Provisionr should work out of the box on CloudStack. My first task is to test that assumption, make sure everything is well documented, and correct all issues with the latest versions of CloudStack (4.0 and 4.1).
-		</para>
-		
-<para>
-The biggest challenge for hadoop provisioning is automatically configuring each instance at launch time based on what it is supposed to do, a process known as contextualization<ulink url="http://dl.acm.org/citation.cfm?id=1488934"><citetitle>[3]</citetitle></ulink><ulink url="http://www.nimbusproject.org/docs/current/clouds/clusters2.html "><citetitle>[4]</citetitle></ulink>. It causes last-minute changes inside an instance to adapt it to a cluster environment. Many automated cloud services are enabled by contextualization. For example, in one-click hadoop clusters, contextualization basically amounts to generating and distributing ssh key pairs among instances, telling an instance where the master node is and what other slave nodes it should be aware of, etc. On EC2, contextualization is done by passing information through the EC2_USER_DATA entry<ulink url="http://aws.amazon.com/amazon-linux-ami/ "><citetitle>[5]</citetitle></ulink><ulink url="https://svn.apache.org/repos/asf/whirr/branches/contrib-python/src/py/hadoop/cloud/data/hadoop-ec2-init-remote.sh"><citetitle>[6]</citetitle></ulink>. Whirr and Provisionr embrace this feature to provision hadoop instances on EC2. My second task is to test and extend Whirr and Provisionr’s one-click solution on EC2 to CloudStack, and also to improve CloudStack’s support for Whirr and Provisionr to enable hadoop provisioning on CloudStack based clouds.
-</para>
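As a concrete illustration of passing contextualization data, the sketch below composes a trivial per-role boot script and base64-encodes it, which is how user data is handed to an instance at deploy time. The script body is purely illustrative; the real Whirr/Provisionr payloads are compressed archives of role-specific configuration scripts:

```python
import base64

def make_user_data(role, master_host):
    """Compose a minimal per-role boot script and base64-encode it
    for the user-data parameter of an instance-deployment call.
    Illustrative only: real payloads are compressed archives of
    role configuration scripts."""
    script = "\n".join([
        "#!/bin/bash",
        "echo 'hadoop role: %s' > /etc/hadoop-role" % role,
        "echo '%s master' >> /etc/hosts" % master_host,
    ])
    return base64.b64encode(script.encode()).decode()

# A datanode pointed at a placeholder master address.
payload = make_user_data("datanode", "10.0.0.10")
```

Inside the instance, CloudInit would retrieve and execute this payload, performing the last-minute role adaptation described above.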
-<para>
-My third task is to add a Query API that is compatible with Amazon Elastic MapReduce (EMR) to CloudStack. Through this API, all hadoop provisioning functionality will be exposed and users can reuse cloud clients that are written for EMR to create and manage hadoop clusters on CloudStack based clouds.
-</para>
-	</section>
-
-	<section id="Project-Details">
-		<title>Project Details</title>
-		<para>
-			Whirr defines four roles for the hadoop provisioning service: Namenode, JobTracker, Datanode and TaskTracker. With the help of CloudInit<ulink url="https://help.ubuntu.com/community/CloudInit "><citetitle>[7]</citetitle></ulink> (a popular package for cloud instance initialization), each VM instance is configured based on its role and a compressed file that is passed in the EC2_USER_DATA entry. Since CloudStack also supports EC2_USER_DATA, I think the most feasible way to have hadoop provisioning on CloudStack is to extend Whirr’s solution on EC2 to the CloudStack platform and to make the necessary adjustments based on CloudStack’s specifics.
-		</para>
-		
-		<para>
-		Whirr and Provisionr deal with two critical issues in their role configuration scripts (configure-hadoop-role_list): SSH key authentication and hostname configuration.
-		</para>
-		<orderedlist>
-			<listitem><para>
-			SSH key authentication. SSH key based authentication is required so that the master node can log in to slave nodes to start/stop hadoop daemons, and each node also needs to log in to itself to start its own hadoop daemons. Traditionally this is done by generating a key pair on the master node and distributing the public key to all slave nodes, which requires human intervention. Whirr works around this problem on EC2 by having a common key pair for all nodes in a hadoop cluster, so every node is able to log in to every other. The key pair is provided by users and obtained by CloudInit inside an instance from the metadata service. As far as I know, CloudStack does not support user-provided ssh key authentication. Although CloudStack has the createSSHKeyPair API<ulink url="http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/using-sshkeys.html "><citetitle>[8]</citetitle></ulink> to generate SSH keys, and users can create an instance template that supports SSH keys, there is no easy way to have a unified SSH key on all cluster instances. Besides, Whirr prefers minimal image management, so having a customized template doesn’t seem to fit well here.
-			</para></listitem>
-			<listitem><para>
-			Hostname configuration. The hostname of each instance has to be properly set and injected into the set of hadoop config files (core-site.xml, hdfs-site.xml, mapred-site.xml). An EC2 instance's hostname is derived from a combination of its public IP and an EC2-specific prefix/suffix (e.g. an instance with IP 54.224.206.71 will have its hostname set to ec2-54-224-206-71.compute-1.amazonaws.com). This hostname amounts to the Fully Qualified Domain Name that uniquely identifies the node on the network. As for CloudStack, if users do not specify a name, the hostname that identifies a VM on a network will be a unique UUID generated by CloudStack<ulink url="https://cwiki.apache.org/CLOUDSTACK/allow-user-provided-hostname-internal-vm-name-on-hypervisor-instead-of-cloud-platform-auto-generated-name-for-guest-vms.html"><citetitle>[9]</citetitle></ulink>.
-
-
-
-			</para></listitem>
-			</orderedlist>
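The EC2 naming convention described above is deterministic and easy to reproduce; a CloudStack analogue would need a similarly deterministic mapping from a VM's address to a resolvable FQDN. A sketch of the EC2-style conversion (the function name is illustrative):

```python
def ec2_style_hostname(public_ip):
    """Reproduce the EC2 naming convention described above:
    54.224.206.71 -> ec2-54-224-206-71.compute-1.amazonaws.com"""
    return "ec2-%s.compute-1.amazonaws.com" % public_ip.replace(".", "-")

hostname = ec2_style_hostname("54.224.206.71")
```

A contextualization script on CloudStack could compute such a name from the instance's IP and write it into the hadoop config files before the daemons start.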
-			<para>
-			These two are the main issues that need improved support on the CloudStack side. Other things, like preparing disks, installing hadoop tarballs and starting hadoop daemons, can easily be done as they are relatively role/instance-independent and static. Runurl can be used to simplify user-data scripts.
-
-
-
-			</para>
-			<para>
-			After we achieve hadoop provisioning on CloudStack using Whirr, we can go further and add a Query API to CloudStack to expose this functionality. I will write an API that is compatible with the Amazon Elastic MapReduce Service (EMR)<ulink url="http://docs.aws.amazon.com/ElasticMapReduce/latest/API/Welcome.html "><citetitle>[10]</citetitle></ulink> so that users can reuse clients written for EMR to submit jobs to existing hadoop clusters, poll job status, terminate a hadoop instance, and do other things on CloudStack based clouds. There are eight actions<ulink url="http://docs.aws.amazon.com/ElasticMapReduce/latest/API/API_Operations.html "><citetitle>[11]</citetitle></ulink> currently supported in the EMR API. I will try to implement as many as I can during the period of GSoC. The following statements give some examples of the API that I will write.
-			</para>
-			<programlisting><![CDATA[
-    https://elasticmapreduce.cloudstack.com?Action=RunJobFlow &Name=MyJobFlowName &Instances.MasterInstanceType=m1.small &Instances.SlaveInstanceType=m1.small &Instances.InstanceCount=4
-]]></programlisting>
-<para>
-This will launch a new hadoop cluster with four instances using specified instance types and add a job flow to it.
-</para>
-<programlisting><![CDATA[
-https://elasticmapreduce.cloudstack.com?Action=AddJobFlowSteps &JobFlowId=j-3UN6WX5RRO2AG &Steps.member.1.Name=MyStep2 &Steps.member.1.HadoopJarStep.Jar=MyJar
-]]></programlisting>
-<para>
-This will add a step to the existing job flow with ID j-3UN6WX5RRO2AG. This step will run the specified jar file.
-</para>
-<programlisting><![CDATA[
-https://elasticmapreduce.cloudstack.com?Action=DescribeJobFlows &JobFlowIds.member.1=j-3UN6WX5RRO2AG
-]]></programlisting>
-<para>
-This will return the status of the given job flow.
-</para>
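Handlers for these actions must unpack EMR's "member list" parameter encoding (Steps.member.1.Name and so on). A sketch of that unpacking, independent of any particular server framework (the helper name is illustrative):

```python
def parse_members(params, prefix):
    """Unpack EMR-style 'Prefix.member.N.Field' query parameters
    into an ordered list of dicts, one per member index."""
    members = {}
    marker = prefix + ".member."
    for key, value in params.items():
        if not key.startswith(marker):
            continue
        # e.g. "Steps.member.1.Name" -> index "1", field "Name"
        index, _, field = key[len(marker):].partition(".")
        members.setdefault(int(index), {})[field] = value
    return [members[i] for i in sorted(members)]

# The AddJobFlowSteps example above, as its handler would see it.
steps = parse_members(
    {"Steps.member.1.Name": "MyStep2",
     "Steps.member.1.HadoopJarStep.Jar": "MyJar"},
    "Steps")
```

With the parameters normalised this way, each EMR action reduces to calls into the hadoop provisioning layer built earlier.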
-	</section>
-
-	<section id="Roadmap">
-		<title>Roadmap</title>
-		
-		<para><emphasis role="bold">Jun. 17 ∼ Jun. 30</emphasis> </para>
-		<orderedlist>
-		<listitem><para>
-		Learn CloudStack and Apache Whirr/Provisionr APIs; Deploy a CloudStack cluster.
-		</para></listitem>
-		
-		<listitem><para>
-		Identify how EC2_USER_DATA is passed and executed on each CloudStack instance.
-		</para></listitem>
-		<listitem><para>
-		Figure out how the files passed in EC2_USER_DATA are acted upon by CloudInit.
-		</para></listitem>
-		<listitem><para>
-		Identify files in /etc/init/ that are used or modified by Whirr and Provisionr for hadoop related configuration.
-		</para></listitem>
-		<listitem><para>
-		Deploy a hadoop cluster on CloudStack via Whirr/Provisionr. This is to test what is missing in CloudStack or Whirr/Provisionr in terms of their support for each other.
-		</para></listitem>
-		</orderedlist>
-		<para><emphasis role="bold">Jul. 1∼ Aug. 1</emphasis> </para>
-		<orderedlist>
-		<listitem><para>
-		Write scripts to configure VM hostname on CloudStack with the help of CloudInit;
-		</para></listitem>
-		<listitem><para>
-		Write scripts to distribute SSH keys among CloudStack instances. Add the capability of using user-provided ssh key for authentication to CloudStack.
-		</para></listitem>
-		<listitem><para>
-		Take care of the other things left for hadoop provisioning, such as mounting disks, installing hadoop tarballs, etc.
-		</para></listitem>
-		<listitem><para>
-		Compose the files that need to be passed in EC2_USER_DATA to each CloudStack instance. Test these files and write patches to make sure that Whirr/Provisionr can successfully deploy one-click hadoop clusters on CloudStack.
-		</para></listitem>
-		</orderedlist>
-		<para><emphasis role="bold">Aug. 3 ∼ Sep. 8</emphasis> </para>
-		<orderedlist>
-		<listitem><para>
-		Design and build an Elastic Mapreduce API for CloudStack that takes control of hadoop cluster creation and management.
-		</para></listitem>
-		<listitem><para>
-		Implement the eight actions defined in EMR API. This task might take a while.
-		</para></listitem>
-		
-		</orderedlist>
-		<para><emphasis role="bold">Sep. 10 ∼ Sep. 23</emphasis> </para>
-		<orderedlist>
-		<listitem><para>
-		
-		Code cleanup and documentation wrap-up.
-
-		</para></listitem>
-		
-		</orderedlist>
-		
-		
-	</section>
-
-	<section id="Deliverables-meng">
-		<title>Deliverables</title>
-		<orderedlist>
-		<listitem><para>
-		
- Whirr has limited support for CloudStack. Check what’s missing and make sure all steps are properly documented on the Whirr and CloudStack websites.
-		</para></listitem>
-		<listitem><para>
-		Contribute code to CloudStack and send patches to Whirr/Provisionr if necessary to enable hadoop provisioning on CloudStack via Whirr/Provisionr.
-		</para></listitem>
-		<listitem><para>
-		Build an EMR-compatible API for CloudStack.
-		</para></listitem>
-		</orderedlist>
-		</section>
-			<section id="Nice-to-have">
-		<title>Nice to have</title>
-		<para>In addition to the required deliverables, it’s nice to have the following:</para>
-		<orderedlist>
-		<listitem><para>
-		
- The capability to add and remove hadoop nodes dynamically to enable elastic hadoop clusters on CloudStack.
-
-		</para></listitem>
-		<listitem><para>
-		A review of the existing tools that offer one-click provisioning, making sure that they support CloudStack-based clouds.
-		</para></listitem>
-		</orderedlist>
-	</section>
-
-			<section id="References">
-		<title>References</title>
-		
-		<orderedlist>
-		<listitem><para>
-		
- http://whirr.apache.org/
-		</para></listitem>
-		<listitem><para>
-		http://www.jclouds.org/documentation/gettingstarted/what-is-jclouds/
-		</para></listitem>
-		<listitem><para>
-		Katarzyna Keahey, Tim Freeman, Contextualization: Providing One-Click Virtual Clusters
-		</para></listitem>
-		<listitem><para>
-		http://www.nimbusproject.org/docs/current/clouds/clusters2.html
-		</para></listitem>
-		<listitem><para>
-		http://aws.amazon.com/amazon-linux-ami/
-		</para></listitem>
-		<listitem><para>
-		https://svn.apache.org/repos/asf/whirr/branches/contrib-python/src/py/hadoop/cloud/data/hadoop-ec2-init-remote.sh
-		</para></listitem>
-		<listitem><para>
-		https://help.ubuntu.com/community/CloudInit
-		</para></listitem>
-		<listitem><para>
-		http://cloudstack.apache.org/docs/en-US/Apache_CloudStack/4.0.2/html/Installation_Guide/using-sshkeys.html
-		</para></listitem>
-		<listitem><para>
-		https://cwiki.apache.org/CLOUDSTACK/allow-user-provided-hostname-internal-vm-name-on-hypervisor-instead-of-cloud-platform-auto-generated-name-for-guest-vms.html
-		</para></listitem>
-		<listitem><para>
-http://docs.aws.amazon.com/ElasticMapReduce/latest/API/Welcome.html
-		</para></listitem>
-		<listitem><para>
-		http://docs.aws.amazon.com/ElasticMapReduce/latest/API/API_Operations.html
-		</para></listitem>
-		<listitem><para>
-		http://buildacloud.org/blog/235-puppet-and-cloudstack.html
-		</para></listitem>
-		<listitem><para>
-http://chriskleban-internet.blogspot.com/2012/03/build-cloud-cloudstack-instance.html
-		</para></listitem>
-		<listitem><para>
-		http://gehrcke.de/2009/06/aws-about-api/
-		</para></listitem>
-		<listitem><para>
-		Apache_CloudStack-4.0.0-incubating-API_Developers_Guide-en-US.pdf
-		</para></listitem>
-		
-		</orderedlist>
-	</section>
-	
-</section>

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/5586a221/docs/en-US/gsoc-midsummer-dharmesh.xml
----------------------------------------------------------------------
diff --git a/docs/en-US/gsoc-midsummer-dharmesh.xml b/docs/en-US/gsoc-midsummer-dharmesh.xml
deleted file mode 100644
index 9e0fdcf..0000000
--- a/docs/en-US/gsoc-midsummer-dharmesh.xml
+++ /dev/null
@@ -1,193 +0,0 @@
-<?xml version='1.0' encoding='utf-8' ?>
-<!DOCTYPE section PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY % BOOK_ENTITIES SYSTEM "CloudStack_GSoC_Guide.ent">
-%BOOK_ENTITIES;
-]>
-
-<!-- Licensed to the Apache Software Foundation (ASF) under one
- or more contributor license agreements.  See the NOTICE file
- distributed with this work for additional information
- regarding copyright ownership.  The ASF licenses this file
- to you under the Apache License, Version 2.0 (the
- "License"); you may not use this file except in compliance
- with the License.  You may obtain a copy of the License at
- 
-   http://www.apache.org/licenses/LICENSE-2.0
- 
- Unless required by applicable law or agreed to in writing,
- software distributed under the License is distributed on an
- "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
- KIND, either express or implied.  See the License for the
- specific language governing permissions and limitations
- under the License.
--->
-
-<section id="gsoc-midsummer-dharmesh">
-    <title>Dharmesh's Mid-Summer Progress Updates</title>
-    <para>This section describes Dharmesh's progress on the project "Integration project to deploy and use Mesos on a CloudStack based cloud"</para>
-
-    <section id="dharmesh-introduction">
-        <title>Introduction</title>
-        <para>
-        	I am lagging a little behind the timeline of my project. After the community bonding period, I explored several options. My mentor, Sebastian, along with several others from the community, has been really helpful. Alongside my GSoC project I took up the task of resolving CLOUDSTACK-212, and it has been a wonderful experience. I am putting my best effort into completing the Mesos integration as described in my proposal.
-        </para>
-    </section>
-
-    <section id="cloudstack-212">
-    	<title>CLOUDSTACK-212 "Switch java package structure from com.cloud to org.apache"</title>
-    	<para>   	
-    		CLOUDSTACK-212 (https://issues.apache.org/jira/browse/CLOUDSTACK-212) is about migrating the old com.cloud package structure to the new org.apache structure, to reflect the project's move to the Apache Software Foundation.
-        </para>
-        <para>
-            Rohit had taken the initiative and had already refactored the cloud-api project to the new package. When I looked at this bug, I thought it was a pretty straightforward task. I was not quite correct.
-        </para>
-        <para>
-            I used Eclipse's refactoring capabilities for most of the work: context-menu -> Refactor -> Rename, with the "update references", "variable/method names", and "textual references" check-boxes checked. As suggested, I disabled the autobuild option; I also disabled the CVS plugins, because, as the Eclipse community had warned, plugin indexing during a long refactoring interferes with it and leaves garbled code. Even with these precautions, I noticed that Eclipse mangled some of the imports, and especially bean names in XML files. After correcting those manually, I still got many test case failures. Upon investigation, I found that the errors were caused by the resource folders of the test cases. In short, I learned a lot.
-        </para>
-        <para>
-            Because of active development on the master branch, new merges would land in the window between my creating a master-rebased patch, applying and testing it, and a committer checking its applicability, so the patch would fail to apply. After several such attempt cycles, it became clear that this was not a good approach.
-            So, after discussion with senior members of the community, a separate branch "namespacechanges" was created and I applied all the code refactoring there. One of the committers, Dave, will then cherry-pick the changes to master while other merges are frozen. I submitted the patch as planned on the 19th, and it is currently being reviewed.
-        </para>
-        <para>
-            One of the great advantages of working on this bug was that I gained a much better understanding of the CloudStack codebase. My understanding of unit testing with Maven has also become much clearer.
-    	</para>
-    </section>
-
-    <section id="mesos-integration">
-        <title>Mesos integration with cloudstack</title>
-        <para>There are multiple ways of implementing the project. I have explored the following options, with their specific pros and cons.</para>
-        
-
-        <section id="mesos-script">
-            <title>Shell script to boot and configure mesos</title>
-            <para>The idea is to write a shell script to automate all the steps involved in running Mesos over CloudStack. This is a very flexible option, as we have the full power of the shell.</para>
-            <itemizedlist>
-            <listitem>
-                <para>create security groups for master, slave and zookeeper.</para>
-            </listitem>
-            <listitem>
-                <para>get latest AMI number and get the image.</para>
-            </listitem>
-            <listitem>
-                <para>create device mapping</para>
-            </listitem>
-            <listitem>
-                <para>launch slave</para>
-            </listitem>
-            <listitem>
-                <para>launch master</para>
-            </listitem>
-            <listitem>
-                <para>launch zookeeper</para>
-            </listitem>
-            <listitem>
-                <para>wait for instances to come up</para>
-            </listitem>
-            <listitem>
-                <para>ssh-copy-ids</para>
-            </listitem>
-            <listitem>
-                <para>rsync</para>
-            </listitem>
-            <listitem>
-                <para>run mesos setup script</para>
-            </listitem>
-            </itemizedlist>
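The steps above form an ordered plan with per-role security groups. A minimal sketch of that plan as data follows; every step and group name is hypothetical, and the ports are typical defaults (5050 for the Mesos master, 2181 for ZooKeeper), not values taken from the proposal.

```python
# Sketch of the provisioning order above as data (all names hypothetical).
SECURITY_GROUPS = {
    "mesos-master": [5050, 22],   # 5050: typical Mesos master port
    "mesos-slave": [22],
    "zookeeper": [2181, 22],      # 2181: typical ZooKeeper client port
}

BOOT_PLAN = [
    "create_security_groups",
    "fetch_latest_ami",
    "create_device_mapping",
    "launch_slaves",
    "launch_master",
    "launch_zookeeper",
    "wait_for_instances",
    "copy_ssh_ids",
    "rsync_mesos_dist",
    "run_mesos_setup",
]

def next_step(done):
    """Return the first step of the plan not yet completed."""
    for step in BOOT_PLAN:
        if step not in done:
            return step
    return None
```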
-            
-            <para>Since a shell script already exists within the Mesos codebase to create and configure a Mesos cluster on AWS, the idea is to reuse that script through the CloudStack AWS API. I am currently testing this script.
-            The steps are as follows:</para>
-            <itemizedlist>
-            <listitem>
-                <para>enable aws-api on cloudstack.</para>
-            </listitem>
-            <listitem>
-                <para>create AMI or template with required dependencies.</para>
-            </listitem>
-            <listitem>
-                <para>download mesos.</para>
-            </listitem>
-            <listitem>
-                <para>configure boto environment to use with cloudstack</para>
-            </listitem>
-            <listitem>
-                <para>run mesos-aws script.</para>
-            </listitem>
-            </itemizedlist>
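The "configure boto" step boils down to pointing a boto-based script at CloudStack's EC2-compatible endpoint instead of AWS. The sketch below only assembles the connection parameters from the environment; the host, port (7080), and path (`/awsapi`) are assumptions about a typical CloudStack awsapi deployment, not fixed values.

```python
# Sketch of the boto configuration step: a boto-based script such as
# mesos-ec2 would read its endpoint and credentials from the environment.
import os

def boto_connection_kwargs(env=os.environ):
    """Collect the parameters a boto EC2 connection would need to talk
    to CloudStack's EC2-compatible API instead of AWS itself."""
    return {
        "aws_access_key_id": env["EC2_ACCESS_KEY"],
        "aws_secret_access_key": env["EC2_SECRET_KEY"],
        "host": env.get("EC2_HOST", "localhost"),   # management server
        "port": int(env.get("EC2_PORT", "7080")),   # awsapi port (assumed)
        "path": env.get("EC2_PATH", "/awsapi"),     # awsapi path (assumed)
        "is_secure": env.get("EC2_SECURE", "0") == "1",
    }

kwargs = boto_connection_kwargs({
    "EC2_ACCESS_KEY": "apikey", "EC2_SECRET_KEY": "secret",
    "EC2_HOST": "mgmt.example.com",
})
```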
-
-            <para>Pros: 
-                <itemizedlist>
-                    <listitem><para>Since the script is part of the Mesos codebase, it will continue to be updated in the future.</para></listitem>
-                </itemizedlist>
-            </para>
-
-        </section>
-
-        <section id="mesos-whirr">
-            <title>WHIRR-121 "Creating Whirr service for mesos"</title>
-            <para>Whirr provides a common API to deploy services to various clouds. Currently, it is highly Hadoop-centric. Tom White had done some work in the Whirr community, but it has not been updated for quite a long time.</para>
-
-            <para>Pros: 
-                <itemizedlist>
-                    <listitem><para>Leverage Whirr API and tools.</para></listitem>
-                </itemizedlist>
-            </para>
-
-            <para>Cons: 
-                <itemizedlist>
-                    <listitem><para>Dependence on yet another tool.</para></listitem>
-                </itemizedlist>
-            </para>
-        </section>
-
-        <section id="mesos-cloudformation">
-            <title>Creating a cloudformation template for mesos</title>
-            <para>The idea is to use AWS CloudFormation APIs/functions, so that the template can be used with any CloudFormation tool. Within CloudStack, the StackMate project is implementing a CloudFormation service.</para>
-
-            <para>Pros: 
-                <itemizedlist>
-                    <listitem><para>Leverage all the available tools for AWS CloudFormation and StackMate.</para></listitem>
-                </itemizedlist>
-                <itemizedlist>
-                    <listitem><para>Potentially can be used on multiple clouds.</para></listitem>
-                </itemizedlist>
-            </para>
-
-            <para>Cons: 
-                <itemizedlist>
-                    <listitem><para>We have to stay within the limits of the AWS CloudFormation API, and otherwise fall back on user-data to pass shell commands, which will not be a maintainable solution in the long term.</para></listitem>
-                </itemizedlist>
-            </para>
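To make the trade-off concrete, here is what a minimal CloudFormation template for a Mesos master might look like, built as plain JSON. The resource name, instance type, and ImageId are hypothetical placeholders; the `UserData` property illustrates the "shell commands through user-data" concern raised above.

```python
# Sketch of a minimal CloudFormation template for a Mesos master.
# All concrete values are placeholders, not a working template.
import json

template = {
    "AWSTemplateFormatVersion": "2010-09-09",
    "Description": "Hypothetical Mesos master (sketch)",
    "Resources": {
        "MesosMaster": {
            "Type": "AWS::EC2::Instance",
            "Properties": {
                "ImageId": "ami-PLACEHOLDER",
                "InstanceType": "m1.large",
                # shell commands squeezed through user-data: the
                # maintainability concern discussed in the cons above
                "UserData": {"Fn::Base64": "#!/bin/bash\n/opt/mesos/setup.sh"},
            },
        }
    },
}

doc = json.dumps(template, indent=2)
```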
-        </section>
-
-    </section>
-
-    <section id="dharmesh-conclusion">
-        <title>Conclusion</title>
-        <para>
-            I am very happy with what I have learned so far on this project, including:
-        </para>
-        <itemizedlist>
-            <listitem>
-                <para>Advanced git commands</para>
-            </listitem>
-            <listitem>
-                <para>Exposure to a very large code base</para>
-            </listitem>
-            <listitem>
-                <para>Hidden features, methods, and bugs of Eclipse that will be useful when refactoring large projects</para>
-            </listitem>
-            <listitem>
-                <para>How unit testing works, especially with Maven</para>
-            </listitem>
-            <listitem>
-                <para>How to evaluate the pros and cons of multiple options that achieve the same functionality</para>
-            </listitem>
-            <listitem>
-                <para>Writing a blog</para>
-            </listitem>
-        </itemizedlist>
-        <para>
-            The experience gained from this project is invaluable, and it is great that the Google Summer of Code program exists.
-        </para>
-    </section>
-</section>