Posted to commits@cloudstack.apache.org by se...@apache.org on 2013/06/23 17:09:53 UTC

[1/2] git commit: updated refs/heads/ACS101 to 7ddf787

Updated Branches:
  refs/heads/ACS101 52787777e -> 7ddf787f6


finished whirr and started saltcloud


Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
Commit: http://git-wip-us.apache.org/repos/asf/cloudstack/commit/c30152c6
Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/c30152c6
Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/c30152c6

Branch: refs/heads/ACS101
Commit: c30152c63761fea194238e42e34d57444c2ad7fc
Parents: 5278777
Author: Sebastien Goasguen <ru...@gmail.com>
Authored: Sat Jun 22 10:37:45 2013 -0400
Committer: Sebastien Goasguen <ru...@gmail.com>
Committed: Sat Jun 22 10:37:45 2013 -0400

----------------------------------------------------------------------
 docs/acs101/en-US/Wrappers.xml  |   2 +-
 docs/acs101/en-US/saltcloud.xml | 106 +++++++++++++++++++++++++++++----
 docs/acs101/en-US/whirr.xml     | 111 ++++++++++++++++++++++++++++++++++-
 3 files changed, 207 insertions(+), 12 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cloudstack/blob/c30152c6/docs/acs101/en-US/Wrappers.xml
----------------------------------------------------------------------
diff --git a/docs/acs101/en-US/Wrappers.xml b/docs/acs101/en-US/Wrappers.xml
index 70f2977..b963e94 100644
--- a/docs/acs101/en-US/Wrappers.xml
+++ b/docs/acs101/en-US/Wrappers.xml
@@ -26,7 +26,7 @@
 <chapter id="Wrappers">
   <title>Wrappers</title>
   <para>
-    This is a test paragraph
+    In this chapter we introduce several &PRODUCT; <emphasis>wrappers</emphasis>. These tools use the client libraries presented in the previous chapter and add functionality that involves some high-level orchestration. For instance <emphasis>knife-cloudstack</emphasis> uses the power of <ulink url="http://opscode.com">Chef</ulink>, the configuration management system, to seamlessly bootstrap instances running in a &PRODUCT; cloud. Apache <ulink url="http://whirr.apache.org">Whirr</ulink> uses <ulink url="http://jclouds.incubator.apache.org">jclouds</ulink> to bootstrap <ulink url="http://hadoop.apache.org">Hadoop</ulink> clusters in the cloud, and Pallet does the same thing but using the Clojure language.
   </para>
 
   <xi:include href="knife-cloudstack.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/c30152c6/docs/acs101/en-US/saltcloud.xml
----------------------------------------------------------------------
diff --git a/docs/acs101/en-US/saltcloud.xml b/docs/acs101/en-US/saltcloud.xml
index fe6f306..9f531b9 100644
--- a/docs/acs101/en-US/saltcloud.xml
+++ b/docs/acs101/en-US/saltcloud.xml
@@ -22,14 +22,100 @@
  under the License.
 -->
 
-<section id="saltcloud">
-    <title>Saltcloud</title>
-    <para>Salt is an alternative to Chef and Puppet  ovides a <emphasis>Cloud in a box</emphasis>.</para>
-    <note>
-        <para>DevCloud is provided as a convenience by community members. It is not an official &PRODUCT; release artifact.</para>
-        <para>The &PRODUCT; source code however, contains tools to build your own DevCloud.</para>
-    </note>
-    <warning>
-        <para>Storm is </para>
-    </warning>
+<section id="salt">
+    <title>Salt</title>
+    <para><ulink url="http://saltstack.com">Salt</ulink> is a configuration management system written in Python. It can be seen as an alternative to Chef and Puppet. Its concept is similar: a master node holds states called <emphasis>salt states (SLS)</emphasis>, and minions get their configuration from the master. A nice difference with Chef and Puppet is that Salt is also a remote execution engine and can be used to execute commands on the minions by specifying a set of targets. In this chapter we introduce Salt and dive into <ulink url="http://saltcloud.org">SaltCloud</ulink>, open source software used to provision <emphasis>Salt</emphasis> masters and minions in the Cloud. <emphasis>SaltCloud</emphasis> can be seen as an alternative to <emphasis>knife-cs</emphasis>, albeit with less functionality.
+	</para>
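+    <para>
+        As a quick illustration of the remote execution side, the sketch below uses the Salt Python client to target minions from the master. This is only a minimal example and assumes it runs on the master as a user allowed to talk to the Salt master daemon:
+    </para>
+    <programlisting>
+<![CDATA[
+import salt.client
+
+# connect to the local Salt master and run test.ping on every minion
+local = salt.client.LocalClient()
+print local.cmd('*', 'test.ping')
+
+# target a subset of minions by glob and run an arbitrary command
+print local.cmd('web*', 'cmd.run', ['uptime'])
+]]>
+    </programlisting>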
+
+    <section id="intro-to-salt">
+    <title>Quick Introduction to Salt</title>
+    <para>
+    </para>
+
+    </section>
+
+    <section id="salt-cloud">
+    <title>SaltCloud Installation and Usage</title>
+	    <para>
+	        To install SaltCloud one simply clones the git repository. To develop SaltCloud, fork it on github, clone your fork, commit patches and submit a pull request. SaltCloud depends on libcloud, therefore you will need libcloud installed as well; see the previous chapter to set up libcloud. With SaltCloud installed and in your path, you need to define a Cloud provider in <emphasis>~/.saltcloud/cloud</emphasis>. For example:
+	    </para>
+	    <programlisting>
+	<![CDATA[
+	providers:
+	  exoscale:
+	    apikey: <your api key> 
+	    secretkey: <your secret key>
+	    host: api.exoscale.ch
+	    path: /compute
+	    securitygroup: default
+	    user: root
+	    private_key: ~/.ssh/id_rsa
+	    provider: cloudstack
+	]]>
+	    </programlisting>
+	    <para>
+	        The apikey, secretkey, host, path and provider keys are mandatory. The securitygroup key specifies which security group to use when starting the instances in that cloud. The user is the username used to connect to the instances via ssh, and private_key is the ssh key to use. Note that the optional parameters are specific to the Cloud this was tested on; Clouds with advanced zones especially will need a different setup.
+	    </para>
+	    <warning><para>
+	        SaltCloud uses libcloud. Support for advanced zones in libcloud is still experimental, therefore using SaltCloud in an advanced zone will likely require some development in libcloud.</para>
+	    </warning>
+		<para>
+	        Once a provider is defined, we can start using salt-cloud to list the zones, the service offerings and the templates available on that cloud provider. So far this is nothing more than what libcloud provides. For example:
+	    </para>
+	    <programlisting>
+	$salt-cloud --list-locations exoscale
+	$salt-cloud --list-images exoscale
+	$salt-cloud --list-sizes exoscale
+	    </programlisting>
+	    <para>
+	        To start creating instances and configuring them with Salt, we need to define node profiles in <emphasis>~/.saltcloud/config</emphasis>. To illustrate two different profiles we show a Salt Master and a Minion. The Master needs a specific template (image:uuid) and a service offering or instance type (size:uuid). In a basic zone with keypair access and security groups, one also needs to specify which keypair to use and where to listen for ssh connections, and of course you need to define the provider (e.g. exoscale in our case, defined above). Below is the node profile for a Salt Master deployed in the Cloud:
+	    </para>
+	    <programlisting>
+	<![CDATA[
+	ubuntu-exoscale-master:
+	    provider: exoscale
+	    image: 1d16c78d-268f-47d0-be0c-b80d31e765d2 
+	    size: b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8 
+	    ssh_interface: public
+	    ssh_username: root
+	    keypair: exoscale
+	    make_master: True
+	    master:
+	       user: root
+	       interface: 0.0.0.0
+	]]>
+	    </programlisting>
+	    <para>
+	        The master key sets which user to use and which interface to listen on; the make_master key, if set to true, will bootstrap this node as a Salt Master. To create it on our cloud provider simply enter:
+	    </para>
+	    <programlisting>
+	$salt-cloud -p ubuntu-exoscale-master mymaster
+	    </programlisting>
+	    <para>
+	        Where <emphasis>mymaster</emphasis> is going to be the instance name. To create a minion, add a minion node profile in the config file:
+	    </para>
+	    <programlisting>
+	<![CDATA[
+	ubuntu-exoscale-minion:
+	    provider: exoscale
+	    image: 1d16c78d-268f-47d0-be0c-b80d31e765d2
+	    size: b6cd1ff5-3a2f-4e9d-a4d1-8988c1191fe8
+	    ssh_interface: public
+	    ssh_username: root
+	    keypair: exoscale
+	]]>
+	    </programlisting>
+	    <para>
+	        You would then start it with:
+	    </para>
+	    <programlisting>
+	$salt-cloud -p ubuntu-exoscale-minion myminion
+	    </programlisting>
+	    <note>
+	        <para>SaltCloud is still in an early phase of development and has little concept of dependencies between nodes. Therefore, in the example described above, the minion would not know where the master is; this would need to be resolved by hand by passing the IP of the master in the config profile of the minion. However, this may not be a problem if the master already exists and is reachable by the instances.
+	        </para>
+	    </note>
+
+    </section>
+
 </section>

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/c30152c6/docs/acs101/en-US/whirr.xml
----------------------------------------------------------------------
diff --git a/docs/acs101/en-US/whirr.xml b/docs/acs101/en-US/whirr.xml
index 4b14a1a..903c54c 100644
--- a/docs/acs101/en-US/whirr.xml
+++ b/docs/acs101/en-US/whirr.xml
@@ -153,6 +153,16 @@ whirr.endpoint=https://the/endpoint/url
 whirr.image-id=1d16c78d-268f-47d0-be0c-b80d31e765d2
         </programlisting>
         </para>
+        <warning>
+            <para>
+                The example shown above is specific to a production <ulink url="http://exoscale.ch">Cloud</ulink> set up as a basic zone. This cloud uses security groups for isolation between instances, and the proper rules had to be set up by hand. Also note the use of <emphasis>whirr.store-cluster-in-etc-hosts</emphasis>. If set to true, whirr will edit the <emphasis>/etc/hosts</emphasis> file of the nodes and enter the IP addresses. This is handy when DNS resolution is problematic.
+            </para>
+        </warning>
+        <note>
+            <para>
+                To use the Cloudera Hadoop distribution (CDH) as in the example above, you will need to copy the <emphasis>services/cdh/src/main/resources/functions</emphasis> directory to the root of your Whirr source. In this directory you will find the bash scripts used to bootstrap the instances. It may be handy to edit those scripts.
+            </para>
+        </note>
         <para>
             You are now ready to launch a hadoop cluster:
         </para>
@@ -188,8 +198,107 @@ To destroy cluster, run 'whirr destroy-cluster' with the same options used to la
         </programlisting>
         </para>
         <para>
-            After the boostrapping process finishes you should be able to login to your instances and use <emphasis>hadoop</emphasis> or if you are running a proxy on your machine, you will be able to access your hadoop cluster locally. Testing of Whirr for &PRODUCT; is still under <ulink url="https://issues.apache.org/jira/browse/WHIRR-725">investigation</ulink> and the subject of a Google Summer of Code 2013 project. More information will be added as we learn them.
+            After the bootstrapping process finishes, you should be able to log in to your instances and use <emphasis>hadoop</emphasis>, or if you are running a proxy on your machine, you will be able to access your hadoop cluster locally. Testing of Whirr for &PRODUCT; is still under <ulink url="https://issues.apache.org/jira/browse/WHIRR-725">investigation</ulink> and the subject of a Google Summer of Code 2013 project. We have currently identified issues with the use of security groups. Moreover, this was tested on a basic zone; complete testing on an advanced zone is future work.
         </para>
     </section>
 
+    <section id="using-map-reduce">
+    <title>Running Map-Reduce jobs on Hadoop</title>
+        <para>
+        Whirr gives you the ssh command to connect to the instances of your hadoop cluster. Log in to the namenode and browse the hadoop file system that was created:
+        </para>
+        <programlisting>
+$ hadoop fs -ls /
+Found 5 items
+drwxrwxrwx   - hdfs supergroup          0 2013-06-21 20:11 /hadoop
+drwxrwxrwx   - hdfs supergroup          0 2013-06-21 20:10 /hbase
+drwxrwxrwx   - hdfs supergroup          0 2013-06-21 20:10 /mnt
+drwxrwxrwx   - hdfs supergroup          0 2013-06-21 20:11 /tmp
+drwxrwxrwx   - hdfs supergroup          0 2013-06-21 20:11 /user
+        </programlisting>
+		<para>Create a directory to put your input data in:</para>
+        <programlisting>
+$ hadoop fs -mkdir input
+$ hadoop fs -ls /user/sebastiengoasguen
+Found 1 items
+drwxr-xr-x   - sebastiengoasguen supergroup          0 2013-06-21 20:15 /user/sebastiengoasguen/input
+        </programlisting>
+        <para>Create a test input file and put it in the hadoop file system:</para>
+		<programlisting>
+$ cat foobar 
+this is a test to count the words
+$ hadoop fs -put ./foobar input
+$ hadoop fs -ls /user/sebastiengoasguen/input
+Found 1 items
+-rw-r--r--   3 sebastiengoasguen supergroup         34 2013-06-21 20:17 /user/sebastiengoasguen/input/foobar
+        </programlisting>
+        <para>Define the map-reduce environment. Note that this default Cloudera distribution installation uses MRv1. To use YARN, one would have to edit the hadoop.properties file.
+        </para>
+        <programlisting>
+$ export HADOOP_MAPRED_HOME=/usr/lib/hadoop-0.20-mapreduce
+        </programlisting>
+        <para>Start the map-reduce job:</para>
+        <programlisting>
+<![CDATA[
+			$ hadoop jar $HADOOP_MAPRED_HOME/hadoop-examples.jar wordcount input output
+			13/06/21 20:19:59 WARN mapred.JobClient: Use GenericOptionsParser for parsing the arguments. Applications should implement Tool for the same.
+			13/06/21 20:20:00 INFO input.FileInputFormat: Total input paths to process : 1
+			13/06/21 20:20:00 INFO mapred.JobClient: Running job: job_201306212011_0001
+			13/06/21 20:20:01 INFO mapred.JobClient:  map 0% reduce 0%
+			13/06/21 20:20:11 INFO mapred.JobClient:  map 100% reduce 0%
+			13/06/21 20:20:17 INFO mapred.JobClient:  map 100% reduce 33%
+			13/06/21 20:20:18 INFO mapred.JobClient:  map 100% reduce 100%
+			13/06/21 20:20:21 INFO mapred.JobClient: Job complete: job_201306212011_0001
+			13/06/21 20:20:22 INFO mapred.JobClient: Counters: 32
+			13/06/21 20:20:22 INFO mapred.JobClient:   File System Counters
+			13/06/21 20:20:22 INFO mapred.JobClient:     FILE: Number of bytes read=133
+			13/06/21 20:20:22 INFO mapred.JobClient:     FILE: Number of bytes written=766347
+			13/06/21 20:20:22 INFO mapred.JobClient:     FILE: Number of read operations=0
+			13/06/21 20:20:22 INFO mapred.JobClient:     FILE: Number of large read operations=0
+			13/06/21 20:20:22 INFO mapred.JobClient:     FILE: Number of write operations=0
+			13/06/21 20:20:22 INFO mapred.JobClient:     HDFS: Number of bytes read=157
+			13/06/21 20:20:22 INFO mapred.JobClient:     HDFS: Number of bytes written=50
+			13/06/21 20:20:22 INFO mapred.JobClient:     HDFS: Number of read operations=2
+			13/06/21 20:20:22 INFO mapred.JobClient:     HDFS: Number of large read operations=0
+			13/06/21 20:20:22 INFO mapred.JobClient:     HDFS: Number of write operations=3
+			13/06/21 20:20:22 INFO mapred.JobClient:   Job Counters 
+			13/06/21 20:20:22 INFO mapred.JobClient:     Launched map tasks=1
+			13/06/21 20:20:22 INFO mapred.JobClient:     Launched reduce tasks=3
+			13/06/21 20:20:22 INFO mapred.JobClient:     Data-local map tasks=1
+			13/06/21 20:20:22 INFO mapred.JobClient:     Total time spent by all maps in occupied slots (ms)=10956
+			13/06/21 20:20:22 INFO mapred.JobClient:     Total time spent by all reduces in occupied slots (ms)=15446
+			13/06/21 20:20:22 INFO mapred.JobClient:     Total time spent by all maps waiting after reserving slots (ms)=0
+			13/06/21 20:20:22 INFO mapred.JobClient:     Total time spent by all reduces waiting after reserving slots (ms)=0
+			13/06/21 20:20:22 INFO mapred.JobClient:   Map-Reduce Framework
+			13/06/21 20:20:22 INFO mapred.JobClient:     Map input records=1
+			13/06/21 20:20:22 INFO mapred.JobClient:     Map output records=8
+			13/06/21 20:20:22 INFO mapred.JobClient:     Map output bytes=66
+			13/06/21 20:20:22 INFO mapred.JobClient:     Input split bytes=123
+			13/06/21 20:20:22 INFO mapred.JobClient:     Combine input records=8
+			13/06/21 20:20:22 INFO mapred.JobClient:     Combine output records=8
+			13/06/21 20:20:22 INFO mapred.JobClient:     Reduce input groups=8
+			13/06/21 20:20:22 INFO mapred.JobClient:     Reduce shuffle bytes=109
+			13/06/21 20:20:22 INFO mapred.JobClient:     Reduce input records=8
+			13/06/21 20:20:22 INFO mapred.JobClient:     Reduce output records=8
+			13/06/21 20:20:22 INFO mapred.JobClient:     Spilled Records=16
+			13/06/21 20:20:22 INFO mapred.JobClient:     CPU time spent (ms)=1880
+			13/06/21 20:20:22 INFO mapred.JobClient:     Physical memory (bytes) snapshot=469413888
+			13/06/21 20:20:22 INFO mapred.JobClient:     Virtual memory (bytes) snapshot=5744541696
+			13/06/21 20:20:22 INFO mapred.JobClient:     Total committed heap usage (bytes)=207687680
+]]>
+        </programlisting>
+        <para> And you can finally check the output:</para>
+        <programlisting>
+$ hadoop fs -cat output/part-* | head
+this	1
+to		1
+the		1
+a		1
+count	1
+is		1
+test	1
+words	1
+        </programlisting>            
+    </section>
+
 </section>


[2/2] git commit: updated refs/heads/ACS101 to 7ddf787

Posted by se...@apache.org.
more cleanup of docs


Project: http://git-wip-us.apache.org/repos/asf/cloudstack/repo
Commit: http://git-wip-us.apache.org/repos/asf/cloudstack/commit/7ddf787f
Tree: http://git-wip-us.apache.org/repos/asf/cloudstack/tree/7ddf787f
Diff: http://git-wip-us.apache.org/repos/asf/cloudstack/diff/7ddf787f

Branch: refs/heads/ACS101
Commit: 7ddf787f695ab303e2fb50d9d2fa8ba432dcfefc
Parents: c30152c
Author: Sebastien Goasguen <ru...@gmail.com>
Authored: Sat Jun 22 15:45:49 2013 -0400
Committer: Sebastien Goasguen <ru...@gmail.com>
Committed: Sat Jun 22 15:45:49 2013 -0400

----------------------------------------------------------------------
 docs/acs101/en-US/Clientsandshells.xml |   4 +-
 docs/acs101/en-US/clostack.xml         |  36 +++++-----
 docs/acs101/en-US/cloudstackapi.xml    |  33 ++++++---
 docs/acs101/en-US/jcloudscli.xml       |   7 ++
 docs/acs101/en-US/libcloud.xml         | 105 +++++++++++++++++++++++++---
 5 files changed, 147 insertions(+), 38 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/cloudstack/blob/7ddf787f/docs/acs101/en-US/Clientsandshells.xml
----------------------------------------------------------------------
diff --git a/docs/acs101/en-US/Clientsandshells.xml b/docs/acs101/en-US/Clientsandshells.xml
index b4d3dd3..0372fda 100644
--- a/docs/acs101/en-US/Clientsandshells.xml
+++ b/docs/acs101/en-US/Clientsandshells.xml
@@ -26,15 +26,15 @@
 <chapter id="Clientsandshells">
   <title>Clients and Shells</title>
   <para>
-    This is a test paragraph
+    Clients and shells are critical to the ease of use of any API, even more so for Cloud APIs. In this chapter we present the basics of the &PRODUCT; API. We illustrate how to sign requests for the sake of completeness and because it is a very nice exercise for beginners. We then introduce CloudMonkey, the &PRODUCT; CLI and shell, which boasts 100% coverage of the API. While jclouds is a Java library, it can also be used as a CLI or interactive shell; we present jclouds-cli to contrast it with CloudMonkey and introduce jclouds. Apache libcloud is a Python based API wrapper; once installed, a developer can use libcloud to talk to multiple cloud providers and cloud APIs, so it serves a similar role as jclouds but in Python. Clostack is a Clojure client. Clojure has been receiving a lot of attention recently for its clean functional programming style, and it is the basis of Pallet, which we will talk about in the next chapter. Clostack serves as a teaser for Clojure. Finally we cover Boto, the well-known Python Amazon Web Services interface, and show how it can be used with a &PRODUCT; cloud.
   </para>
 
     <xi:include href="cloudstackapi.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
     <xi:include href="cloudmonkey.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
     <xi:include href="jcloudscli.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
     <xi:include href="libcloud.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
-    <xi:include href="clostack.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
     <xi:include href="boto.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
+    <xi:include href="clostack.xml" xmlns:xi="http://www.w3.org/2001/XInclude" />
 
 </chapter>
 

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/7ddf787f/docs/acs101/en-US/clostack.xml
----------------------------------------------------------------------
diff --git a/docs/acs101/en-US/clostack.xml b/docs/acs101/en-US/clostack.xml
index c5ea05d..a1c19fd 100644
--- a/docs/acs101/en-US/clostack.xml
+++ b/docs/acs101/en-US/clostack.xml
@@ -24,9 +24,16 @@
 
 <section id="clostack">
     <title>Clostack, a Clojure client</title>
-    <para>There are many tools available to interface with the &PRODUCT; API. Apache Libcloud is one of those. In this section
-          we provide a basic.</para>
+    <para>There are many tools available to interface with the &PRODUCT; API. Clostack is one of them, a client written in Clojure.</para>
 
+
+    <section id="clojure-intro">
+    <title>Clojure</title>
+        <para>A quick intro to Clojure.</para>
+    </section>
+
+    <section id="clostack-install">
+    <title>Installation and configuration</title>
     <para>To install Libcloud refer to the libcloud website. If you are familiar with Pypi simply do:</para>
     <programlisting>pip install apache-libcloud</programlisting>
 
@@ -36,8 +43,11 @@
 pip install apache-libcloud
 Downloading/unpacking apache-libcloud
     </programlisting>
-    
-    <para>You can then open a Python interactive shell, create an instance of a &PRODUCT; driver and call the available methods via the libcloud API.</para>
+    </section>   
+
+    <section id="clostask-usage">
+    <title>Using clostack</title>
+    <para>With lein install you can start a REPL within the clostack project</para>
 
     <programlisting>
  <![CDATA[
@@ -84,20 +94,6 @@ list-tags                           list-template-permissions           list-tem
 list-volumes                        list-vp-cs                          list-vpc-offerings                  list-vpn-connections                
 list-vpn-customer-gateways          list-vpn-gateways                   list-vpn-users                      list-zones                          
 list?                               
-user=> (list
-list                                list*                               list-accounts                       list-async-jobs                     
-list-capabilities                   list-disk-offerings                 list-event-types                    list-events                         
-list-firewall-rules                 list-hypervisors                    list-instance-groups                list-ip-forwarding-rules            
-list-iso-permissions                list-isos                           list-lb-stickiness-policies         list-load-balancer-rule-instances   
-list-load-balancer-rules            list-network-ac-ls                  list-network-offerings              list-networks                       
-list-os-categories                  list-os-types                       list-port-forwarding-rules          list-private-gateways               
-list-project-accounts               list-project-invitations            list-projects                       list-public-ip-addresses            
-list-remote-access-vpns             list-resource-limits                list-security-groups                list-service-offerings              
-list-snapshot-policies              list-snapshots                      list-ssh-key-pairs                  list-static-routes                  
-list-tags                           list-template-permissions           list-templates                      list-virtual-machines               
-list-volumes                        list-vp-cs                          list-vpc-offerings                  list-vpn-connections                
-list-vpn-customer-gateways          list-vpn-gateways                   list-vpn-users                      list-zones                          
-list?                               
 user=> (def cs (http-client))
 #'user/cs
 user=> cs
@@ -110,11 +106,15 @@ user=> (def cs (http-client :api-secret "Hv97W5UKHG-268UN_UKIzPgw7B0zgnJKdReeUmt
 user=> (list-templates cs :templatefilter "featured")
 ]]>
     </programlisting>
+    </section>
 
+    <section id="clostack-future">
+    <title>Trend</title>
     <para>Clojure seems to be getting a lot of attention these days, mostly for its functional programming aspect.
           It offers a very clean syntax with the strength of java and the rapid prototyping characteristics of scripting languages.
           Frameworks like Pallet are making use of clojure to build advanced cloud services. In the next chapter we will have a quick look at pallet-exoscale
           which lets you create <emphasis>crates</emphasis> in the cloud, defining node dependencies and software packages that need to be configured.
     </para>
+    </section>
 
  </section>

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/7ddf787f/docs/acs101/en-US/cloudstackapi.xml
----------------------------------------------------------------------
diff --git a/docs/acs101/en-US/cloudstackapi.xml b/docs/acs101/en-US/cloudstackapi.xml
index 6d6155c..3707ed7 100644
--- a/docs/acs101/en-US/cloudstackapi.xml
+++ b/docs/acs101/en-US/cloudstackapi.xml
@@ -23,14 +23,22 @@
 -->
 
 <section id="cloudstackapi">
+	<title>The &PRODUCT; API</title>
+	<para>All the functionality of the &PRODUCT; data center orchestrator is exposed via an API server. Github currently hosts over fifteen clients for this API, in various languages. In this section we introduce this API and the signing mechanism. The follow-on sections will introduce clients that already contain a signing method; the signing process is only highlighted for completeness.</para>
+
+    <section id="intro-api">
+    <title>Basics of the API</title>
+    <para>The API is <emphasis>http</emphasis> based, meaning that calls to the API server are made using the http protocol, and the responses are either in XML or JSON format. A request is made of a set of key/value pairs that correspond to the input parameters of each call. The key/value pairs are passed within a URL string, and all calls use the http GET method. As such, the &PRODUCT; API is not a RESTful API but rather a Query API, or REST-like.</para>
+    <para>In isolated testing, one may wish to use the so-called integration port (8096 by default), which makes the API directly accessible without signing requests. In production, the integration port should never be used, and certainly never opened to the public Internet.</para>
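+    <para>As a quick illustration, and assuming a management server running locally with the integration port enabled on 8096, an unsigned call can then be made directly from Python (a sketch for isolated testing only):</para>
+    <programlisting>
+<![CDATA[
+>>> import urllib2
+>>> res = urllib2.urlopen('http://localhost:8096/client/api?command=listUsers&response=json')
+>>> print res.read()
+]]>
+    </programlisting>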
+	</section>
+
+	<section id="signing-request">
     <title>How to sign an API call with Python</title>
-    <para>To illustrate the procedure used to sign API calls we present a step by step interactive session
-          using Python.</para>
+    <para>To illustrate the procedure used to sign API calls, we present a step-by-step interactive session using Python.</para>
     
     <para>First import the required modules:</para>
     <programlisting>
-
- <![CDATA[
+<![CDATA[
 $python
 Python 2.7.3 (default, Nov 17 2012, 19:54:34) 
 [GCC 4.2.1 Compatible Apple Clang 4.1 ((tags/Apple/clang-421.11.66))] on darwin
@@ -43,7 +51,7 @@ Type "help", "copyright", "credits" or "license" for more information.
  ]]>
     </programlisting>
    
-    <para>Define the endpoint of the Cloud, the command that you want to execute and the keys of the user.</para>
+    <para>Define the endpoint of the Cloud, the command that you want to execute, the response type and the keys of the user.</para>
     <programlisting>
  <![CDATA[
 
@@ -55,7 +63,7 @@ Type "help", "copyright", "credits" or "license" for more information.
 >>> secretkey='VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ'
   ]]>
     </programlisting>
-    <para>Build the request string:</para>
+    <para>Build the request string. Each key/value pair is joined with an equal sign, and all the pairs are joined with ampersands.</para>
     <programlisting>
  <![CDATA[
 >>> request_str='&'.join(['='.join([k,urllib.quote_plus(request[k])]) for k in request.keys()])
@@ -64,7 +72,7 @@ Type "help", "copyright", "credits" or "license" for more information.
   ]]>
     </programlisting>
 
-    <para>Compute the signature with hmac, do a 64 bit encoding and a url encoding: </para>
+    <para>Create the string to sign similarly to the request string, but lowercase everything, sort the keys and replace plus signs with %20. Compute the HMAC of the resulting string with the secret key and the SHA-1 algorithm, base64 encode it, strip the trailing newline, and URL encode the result:</para>
     <programlisting>
   <![CDATA[
 >>> sig_str='&'.join(['='.join([k.lower(),urllib.quote_plus(request[k].lower().replace('+','%20'))])for k in sorted(request.iterkeys())]) 
@@ -93,9 +101,16 @@ Type "help", "copyright", "credits" or "license" for more information.
 >>> req
 'http://localhost:8080/client/api?apikey=plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg&command=listUsers&response=json&signature=TTpdDq%2F7j%2FJ58XCRHomKoQXEQds%3D'
 >>> res=urllib2.urlopen(req)
+]]>
+    </programlisting>
+    <para>In this particular example, the response is in JSON format. The first key is <emphasis>listusersresponse</emphasis>, which contains <emphasis>count</emphasis> and a list of the users under <emphasis>user</emphasis>.</para>
+    <programlisting>
+<![CDATA[
 >>> res.read()
 '{ "listusersresponse" : { "count":3 ,"user" : [  {"id":"7ed6d5da-93b2-4545-a502-23d20b48ef2a","username":"admin","firstname":"admin","lastname":"cloud","created":"2012-07-05T12:18:27-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg","secretkey":"VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"}, {"id":"1fea6418-5576-4989-a21e-4790787bbee3","username":"runseb","firstname":"foobar","lastname":"goa","email":"joe@smith.com","created":"2013-04-10T16:52:06-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"Xhsb3MewjJQaXXMszRcLvQI9_NPy_UcbDj1QXikkVbDC9MDSPwWdtZ1bUY1H7JBEYTtDDLY3yuchCeW778GkBA","secretkey":"gIsgmi8C5YwxMHjX5o51pSe0kqs6JnKriw0jJBLceY5b
 gnfzKjL4aM6ctJX-i1ddQIHJLbLJDK9MRzsKk6xZ_w","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"}, {"id":"52f65396-183c-4473-883f-a37e7bb93967","username":"toto","firstname":"john","lastname":"smith","email":"john@smith.com","created":"2013-04-23T04:27:22-0700","state":"enabled","account":"admin","accounttype":1,"domainid":"8a111e58-e155-4482-93ce-84efff3c7c77","domain":"ROOT","apikey":"THaA6fFWS_OmvU8od201omxFC8yKNL_Hc5ZCS77LFCJsRzSx48JyZucbUul6XYbEg-ZyXMl_wuEpECzK-wKnow","secretkey":"O5ywpqJorAsEBKR_5jEvrtGHfWL1Y_j1E4Z_iCr8OKCYcsPIOdVcfzjJQ8YqK0a5EzSpoRrjOFiLsG0hQrYnDA","accountid":"7548ac03-af1d-4c1c-9064-2f3e2c0eda0d"} ] } }'
   ]]>
     </programlisting>
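+    <para>Since the response is JSON, it can also be parsed with the standard <emphasis>json</emphasis> module. A minimal sketch, re-issuing the request built above:</para>
+    <programlisting>
+<![CDATA[
+>>> import json
+>>> users = json.loads(urllib2.urlopen(req).read())['listusersresponse']
+>>> users['count']
+3
+>>> [u['username'] for u in users['user']]
+[u'admin', u'runseb', u'toto']
+]]>
+    </programlisting>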
-    
- </section>
+    <para>To explore the &PRODUCT; API further, use CloudMonkey and/or read the API <ulink url="http://cloudstack.apache.org/apidocumentation">documentation</ulink>, which contains the entire parameter list for each API call.</para>
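+    <para>For convenience, the interactive steps above can be folded into a small helper. The sketch below simply mirrors the session above (quoted pairs, lowercased and sorted string to sign, HMAC-SHA1, base64, URL encoding) and reuses the apikey and secretkey defined earlier; it is not an official client:</para>
+    <programlisting>
+<![CDATA[
+import base64
+import hashlib
+import hmac
+import urllib
+import urllib2
+
+def build_signed_url(baseurl, path, request, secretkey):
+    # key=value pairs, values URL encoded, joined with ampersands
+    request_str = '&'.join(['='.join([k, urllib.quote_plus(request[k])]) for k in request.keys()])
+    # lowercase everything, sort the keys, replace '+' with '%20'
+    sig_str = '&'.join(['='.join([k.lower(), urllib.quote_plus(request[k].lower().replace('+', '%20'))]) for k in sorted(request.iterkeys())])
+    # HMAC-SHA1 with the secret key, base64 encode, strip the newline, URL encode
+    signature = urllib.quote_plus(base64.encodestring(hmac.new(secretkey, sig_str, hashlib.sha1).digest()).strip())
+    return baseurl + path + '?' + request_str + '&signature=' + signature
+
+req = build_signed_url('http://localhost:8080', '/client/api',
+                       {'command': 'listUsers', 'response': 'json', 'apikey': apikey},
+                       secretkey)
+print urllib2.urlopen(req).read()
+]]>
+    </programlisting>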
+    </section>
+
+</section>

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/7ddf787f/docs/acs101/en-US/jcloudscli.xml
----------------------------------------------------------------------
diff --git a/docs/acs101/en-US/jcloudscli.xml b/docs/acs101/en-US/jcloudscli.xml
index 3d2d704..5bda6d5 100644
--- a/docs/acs101/en-US/jcloudscli.xml
+++ b/docs/acs101/en-US/jcloudscli.xml
@@ -30,6 +30,8 @@
        <para>jclouds is undergoing incubation at the Apache Software Foundation; jclouds-cli is available on github. Changes may occur in the software between the time of this writing and the time you read it.</para>
     </warning>
 
+    <section id="installation">
+    <title>Installation and Configuration</title>
     <para>
         First install jclouds-cli via github and build it with maven:
     </para>
@@ -104,6 +106,10 @@ State         Version          Name                                    Repositor
     <note>
        <para>I edited the output of jclouds-cli to gain some space, there are a lot more providers available</para>
     </note>
+    </section>
+
+    <section id="jclouds-cli-usage">
+    <title>Using jclouds CLI</title>
     <para>The &PRODUCT; API driver is not installed by default. Install it with:</para>
 
     <programlisting>
@@ -202,5 +208,6 @@ $ jclouds node info 4e733609-4c4a-4de1-9063-6fe5800ccb10
     </programlisting>
 
     <para>With this short intro, you are well on your way to using jclouds-cli. Check out the interactive shell, the blobstore and chef facility and commit back to this section.</para>
+    </section>
 
 </section>

http://git-wip-us.apache.org/repos/asf/cloudstack/blob/7ddf787f/docs/acs101/en-US/libcloud.xml
----------------------------------------------------------------------
diff --git a/docs/acs101/en-US/libcloud.xml b/docs/acs101/en-US/libcloud.xml
index e7bc353..3cf2652 100644
--- a/docs/acs101/en-US/libcloud.xml
+++ b/docs/acs101/en-US/libcloud.xml
@@ -26,41 +26,82 @@
     <title>Apache Libcloud</title>
     <para>There are many tools available to interface with the &PRODUCT; API. Apache Libcloud is one of those. In this section
           we provide a basic example of how to use Libcloud with &PRODUCT;. It assumes that you have access to a &PRODUCT; endpoint and that you have the API access key and secret key of a user.</para>
-    <para>To install Libcloud refer to the libcloud website. If you are familiar with Pypi simply do:</para>
+
+    <section id="libcloud-installation">
+    <title>Installation</title>
+    <para>To install Libcloud refer to the libcloud <ulink url="http://libcloud.apache.org">website</ulink>. If you are familiar with Pypi simply do:</para>
     <programlisting>pip install apache-libcloud</programlisting>
     <para>You should see the following output:</para>
     <programlisting>
 pip install apache-libcloud
 Downloading/unpacking apache-libcloud
-  Downloading apache-libcloud-0.12.4.tar.bz2 (376kB): 376kB downloaded
-  Running setup.py egg_info for package apache-libcloud
+Downloading apache-libcloud-0.12.4.tar.bz2 (376kB): 376kB downloaded
+Running setup.py egg_info for package apache-libcloud
     
 Installing collected packages: apache-libcloud
-  Running setup.py install for apache-libcloud
+Running setup.py install for apache-libcloud
     
 Successfully installed apache-libcloud
 Cleaning up...
     </programlisting>
-    
-    <para>You can then open a Python interactive shell, create an instance of a &PRODUCT; driver and call the available methods via the libcloud API.</para>
+    <para>
+        Developers will want to clone the repository, for example from the github mirror:
+    </para>
+    <programlisting>
+git clone https://github.com/apache/libcloud.git
+    </programlisting>
+    <para>
+         To install libcloud from the cloned repo, simply do the following from within the clone repository directory:
+    </para>
+    <programlisting>
+sudo python ./setup.py install
+    </programlisting>
+    <note>
+        <para>
+            The &PRODUCT; driver is located in <emphasis>/path/to/libcloud/source/libcloud/compute/drivers/cloudstack.py</emphasis>. File bugs on the libcloud JIRA and submit your patches as an attached file to the JIRA entry.
+        </para>
+    </note>
+    </section>    
+
+    <section id="libcloud-usage">
+    <title>Using Libcloud</title>
+    <para>With libcloud installed either via PyPi or from source, you can now open a Python interactive shell, create an instance of a &PRODUCT; driver and call the available methods via the libcloud API.</para>
+    <para>First you need to import the libcloud modules and create a &PRODUCT; driver.</para>
 
     <programlisting>
- <![CDATA[
+<![CDATA[
 >>> from libcloud.compute.types import Provider
 >>> from libcloud.compute.providers import get_driver
 >>> Driver = get_driver(Provider.CLOUDSTACK)
+]]>
+    </programlisting>
+
+    <para>Then, using your keys and endpoint, create a connection object. Note that this is a local test and thus not secured. If you use a production public cloud, make sure to use SSL properly.</para>
+    <programlisting>   
+<![CDATA[
 >>> apikey='plgWJfZK4gyS3mOMTVmjUVg-X-jlWlnfaUJ9GAbBbf9EdM-kAYMmAiLqzzq1ElZLYq_u38zCm0bewzGUdP66mg'
 >>> secretkey='VDaACYb0LV9eNjTetIOElcVQkvJck_J_QljX_FcHRj87ZKiy0z0ty0ZsYBkoXkY9b7eq1EhwJaw7FF3akA3KBQ'
 >>> host='http://localhost:8080'
 >>> path='/client/api'
->>> conn=Driver(apikey,secretkey,secure='False',host='localhost:8080',path=path)
 >>> conn=Driver(key=apikey,secret=secretkey,secure=False,host='localhost',port='8080',path=path)
+]]>
+    </programlisting>
+
+    <para>With the connection object in hand, you can now use the libcloud base API to list such things as the templates (i.e., images), the service offerings (i.e., sizes) and the zones (i.e., locations):</para>
+    <programlisting>
+<![CDATA[
 >>> conn.list_images()
 [<NodeImage: id=13ccff62-132b-4caf-b456-e8ef20cbff0e, name=tiny Linux, driver=CloudStack  ...>]
 >>> conn.list_sizes()
 [<NodeSize: id=ef2537ad-c70f-11e1-821b-0800277e749c, name=tinyOffering, ram=100 disk=0 bandwidth=0 price=0 driver=CloudStack ...>, <NodeSize: id=c66c2557-12a7-4b32-94f4-48837da3fa84, name=Small Instance, ram=512 disk=0 bandwidth=0 price=0 driver=CloudStack ...>, <NodeSize: id=3d8b82e5-d8e7-48d5-a554-cf853111bc50, name=Medium Instance, ram=1024 disk=0 bandwidth=0 price=0 driver=CloudStack ...>]
 >>> images=conn.list_images()
 >>> offerings=conn.list_sizes()
+]]>
+    </programlisting>
+
+    <para>The create_node method will take an instance name, a template and an instance type as arguments. It will return an instance of a <emphasis>CloudStackNode</emphasis> that has additional extension methods, such as ex_stop and ex_start.</para>
+    <programlisting>
+<![CDATA[
 >>> node=conn.create_node(name='toto',image=images[0],size=offerings[0])
 >>> help(node)
 >>> node.get_uuid()
@@ -69,7 +110,10 @@ Cleaning up...
 u'toto'
 ]]>
     </programlisting>
+    </section>
 
+    <section id="libcloud-basic-zone">
+    <title>Keypairs and Security Groups</title>
     <para>
         I recently added support for keypair management in libcloud. For instance, given a conn object obtained from the previous interactive session:
     </para>
@@ -90,8 +134,51 @@ conn.ex_create_security_group(name='libcloud')
 conn.ex_authorize_security_group_ingress(securitygroupname='llibcloud',protocol='TCP',startport=22,cidrlist='0.0.0.0/0')
 conn.ex_delete_security_group('llibcloud')
     </programlisting>   
+    </section>
+    
+    <section id="libcloud-multi">
+    <title>Multiple Clouds</title>
+    <para>One of the interesting use cases of Libcloud is that you can use multiple Cloud Providers, such as AWS, Rackspace, OpenNebula, vCloud and so on. You can then create Driver instances for each of these clouds and create your own multi cloud application. In the example below we instantiate two libcloud &PRODUCT; drivers, one on <ulink url="http://exoscale.ch">Exoscale</ulink> and the other on <ulink url="http://ikoula.com">Ikoula</ulink>.</para>
+    <programlisting>
+ <![CDATA[
+import os
+import urlparse
+
+from libcloud.compute.types import Provider
+from libcloud.compute.providers import get_driver
+import libcloud.security as sec
+
+Driver = get_driver(Provider.CLOUDSTACK)
 
+apikey=os.getenv('EXOSCALE_API_KEY')
+secretkey=os.getenv('EXOSCALE_SECRET_KEY')
+endpoint=os.getenv('EXOSCALE_ENDPOINT')
+host=urlparse.urlparse(endpoint).netloc
+path=urlparse.urlparse(endpoint).path
 
-    <para>One of the interesting use cases of Libcloud is that you can use multiple Cloud Providers, such as AWS, Rackspace, OpenNebula, vCloud and so on. You can then create Driver instances to each of these clouds and create your own multi cloud application.</para>
+exoconn=Driver(key=apikey,secret=secretkey,secure=True,host=host,path=path)
+
+Driver = get_driver(Provider.CLOUDSTACK)
+
+apikey=os.getenv('IKOULA_API_KEY')
+secretkey=os.getenv('IKOULA_SECRET_KEY')
+endpoint=os.getenv('IKOULA_ENDPOINT')
+host=urlparse.urlparse(endpoint).netloc
+print host
+path=urlparse.urlparse(endpoint).path
+print path
+
+sec.VERIFY_SSL_CERT = False
+
+ikoulaconn=Driver(key=apikey,secret=secretkey,secure=True,host=host,path=path)
+
+drivers = [exoconn, ikoulaconn]
+
+for driver in drivers:
+    print driver.list_locations()
+]]>
+    </programlisting>
+    <note>
+        <para>
+            In the example above, I set my access and secret keys as well as the endpoints as environment variables. Also note the libcloud security module and the VERIFY_SSL_CERT flag. In the case of Ikoula, the SSL certificate used was not verifiable by the CA certificates that libcloud checks. Especially if you use a self-signed SSL certificate for testing, you might have to disable this check as well.
+        </para>
+    </note>
+    <para>From this basic setup you can imagine how you would write an application that manages instances in different Cloud Providers, providing more resiliency to your overall infrastructure.</para>
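+    <para>As a rough sketch of that idea, reusing the <emphasis>drivers</emphasis> list from the example above (the image and offering names below are purely illustrative), one could try each provider in turn and keep the first node that gets created:</para>
+    <programlisting>
+ <![CDATA[
+def create_node_with_failover(drivers, name, image_name, size_name):
+    # try each cloud in turn and return the first node successfully created
+    for driver in drivers:
+        try:
+            image = [i for i in driver.list_images() if i.name == image_name][0]
+            size = [s for s in driver.list_sizes() if s.name == size_name][0]
+            return driver.create_node(name=name, image=image, size=size)
+        except Exception as e:
+            print 'provider failed, trying the next one: %s' % e
+    return None
+
+node = create_node_with_failover(drivers, 'resilient-vm', 'Linux Ubuntu 12.04 64-bit', 'Micro')
+]]>
+    </programlisting>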
+    </section>
 
  </section>