Posted to commits@karaf.apache.org by jb...@apache.org on 2015/03/27 14:14:46 UTC

svn commit: r1669576 - /karaf/site/production/manual/cellar/latest/user-guide/cloud.html

Author: jbonofre
Date: Fri Mar 27 13:14:46 2015
New Revision: 1669576

URL: http://svn.apache.org/r1669576
Log:
[scm-publish] Updating main site with Karaf Cellar manual

Modified:
    karaf/site/production/manual/cellar/latest/user-guide/cloud.html

Modified: karaf/site/production/manual/cellar/latest/user-guide/cloud.html
URL: http://svn.apache.org/viewvc/karaf/site/production/manual/cellar/latest/user-guide/cloud.html?rev=1669576&r1=1669575&r2=1669576&view=diff
==============================================================================
--- karaf/site/production/manual/cellar/latest/user-guide/cloud.html (original)
+++ karaf/site/production/manual/cellar/latest/user-guide/cloud.html Fri Mar 27 13:14:46 2015
@@ -37,14 +37,22 @@
               <h1 id="DiscoveryServices">Discovery Services</h1><p>The Discovery Services allow you to use third party libraries to discover the nodes member of the Cellar cluster.</p><h2 id="jClouds">jClouds</h2><p>Cellar relies on Hazelcast (http://www.hazelcast.com) in order to discover cluster nodes. This can happen either by using unicast, multicast  or specifying the ip address of each node.<br/>See the <a href="../architecture-guide/hazelcast.html">Core Configuration</a> section for details.</p><p>Unfortunately multicast is not allowed in most IaaS providers and the alternative of specifying all IP addresses creates maintenance difficulties, especially since in most cases the addresses are not known in advance.</p><p>Cellar solves this problem using a cloud discovery service powered by jclouds (http://jclouds.apache.org).</p><h3 id="Clouddiscoveryservice">Cloud discovery service</h3><p>Most cloud providers provide cloud storage among other services. Cellar uses the cloud stor
 age via jclouds, in order to determine the IP addresses of each node so that Hazelcast can find them.</p><p>This approach is also called blackboard and refers to the process where each node registers itself in a common storage are so that other nodes know its existence.</p><h3 id="InstallingCellarclouddiscoveryservice">Installing Cellar cloud discovery service</h3><p>To install the cloud discovery service simply install the appropriate jclouds provider and then install cellar-cloud feature.<br/>Amazon S3 is being used here for this example, but the below applies to any provider supported by jclouds.</p><pre>
 karaf@root()> feature:install jclouds-aws-s3
 karaf@root()> feature:install cellar-cloud
-</pre><p>Once the feature is installed, you're required to create a configuration that contains credentials and the type of the cloud storage (aka blobstore).<br/>To do that add a configuration file under the etc folder with the name org.apache.karaf.cellar.cloud-&lt;provider>.cfg and place the following information there:</p><p>provider=aws-s3 (this varies according to the blobstore provider)<br/>identity=&lt;the identity of the blobstore account><br/>credential=&lt;the credential/password of the blobstore account)<br/>container=&lt;the name of the bucket><br/>validity=&lt;the amount of time an entry is considered valid, after that time the entry is removed></p><p>After creating the file the service will check for new nodes. If new nodes are found the Hazelcast instance configuration will be updated and the instance restarted.</p><h2 id="Kubernetesdocker.io">Kubernetes &amp; docker.io</h2><p><a href="http://kubernetes.io">Kubernetes</a> is an open source orchestration system for docker.io containers.<br/>It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches<br/>the users declared intentions.<br/>Using the concepts of "labels", "pods", "replicationControllers" and "services", it groups the containers which make up<br/>an application into logical units for easy management and discovery.<br/>Following the aforementioned concept will most likely change how you package and provision your Karaf based applications.<br/>For instance, you will eventually have to provide a Docker image with a pre-configured Karaf, KAR files in deployment<br/>folder, etc. so that your Kubernetes container may bootstrap everything on boot.</p><p>The Cellar Kubernetes discovery service is a great complement to the Karaf docker.io feature (allowing you to easily<br/>create and manage docker.io images in and for Karaf).</p><h3 id="Kubernetesdiscoveryservice">Kubernetes discovery service</h3><p>In order to determine the IP address of each node, so that Hazelcast can connect to them, the Kubernetes discovery service queries<br/>the Kubernetes API for containers labeled with the <em>pod.label.key</em> and <em>pod.label.key</em> specified in <em>etc/org.apache.karaf.cellar.kubernetes.cfg</em>.<br/>So, you <strong>must be sure</strong> to label your containers (pods) accordingly.<br/><strong>NOTE</strong>: Since environment variables are injected into all Kubernetes containers, they can access said API at:</p><pre>
-http://$KUBERNETES_RO_SERVICE_HOST:$KUBERNETES_RO_SERVICE_PORT
-</pre><p>After a Cellar node starts up, Kubernetes discovery service will configure Hazelcast with currently running Cellar nodes.<br/>Since Hazelcast follows a peer-to-peer all-shared topology, whenever nodes come up and down, the cluster will remain up-to-date.</p><h3 id="InstallingKubernetesdiscoveryservice">Installing Kubernetes discovery service</h3><p>To install the Kubernetes discovery service, simply install cellar-kubernetes feature.</p><pre>
+</pre><p>Once the feature is installed, you're required to create a configuration that contains the credentials and the type of the cloud storage (aka blobstore).<br/>To do that, add a configuration file under the etc folder with the name org.apache.karaf.cellar.cloud-&lt;provider>.cfg and place the following information there:</p><p>provider=aws-s3 (this varies according to the blobstore provider)<br/>identity=&lt;the identity of the blobstore account><br/>credential=&lt;the credential/password of the blobstore account><br/>container=&lt;the name of the bucket><br/>validity=&lt;the amount of time an entry is considered valid; after that time the entry is removed></p><p>For instance, you can create <em>etc/org.apache.karaf.cellar.cloud-mycloud.cfg</em> containing:</p><pre>
+provider=aws-s3
+identity=username
+credential=password
+container=cellar
+validity=360000
+</pre><p>NB: you can find the cloud providers supported by jclouds at http://repo1.maven.org/maven2/org/apache/jclouds/provider/.<br/>You have to install the corresponding jclouds feature for the provider.</p><p>After creating the file, the service will check for new nodes. If new nodes are found, the Hazelcast instance configuration will be updated and the instance restarted.</p><h2 id="Kubernetesdocker.io">Kubernetes &amp; docker.io</h2><p><a href="http://kubernetes.io">Kubernetes</a> is an open source orchestration system for docker.io containers.<br/>It handles scheduling onto nodes in a compute cluster and actively manages workloads to ensure that their state matches<br/>the users' declared intentions.<br/>Using the concepts of "labels", "pods", "replicationControllers" and "services", it groups the containers which make up<br/>an application into logical units for easy management and discovery.<br/>Following the aforementioned concepts will most likely change how you package and provision your Karaf-based applications.<br/>For instance, you will eventually have to provide a Docker image with a pre-configured Karaf, KAR files in the deployment<br/>folder, etc. so that your Kubernetes container can bootstrap everything on boot.</p><p>The Cellar Kubernetes discovery service is a great complement to the Karaf docker.io feature (allowing you to easily<br/>create and manage docker.io images in and for Karaf).</p><h3 id="Kubernetesdiscoveryservice">Kubernetes discovery service</h3><p>In order to determine the IP address of each node, so that Hazelcast can connect to them, the Kubernetes discovery service queries<br/>the Kubernetes API for containers labeled with the <em>pod.label.key</em> and <em>pod.label.value</em> specified in <em>etc/org.apache.karaf.cellar.kubernetes-name.cfg</em>.<br/>The name in <em>etc/org.apache.karaf.cellar.kubernetes-name.cfg</em> is a name of your choice. It allows you to create multiple Kubernetes discovery services.<br/>Thanks to that, the Cellar nodes can be discovered across different Kubernetes clusters.</p><p>So, you <strong>must be sure</strong> to label your containers (pods) accordingly.</p><p>After a Cellar node starts up, the Kubernetes discovery service will configure Hazelcast with the currently running Cellar nodes.<br/>Since Hazelcast follows a peer-to-peer all-shared topology, whenever nodes come up and down, the cluster will remain up-to-date.</p><h3 id="InstallingKubernetesdiscoveryservice">Installing Kubernetes discovery service</h3><p>To install the Kubernetes discovery service, simply install the cellar-kubernetes feature.</p><pre>
 karaf@root()> feature:install cellar-kubernetes
-</pre><p>Once the feature is installed, a new configuration file for the Kubernetes discovery service will live in etc/org.apache.karaf.cellar.kubernetes.cfg with the following contents:</p><pre>
-#
-# Label selector used to identify Cellar nodes in Kubernetes cluster
-#
-pod.label.key = name
-pod.label.value = cellar
+</pre><p>Once the cellar-kubernetes feature is installed, you have to create the Kubernetes provider configuration file.<br/>If you have multiple Kubernetes instances, create one configuration file per instance.</p><p>For instance, you can create <em>etc/org.apache.karaf.cellar.kubernetes-myfirstcluster.cfg</em> containing:</p><pre>
+host=localhost
+port=8080
+pod.label.key=name
+pod.label.value=cellar
+</pre><p>and another one <em>etc/org.apache.karaf.cellar.kubernetes-mysecondcluster.cfg</em> containing:</p><pre>
+host=192.168.134.2
+port=8080
+pod.label.key=name
+pod.label.value=cellar
 </pre><p>In case you change the file, the discovery service will check again for new nodes. If new nodes are found, the Hazelcast configuration will be<br/>updated and the instance restarted.</p>
\ No newline at end of file
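A note on labeling: the Kubernetes discovery service only finds pods that carry the label configured above (pod.label.key=name, pod.label.value=cellar in the sample configuration files). As a minimal sketch, assuming a pod named karaf-node-1 (the pod name is hypothetical), the label can be applied and checked with kubectl:
<pre>
# attach the label the discovery service queries for (name=cellar)
kubectl label pods karaf-node-1 name=cellar

# list the pods that currently match that selector
kubectl get pods -l name=cellar
</pre>
Pods created from a manifest can equivalently declare the same key/value pair under metadata.labels instead of labeling them after creation.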
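Once a discovery service is configured and the nodes have found each other, you can verify the cluster from any Karaf instance. Assuming a Cellar version where the node-listing command is named as below, the member list can be checked from the shell:
<pre>
karaf@root()> cluster:node-list
</pre>
Every discovered node should show up in the list; if one is missing, double-check the pod labels and the host/port (for Kubernetes) or the provider credentials and container name (for the jclouds blobstore) in the corresponding etc/*.cfg file.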