Posted to commits@karaf.apache.org by jb...@apache.org on 2015/01/19 16:09:45 UTC

karaf-cellar git commit: Update the documentation with the latest changes

Repository: karaf-cellar
Updated Branches:
  refs/heads/cellar-2.3.x 403547b08 -> 95c84a8a8


Update the documentation with the latest changes


Project: http://git-wip-us.apache.org/repos/asf/karaf-cellar/repo
Commit: http://git-wip-us.apache.org/repos/asf/karaf-cellar/commit/95c84a8a
Tree: http://git-wip-us.apache.org/repos/asf/karaf-cellar/tree/95c84a8a
Diff: http://git-wip-us.apache.org/repos/asf/karaf-cellar/diff/95c84a8a

Branch: refs/heads/cellar-2.3.x
Commit: 95c84a8a863c496dcad87e705c7ba4c93a672bdb
Parents: 403547b
Author: Jean-Baptiste Onofré <jb...@apache.org>
Authored: Mon Jan 19 16:09:26 2015 +0100
Committer: Jean-Baptiste Onofré <jb...@apache.org>
Committed: Mon Jan 19 16:09:26 2015 +0100

----------------------------------------------------------------------
 manual/src/main/webapp/user-guide/deploy.conf   |  46 +--
 manual/src/main/webapp/user-guide/event.conf    |   8 +-
 manual/src/main/webapp/user-guide/groups.conf   | 362 ++++++++++++-------
 .../src/main/webapp/user-guide/hazelcast.conf   | 115 ++++++
 manual/src/main/webapp/user-guide/index.conf    |   1 +
 .../main/webapp/user-guide/installation.conf    |   2 +
 .../main/webapp/user-guide/introduction.conf    |  55 +--
 manual/src/main/webapp/user-guide/nodes.conf    | 161 ++++++++-
 8 files changed, 524 insertions(+), 226 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/95c84a8a/manual/src/main/webapp/user-guide/deploy.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/deploy.conf b/manual/src/main/webapp/user-guide/deploy.conf
index e78d2f5..69ec859 100644
--- a/manual/src/main/webapp/user-guide/deploy.conf
+++ b/manual/src/main/webapp/user-guide/deploy.conf
@@ -10,27 +10,27 @@ Karaf Cellar is provided as a Karaf features XML descriptor.
 Simply register the Cellar feature URL in your Karaf instance:
 
 {code}
-karaf@root> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.3.2/xml/features
+karaf@root> features:addurl mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.3.4/xml/features
 {code}
 
 Now you have Cellar features available in your Karaf instance:
 
 {code}
 karaf@node1> features:list|grep -i cellar
-[uninstalled] [2.3.2          ] cellar-core                   karaf-cellar-2.3.2 Karaf clustering core
-[uninstalled] [2.5            ] hazelcast                     karaf-cellar-2.3.2 In memory data grid
-[uninstalled] [2.3.2          ] cellar-hazelcast              karaf-cellar-2.3.2 Cellar implementation based on Hazelcast
-[uninstalled] [2.3.2          ] cellar-config                 karaf-cellar-2.3.2 ConfigAdmin cluster support
-[uninstalled] [2.3.2          ] cellar-features               karaf-cellar-2.3.2 Karaf features cluster support
-[uninstalled] [2.3.2          ] cellar-bundle                 karaf-cellar-2.3.2 Bundle cluster support
-[uninstalled] [2.3.2          ] cellar-shell                  karaf-cellar-2.3.2 Cellar shell commands
-[uninstalled] [2.3.2          ] cellar-management             karaf-cellar-2.3.2 Cellar management
-[uninstalled] [2.3.2          ] cellar                        karaf-cellar-2.3.2 Karaf clustering
-[uninstalled] [2.3.2          ] cellar-dosgi                  karaf-cellar-2.3.2 DOSGi support
-[uninstalled] [2.3.2          ] cellar-obr                    karaf-cellar-2.3.2 OBR cluster support
-[uninstalled] [2.3.2          ] cellar-event                  karaf-cellar-2.3.2 OSGi events broadcasting in clusters
-[uninstalled] [2.3.2          ] cellar-cloud                  karaf-cellar-2.3.2 Cloud blobstore support in clusters
-[uninstalled] [2.3.2          ] cellar-webconsole             karaf-cellar-2.3.2 Cellar plugin for Karaf WebConsole
+[uninstalled] [2.3.4          ] cellar-core                   karaf-cellar-2.3.4 Karaf clustering core
+[uninstalled] [2.5            ] hazelcast                     karaf-cellar-2.3.4 In memory data grid
+[uninstalled] [2.3.4          ] cellar-hazelcast              karaf-cellar-2.3.4 Cellar implementation based on Hazelcast
+[uninstalled] [2.3.4          ] cellar-config                 karaf-cellar-2.3.4 ConfigAdmin cluster support
+[uninstalled] [2.3.4          ] cellar-features               karaf-cellar-2.3.4 Karaf features cluster support
+[uninstalled] [2.3.4          ] cellar-bundle                 karaf-cellar-2.3.4 Bundle cluster support
+[uninstalled] [2.3.4          ] cellar-shell                  karaf-cellar-2.3.4 Cellar shell commands
+[uninstalled] [2.3.4          ] cellar-management             karaf-cellar-2.3.4 Cellar management
+[uninstalled] [2.3.4          ] cellar                        karaf-cellar-2.3.4 Karaf clustering
+[uninstalled] [2.3.4          ] cellar-dosgi                  karaf-cellar-2.3.4 DOSGi support
+[uninstalled] [2.3.4          ] cellar-obr                    karaf-cellar-2.3.4 OBR cluster support
+[uninstalled] [2.3.4          ] cellar-event                  karaf-cellar-2.3.4 OSGi events broadcasting in clusters
+[uninstalled] [2.3.4          ] cellar-cloud                  karaf-cellar-2.3.4 Cloud blobstore support in clusters
+[uninstalled] [2.3.4          ] cellar-webconsole             karaf-cellar-2.3.4 Cellar plugin for Karaf WebConsole
 {code}
 
 h2. Starting Cellar
@@ -45,14 +45,14 @@ You can now see the Cellar components (bundles) installed:
 
 {code}
 karaf@node1> la|grep -i cellar
-[  55] [Active     ] [Created     ] [   30] Apache Karaf :: Cellar :: Core (2.3.2)
-[  56] [Active     ] [Created     ] [   31] Apache Karaf :: Cellar :: Utils (2.3.2)
-[  57] [Active     ] [Created     ] [   33] Apache Karaf :: Cellar :: Hazelcast (2.3.2)
-[  58] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Shell (2.3.2)
-[  59] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Config (2.3.2)
-[  60] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Bundle (2.3.2)
-[  61] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Features (2.3.2)
-[  62] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Management (2.3.2)
+[  55] [Active     ] [Created     ] [   30] Apache Karaf :: Cellar :: Core (2.3.4)
+[  56] [Active     ] [Created     ] [   31] Apache Karaf :: Cellar :: Utils (2.3.4)
+[  57] [Active     ] [Created     ] [   33] Apache Karaf :: Cellar :: Hazelcast (2.3.4)
+[  58] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Shell (2.3.4)
+[  59] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Config (2.3.4)
+[  60] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Bundle (2.3.4)
+[  61] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Features (2.3.4)
+[  62] [Active     ] [Created     ] [   40] Apache Karaf :: Cellar :: Management (2.3.4)
 {code}
 
 And Cellar cluster commands are now available:

http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/95c84a8a/manual/src/main/webapp/user-guide/event.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/event.conf b/manual/src/main/webapp/user-guide/event.conf
index aed36cb..8b16438 100644
--- a/manual/src/main/webapp/user-guide/event.conf
+++ b/manual/src/main/webapp/user-guide/event.conf
@@ -7,13 +7,7 @@ h2. Enable OSGi Event Broadcasting support
 OSGi Event Broadcasting is an optional feature. To enable it, you have to install the cellar-event feature:
 
 {code}
-karaf@root> feature:install cellar-event
-{code}
-
-Of course, if Cellar is already installed, you can use Cellar itself to install cellar-event feature on all nodes:
-
-{code}
-karaf@root> cluster:feature-install group cellar-event
+karaf@root> features:install cellar-event
 {code}
 
 h2. OSGi Event Broadcast in action

http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/95c84a8a/manual/src/main/webapp/user-guide/groups.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/groups.conf b/manual/src/main/webapp/user-guide/groups.conf
index 2d49bca..9a108f3 100644
--- a/manual/src/main/webapp/user-guide/groups.conf
+++ b/manual/src/main/webapp/user-guide/groups.conf
@@ -7,12 +7,12 @@ a node within a group.
 By default, the Cellar nodes go into the default group:
 
 {code}
-karaf@node1> cluster:group-list
+karaf@root> cluster:group-list
    Group                  Members
-* [default             ] [vostro.local:5701* ]
+* [default             ] [node1:5701* ]
 {code}
 
-As for node, the starting * shows the local node/group.
+The starting '*' indicates a local group. A local group is a group containing the local node (the node you are connected to).
 
 h2. New group
 
@@ -25,174 +25,266 @@ karaf@root> cluster:group-create test
 For now, the test group doesn't have any nodes:
 
 {code}
-karaf@node1> cluster:group-list
+karaf@root> cluster:group-list
    Group                  Members
-* [default             ] [vostro.local:5701* ]
+* [default             ] [node1:5701* ]
   [test                ] []
 {code}
 
-You can use cluster:group-join, cluster:group-quit, cluster:group-set commands to add/remove a node into a cluster group.
+h2. Group nodes
+
+You can declare a node member of one or more groups:
+
+{code}
+karaf@root> cluster:group-join test
+   Group                  Members
+* [default             ] [node1:5701* ]
+* [test                ] [node1:5701* ]
+{code}
+
+You can specify the node ID as an argument (after the cluster group name).
+The node can be local or remote.
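+For instance, assuming a second node with ID node2:5701 is part of the cluster (a hypothetical node ID, used here for illustration), you could join it to the test group with:
+
+{code}
+karaf@root> cluster:group-join test node2:5701
+{code}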
+
+h2. Clustered Resources and Cluster Groups
+
+h3. Features
+
+Cellar can manipulate features and features repositories on cluster groups.
+
+You can use cluster:feature-* commands or the corresponding MBean for that.
 
-For instance, to set the local into the test cluster group:
+You can list the features repositories on a given cluster group:
 
 {code}
-karaf@node1> cluster:group-join test
+karaf@root> cluster:feature-url-list default
+mvn:org.apache.karaf.cellar/apache-karaf-cellar/2.3.4-SNAPSHOT/xml/features
+mvn:org.apache.karaf.assemblies.features/enterprise/2.3.8/xml/features
+mvn:org.jclouds.karaf/jclouds-karaf/1.4.0/xml/features
+mvn:org.apache.karaf.assemblies.features/standard/2.3.8/xml/features
+mvn:org.ops4j.pax.cdi/pax-cdi-features/0.8.0/xml/features
 {code}
 
-The cluster:group-delete command deletes the given cluster group:
+You can add a repository on a cluster group using the cluster:feature-url-add command:
 
 {code}
-karaf@node1> cluster:group-delete test
+karaf@root> cluster:feature-url-add default mvn:org.apache.activemq/activemq-karaf/5.10.0/xml/features
+{code}
+
+You can remove a repository from a cluster group using the cluster:feature-url-remove command:
+
+{code}
+karaf@root> cluster:feature-url-remove default mvn:org.apache.activemq/activemq-karaf/5.10.0/xml/features
+{code}
+
+You can list the features on a given cluster group:
+
+{code}
+karaf@root> cluster:feature-list default |more
+Features in cluster group default
+ Status        Version          Name
+[uninstalled] [0.8.0          ] pax-cdi-1.1-web-weld
+[uninstalled] [3.0.7.RELEASE  ] spring
+[uninstalled] [1.4.0          ] jclouds-cloudfiles-uk
+[uninstalled] [1.4.0          ] jclouds-aws-s3
+[uninstalled] [1.4.0          ] jclouds-services
+...
+{code}
+
+You can install a feature on a cluster group using the cluster:feature-install command:
+
 {code}
+karaf@root> cluster:feature-install default eventadmin
+{code}
+
+You can uninstall a feature from a cluster group, using the cluster:feature-uninstall command:
+
+{code}
+karaf@root> cluster:feature-uninstall default eventadmin
+{code}
+
+Cellar also provides a feature listener, disabled by default, as you can see in the etc/org.apache.karaf.cellar.node.cfg configuration file:
+
+{code}
+feature.listener = false
+{code}
+
+The listener listens for the following local feature changes:
+* add features repository
+* remove features repository
+* install feature
+* uninstall feature
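+For instance, if you want to enable the feature listener (at your own risk, as enabling listeners can have side effects), you could set the property to true in etc/org.apache.karaf.cellar.node.cfg:
+
+{code}
+feature.listener = true
+{code}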
+
+h3. Bundles
+
+Cellar can manipulate bundles on cluster groups.
+
+You can use cluster:bundle-* commands or the corresponding MBean for that.
+
+You can list the bundles in a cluster group using the cluster:bundle-list command:
+
+{code}
+karaf@root> cluster:bundle-list default |more
+Bundles in cluster group default
+ ID     State        Name
+[0   ] [Active     ] Apache Karaf :: Diagnostic :: Common (2.3.8)
+[1   ] [Active     ] Apache Karaf :: Admin :: Core (2.3.8)
+[2   ] [Active     ] Apache Karaf :: Shell :: OSGi Commands (2.3.8)
+[3   ] [Active     ] Apache Karaf :: Diagnostic :: Command (2.3.8)
+ ...
+{code}
+
+You can install a bundle on a cluster group using the cluster:bundle-install command:
+
+{code}
+karaf@root> cluster:bundle-install default mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.commons-lang/2.4_6
+{code}
+
+You can start a bundle in a cluster group using the cluster:bundle-start command:
 
-h2. Group configuration
+{code}
+karaf@root> cluster:bundle-start default commons-lang
+{code}
 
-You can see the configuration PID associated with a given group, for instance the default group:
+You can stop a bundle in a cluster group using the cluster:bundle-stop command:
 
 {code}
-karaf@root> cluster:config-list default
-PIDs for group:default
-PID                                     
-org.apache.felix.fileinstall.3e4e22ea-8495-4612-9839-a537c8a7a503
-org.apache.felix.fileinstall.1afcd688-b051-4b12-a50e-97e40359b24e
-org.apache.karaf.features               
-org.apache.karaf.log                    
-org.apache.karaf.features.obr           
-org.ops4j.pax.logging                   
-org.apache.karaf.cellar.groups          
-org.ops4j.pax.url.mvn                   
-org.apache.karaf.jaas                   
-org.apache.karaf.shell  
+karaf@root> cluster:bundle-stop default commons-lang
 {code}
 
-You can use the cluster:config-proplist and config-propset commands to list, add and edit the configuration.
+You can uninstall a bundle from a cluster group using the cluster:bundle-uninstall command:
+
+{code}
+karaf@root> cluster:bundle-uninstall default commons-lang
+{code}
 
-For instance, in the test group, we don't have any configuration:
+As for features, Cellar provides a bundle listener, disabled by default in etc/org.apache.karaf.cellar.node.cfg:
 
 {code}
-karaf@root> cluster:config-list test
-No PIDs found for group:test
+bundle.listener = false
 {code}
 
-We can create a tstcfg config in the test group, containing name=value property:
+The bundle listener listens for the following local bundle changes:
+* install bundle
+* start bundle
+* stop bundle
+* uninstall bundle
+
+h3. Configurations
+
+Cellar can manipulate configurations on cluster groups.
+
+You can use cluster:config-* commands or the corresponding MBean for that.
+
+You can list the configurations on a cluster group using the cluster:config-list command:
 
 {code}
-karaf@root> cluster:config-propset test tstcfg name value
+karaf@root> cluster:config-list default |more
+----------------------------------------------------------------
+Pid:            org.apache.karaf.command.acl.jaas
+Properties:
+   update = admin
+   service.pid = org.apache.karaf.command.acl.jaas
+----------------------------------------------------------------
+...
 {code}
 
-Now, we have this property in the test group:
+You can list properties in a config using the cluster:config-proplist command:
 
 {code}
-karaf@root> cluster:config-list test
-PIDs for group:test
-PID                                     
-tstcfg                                  
-karaf@root> cluster:config-proplist test tstcfg
-Property list for PID:tstcfg for group:test
+karaf@root> cluster:config-proplist default org.apache.karaf.jaas
+Property list for configuration PID org.apache.karaf.jaas in cluster group default
 Key                                      Value
-name                                     value
+encryption.prefix                        {CRYPT}
+encryption.name
+encryption.enabled                       false
+encryption.suffix                        {CRYPT}
+encryption.encoding                      hexadecimal
+service.pid                              org.apache.karaf.jaas
+encryption.algorithm                     MD5
 {code}
 
-h2. Group nodes
+You can set or append a value to a config property using the cluster:config-propset or cluster:config-propappend command:
 
-You can define a node member of one of more group:
+{code}
+karaf@root> cluster:config-propset default my.config my.property my.value
+{code}
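+For instance, cluster:config-propappend appends a value to an existing property value instead of replacing it (my.config and my.property are illustrative names):
+
+{code}
+karaf@root> cluster:config-propappend default my.config my.property my.appended.value
+{code}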
+
+You can delete a property in a config using the cluster:config-propdel command:
 
 {code}
-karaf@root> cluster:group-join test node1.local:5701
-  Node                 Group
-  node1:5701 default
-* node2:5702 default
-  node1:5701 test
+karaf@root> cluster:config-propdel default my.config my.property
 {code}
 
-The node can be local or remote.
+You can delete the whole config using the cluster:config-delete command:
+
+{code}
+karaf@root> cluster:config-delete default my.config
+{code}
+
+As for features and bundles, Cellar provides a config listener, disabled by default in etc/org.apache.karaf.cellar.node.cfg:
+
+{code}
+config.listener = false
+{code}
+
+The config listener listens for the following local config changes:
+* create a config
+* add/delete/change a property
+* delete a config
+
+As some properties may be local to a node, Cellar excludes some properties from synchronization by default.
+You can see the currently excluded properties using the cluster:config-propexcluded command:
 
-Now, the nodes of a given group will inherit of all configuration defined in the group. This means that
-node1 now knows the tstcfg configuration because it's a member of the test group:
-
-{code}
-karaf@root> config:edit tstcfg
-karaf@root> proplist
-  service.pid = tstcfg
-  name = value
-{code}
-
-h2. Group features
-
-Configuration and features can be assigned to a given group.
-
-{code}
-karaf@root> cluster:features-list default
-Features for group:default
-Name                                                  Version Status 
-spring-dm                                               1.2.1 true 
-kar                                                     2.3.1 false
-config                                                  2.3.1 true
-http-whiteboard                                         2.3.1 false
-application-without-isolation                             0.3 false 
-war                                                     2.3.1 false
-standard                                                2.3.1 false
-management                                              2.3.1 false
-transaction                                               0.3 false 
-jetty                                         7.4.2.v20110526 false 
-wrapper                                                 2.3.1 false
-jndi                                                      0.3 false 
-obr                                                     2.3.1 false
-jpa                                                       0.3 false 
-webconsole-base                                         2.3.1 false
-hazelcast                                               1.9.3 true 
-eventadmin                                              2.3.1 false
-spring-dm-web                                           1.2.1 false 
-ssh                                                     2.3.1 true
-spring-web                                      3.0.5.RELEASE false 
-hazelcast-monitor                                       1.9.3 false 
-jasypt-encryption                                       2.3.1 false
-webconsole                                              2.3.1 false
-spring                                          3.0.5.RELEASE true 
-{code}
-
-{code}
-karaf@root> cluster:features-list test
-Features for group:test
-Name                                                  Version Status 
-webconsole                                              2.3.1 false
-spring-dm                                               1.2.1 true 
-eventadmin                                              2.3.1 false
-http                                                    2.3.1 false
-war                                                     2.3.1 false
-http-whiteboard                                         2.3.1 false
-obr                                                     2.3.1 false
-spring                                          3.0.5.RELEASE true 
-hazelcast-monitor                                       1.9.3 false 
-webconsole-base                                         2.3.1 false
-management                                              2.3.1 true
-hazelcast                                               1.9.3 true 
-jpa                                                       0.3 false 
-jndi                                                      0.3 false 
-standard                                                2.3.1 false
-jetty                                         7.4.2.v20110526 false 
-application-without-isolation                             0.3 false 
-config                                                  2.3.1 true
-spring-web                                      3.0.5.RELEASE false 
-wrapper                                                 2.3.1 false
-transaction                                               0.3 false 
-spring-dm-web                                           1.2.1 false 
-ssh                                                     2.3.1 true
-jasypt-encryption                                       2.3.1 false
-kar                                                     2.3.1 false
-{code}
-
-Now we can "install" a feature for a given cluster group:
-
-{code}
-karaf@root> cluster:feature-install test eventadmin
-karaf@root> cluster:feature-list test|grep -i event
-eventadmin                                     2.3.1 true
-{code}
-
-Below, we see that the eventadmin feature has been installed on this member of the test group:
-
-{code}
-karaf@root> features:list|grep -i event
-[installed  ] [2.3.1 ] eventadmin                    karaf-2.3.1
 {code}
+karaf@node1()> cluster:config-propexcluded
+service.factoryPid, felix.fileinstall.filename, felix.fileinstall.dir, felix.fileinstall.tmpdir, org.ops4j.pax.url.mvn.defaultRepositories
+{code}
+
+You can modify this list using the same command, or by editing the etc/org.apache.karaf.cellar.node.cfg configuration file:
+
+{code}
+#
+# Excluded config properties from the sync
+# Some config properties can be considered as local to a node, and should not be sync on the cluster.
+#
+config.excluded.properties = service.factoryPid, felix.fileinstall.filename, felix.fileinstall.dir, felix.fileinstall.tmpdir, org.ops4j.pax.url.mvn.defaultRepositories
+{code}
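+For instance, to also exclude a hypothetical my.local.property property from the sync, you could extend this list:
+
+{code}
+config.excluded.properties = service.factoryPid, felix.fileinstall.filename, felix.fileinstall.dir, felix.fileinstall.tmpdir, org.ops4j.pax.url.mvn.defaultRepositories, my.local.property
+{code}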
+
+h3. OBR (optional)
+
+See the [OBR section|obr] for details.
+
+h3. EventAdmin (optional)
+
+See the [EventAdmin section|event] for details.
+
+h2. Blocking policy
+
+You can define a policy to filter the cluster events exchanged by the nodes (inbound or outbound).
+
+It allows you to block or allow some resources on the cluster.
+
+Adding a resource id to a blacklist blocks the resource.
+Adding a resource id to a whitelist allows the resource.
+
+The blocking policies are configured in the etc/org.apache.karaf.cellar.groups.cfg configuration file.
+
+The format of the property keys is:
+
+[CLUSTER_GROUP].[CLUSTER_RESOURCE].[WHITELIST|BLACKLIST].[INBOUND|OUTBOUND]
+
+and the value is a comma-separated list of resource identifiers (the * wildcard is allowed).
+
+For instance, by default, we have:
+
+{code}
+default.features.whitelist.inbound = *
+default.features.whitelist.outbound = *
+default.features.blacklist.inbound = config,management,hazelcast,cellar*
+default.features.blacklist.outbound = config,management,hazelcast,cellar*
+{code}
+
+It means that:
+* for the default cluster group, regarding features on the cluster, we allow all (*) inbound and outbound
+* on the other hand, for the default cluster group, we block config, management, hazelcast and all cellar features inbound and outbound.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/95c84a8a/manual/src/main/webapp/user-guide/hazelcast.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/hazelcast.conf b/manual/src/main/webapp/user-guide/hazelcast.conf
new file mode 100644
index 0000000..ba76906
--- /dev/null
+++ b/manual/src/main/webapp/user-guide/hazelcast.conf
@@ -0,0 +1,115 @@
+h1. Core runtime and Hazelcast
+
+Cellar uses Hazelcast as its cluster engine.
+
+When you install the cellar feature, a hazelcast feature is automatically installed, providing the etc/hazelcast.xml
+configuration file.
+
+The etc/hazelcast.xml configuration file contains all the core configuration, especially:
+* the Hazelcast cluster identifiers (group name and password)
+* network discovery and security configuration
+
+h2. Hazelcast cluster identification
+
+The <group/> element in the etc/hazelcast.xml defines the identification of the Hazelcast cluster:
+
+{code}
+    <group>
+        <name>cellar</name>
+        <password>pass</password>
+    </group>
+{code}
+
+All Cellar nodes have to use the same name and password (to be part of the same Hazelcast cluster).
+
+h2. Network
+
+The <network/> element in the etc/hazelcast.xml contains all the network configuration.
+
+First, it defines the port numbers used by Hazelcast:
+
+{code}
+        <port auto-increment="true" port-count="100">5701</port>
+        <outbound-ports>
+            <!--
+                Allowed port range when connecting to other nodes.
+                0 or * means use system provided port.
+            -->
+            <ports>0</ports>
+        </outbound-ports>
+{code}
+
+Second, the <join/> element defines the mechanism used to discover the Cellar nodes.
+
+By default, Hazelcast uses unicast.
+
+You can also use multicast (enabled by default in Cellar):
+
+{code}
+            <multicast enabled="true">
+                <multicast-group>224.2.2.3</multicast-group>
+                <multicast-port>54327</multicast-port>
+            </multicast>
+{code}
+
+Instead of using multicast, you can also explicitly define the host names (or IP addresses) of the different
+Cellar nodes:
+
+{code}
+            <tcp-ip enabled="true">
+                <interface>127.0.0.1</interface>
+            </tcp-ip>
+{code}
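+Only one discovery mechanism should be enabled at a time. For instance, a sketch of a <join/> element using static TCP/IP discovery with multicast disabled:
+
+{code}
+            <join>
+                <multicast enabled="false"/>
+                <tcp-ip enabled="true">
+                    <interface>127.0.0.1</interface>
+                </tcp-ip>
+            </join>
+{code}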
+
+You can also discover nodes located on an Amazon instance:
+
+{code}
+            <aws enabled="true">
+                <access-key>my-access-key</access-key>
+                <secret-key>my-secret-key</secret-key>
+                <!--optional, default is us-east-1 -->
+                <region>us-west-1</region>
+                <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
+                <host-header>ec2.amazonaws.com</host-header>
+                <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
+                <security-group-name>hazelcast-sg</security-group-name>
+                <tag-key>type</tag-key>
+                <tag-value>hz-nodes</tag-value>
+            </aws>
+{code}
+
+Third, you can specify on which network interface the cluster is running. By default, Hazelcast listens on all interfaces (0.0.0.0).
+But you can specify an interface:
+
+{code}
+        <interfaces enabled="true">
+            <interface>10.10.1.*</interface>
+        </interfaces>
+{code}
+
+Finally, you can also enable transport security on the cluster.
+Two modes are supported:
+* SSL:
+{code}
+        <ssl enabled="true"/>
+{code}
+* Symmetric Encryption:
+{code}
+        <symmetric-encryption enabled="true">
+            <!--
+               encryption algorithm such as
+               DES/ECB/PKCS5Padding,
+               PBEWithMD5AndDES,
+               AES/CBC/PKCS5Padding,
+               Blowfish,
+               DESede
+            -->
+            <algorithm>PBEWithMD5AndDES</algorithm>
+            <!-- salt value to use when generating the secret key -->
+            <salt>thesalt</salt>
+            <!-- pass phrase to use when generating the secret key -->
+            <password>thepass</password>
+            <!-- iteration count to use when generating the secret key -->
+            <iteration-count>19</iteration-count>
+        </symmetric-encryption>
+{code}
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/95c84a8a/manual/src/main/webapp/user-guide/index.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/index.conf b/manual/src/main/webapp/user-guide/index.conf
index 58cda87..e5eca5e 100644
--- a/manual/src/main/webapp/user-guide/index.conf
+++ b/manual/src/main/webapp/user-guide/index.conf
@@ -3,6 +3,7 @@ h1. Karaf Cellar User Guide
 * [Karaf Cellar Introduction|/user-guide/introduction]
 * [Installing Karaf Cellar|/user-guide/installation]
 * [Start Karaf Cellar|/user-guide/deploy]
+* [Core Configuration|/user-guide/hazelcast]
 * [Nodes in Karaf Cellar|/user-guide/nodes]
 * [Groups in Karaf Cellar|/user-guide/groups]
 * [OBR in Karaf Cellar|/user-guide/obr]

http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/95c84a8a/manual/src/main/webapp/user-guide/installation.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/installation.conf b/manual/src/main/webapp/user-guide/installation.conf
index 8260ee9..4f8c113 100644
--- a/manual/src/main/webapp/user-guide/installation.conf
+++ b/manual/src/main/webapp/user-guide/installation.conf
@@ -15,6 +15,8 @@ org.apache.aries.blueprint.synchronous=true
 Karaf Cellar is provided under a Karaf features descriptor. The easiest way to install is just to
 have an internet connection from the Karaf running instance.
 
+See [deploy] for how to install and start Cellar.
+
 h2. Building from Sources
 
 If you intend to build Karaf Cellar from the sources, the requirements are:

http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/95c84a8a/manual/src/main/webapp/user-guide/introduction.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/introduction.conf b/manual/src/main/webapp/user-guide/introduction.conf
index 3273e5a..477d08c 100644
--- a/manual/src/main/webapp/user-guide/introduction.conf
+++ b/manual/src/main/webapp/user-guide/introduction.conf
@@ -4,58 +4,31 @@ h2. Karaf Cellar use cases
 
 The first purpose of Cellar is to synchronize the state of several Karaf instances (named nodes).
 
-It means that all resources modified (installed, started, etc) on one Karaf instance will be propagated to all others
-nodes.
-Concretely, Cellar will broadcast an event to others nodes when you perform an action.
+Cellar provides dedicated shell commands and MBeans to administrate the cluster, and manipulate the resources on the cluster.
 
-The nodes list could be discovered (using multicast/unicast), or explicitly defined (using a couple hostname or IP
+It's also possible to enable local resources listeners: these listeners broadcast local resource changes as cluster events.
+Please note that this behavior is disabled by default as it can have side effects (especially when a node is stopped).
+Enabling listeners is at your own risk.
+
+The nodes list can be discovered (using unicast or multicast), or statically defined (using a hostname or IP
 and port list).
 
 Cellar is able to synchronize:
 - bundles (remote, local, or from an OBR)
 - config
 - features
-- OSGi events (optional)
-- OBR events (optional)
+
+Optionally, Cellar also supports synchronization of OSGi EventAdmin and OBR (URLs and bundles).
 
 The second purpose is to provide a Distributed OSGi runtime. It means that using Cellar, you are able to call an OSGi
 service located on a remote instance. See the [Transport and DOSGi] section of the user guide.
 
-h2. Cellar network
-
-Cellar relies on Hazelcast (http://www.hazelcast.com), a memory data grid implementation.
-
-You have a full access to the Hazelcast configuration (in etc/hazelcast.xml) allowing you to specify the network
-configuration.
-
-Especially, you can enable or not the multicast support and choose the multicast group and port number.
-
-You can also configure on which interface and IP address you configure Cellar and port number used by Cellar:
-
-{code}
-    <network>
-        <port auto-increment="true">5701</port>
-        <join>
-            <multicast enabled="true">
-                <multicast-group>224.2.2.3</multicast-group>
-                <multicast-port>54327</multicast-port>
-            </multicast>
-            <tcp-ip enabled="false">
-                <interface>127.0.0.1</interface>
-            </tcp-ip>
-            <aws enabled="false">
-                <access-key>my-access-key</access-key>
-                <secret-key>my-secret-key</secret-key>
-                <region>us-east-1</region>
-            </aws>
-        </join>
-        <interfaces enabled="false">
-            <interface>10.10.1.*</interface>
-        </interfaces>
-    </network>
-{code}
-
-By default, the Cellar node will start from network port 5701, each new node will use an incremented port number.
+Finally, Cellar also provides "runtime clustering" with dedicated features such as:
+- HTTP load balancing
+- HTTP session replication
+- log centralization
+Please see the sections dedicated to these features.
 
 h2. Cross topology
 

http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/95c84a8a/manual/src/main/webapp/user-guide/nodes.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/nodes.conf b/manual/src/main/webapp/user-guide/nodes.conf
index f9fac0b..f5d09c8 100644
--- a/manual/src/main/webapp/user-guide/nodes.conf
+++ b/manual/src/main/webapp/user-guide/nodes.conf
@@ -10,45 +10,166 @@ and hence tries to discover the others Cellar nodes.
 You can list the known Cellar nodes using the list-nodes command:
 
 {code}
-karaf@node1> cluster:node-list
+karaf@root> cluster:node-list
    ID                               Host Name              Port
-* [vostro.local:5701             ] [vostro.local        ] [ 5701]
+* [172.17.42.1:5701              ] [172.17.42.1         ] [ 5701]
 {code}
 
-The starting * indicates that it's the Karaf instance on which you are logged on (the local node).
+The leading '*' indicates the Karaf instance on which you are logged on (the local node).
 
 h2. Testing nodes
 
 You can ping a node to test it:
 
 {code}
-karaf@node1> cluster:node-ping vostro.local:5701
-PING vostro.local:5701
-from 1: req=vostro.local:5701 time=67 ms
-from 2: req=vostro.local:5701 time=10 ms
-from 3: req=vostro.local:5701 time=8 ms
-from 4: req=vostro.local:5701 time=9 ms
+karaf@root> cluster:node-ping 172.17.42.1:5701
+PING 172.17.42.1:5701
+from 1: req=172.17.42.1:5701 time=15 ms
+from 2: req=172.17.42.1:5701 time=9 ms
+from 3: req=172.17.42.1:5701 time=9 ms
+from 4: req=172.17.42.1:5701 time=10 ms
+from 5: req=172.17.42.1:5701 time=9 ms
+from 6: req=172.17.42.1:5701 time=9 ms
+from 7: req=172.17.42.1:5701 time=9 ms
+from 8: req=172.17.42.1:5701 time=9 ms
+from 9: req=172.17.42.1:5701 time=9 ms
+from 10: req=172.17.42.1:5701 time=9 ms
 {code}
 
-h2. Nodes sync
+h2. Node Components: listener, producer, handler, consumer, and synchronizer
 
-Cellar allows nodes to 'sync' state. It currently covers features, configs, and bundles.
+A Cellar node is actually a set of components, each dedicated to a specific purpose.
 
-For instance, if you install a feature (eventadmin for example) on node1:
+The etc/org.apache.karaf.cellar.node.cfg configuration file is dedicated to the configuration of the local node.
+It's where you can control the status of the different components.
+
+h3. Synchronizers and sync policy
+
+A synchronizer is invoked when:
+* Cellar starts
+* a node joins a cluster group (see [groups] for details about cluster groups)
+* you explicitly call the cluster:sync command
+
+We have a synchronizer per resource: feature, bundle, config, obr (optional).
+
+Cellar supports three sync policies:
+* cluster (default): if the node is the first one in the cluster, it pushes its local state to the cluster; otherwise,
+the node updates its local state from the cluster state (meaning that the cluster is the master).
+* node: in this case, the node is the master, meaning that the cluster state will be overwritten by the node state.
+* disabled: in this case, the synchronizer is not used at all, meaning that neither the node nor the cluster is
+updated (at sync time).
+
+You can configure the sync policy (for each resource, and each cluster group) in the etc/org.apache.karaf.cellar.groups.cfg
+configuration file:
 
 {code}
-karaf@node1> features:install eventadmin
-karaf@node1> features:list|grep -i eventadmin
-[installed  ] [2.3.1 ] eventadmin                    karaf-2.3.1
+default.bundle.sync = cluster
+default.config.sync = cluster
+default.feature.sync = cluster
+default.obr.urls.sync = cluster
 {code}
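+
+For instance, to make the nodes masters for the feature resources in a given cluster group (here a hypothetical group
+named mygroup), you could override the policy for that group (a sketch, assuming the group already exists):
+
+{code}
+mygroup.feature.sync = node
+{code}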
 
-You can see that the eventadmin feature has been installed on node2:
+The cluster:sync command allows you to "force" the sync:
 
 {code}
-karaf@node2> features:list|grep -i eventadmin
-[installed  ] [2.3.1 ] eventadmin                    karaf-2.3.1
+karaf@root> cluster:sync
+Synchronizing cluster group default
+        bundle: done
+        config: done
+        feature: done
+        obr.urls: No synchronizer found for obr.urls
 {code}
 
-Features uninstall works in the same way. Basically, Cellar synchronisation is completely transparent.
+It's also possible to sync only one resource using:
+* -b (--bundle) for bundle
+* -f (--feature) for feature
+* -c (--config) for configuration
+* -o (--obr) for OBR URLs
+
+or a given cluster group using the -g (--group) option.
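+
+For instance, to sync only the features of a hypothetical cluster group named mygroup, you could combine the options:
+
+{code}
+karaf@root> cluster:sync -f -g mygroup
+{code}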
+
+h3. Producer, consumer, and handlers
+
+To notify the other nodes in the cluster, Cellar produces a cluster event.
+
+For that, the local node uses a producer to create and send the cluster event.
+You can see the current status of the local producer using the cluster:producer-status command:
+
+{code}
+karaf@root> cluster:producer-status
+   Node                             Status
+* [172.17.42.1:5701              ] [ON   ]
+
+{code}
+
+The cluster:producer-stop and cluster:producer-start commands allow you to stop or start the local cluster event
+producer:
+
+{code}
+karaf@root> cluster:producer-stop
+   Node                             Status
+* [172.17.42.1:5701              ] [OFF  ]
+karaf@root> cluster:producer-start
+   Node                             Status
+* [172.17.42.1:5701              ] [ON   ]
+{code}
+
+When the producer is off, it means that the node is "isolated" from the cluster as it doesn't send "outbound" cluster events
+to the other nodes.
+
+On the other hand, a node receives the cluster events on a consumer. As for the producer, you can see and control the
+consumer using dedicated commands:
+
+{code}
+karaf@root> cluster:consumer-status
+   Node                             Status
+* [172.17.42.1:5701              ] [ON   ]
+karaf@root> cluster:consumer-stop
+   Node                             Status
+* [172.17.42.1:5701              ] [OFF  ]
+karaf@root> cluster:consumer-start
+   Node                             Status
+* [172.17.42.1:5701              ] [ON   ]
+{code}
+
+When the consumer is off, it means that the node is "isolated" from the cluster as it doesn't receive "inbound" cluster events
+from the other nodes.
+
+Different types of cluster events are involved: for instance, there are cluster events for features, bundles, configurations, OBR, etc.
+When a consumer receives a cluster event, it delegates the handling of the event to a specific handler, depending on the
+type of the cluster event.
+You can see the different handlers and their status using the cluster:handler-status command:
+
+{code}
+karaf@root> cluster:handler-status
+   Node                             Status  Event Handler
+* [172.17.42.1:5701              ] [ON   ] org.apache.karaf.cellar.config.ConfigurationEventHandler
+* [172.17.42.1:5701              ] [ON   ] org.apache.karaf.cellar.bundle.BundleEventHandler
+* [172.17.42.1:5701              ] [ON   ] org.apache.karaf.cellar.features.FeaturesEventHandler
+{code}
+
+You can stop or start a specific handler using the cluster:handler-stop and cluster:handler-start commands.
+
+When a handler is stopped, the node still receives the cluster events, but does not update the local resources
+managed by that handler.
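+
+For instance, to stop the bundle event handler on the local node, you could use something like the following (a sketch,
+assuming the handler id argument is the handler name displayed by cluster:handler-status):
+
+{code}
+karaf@root> cluster:handler-stop org.apache.karaf.cellar.bundle.BundleEventHandler
+{code}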
+
+h3. Listeners
+
+The listeners listen for local resource changes.
+
+For instance, when you install a feature locally (with features:install), the feature listener traps the change and
+broadcasts it as a cluster event to the other nodes.
+
+To avoid some unexpected behaviors (especially when you stop a node), most of the listeners are switched off by default.
+
+The listener statuses are configured in the etc/org.apache.karaf.cellar.node.cfg configuration file.
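+
+For instance, the local listeners can be switched on or off with properties like the following (a sketch of the default
+values; the exact property names may differ depending on the Cellar version):
+
+{code}
+bundle.listener = false
+config.listener = false
+feature.listener = false
+{code}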
+
+NB: enable listeners at your own risk. We encourage you to use the cluster dedicated commands and MBeans to manipulate
+the resources on the cluster.
+
+h2. Clustered resources
+
+Cellar provides dedicated commands and MBeans for clustered resources.
 
-Configuration is also synchronized.
+Please see the [cluster groups|groups] section for details.
\ No newline at end of file