Posted to commits@karaf.apache.org by jb...@apache.org on 2015/01/18 08:43:40 UTC
karaf-cellar git commit: [KARAF-1895] Improve Cellar manual
Repository: karaf-cellar
Updated Branches:
refs/heads/master c738c846a -> 7a598b285
[KARAF-1895] Improve Cellar manual
Project: http://git-wip-us.apache.org/repos/asf/karaf-cellar/repo
Commit: http://git-wip-us.apache.org/repos/asf/karaf-cellar/commit/7a598b28
Tree: http://git-wip-us.apache.org/repos/asf/karaf-cellar/tree/7a598b28
Diff: http://git-wip-us.apache.org/repos/asf/karaf-cellar/diff/7a598b28
Branch: refs/heads/master
Commit: 7a598b285f7b302efa15d9887dfea9d855b9951a
Parents: c738c84
Author: Jean-Baptiste Onofré <jb...@apache.org>
Authored: Sun Jan 18 08:43:15 2015 +0100
Committer: Jean-Baptiste Onofré <jb...@apache.org>
Committed: Sun Jan 18 08:43:15 2015 +0100
----------------------------------------------------------------------
manual/src/main/webapp/user-guide/deploy.conf | 30 ++
manual/src/main/webapp/user-guide/groups.conf | 273 +++++++++++++++++--
.../src/main/webapp/user-guide/hazelcast.conf | 115 ++++++++
manual/src/main/webapp/user-guide/index.conf | 1 +
.../main/webapp/user-guide/installation.conf | 6 +-
.../main/webapp/user-guide/introduction.conf | 53 +---
manual/src/main/webapp/user-guide/nodes.conf | 141 +++++++++-
7 files changed, 548 insertions(+), 71 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/7a598b28/manual/src/main/webapp/user-guide/deploy.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/deploy.conf b/manual/src/main/webapp/user-guide/deploy.conf
index e9f5c89..4a820c0 100644
--- a/manual/src/main/webapp/user-guide/deploy.conf
+++ b/manual/src/main/webapp/user-guide/deploy.conf
@@ -59,3 +59,33 @@ And Cellar cluster commands are now available:
{code}
karaf@root()> cluster:<TAB>
{code}
+
+h2. Optional features
+
+Optionally, you can install additional features.
+
+The cellar-event feature adds support for OSGi EventAdmin on the cluster:
+
+{code}
+karaf@root()> feature:install cellar-event
+{code}
+
+The cellar-obr feature adds support for OBR sync on the cluster:
+
+{code}
+karaf@root()> feature:install cellar-obr
+{code}
+
+The cellar-dosgi feature adds support for DOSGi (Distributed OSGi):
+
+{code}
+karaf@root()> feature:install cellar-dosgi
+{code}
+
+The cellar-cloud feature adds support for cloud blobstores, allowing you to use instances hosted by a cloud provider:
+
+{code}
+karaf@root()> feature:install cellar-cloud
+{code}
+
+Please see the sections dedicated to these features for details.
http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/7a598b28/manual/src/main/webapp/user-guide/groups.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/groups.conf b/manual/src/main/webapp/user-guide/groups.conf
index 17ffc2b..ce89fd8 100644
--- a/manual/src/main/webapp/user-guide/groups.conf
+++ b/manual/src/main/webapp/user-guide/groups.conf
@@ -115,12 +115,51 @@ karaf@root()> property-list
name = value
{code}
-h2. Group features
+h2. Clustered Resources and Cluster Groups
-Configuration and features can be assigned to a given group.
+h3. Features
+
+Cellar can manipulate features and features repositories on cluster groups.
+
+You can use cluster:feature-* commands or the corresponding MBean for that.
+
+You can list the features repositories on a given cluster group:
+
+{code}
+karaf@node1()> cluster:feature-repo-list default
+Repository | Located | URL
+-------------------------------------------------------------------------------------------------------------------------
+jclouds-1.8.1 | cluster/local | mvn:org.apache.jclouds.karaf/jclouds-karaf/1.8.1/xml/features
+karaf-cellar-3.0.1-SNAPSHOT | cluster/local | mvn:org.apache.karaf.cellar/apache-karaf-cellar/3.0.1-SNAPSHOT/xml/features
+org.ops4j.pax.cdi-0.8.0 | cluster/local | mvn:org.ops4j.pax.cdi/pax-cdi-features/0.8.0/xml/features
+spring-3.0.2 | cluster/local | mvn:org.apache.karaf.features/spring/3.0.2/xml/features
+standard-3.0.2 | cluster/local | mvn:org.apache.karaf.features/standard/3.0.2/xml/features
+enterprise-3.0.2 | cluster/local | mvn:org.apache.karaf.features/enterprise/3.0.2/xml/features
+org.ops4j.pax.web-3.1.2 | cluster/local | mvn:org.ops4j.pax.web/pax-web-features/3.1.2/xml/features
+{code}
+
+You have the name of the repository and its URL, like in the feature:repo-list command. In addition, the cluster:feature-repo-list
+command provides the location of the features repository:
+* cluster means that the repository is defined only on the cluster group
+* local means that the repository is defined only on the local node (not on the cluster)
+* cluster/local means that the repository is defined both on the local node and on the cluster group
+
+You can add a repository on a cluster group using the cluster:feature-repo-add command:
+
+{code}
+karaf@node1()> cluster:feature-repo-add default mvn:org.apache.activemq/activemq-karaf/5.10.0/xml/features
+{code}
+
+You can remove a repository from a cluster group using the cluster:feature-repo-remove command:
+
+{code}
+karaf@node1()> cluster:feature-repo-remove default mvn:org.apache.activemq/activemq-karaf/5.10.0/xml/features
+{code}
+
+You can list the features on a given cluster group:
{code}
-karaf@root()> cluster:feature-list default |more
+karaf@node1()> cluster:feature-list default |more
Name | Version | Installed | Located | Blocked
------------------------------------------------------------------------------------------------
gemini-blueprint | 1.0.0.RELEASE | | cluster/local |
@@ -130,34 +169,230 @@ jclouds-rackspace-clouddns-uk | 1.8.1 | | cluster
cellar-cloud | 3.0.1-SNAPSHOT | | local | in/out
webconsole | 3.0.2 | | cluster/local |
cellar-shell | 3.0.1-SNAPSHOT | x | local | in/out
+jclouds-glesys | 1.8.1 | | cluster/local |
...
{code}
+Like for the features repositories, the "Located" column shows where the feature is located (local to the node, on the
+cluster group, or both).
+The "Blocked" column indicates whether the feature is blocked inbound or outbound (see the blocking policy).
+
+You can install a feature on a cluster group using the cluster:feature-install command:
+
{code}
-karaf@root()> cluster:feature-list test|more
-Name | Version | Installed | Located | Blocked
-------------------------------------------------------------------------------------------------
-gemini-blueprint | 1.0.0.RELEASE | | cluster/local |
-package | 3.0.2 | x | cluster/local |
-jclouds-api-route53 | 1.8.1 | | cluster/local |
-jclouds-rackspace-clouddns-uk | 1.8.1 | | cluster/local |
-cellar-cloud | 3.0.1-SNAPSHOT | | local | in/out
-webconsole | 3.0.2 | | cluster/local |
-cellar-shell | 3.0.1-SNAPSHOT | x | local | in/out
+karaf@node1()> cluster:feature-install default eventadmin
+{code}
+
+You can uninstall a feature from a cluster group, using the cluster:feature-uninstall command:
+
+{code}
+karaf@node1()> cluster:feature-uninstall default eventadmin
+{code}
+
+Cellar also provides a feature listener, disabled by default, as you can see in the etc/org.apache.karaf.cellar.node.cfg
+configuration file:
+
+{code}
+feature.listener = false
+{code}
+
+The listener listens for the following local feature changes:
+* add features repository
+* remove features repository
+* install feature
+* uninstall feature
+
+h3. Bundles
+
+Cellar can manipulate bundles on cluster groups.
+
+You can use cluster:bundle-* commands or the corresponding MBean for that.
+
+You can list the bundles in a cluster group using the cluster:bundle-list command:
+
+{code}
+karaf@node1()> cluster:bundle-list default |more
+Bundles in cluster group default
+ID | State | Located | Blocked | Version | Name
+--------------------------------------------------------------------------------------------------------------------
+ 0 | Active | cluster/local | | 2.2.0 | OPS4J Pax Url - aether:
+ 1 | Active | cluster/local | | 3.0.2 | Apache Karaf :: Deployer :: Blueprint
+ 2 | Active | cluster/local | | 2.2.0 | OPS4J Pax Url - wrap:
+ 3 | Active | cluster/local | | 1.8.0 | Apache Felix Configuration Admin Service
+ 4 | Active | cluster/local | | 3.0.2 | Apache Karaf :: Region :: Core
+ ...
+{code}
+
+Like for the features, you can see the "Located" and "Blocked" columns.
+
+You can install a bundle on a cluster group using the cluster:bundle-install command:
+
+{code}
+karaf@node1()> cluster:bundle-install default mvn:org.apache.servicemix.bundles/org.apache.servicemix.bundles.commons-lang/2.4_6
+{code}
+
+You can start a bundle in a cluster group using the cluster:bundle-start command:
+
+{code}
+karaf@node1()> cluster:bundle-start default commons-lang
+{code}
+
+You can stop a bundle in a cluster group using the cluster:bundle-stop command:
+
+{code}
+karaf@node1()> cluster:bundle-stop default commons-lang
+{code}
+
+You can uninstall a bundle from a cluster group using the cluster:bundle-uninstall command:
+
+{code}
+karaf@node1()> cluster:bundle-uninstall default commons-lang
+{code}
+
+Like for the features, Cellar provides a bundle listener, disabled by default in etc/org.apache.karaf.cellar.node.cfg:
+
+{code}
+bundle.listener = false
+{code}
+
+The bundle listener listens for the following local bundle changes:
+* install bundle
+* start bundle
+* stop bundle
+* uninstall bundle
+
+h3. Configurations
+
+Cellar can manipulate configurations on cluster groups.
+
+You can use cluster:config-* commands or the corresponding MBean for that.
+
+You can list the configurations on a cluster group using the cluster:config-list command:
+
+{code}
+karaf@node1()> cluster:config-list default |more
+----------------------------------------------------------------
+Pid: org.apache.karaf.command.acl.jaas
+Located: cluster/local
+Blocked:
+Properties:
+ update = admin
+ service.pid = org.apache.karaf.command.acl.jaas
+----------------------------------------------------------------
...
{code}
-In the list, you can see where the feature is located (on the cluster or on the local node), and the blocking policy (if the feature is blocked inbound/outbound).
+You can note the "Blocked" and "Located" attributes, like for features and bundles.
-Now we can "install" a feature for a given cluster group:
+You can list the properties in a config using the cluster:config-property-list command:
{code}
-karaf@root()> cluster:feature-install test eventadmin
+karaf@node1()> cluster:config-property-list default org.apache.karaf.jaas
+Property list for configuration PID org.apache.karaf.jaas for cluster group default
+ encryption.prefix = {CRYPT}
+ encryption.name =
+ encryption.enabled = false
+ encryption.suffix = {CRYPT}
+ encryption.encoding = hexadecimal
+ service.pid = org.apache.karaf.jaas
+ encryption.algorithm = MD5
{code}
-Below, we see that the eventadmin feature has been installed on this member of the test group:
+You can set or append a value to a config property using the cluster:config-property-set or cluster:config-property-append command:
{code}
-karaf@root()> feature:list |grep -i eventadmin
-eventadmin | 3.0.1 | x | standard-3.0.1 | OSGi Event Admin service specification for event-b
+karaf@node1()> cluster:config-property-set default my.config my.property my.value
+{code}
+
+You can delete a property in a config using the cluster:config-property-delete command:
+
{code}
+karaf@node1()> cluster:config-property-delete default my.config my.property
+{code}
+
+You can delete the whole config using the cluster:config-delete command:
+
+{code}
+karaf@node1()> cluster:config-delete default my.config
+{code}
+
+Like for features and bundles, Cellar provides a config listener, disabled by default in etc/org.apache.karaf.cellar.node.cfg:
+
+{code}
+config.listener = false
+{code}
+
+The config listener listens for the following local config changes:
+* create a config
+* add/delete/change a property
+* delete a config
+
+As some properties may be local to a node, Cellar excludes some properties by default.
+You can see the currently excluded properties using the cluster:config-property-excluded command:
+
+{code}
+karaf@node1()> cluster:config-property-excluded
+service.factoryPid, felix.fileinstall.filename, felix.fileinstall.dir, felix.fileinstall.tmpdir, org.ops4j.pax.url.mvn.defaultRepositories
+{code}
+
+You can modify this list using the same command, or by editing the etc/org.apache.karaf.cellar.node.cfg configuration file:
+
+{code}
+#
+# Excluded config properties from the sync
+# Some config properties can be considered as local to a node, and should not be sync on the cluster.
+#
+config.excluded.properties = service.factoryPid, felix.fileinstall.filename, felix.fileinstall.dir, felix.fileinstall.tmpdir, org.ops4j.pax.url.mvn.defaultRepositories
+{code}
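As an illustrative sketch (this is not Cellar's actual implementation, only an example of the principle), the exclusion amounts to filtering the excluded keys out of a configuration before it is pushed to the cluster:

```java
import java.util.Arrays;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative sketch (NOT Cellar's actual code) of filtering
// node-local properties out of a configuration before syncing it.
public class ConfigFilter {

    // Default excluded properties, as listed by cluster:config-property-excluded.
    static final Set<String> EXCLUDED = new HashSet<>(Arrays.asList(
            "service.factoryPid", "felix.fileinstall.filename",
            "felix.fileinstall.dir", "felix.fileinstall.tmpdir",
            "org.ops4j.pax.url.mvn.defaultRepositories"));

    // Return a copy of the properties without the excluded keys.
    static Map<String, Object> filter(Map<String, Object> props) {
        Map<String, Object> result = new HashMap<>(props);
        result.keySet().removeAll(EXCLUDED);
        return result;
    }

    public static void main(String[] args) {
        Map<String, Object> props = new HashMap<>();
        props.put("my.property", "my.value");
        props.put("felix.fileinstall.filename", "/local/path.cfg");
        // only my.property survives the filtering
        System.out.println(filter(props).keySet());
    }
}
```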
+
+h3. OBR (optional)
+
+See the [OBR section|obr] for details.
+
+h3. EventAdmin (optional)
+
+See the [EventAdmin section|event] for details.
+
+h2. Blocking policy
+
+You can define a policy to filter the cluster events exchanged by the nodes (inbound or outbound).
+
+It allows you to block or allow some resources on the cluster.
+
+Adding a resource id to a blacklist blocks the resource.
+Adding a resource id to a whitelist allows the resource.
+
+For instance, for features, you can use the cluster:feature-block command to display or modify the current blocking policy:
+
+{code}
+karaf@node1()> cluster:feature-block default
+INBOUND:
+ whitelist: [*]
+ blacklist: [config, cellar*, hazelcast, management]
+OUTBOUND:
+ whitelist: [*]
+ blacklist: [config, cellar*, hazelcast, management]
+{code}
+
+NB: * is a wildcard.
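As an illustrative sketch (this is not Cellar's actual matching code), a wildcard entry such as cellar* can be evaluated against a resource id like this:

```java
import java.util.regex.Pattern;

// Illustrative sketch (NOT Cellar's actual implementation) of how a
// blocking-policy entry with "*" wildcards can match a resource id,
// e.g. how "cellar*" blocks "cellar-cloud".
public class BlockPolicyMatcher {

    // Turn a policy entry into a regex: quote everything literally,
    // then re-open the regex around each "*" so it becomes ".*".
    static boolean matches(String entry, String resourceId) {
        String regex = Pattern.quote(entry).replace("*", "\\E.*\\Q");
        return resourceId.matches(regex);
    }

    public static void main(String[] args) {
        System.out.println(matches("cellar*", "cellar-cloud")); // blocked
        System.out.println(matches("cellar*", "webconsole"));   // not blocked
        System.out.println(matches("*", "anything"));           // matches all
    }
}
```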
+
+You have the equivalent commands for bundles and configs:
+
+{code}
+karaf@node1()> cluster:bundle-block default
+INBOUND:
+ whitelist: [*]
+ blacklist: [*.xml]
+OUTBOUND:
+ whitelist: [*]
+ blacklist: [*.xml]
+karaf@node1()> cluster:config-block default
+INBOUND:
+ whitelist: [*]
+ blacklist: [org.apache.karaf.cellar*, org.apache.karaf.shell, org.ops4j.pax.logging, org.ops4j.pax.web, org.apache.felix.fileinstall*, org.apache.karaf.management, org.apache.aries.transaction]
+OUTBOUND:
+ whitelist: [*]
+ blacklist: [org.apache.karaf.cellar*, org.apache.karaf.shell, org.ops4j.pax.logging, org.ops4j.pax.web, org.apache.felix.fileinstall*, org.apache.karaf.management, org.apache.aries.transaction]
+{code}
+
+Using those commands, you can also update the blacklist and whitelist for inbound or outbound cluster events.
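The blocking policy is persisted per cluster group in the etc/org.apache.karaf.cellar.groups.cfg configuration file. The property names below are given as an example and may differ between Cellar versions:

{code}
# example blocking policy for features in the default cluster group
# (property names may vary between Cellar versions)
default.feature.whitelist.inbound = *
default.feature.whitelist.outbound = *
default.feature.blacklist.inbound = config,cellar*,hazelcast,management
default.feature.blacklist.outbound = config,cellar*,hazelcast,management
{code}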
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/7a598b28/manual/src/main/webapp/user-guide/hazelcast.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/hazelcast.conf b/manual/src/main/webapp/user-guide/hazelcast.conf
new file mode 100644
index 0000000..ba76906
--- /dev/null
+++ b/manual/src/main/webapp/user-guide/hazelcast.conf
@@ -0,0 +1,115 @@
+h1. Core runtime and Hazelcast
+
+Cellar uses Hazelcast as cluster engine.
+
+When you install the cellar feature, a hazelcast feature is automatically installed, providing the etc/hazelcast.xml
+configuration file.
+
+The etc/hazelcast.xml configuration file contains all the core configuration, especially:
+* the Hazelcast cluster identifiers (group name and password)
+* network discovery and security configuration
+
+h2. Hazelcast cluster identification
+
+The <group/> element in the etc/hazelcast.xml defines the identification of the Hazelcast cluster:
+
+{code}
+ <group>
+ <name>cellar</name>
+ <password>pass</password>
+ </group>
+{code}
+
+All Cellar nodes have to use the same name and password (to be part of the same Hazelcast cluster).
+
+h2. Network
+
+The <network/> element in the etc/hazelcast.xml contains all the network configuration.
+
+First, it defines the port numbers used by Hazelcast:
+
+{code}
+ <port auto-increment="true" port-count="100">5701</port>
+ <outbound-ports>
+ <!--
+ Allowed port range when connecting to other nodes.
+ 0 or * means use system provided port.
+ -->
+ <ports>0</ports>
+ </outbound-ports>
+{code}
+
+Second, it defines the mechanism used to discover the Cellar nodes: it's the <join/> element.
+
+By default, Hazelcast uses unicast.
+
+You can also use multicast (enabled by default in Cellar):
+
+{code}
+ <multicast enabled="true">
+ <multicast-group>224.2.2.3</multicast-group>
+ <multicast-port>54327</multicast-port>
+ </multicast>
+{code}
+
+Instead of using multicast, you can also explicitly define the host names (or IP addresses) of the different
+Cellar nodes:
+
+{code}
+ <tcp-ip enabled="true">
+ <interface>127.0.0.1</interface>
+ </tcp-ip>
+{code}
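Note that, following the Hazelcast configuration format, several members can be listed in the <tcp-ip/> element (the hostnames and IP addresses below are placeholders):

{code}
    <tcp-ip enabled="true">
        <member>192.168.1.10:5701</member>
        <member>node2.example.com:5701</member>
    </tcp-ip>
{code}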
+
+You can also discover nodes running on Amazon AWS instances:
+
+{code}
+ <aws enabled="true">
+ <access-key>my-access-key</access-key>
+ <secret-key>my-secret-key</secret-key>
+ <!--optional, default is us-east-1 -->
+ <region>us-west-1</region>
+ <!--optional, default is ec2.amazonaws.com. If set, region shouldn't be set as it will override this property -->
+ <host-header>ec2.amazonaws.com</host-header>
+ <!-- optional, only instances belonging to this group will be discovered, default will try all running instances -->
+ <security-group-name>hazelcast-sg</security-group-name>
+ <tag-key>type</tag-key>
+ <tag-value>hz-nodes</tag-value>
+ </aws>
+{code}
+
+Third, you can specify on which network interface the cluster is running. By default, Hazelcast listens on all interfaces (0.0.0.0),
+but you can specify an interface:
+
+{code}
+ <interfaces enabled="true">
+ <interface>10.10.1.*</interface>
+ </interfaces>
+{code}
+
+Finally, you can also enable security transport on the cluster.
+Two modes are supported:
+* SSL:
+{code}
+ <ssl enabled="true"/>
+{code}
+* Symmetric Encryption:
+{code}
+ <symmetric-encryption enabled="true">
+ <!--
+ encryption algorithm such as
+ DES/ECB/PKCS5Padding,
+ PBEWithMD5AndDES,
+ AES/CBC/PKCS5Padding,
+ Blowfish,
+ DESede
+ -->
+ <algorithm>PBEWithMD5AndDES</algorithm>
+ <!-- salt value to use when generating the secret key -->
+ <salt>thesalt</salt>
+ <!-- pass phrase to use when generating the secret key -->
+ <password>thepass</password>
+ <!-- iteration count to use when generating the secret key -->
+ <iteration-count>19</iteration-count>
+ </symmetric-encryption>
+{code}
\ No newline at end of file
http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/7a598b28/manual/src/main/webapp/user-guide/index.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/index.conf b/manual/src/main/webapp/user-guide/index.conf
index 7650138..c31b553 100644
--- a/manual/src/main/webapp/user-guide/index.conf
+++ b/manual/src/main/webapp/user-guide/index.conf
@@ -3,6 +3,7 @@ h1. Karaf Cellar User Guide
* [Karaf Cellar Introduction|/user-guide/introduction]
* [Installing Karaf Cellar|/user-guide/installation]
* [Start Karaf Cellar|/user-guide/deploy]
+* [Core configuration|/user-guide/hazelcast]
* [Nodes in Karaf Cellar|/user-guide/nodes]
* [Groups in Karaf Cellar|/user-guide/groups]
* [OBR in Karaf Cellar|/user-guide/obr]
http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/7a598b28/manual/src/main/webapp/user-guide/installation.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/installation.conf b/manual/src/main/webapp/user-guide/installation.conf
index 4b8dddc..9eec2da 100644
--- a/manual/src/main/webapp/user-guide/installation.conf
+++ b/manual/src/main/webapp/user-guide/installation.conf
@@ -4,11 +4,13 @@ This chapter describes how to install Apache Karaf Cellar into your existing Kar
h2. Pre-Installation Requirements
-As Cellar is a Karaf sub-project, you need a running Karaf instance.
+Cellar is installed on running Karaf instances.
-Karaf Cellar is provided under a Karaf features descriptor. The easiest way to install is just to
+Cellar is provided as a Karaf features descriptor. The easiest way to install is just to
have an internet connection from the Karaf running instance.
+See [deploy] for how to install and start Cellar.
+
h2. Building from Sources
If you intend to build Karaf Cellar from the sources, the requirements are:
http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/7a598b28/manual/src/main/webapp/user-guide/introduction.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/introduction.conf b/manual/src/main/webapp/user-guide/introduction.conf
index c9d46d1..477d08c 100644
--- a/manual/src/main/webapp/user-guide/introduction.conf
+++ b/manual/src/main/webapp/user-guide/introduction.conf
@@ -4,56 +4,31 @@ h2. Karaf Cellar use cases
The first purpose of Cellar is to synchronize the state of several Karaf instances (named nodes).
-It means that all resources modified (installed, started, etc) on one Karaf instance will be propagated to all others
-nodes.
-Concretely, Cellar will broadcast an event to others nodes when you perform an action.
+Cellar provides dedicated shell commands and MBeans to administrate the cluster, and manipulate the resources on the cluster.
-The nodes list could be discovered (using multicast/unicast), or explicitly defined (using a couple hostname or IP
+It's also possible to enable local resource listeners: these listeners broadcast local resource changes as cluster events.
+Please note that this behavior is disabled by default as it can have side effects (especially when a node is stopped).
+Enabling listeners is at your own risk.
+
+The nodes list can be discovered (using unicast or multicast), or statically defined (using a couple hostname or IP
and port list).
Cellar is able to synchronize:
- bundles (remote, local, or from an OBR)
- config
- features
+- eventadmin
+
+Optionally, Cellar also supports synchronization of OSGi EventAdmin and OBR (URLs and bundles).
The second purpose is to provide a Distributed OSGi runtime. It means that using Cellar, you are able to call an OSGi
service located on a remote instance. See the [Transport and DOSGi] section of the user guide.
-h2. Cellar network
-
-Cellar relies on Hazelcast (http://www.hazelcast.com), a memory data grid implementation.
-
-You have a full access to the Hazelcast configuration (in etc/hazelcast.xml) allowing you to specify the network
-configuration.
-
-Especially, you can enable or not the multicast support and choose the multicast group and port number.
-
-You can also configure on which interface and IP address you configure Cellar and port number used by Cellar:
-
-{code}
- <network>
- <port auto-increment="true">5701</port>
- <join>
- <multicast enabled="true">
- <multicast-group>224.2.2.3</multicast-group>
- <multicast-port>54327</multicast-port>
- </multicast>
- <tcp-ip enabled="false">
- <interface>127.0.0.1</interface>
- </tcp-ip>
- <aws enabled="false">
- <access-key>my-access-key</access-key>
- <secret-key>my-secret-key</secret-key>
- <region>us-east-1</region>
- </aws>
- </join>
- <interfaces enabled="false">
- <interface>10.10.1.*</interface>
- </interfaces>
- </network>
-{code}
-
-By default, the Cellar node will start from network port 5701, each new node will use an incremented port number.
+Finally, Cellar also provides "runtime clustering" with dedicated features such as:
+- HTTP load balancing
+- HTTP sessions replication
+- log centralization
+Please see the sections dedicated to those features.
h2. Cross topology
http://git-wip-us.apache.org/repos/asf/karaf-cellar/blob/7a598b28/manual/src/main/webapp/user-guide/nodes.conf
----------------------------------------------------------------------
diff --git a/manual/src/main/webapp/user-guide/nodes.conf b/manual/src/main/webapp/user-guide/nodes.conf
index 10a71f6..a367be8 100644
--- a/manual/src/main/webapp/user-guide/nodes.conf
+++ b/manual/src/main/webapp/user-guide/nodes.conf
@@ -31,29 +31,148 @@ from 2: req=node1:5701 time=12 ms
from 3: req=node1:5701 time=13 ms
from 4: req=node1:5701 time=7 ms
from 5: req=node1:5701 time=12 ms
+{code}
+
+h2. Node Components: listener, producer, handler, consumer, and synchronizer
+
+A Cellar node is actually a set of components, each dedicated to a specific purpose.
+
+The etc/org.apache.karaf.cellar.node.cfg configuration file is dedicated to the configuration of the local node.
+It's where you can control the status of the different components.
+
+h3. Synchronizers and sync policy
+
+A synchronizer is invoked when:
+* Cellar starts
+* a node joins a cluster group (see [groups] for details about cluster groups)
+* you explicitly call the cluster:sync command
+
+There is a synchronizer per resource: feature, bundle, config, and obr (optional).
+
+Cellar supports three sync policies:
+* cluster (default): if the node is the first one in the cluster, it pushes its local state to the cluster; otherwise, the
+node updates its local state from the cluster (meaning that the cluster is the master).
+* node: the node is the master, meaning that the cluster state will be overwritten by the node state.
+* disabled: the synchronizer is not used at all, meaning that neither the node nor the cluster is updated (at sync time).
+
+You can configure the sync policy (for each resource, and each cluster group) in the etc/org.apache.karaf.cellar.groups.cfg
+configuration file:
{code}
+default.bundle.sync = cluster
+default.config.sync = cluster
+default.feature.sync = cluster
+default.obr.urls.sync = cluster
+{code}
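The three policies can be summarized with a small illustrative sketch (this is not Cellar's actual code) of the decision a synchronizer takes at sync time:

```java
// Illustrative sketch (NOT Cellar's actual implementation) of the
// decision a synchronizer takes at sync time, per resource and group.
public class SyncDecision {

    enum Policy { CLUSTER, NODE, DISABLED }

    // Returns what the node does: push its local state to the cluster,
    // pull the cluster state locally, or nothing.
    static String decide(Policy policy, boolean firstNodeInCluster) {
        switch (policy) {
            case CLUSTER:
                // the cluster is the master, unless we are the first node
                return firstNodeInCluster ? "push" : "pull";
            case NODE:
                // the node is the master: overwrite the cluster state
                return "push";
            default:
                // disabled: neither the node nor the cluster is updated
                return "none";
        }
    }

    public static void main(String[] args) {
        System.out.println(decide(Policy.CLUSTER, false)); // pull
        System.out.println(decide(Policy.NODE, false));    // push
        System.out.println(decide(Policy.DISABLED, true)); // none
    }
}
```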
+
+The cluster:sync command allows you to "force" the sync:
+
+{code}
+karaf@node1()> cluster:sync
+Synchronizing cluster group default
+ bundle: done
+ config: done
+ feature: done
+ obr.urls: No synchronizer found for obr.urls
+{code}
+
+It's also possible to sync only a resource using:
+* -b (--bundle) for bundle
+* -f (--feature) for feature
+* -c (--config) for configuration
+* -o (--obr) for OBR URLs
+
+or a given cluster group using the -g (--group) option.
+
+h3. Producer, consumer, and handlers
-h2. Nodes sync
+To notify the other nodes in the cluster, Cellar produces a cluster event.
-Cellar allows nodes to 'sync' state. It currently covers features, configs, and bundles.
+For that, the local node uses a producer to create and send the cluster event.
+You can see the current status of the local producer using the cluster:producer-status command:
-For instance, if you install a feature (eventadmin for example) on node1:
+{code}
+karaf@node1()> cluster:producer-status
+ | Node | Status
+-----------------------------
+x | 172.17.42.1:5701 | ON
+{code}
+
+The cluster:producer-stop and cluster:producer-start commands allow you to stop or start the local cluster event
+producer:
{code}
-karaf@root> feature:install eventadmin
-karaf@root()> feature:list |grep -i eventadmin
-eventadmin | 3.0.1 | x | standard-3.0.1 | OSGi Event Admin service specification for event-b
+karaf@node1()> cluster:producer-stop
+ | Node | Status
+-----------------------------
+x | 172.17.42.1:5701 | OFF
+karaf@node1()> cluster:producer-start
+ | Node | Status
+-----------------------------
+x | 172.17.42.1:5701 | ON
{code}
-You can see that the eventadmin feature has been installed on node2:
+When the producer is off, it means that the node is "isolated" from the cluster as it doesn't send "outbound" cluster events
+to the other nodes.
+On the other hand, a node receives the cluster events on a consumer. Like for the producer, you can see and control the
+consumer using dedicated commands:
+
+{code}
+karaf@node1()> cluster:consumer-status
+ | Node | Status
+---------------------------
+x | localhost:5701 | ON
+karaf@node1()> cluster:consumer-stop
+ | Node | Status
+---------------------------
+x | localhost:5701 | OFF
+karaf@node1()> cluster:consumer-start
+ | Node | Status
+---------------------------
+x | localhost:5701 | ON
{code}
-karaf@root()> feature:list |grep -i eventadmin
-eventadmin | 3.0.1 | x | standard-3.0.1 | OSGi Event Admin service specification for event-b
+
+When the consumer is off, it means that the node is "isolated" from the cluster as it doesn't receive "inbound" cluster events
+from the other nodes.
+
+Different cluster events are involved. For instance, there are cluster events for features, bundles, configurations, OBR, etc.
+When a consumer receives a cluster event, it delegates the handling of the cluster event to a specific handler, depending on the
+type of the cluster event.
+You can see the different handlers and their status using the cluster:handler-status command:
{code}
+karaf@node1()> cluster:handler-status
+ | Node | Status | Event Handler
+--------------------------------------------------------------------------------------
+x | localhost:5701 | ON | org.apache.karaf.cellar.config.ConfigurationEventHandler
+x | localhost:5701 | ON | org.apache.karaf.cellar.bundle.BundleEventHandler
+x | localhost:5701 | ON | org.apache.karaf.cellar.features.FeaturesEventHandler
+{code}
+
+You can stop or start a specific handler using the cluster:handler-stop and cluster:handler-start commands.
+
+When a handler is stopped, it means that the node will receive the cluster event, but will not update the local resources
+managed by the handler.
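The consumer/handler delegation can be sketched as follows (this is illustrative only, not Cellar's actual code; the types and names are placeholders):

```java
import java.util.HashMap;
import java.util.Map;
import java.util.function.Consumer;

// Illustrative sketch (NOT Cellar's actual implementation) of how a
// consumer could delegate cluster events to type-specific handlers,
// ignoring events whose handler has been stopped (cluster:handler-stop).
public class EventDispatcher {

    final Map<String, Consumer<String>> handlers = new HashMap<>();
    final Map<String, Boolean> status = new HashMap<>();

    void register(String type, Consumer<String> handler) {
        handlers.put(type, handler);
        status.put(type, true); // handlers are ON by default
    }

    void setStatus(String type, boolean on) {
        status.put(type, on);
    }

    // Returns true if the event was actually handled.
    boolean dispatch(String type, String event) {
        if (!Boolean.TRUE.equals(status.get(type))) {
            return false; // handler stopped or unknown: event ignored
        }
        handlers.get(type).accept(event);
        return true;
    }

    public static void main(String[] args) {
        EventDispatcher d = new EventDispatcher();
        d.register("feature", e -> System.out.println("feature event: " + e));
        System.out.println(d.dispatch("feature", "install eventadmin")); // true
        d.setStatus("feature", false);
        System.out.println(d.dispatch("feature", "uninstall eventadmin")); // false
    }
}
```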
+
+h3. Listeners
+
+The listeners listen for local resource changes.
+
+For instance, when you install a feature (with feature:install), the feature listener traps the change and broadcasts it
+as a cluster event to the other nodes.
+
+To avoid some unexpected behaviors (especially when you stop a node), most of the listeners are switched off by default.
+
+The listener status is configured in the etc/org.apache.karaf.cellar.node.cfg configuration file.
+
+NB: enabling listeners is at your own risk. We encourage you to use the dedicated cluster commands and MBeans to manipulate
+the resources on the cluster.
+
+h2. Clustered resources
-Features uninstall works in the same way. Basically, Cellar synchronisation is completely transparent.
+Cellar provides dedicated commands and MBeans for clustered resources.
-Configuration is also synchronized.
+Please see the [cluster groups|groups] section for details.
\ No newline at end of file