Posted to commits@activemq.apache.org by cl...@apache.org on 2014/12/08 16:49:45 UTC

[14/25] activemq-6 git commit: ACTIVEMQ6-9 - port to markdown

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/filter-expressions.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/filter-expressions.xml b/docs/user-manual/en/filter-expressions.xml
deleted file mode 100644
index b841ecb..0000000
--- a/docs/user-manual/en/filter-expressions.xml
+++ /dev/null
@@ -1,86 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-
-<!-- ============================================================================= -->
-<!-- Licensed to the Apache Software Foundation (ASF) under one or more            -->
-<!-- contributor license agreements. See the NOTICE file distributed with          -->
-<!-- this work for additional information regarding copyright ownership.           -->
-<!-- The ASF licenses this file to You under the Apache License, Version 2.0       -->
-<!-- (the "License"); you may not use this file except in compliance with          -->
-<!-- the License. You may obtain a copy of the License at                          -->
-<!--                                                                               -->
-<!--     http://www.apache.org/licenses/LICENSE-2.0                                -->
-<!--                                                                               -->
-<!-- Unless required by applicable law or agreed to in writing, software           -->
-<!-- distributed under the License is distributed on an "AS IS" BASIS,             -->
-<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.      -->
-<!-- See the License for the specific language governing permissions and           -->
-<!-- limitations under the License.                                                -->
-<!-- ============================================================================= -->
-
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY % BOOK_ENTITIES SYSTEM "ActiveMQ_User_Manual.ent">
-%BOOK_ENTITIES;
-]>
-
-<chapter id="filter-expressions">
-    <title>Filter Expressions</title>
-    <para>ActiveMQ provides a powerful filter language based on a subset of the SQL 92
-        expression syntax.</para>
-    <para>It is the same as the syntax used for JMS selectors, but the predefined identifiers are
-        different. For documentation on JMS selector syntax please the JMS javadoc for <ulink
-            url="http://docs.oracle.com/javaee/6/api/javax/jms/Message.html"
-            >javax.jms.Message</ulink>.</para>
-    <para>Filter expressions are used in several places in ActiveMQ</para>
-    <itemizedlist>
-        <listitem>
-            <para>Predefined Queues. When pre-defining a queue, either in <literal
-                    >activemq-configuration.xml</literal> or <literal>activemq-jms.xml</literal> a filter
-                expression can be defined for a queue. Only messages that match the filter
-                expression will enter the queue.</para>
-        </listitem>
-        <listitem>
-            <para>Core bridges can be defined with an optional filter expression, only matching
-                messages will be bridged (see <xref linkend="core-bridges"/>).</para>
-        </listitem>
-        <listitem>
-            <para>Diverts can be defined with an optional filter expression, only matching messages
-                will be diverted (see <xref linkend="diverts" />).</para>
-        </listitem>
-        <listitem>
-            <para>Filter are also used programmatically when creating consumers, queues and in
-                several places as described in <xref linkend="management"/>.</para>
-        </listitem>
-    </itemizedlist>
-    <para>There are some differences between JMS selector expressions and ActiveMQ core
-        filter expressions. Whereas JMS selector expressions operate on a JMS message, ActiveMQ
-        core filter expressions operate on a core message.</para>
-    <para>The following identifiers can be used in a core filter expressions to refer to attributes
-        of the core message in an expression:</para>
-    <itemizedlist>
-        <listitem>
-            <para><literal>HQPriority</literal>. To refer to the priority of a message. Message
-                priorities are integers with valid values from <literal>0 - 9</literal>. <literal
-                    >0</literal> is the lowest priority and <literal>9</literal> is the highest.
-                E.g. <literal>HQPriority = 3 AND animal = 'aardvark'</literal></para>
-        </listitem>
-        <listitem>
-            <para><literal>HQExpiration</literal>. To refer to the expiration time of a message.
-                The value is a long integer.</para>
-        </listitem>
-        <listitem>
-            <para><literal>HQDurable</literal>. To refer to whether a message is durable or not.
-                The value is a string with valid values: <literal>DURABLE</literal> or <literal
-                    >NON_DURABLE</literal>.</para>
-        </listitem>
-        <listitem>
-            <para><literal>HQTimestamp</literal>. The timestamp of when the message was created.
-                The value is a long integer.</para>
-        </listitem>
-        <listitem>
-            <para><literal>HQSize</literal>. The size of a message in bytes. The value is an
-                integer.</para>
-        </listitem>
-    </itemizedlist>
-    <para>Any other identifiers used in core filter expressions will be assumed to be properties of
-        the message.</para>
-</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/flow-control.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/flow-control.md b/docs/user-manual/en/flow-control.md
new file mode 100644
index 0000000..bdaac25
--- /dev/null
+++ b/docs/user-manual/en/flow-control.md
@@ -0,0 +1,304 @@
+Flow Control
+============
+
+Flow control is used to limit the flow of data between a client and
+server, or a server and another server, in order to prevent the client
+or server from being overwhelmed with data.
+
+Consumer Flow Control
+=====================
+
+This controls the flow of data between the server and the client as the
+client consumes messages. For performance reasons clients normally
+buffer messages before delivering them to the consumer via the
+`receive()` method or asynchronously via a message listener. If the
+consumer cannot process messages as fast as they are being delivered and
+stored in the internal buffer, you could end up with a situation where
+messages keep building up, possibly causing an out-of-memory error on
+the client if they cannot be processed in time.
+
+Window-Based Flow Control
+-------------------------
+
+By default, ActiveMQ consumers buffer messages from the server in a
+client side buffer before the client consumes them. This improves
+performance: otherwise every time the client consumes a message,
+ActiveMQ would have to go to the server to request the next message. In
+turn, this message would then get sent to the client side, if one was
+available.
+
+A network round trip would be involved for *every* message and
+considerably reduce performance.
+
+To prevent this, ActiveMQ pre-fetches messages into a buffer on each
+consumer. The total maximum size of messages (in bytes) that will be
+buffered on each consumer is determined by the `consumer-window-size`
+parameter.
+
+By default, the `consumer-window-size` is set to 1 MiB (1024 \* 1024
+bytes).
+
+The value can be:
+
+-   `-1` for an *unbounded* buffer
+
+-   `0` to not buffer any messages. See ? for a working example of a
+    consumer with no buffering.
+
+-   `>0` for a buffer with the given maximum size in bytes.
+
+Setting the consumer window size can considerably improve performance
+depending on the messaging use case. As an example, let's consider the
+two extremes:
+
+Fast consumers
+:   Fast consumers can process messages as fast as they consume them (or
+    even faster).
+
+    To allow fast consumers, set the `consumer-window-size` to -1. This
+    will allow *unbounded* message buffering on the client side.
+
+    Use this setting with caution: it can overflow the client memory if
+    the consumer is not able to process messages as fast as it receives
+    them.
+
+Slow consumers
+:   Slow consumers take significant time to process each message, and it
+    is desirable to prevent buffering messages on the client side so
+    that they can be delivered to another consumer instead.
+
+    Consider a situation where a queue has 2 consumers, 1 of which is
+    very slow. Messages are delivered in a round robin fashion to both
+    consumers. The fast consumer processes all of its messages very
+    quickly until its buffer is empty. At this point there are still
+    messages waiting to be processed in the buffer of the slow consumer,
+    preventing them from being processed by the fast consumer. The fast
+    consumer is therefore sitting idle when it could be processing the
+    other messages.
+
+    To allow slow consumers, set the `consumer-window-size` to 0 (for no
+    buffer at all). This will prevent the slow consumer from buffering
+    any messages on the client side. Messages will remain on the server
+    side ready to be consumed by other consumers.
+
+    Setting this to 0 can give deterministic distribution between
+    multiple consumers on a queue.
+
+Most consumers cannot be clearly identified as fast or slow consumers
+but are somewhere in between. In that case, the value of
+`consumer-window-size` that optimizes performance depends on the
+messaging use case and requires benchmarks to find, but a value of
+1 MiB is fine in most cases.
+
+### Using Core API
+
+If the ActiveMQ core API is used, the consumer window size is specified
+by the `ServerLocator.setConsumerWindowSize()` method and by some of the
+`ClientSession.createConsumer()` methods.
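+
+As a concrete illustration, here is a minimal sketch of disabling the
+client-side buffer via the core API. The package and class names
+(`ActiveMQClient`, `NettyConnectorFactory`) are assumptions based on the
+activemq-6 client, and the queue name is illustrative only:
+
+    import org.apache.activemq.api.core.TransportConfiguration;
+    import org.apache.activemq.api.core.client.ActiveMQClient;
+    import org.apache.activemq.api.core.client.ClientConsumer;
+    import org.apache.activemq.api.core.client.ClientSession;
+    import org.apache.activemq.api.core.client.ClientSessionFactory;
+    import org.apache.activemq.api.core.client.ServerLocator;
+    import org.apache.activemq.core.remoting.impl.netty.NettyConnectorFactory;
+
+    public class ConsumerWindowSizeExample {
+       public static void main(String[] args) throws Exception {
+          ServerLocator locator = ActiveMQClient.createServerLocatorWithoutHA(
+                new TransportConfiguration(NettyConnectorFactory.class.getName()));
+          locator.setConsumerWindowSize(0); // no client-side buffering
+
+          ClientSessionFactory factory = locator.createSessionFactory();
+          ClientSession session = factory.createSession();
+          ClientConsumer consumer = session.createConsumer("exampleQueue");
+          // ... receive and acknowledge messages here ...
+          session.close();
+          locator.close();
+       }
+    }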
+
+### Using JMS
+
+If JNDI is used on the client to instantiate and look up the connection
+factory the consumer window size is configured in the JNDI context
+environment, e.g. `jndi.properties`. Here's a simple example using the
+"ConnectionFactory" connection factory which is available in the context
+by default:
+
+    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
+    java.naming.provider.url=tcp://localhost:5445
+    connection.ConnectionFactory.consumerWindowSize=0
+
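+A client would then pick up this setting with a plain JNDI lookup of the
+"ConnectionFactory" entry. A minimal sketch using only the standard JNDI
+and JMS APIs:
+
+    import javax.jms.Connection;
+    import javax.jms.ConnectionFactory;
+    import javax.naming.InitialContext;
+
+    public class JndiLookupExample {
+       public static void main(String[] args) throws Exception {
+          // Reads jndi.properties from the classpath, including the
+          // consumerWindowSize setting shown above.
+          InitialContext ic = new InitialContext();
+          ConnectionFactory cf = (ConnectionFactory) ic.lookup("ConnectionFactory");
+          Connection connection = cf.createConnection();
+          connection.close();
+       }
+    }
+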
+If the connection factory is directly instantiated, the consumer window
+size is specified by the
+`ActiveMQConnectionFactory.setConsumerWindowSize()` method.
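+
+A sketch of the direct route follows. Obtaining the factory through
+`ActiveMQJMSClient` is an assumption based on the activemq-6 client
+(mirroring its HornetQ ancestor); adapt it to however your code
+constructs the factory:
+
+    import org.apache.activemq.api.core.TransportConfiguration;
+    import org.apache.activemq.api.jms.ActiveMQJMSClient;
+    import org.apache.activemq.api.jms.JMSFactoryType;
+    import org.apache.activemq.core.remoting.impl.netty.NettyConnectorFactory;
+    import org.apache.activemq.jms.client.ActiveMQConnectionFactory;
+
+    public class DirectFactoryExample {
+       public static void main(String[] args) throws Exception {
+          ActiveMQConnectionFactory cf =
+                ActiveMQJMSClient.createConnectionFactoryWithoutHA(
+                      JMSFactoryType.CF,
+                      new TransportConfiguration(NettyConnectorFactory.class.getName()));
+          cf.setConsumerWindowSize(0); // same effect as the JNDI property above
+       }
+    }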
+
+Please see ? for an example which shows how to configure ActiveMQ to
+prevent consumer buffering when dealing with slow consumers.
+
+Rate limited flow control
+-------------------------
+
+It is also possible to control the *rate* at which a consumer can
+consume messages. This is a form of throttling and can be used to make
+sure that a consumer never consumes messages at a rate faster than the
+rate specified.
+
+The rate must be a positive integer to enable this functionality and is
+the maximum desired message consumption rate specified in units of
+messages per second. Setting this to `-1` disables rate limited flow
+control. The default value is `-1`.
+
+Please see ? for a working example of limiting consumer rate.
+
+### Using Core API
+
+If the ActiveMQ core API is being used, the rate can be set via the
+`ServerLocator.setConsumerMaxRate(int consumerMaxRate)` method or
+alternatively via some of the `ClientSession.createConsumer()` methods.
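+
+Continuing the hypothetical locator and session from the window-size
+sketch earlier, a max rate could be applied like this; the five-argument
+`createConsumer` overload is an assumption about the core client API:
+
+    // Throttle every consumer created from this locator to 10 msgs/sec.
+    locator.setConsumerMaxRate(10);
+
+    // Or per consumer: queue name, filter (none), window size, max rate,
+    // browse-only flag.
+    ClientConsumer consumer =
+          session.createConsumer("exampleQueue", null, 1024 * 1024, 10, false);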
+
+### Using JMS
+
+If JNDI is used to instantiate and look up the connection factory, the
+max rate can be configured in the JNDI context environment, e.g.
+`jndi.properties`. Here's a simple example using the "ConnectionFactory"
+connection factory which is available in the context by default:
+
+    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
+    java.naming.provider.url=tcp://localhost:5445
+    connection.ConnectionFactory.consumerMaxRate=10
+
+If the connection factory is directly instantiated, the max rate can be
+set via the
+`ActiveMQConnectionFactory.setConsumerMaxRate(int consumerMaxRate)`
+method.
+
+> **Note**
+>
+> Rate limited flow control can be used in conjunction with window-based
+> flow control. Rate limited flow control only affects how many messages
+> a client can consume in a second and not how many messages are in its
+> buffer. So if you had a slow rate limit and a high window-based limit
+> the client's internal buffer would soon fill up with messages.
+
+Please see ? for an example which shows how to configure ActiveMQ to
+prevent consumer buffering when dealing with slow consumers.
+
+Producer flow control
+=====================
+
+ActiveMQ also can limit the amount of data sent from a client to a
+server to prevent the server being overwhelmed.
+
+Window based flow control
+-------------------------
+
+In a similar way to consumer window-based flow control, ActiveMQ
+producers, by default, can only send messages to an address as long as
+they have sufficient credits to do so. The number of credits required to
+send a message is given by the size of the message.
+
+As producers run low on credits they request more from the server; when
+the server sends them more credits they can send more messages.
+
+The amount of credits a producer requests in one go is known as the
+*window size*.
+
+The window size therefore determines the number of bytes that can be
+in-flight at any one time before more need to be requested - this
+prevents the remoting connection from getting overloaded.
+
+### Using Core API
+
+If the ActiveMQ core API is being used, window size can be set via the
+`ServerLocator.setProducerWindowSize(int producerWindowSize)` method.
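+
+Continuing the hypothetical locator from the earlier consumer sketch,
+this is a one-line setting:
+
+    // Request producer credits 1 MiB at a time.
+    locator.setProducerWindowSize(1024 * 1024);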
+
+### Using JMS
+
+If JNDI is used to instantiate and look up the connection factory, the
+producer window size can be configured in the JNDI context environment,
+e.g. `jndi.properties`. Here's a simple example using the
+"ConnectionFactory" connection factory which is available in the context
+by default:
+
+    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
+    java.naming.provider.url=tcp://localhost:5445
+    connection.ConnectionFactory.producerWindowSize=10
+
+If the connection factory is directly instantiated, the producer window
+size can be set via the
+`ActiveMQConnectionFactory.setProducerWindowSize(int producerWindowSize)`
+method.
+
+### Blocking producer window based flow control
+
+Normally the server will always give the same number of credits as have
+been requested. However, it is also possible to set a maximum size on
+any address, and the server will never send more credits than could
+cause the address's upper memory limit to be exceeded.
+
+For example, if I have a JMS queue called "myqueue", I could set the
+maximum memory size to 10MiB, and the server will control the number
+of credits sent to any producers which are sending any messages to
+myqueue such that the total size of the messages in the queue never
+exceeds 10MiB.
+
+When the address gets full, producers will block on the client side
+until more space frees up on the address, i.e. until messages are
+consumed from the queue thus freeing up space for more messages to be
+sent.
+
+We call this blocking producer flow control, and it's an efficient way
+to prevent the server running out of memory due to producers sending
+more messages than can be handled at any time.
+
+It is an alternative approach to paging, which does not block producers
+but instead pages messages to storage.
+
+To configure an address with a maximum size and tell the server that you
+want to block producers for this address if it becomes full, you need to
+define an AddressSettings (?) block for the address and specify
+`max-size-bytes` and `address-full-policy`.
+
+The address block applies to all queues registered to that address. I.e.
+the total memory for all queues bound to that address will not exceed
+`max-size-bytes`. In the case of JMS topics this means the *total*
+memory of all subscriptions in the topic won't exceed `max-size-bytes`.
+
+Here's an example:
+
+    <address-settings>
+       <address-setting match="jms.queue.exampleQueue">
+          <max-size-bytes>100000</max-size-bytes>
+          <address-full-policy>BLOCK</address-full-policy>
+       </address-setting>
+    </address-settings>
+
+The above example would set the max size of the JMS queue "exampleQueue"
+to be 100000 bytes and would block any producers sending to that address
+to prevent that max size being exceeded.
+
+Note the policy must be set to `BLOCK` to enable blocking producer flow
+control.
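+
+To see what this means for a producer, here is a fragment continuing the
+earlier core-API sketch (locator and session as before; the address name
+matches the example configuration above):
+
+    ClientProducer producer = session.createProducer("jms.queue.exampleQueue");
+    ClientMessage message = session.createMessage(true); // durable
+    message.getBodyBuffer().writeBytes(new byte[10 * 1024]);
+
+    // With address-full-policy BLOCK and the address at its
+    // max-size-bytes limit, this call blocks until consumers free space.
+    producer.send(message);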
+
+> **Note**
+>
+> Note that in the default configuration all addresses are set to block
+> producers after 10 MiB of message data is in the address. This means
+> you cannot have more than 10MiB of unconsumed message data in an
+> address before producers are blocked. If you do not want this
+> behaviour, increase the `max-size-bytes` parameter or change the
+> address full message policy.
+
+Rate limited flow control
+-------------------------
+
+ActiveMQ also allows the rate a producer can emit messages to be
+limited, in units of messages per second. By specifying such a rate,
+ActiveMQ will ensure that the producer never produces messages at a rate
+higher than that specified.
+
+The rate must be a positive integer to enable this functionality and is
+the maximum desired message production rate specified in units of
+messages per second. Setting this to `-1` disables rate limited flow
+control. The default value is `-1`.
+
+Please see ? for a working example of limiting producer rate.
+
+### Using Core API
+
+If the ActiveMQ core API is being used, the rate can be set via the
+`ServerLocator.setProducerMaxRate(int producerMaxRate)` method or
+alternatively via some of the `ClientSession.createProducer()` methods.
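+
+As a fragment continuing the earlier hypothetical locator and session;
+the two-argument `createProducer` overload taking a rate is an
+assumption about the core client API:
+
+    // Limit every producer created from this locator to 10 msgs/sec.
+    locator.setProducerMaxRate(10);
+
+    // Or per producer, passing the max rate at creation time.
+    ClientProducer producer = session.createProducer("jms.queue.exampleQueue", 10);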
+
+### Using JMS
+
+If JNDI is used to instantiate and look up the connection factory, the
+max rate can be configured in the JNDI context environment, e.g.
+`jndi.properties`. Here's a simple example using the "ConnectionFactory"
+connection factory which is available in the context by default:
+
+    java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
+    java.naming.provider.url=tcp://localhost:5445
+    connection.ConnectionFactory.producerMaxRate=10
+
+If the connection factory is directly instantiated, the max rate can be
+set via the
+`ActiveMQConnectionFactory.setProducerMaxRate(int producerMaxRate)`
+method.

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/flow-control.xml
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/flow-control.xml b/docs/user-manual/en/flow-control.xml
deleted file mode 100644
index 70ab4e2..0000000
--- a/docs/user-manual/en/flow-control.xml
+++ /dev/null
@@ -1,290 +0,0 @@
-<?xml version="1.0" encoding="UTF-8"?>
-<!-- ============================================================================= -->
-<!-- Licensed to the Apache Software Foundation (ASF) under one or more            -->
-<!-- contributor license agreements. See the NOTICE file distributed with          -->
-<!-- this work for additional information regarding copyright ownership.           -->
-<!-- The ASF licenses this file to You under the Apache License, Version 2.0       -->
-<!-- (the "License"); you may not use this file except in compliance with          -->
-<!-- the License. You may obtain a copy of the License at                          -->
-<!--                                                                               -->
-<!--     http://www.apache.org/licenses/LICENSE-2.0                                -->
-<!--                                                                               -->
-<!-- Unless required by applicable law or agreed to in writing, software           -->
-<!-- distributed under the License is distributed on an "AS IS" BASIS,             -->
-<!-- WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.      -->
-<!-- See the License for the specific language governing permissions and           -->
-<!-- limitations under the License.                                                -->
-<!-- ============================================================================= -->
-
-<!DOCTYPE chapter PUBLIC "-//OASIS//DTD DocBook XML V4.5//EN" "http://www.oasis-open.org/docbook/xml/4.5/docbookx.dtd" [
-<!ENTITY % BOOK_ENTITIES SYSTEM "ActiveMQ_User_Manual.ent">
-%BOOK_ENTITIES;
-]>
-<chapter id="flow-control">
-   <title>Flow Control</title>
-   <para>Flow control is used to limit the flow of data between a client and server, or a server and
-      another server in order to prevent the client or server being overwhelmed with data.</para>
-   <section>
-      <title>Consumer Flow Control</title>
-      <para>This controls the flow of data between the server and the client as the client consumes
-         messages. For performance reasons clients normally buffer messages before delivering to the
-         consumer via the <literal>receive()</literal> method or asynchronously via a message
-         listener. If the consumer cannot process messages as fast as they are being delivered and
-         stored in the internal buffer, then you could end up with a situation where messages would
-         keep building up possibly causing out of memory on the client if they cannot be processed
-         in time.</para>
-      <section id="flow-control.consumer.window">
-         <title>Window-Based Flow Control</title>
-         <para>By default, ActiveMQ consumers buffer messages from the server in a client side buffer
-            before the client consumes them. This improves performance: otherwise every time the
-            client consumes a message, ActiveMQ would have to go the server to request the next
-            message. In turn, this message would then get sent to the client side, if one was
-            available.</para>
-         <para>A network round trip would be involved for <emphasis>every</emphasis> message and
-            considerably reduce performance.</para>
-         <para>To prevent this, ActiveMQ pre-fetches messages into a buffer on each consumer. The
-            total maximum size of messages (in bytes) that will be buffered on each consumer is
-            determined by the <literal>consumer-window-size</literal> parameter.</para>
-         <para>By default, the <literal>consumer-window-size</literal> is set to 1 MiB (1024 * 1024
-            bytes).</para>
-         <para>The value can be:</para>
-         <itemizedlist>
-            <listitem>
-               <para><literal>-1</literal> for an <emphasis>unbounded</emphasis> buffer</para>
-            </listitem>
-            <listitem>
-               <para><literal>0</literal> to not buffer any messages. See <xref
-                     linkend="examples.no-consumer-buffering"/> for working example of a consumer
-                  with no buffering.</para>
-            </listitem>
-            <listitem>
-               <para><literal>>0</literal> for a buffer with the given maximum size in
-                  bytes.</para>
-            </listitem>
-         </itemizedlist>
-         <para>Setting the consumer window size can considerably improve performance depending on
-            the messaging use case. As an example, let's consider the two extremes: </para>
-         <variablelist>
-            <varlistentry>
-               <term>Fast consumers</term>
-               <listitem>
-                  <para>Fast consumers can process messages as fast as they consume them (or even
-                     faster)</para>
-                  <para>To allow fast consumers, set the <literal>consumer-window-size</literal> to
-                     -1. This will allow <emphasis>unbounded</emphasis> message buffering on the
-                     client side.</para>
-                  <para>Use this setting with caution: it can overflow the client memory if the
-                     consumer is not able to process messages as fast as it receives them.</para>
-               </listitem>
-            </varlistentry>
-            <varlistentry>
-               <term>Slow consumers</term>
-               <listitem>
-                  <para>Slow consumers takes significant time to process each message and it is
-                     desirable to prevent buffering messages on the client side so that they can be
-                     delivered to another consumer instead.</para>
-                  <para>Consider a situation where a queue has 2 consumers; 1 of which is very slow.
-                     Messages are delivered in a round robin fashion to both consumers, the fast
-                     consumer processes all of its messages very quickly until its buffer is empty.
-                     At this point there are still messages awaiting to be processed in the buffer
-                     of the slow consumer thus preventing them being processed by the fast consumer.
-                     The fast consumer is therefore sitting idle when it could be processing the
-                     other messages. </para>
-                  <para>To allow slow consumers, set the <literal>consumer-window-size</literal> to
-                     0 (for no buffer at all). This will prevent the slow consumer from buffering
-                     any messages on the client side. Messages will remain on the server side ready
-                     to be consumed by other consumers.</para>
-                  <para>Setting this to 0 can give deterministic distribution between multiple
-                     consumers on a queue.</para>
-               </listitem>
-            </varlistentry>
-         </variablelist>
-         <para>Most of the consumers cannot be clearly identified as fast or slow consumers but are
-            in-between. In that case, setting the value of <literal>consumer-window-size</literal>
-            to optimize performance depends on the messaging use case and requires benchmarks to
-            find the optimal value, but a value of 1MiB is fine in most cases.</para>
-         <section id="flow-control.core.api">
-            <title>Using Core API</title>
-            <para>If ActiveMQ Core API is used, the consumer window size is specified by <literal
-                  >ServerLocator.setConsumerWindowSize()</literal> method and some of the
-                  <literal>ClientSession.createConsumer()</literal> methods.</para>
-         </section>
-         <section>
-            <title>Using JMS</title>
-            <para>If JNDI is used on the client to instantiate and look up the connection factory the consumer window
-               size is configured in the JNDI context environment, e.g. <literal>jndi.properties</literal>. Here's a
-               simple example using the "ConnectionFactory" connection factory which is available in the context by
-               default:</para>
-            <programlisting>
-java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
-java.naming.provider.url=tcp://localhost:5445
-connection.ConnectionFactory.consumerWindowSize=0</programlisting>
-            <para>If the connection factory is directly instantiated, the consumer window size is
-               specified by <literal>ActiveMQConnectionFactory.setConsumerWindowSize()</literal>
-               method.</para>
-            <para>Please see <xref linkend="examples.no-consumer-buffering"/> for an example which
-               shows how to configure ActiveMQ to prevent consumer buffering when dealing with slow
-               consumers.</para>
-         </section>
-      </section>
-      <section>
-         <title>Rate limited flow control</title>
-         <para>It is also possible to control the <emphasis>rate</emphasis> at which a consumer can
-            consume messages. This is a form of throttling and can be used to make sure that a
-            consumer never consumes messages at a rate faster than the rate specified. </para>
-         <para>The rate must be a positive integer to enable this functionality and is the maximum
-            desired message consumption rate specified in units of messages per second. Setting this
-            to <literal>-1</literal> disables rate limited flow control. The default value is
-               <literal>-1</literal>.</para>
-         <para>Please see <xref linkend="examples.consumer-rate-limit"/> for a working example of
-            limiting consumer rate.</para>
-         <section id="flow-control.rate.core.api">
-            <title>Using Core API</title>
-            <para>If the ActiveMQ core API is being used the rate can be set via the <literal
-                  >ServerLocator.setConsumerMaxRate(int consumerMaxRate)</literal> method or
-               alternatively via some of the <literal>ClientSession.createConsumer()</literal>
-               methods. </para>
-         </section>
-         <section>
-            <title>Using JMS</title>
-            <para>If JNDI is used to instantiate and look up the connection factory, the max rate can be configured in
-               the JNDI context environment, e.g. <literal>jndi.properties</literal>. Here's a simple example using the
-               "ConnectionFactory" connection factory which is available in the context by default:</para>
-            <programlisting>
-java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
-java.naming.provider.url=tcp://localhost:5445
-connection.ConnectionFactory.consumerMaxRate=10</programlisting>
-            <para>If the connection factory is directly instantiated, the max rate size can be set
-               via the <literal>ActiveMQConnectionFactory.setConsumerMaxRate(int
-                  consumerMaxRate)</literal> method.</para>
-            <note>
-               <para>Rate limited flow control can be used in conjunction with window based flow
-                  control. Rate limited flow control only effects how many messages a client can
-                  consume in a second and not how many messages are in its buffer. So if you had a
-                  slow rate limit and a high window based limit the clients internal buffer would
-                  soon fill up with messages.</para>
-            </note>
-            <para>Please see <xref linkend="examples.consumer-rate-limit"/> for an example which
-               shows how to configure ActiveMQ to prevent consumer buffering when dealing with slow
-               consumers.</para>
-         </section>
-      </section>
-   </section>
-   <section>
-      <title>Producer flow control</title>
-      <para>ActiveMQ also can limit the amount of data sent from a client to a server to prevent the
-         server being overwhelmed.</para>
-      <section>
-         <title>Window based flow control</title>
-         <para>In a similar way to consumer window based flow control, ActiveMQ producers, by
-            default, can only send messages to an address as long as they have sufficient credits to
-            do so. The amount of credits required to send a message is given by the size of the
-            message.</para>
-         <para>As producers run low on credits they request more from the server, when the server
-            sends them more credits they can send more messages.</para>
-         <para>The amount of credits a producer requests in one go is known as the <emphasis
-               role="italic">window size</emphasis>.</para>
-         <para>The window size therefore determines the amount of bytes that can be in-flight at any
-            one time before more need to be requested - this prevents the remoting connection from
-            getting overloaded.</para>
-         <section>
-            <title>Using Core API</title>
-            <para>If the ActiveMQ core API is being used, window size can be set via the <literal
-                  >ServerLocator.setProducerWindowSize(int producerWindowSize)</literal>
-               method.</para>
-         </section>
-         <section>
-            <title>Using JMS</title>
-            <para>If JNDI is used to instantiate and look up the connection factory, the producer window size can be
-               configured in the JNDI context environment, e.g. <literal>jndi.properties</literal>. Here's a simple
-               example using the "ConnectionFactory" connection factory which is available in the context by default:</para>
-            <programlisting>
-java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
-java.naming.provider.url=tcp://localhost:5445
-connection.ConnectionFactory.producerWindowSize=10</programlisting>
-            <para>If the connection factory is directly instantiated, the producer window size can
-               be set via the <literal>ActiveMQConnectionFactory.setProducerWindowSize(int
-                  producerWindowSize)</literal> method.</para>
-         </section>
-         <section>
-            <title>Blocking producer window based flow control</title>
-            <para>Normally the server will always give the same number of credits as have been
-               requested. However, it is also possible to set a maximum size on any address, and the
-               server will never send more credits than could cause the address's upper memory limit
-               to be exceeded.</para>
-            <para>For example, if I have a JMS queue called "myqueue", I could set the maximum
-               memory size to 10MiB, and the the server will control the number of credits sent to
-               any producers which are sending any messages to myqueue such that the total messages
-               in the queue never exceeds 10MiB.</para>
-            <para>When the address gets full, producers will block on the client side until more
-               space frees up on the address, i.e. until messages are consumed from the queue thus
-               freeing up space for more messages to be sent.</para>
-            <para>We call this blocking producer flow control, and it's an efficient way to prevent
-               the server running out of memory due to producers sending more messages than can be
-               handled at any time.</para>
-            <para>It is an alternative approach to paging, which does not block producers but
-               instead pages messages to storage.</para>
-            <para>To configure an address with a maximum size and tell the server that you want to
-               block producers for this address if it becomes full, you need to define an
-               AddressSettings (<xref linkend="queue-attributes.address-settings"/>) block for the
-               address and specify <literal>max-size-bytes</literal> and <literal
-                  >address-full-policy</literal></para>
-            <para>The address block applies to all queues registered to that address. I.e. the total
-               memory for all queues bound to that address will not exceed <literal
-                  >max-size-bytes</literal>. In the case of JMS topics this means the <emphasis
-                  role="italic">total</emphasis> memory of all subscriptions in the topic won't
-               exceed max-size-bytes.</para>
-            <para>Here's an example:</para>
-            <programlisting>
-&lt;address-settings>
-   &lt;address-setting match="jms.queue.exampleQueue">
-      &lt;max-size-bytes>100000&lt;/max-size-bytes>
-      &lt;address-full-policy>BLOCK&lt;/address-full-policy>
-   &lt;/address-setting>
-&lt;/address-settings></programlisting>
-            <para>The above example would set the max size of the JMS queue "exampleQueue" to be
-               100000 bytes and would block any producers sending to that address to prevent that
-               max size being exceeded.</para>
-            <para>Note the policy must be set to <literal>BLOCK</literal> to enable blocking producer
-            flow control.</para>
-            <note><para>Note that in the default configuration all addresses are set to block producers after 10 MiB of message data
-            is in the address. This means you cannot send more than 10MiB of message data to an address without it being consumed before the producers
-            will be blocked. If you do not want this behaviour increase the <literal>max-size-bytes</literal> parameter or change the 
-            address full message policy.</para>
-            </note>            
-         </section>
-      </section>
-      <section>
-         <title>Rate limited flow control</title>
-         <para>ActiveMQ also allows the rate a producer can emit message to be limited, in units of
-            messages per second. By specifying such a rate, ActiveMQ will ensure that producer never
-            produces messages at a rate higher than that specified.</para>
-         <para>The rate must be a positive integer to enable this functionality and is the maximum
-            desired message consumption rate specified in units of messages per second. Setting this
-            to <literal>-1</literal> disables rate limited flow control. The default value is
-               <literal>-1</literal>.</para>
-         <para>Please see the <xref linkend="producer-rate-limiting-example"/> for a working example
-            of limiting producer rate.</para>
-         <section id="flow-control.producer.rate.core.api">
-            <title>Using Core API</title>
-            <para>If the ActiveMQ core API is being used the rate can be set via the <literal
-                  >ServerLocator.setProducerMaxRate(int producerMaxRate)</literal> method or
-               alternatively via some of the <literal>ClientSession.createProducer()</literal>
-               methods. </para>
-         </section>
-         <section>
-            <title>Using JMS</title>
-            <para>If JNDI is used to instantiate and look up the connection factory, the max rate size can be
-               configured in the JNDI context environment, e.g. <literal>jndi.properties</literal>. Here's a simple
-               example using the "ConnectionFactory" connection factory which is available in the context by default:</para>
-            <programlisting>
-java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
-java.naming.provider.url=tcp://localhost:5445
-connection.ConnectionFactory.producerMaxRate=10</programlisting>
-            <para>If the connection factory is directly instantiated, the max rate size can be set
-               via the <literal>ActiveMQConnectionFactory.setProducerMaxRate(int
-                  producerMaxRate)</literal> method.</para>
-         </section>
-      </section>
-   </section>
-</chapter>

http://git-wip-us.apache.org/repos/asf/activemq-6/blob/4245a6b4/docs/user-manual/en/ha.md
----------------------------------------------------------------------
diff --git a/docs/user-manual/en/ha.md b/docs/user-manual/en/ha.md
new file mode 100644
index 0000000..a35536c
--- /dev/null
+++ b/docs/user-manual/en/ha.md
@@ -0,0 +1,892 @@
+High Availability and Failover
+==============================
+
+We define high availability as the *ability for the system to continue
+functioning after failure of one or more of the servers*.
+
+A part of high availability is *failover* which we define as the
+*ability for client connections to migrate from one server to another in
+event of server failure so client applications can continue to operate*.
+
+Live - Backup Groups
+====================
+
+ActiveMQ allows servers to be linked together as *live - backup* groups
+where each live server can have 1 or more backup servers. A backup
+server is owned by only one live server. Backup servers are not
+operational until failover occurs, however 1 chosen backup, which will
+be in passive mode, announces its status and waits to take over the live
+server's work.
+
+Before failover, only the live server is serving the ActiveMQ clients
+while the backup servers remain passive or await becoming backup
+servers. When a live server crashes or is brought down in the correct
+mode, the backup server currently in passive mode will become live and
+another backup server will become passive. If a live server restarts
+after a failover then it will have priority and be the next server to
+become live when the current live server goes down; if the current live
+server is configured to allow automatic failback then it will detect the
+live server coming back up and automatically stop.
+
+HA Policies
+-----------
+
+ActiveMQ supports two different strategies for backing up a server:
+*shared store* and *replication*. These are configured via the
+`ha-policy` configuration element.
+
+    <ha-policy>
+      <replication/>
+    </ha-policy>
+               
+
+or
+
+    <ha-policy>
+       <shared-store/>
+    </ha-policy>
+               
+
+As well as these 2 strategies there is also a 3rd, called `live-only`.
+This means there will be no backup strategy at all and is the default if
+none is provided; however, it is used to configure `scale-down`, which
+we will cover in a later chapter.
+
+> **Note**
+>
+> The `ha-policy` configuration replaces any current HA configuration in
+> the root of the `activemq-configuration.xml` file. All old
+> configuration is now deprecated, although best efforts will be made to
+> honour it if configured this way.
+
+> **Note**
+>
+> Only persistent message data will survive failover. Any non persistent
+> message data will not be available after failover.
+
+The `ha-policy` type configures which strategy a cluster should use to
+provide the backing up of a server's data. Within this configuration
+element you configure how a server should behave within the cluster,
+either as a master (live), slave (backup) or colocated (both live and
+backup). This would look something like:
+
+    <ha-policy>
+       <replication>
+          <master/>
+       </replication>
+    </ha-policy>
+               
+
+or
+
+    <ha-policy>
+       <shared-store>
+          <slave/>
+       </shared-store>
+    </ha-policy>
+               
+
+or
+
+    <ha-policy>
+       <replication>
+          <colocated/>
+       </replication>
+    </ha-policy>
+               
+
+Data Replication
+----------------
+
+Support for network-based data replication was added in version 2.3.
+
+When using replication, the live and the backup servers do not share the
+same data directories, all data synchronization is done over the
+network. Therefore all (persistent) data received by the live server
+will be duplicated to the backup.
+
+Notice that upon start-up the backup server will first need to
+synchronize all existing data from the live server before becoming
+capable of replacing the live server should it fail. So unlike when
+using shared storage, a replicating backup will not be a fully
+operational backup right after start-up, but only after it finishes
+synchronizing the data with its live server. The time it will take for
+this to happen will depend on the amount of data to be synchronized and
+the connection speed.
+
+> **Note**
+>
+> Synchronization occurs in parallel with current network traffic so
+> this won't cause any blocking on current clients.
+
+Replication will create a copy of the data at the backup. One issue to
+be aware of is: in case of a successful fail-over, the backup's data
+will be newer than the data in the live server's storage. If you
+configure your live server to perform a ? when restarted, it will
+synchronize its data with the backup's. If both servers are shutdown,
+the administrator will have to determine which one has the latest data.
+
+The replicating live and backup pair must be part of a cluster. The
+Cluster Connection also defines how backup servers will find the remote
+live servers to pair with. Refer to ? for details on how this is done,
+and how to configure a cluster connection. Notice that:
+
+-   Both live and backup servers must be part of the same cluster.
+    Notice that even a simple live/backup replicating pair will require
+    a cluster configuration.
+
+-   Their cluster user and password must match.
+
+Within a cluster, there are two ways that a backup server will locate a
+live server to replicate from, these are:
+
+-   *specifying a node group*. You can specify a group of live servers
+    that a backup server can connect to. This is done by configuring
+    `group-name` in either the `master` or the `slave` element of
+    `activemq-configuration.xml`. A backup server will only connect to a
+    live server that shares the same node group name.
+
+-   *connecting to any live*. This will be the behaviour if `group-name`
+    is not configured, allowing a backup server to connect to any live
+    server.
+
+> **Note**
+>
+> A `group-name` example: suppose you have 5 live servers and 6 backup
+> servers:
+>
+> -   `live1`, `live2`, `live3`: with `group-name=fish`
+>
+> -   `live4`, `live5`: with `group-name=bird`
+>
+> -   `backup1`, `backup2`, `backup3`, `backup4`: with `group-name=fish`
+>
+> -   `backup5`, `backup6`: with `group-name=bird`
+>
+> After joining the cluster the backups with `group-name=fish` will
+> search for live servers with `group-name=fish` to pair with. Since
+> there is one backup too many, the `fish` will remain with one spare
+> backup.
+>
+> The 2 backups with `group-name=bird` (`backup5` and `backup6`) will
+> pair with live servers `live4` and `live5`.
+
+The backup will search for any live server that it is configured to
+connect to. It then tries to replicate with each live server in turn
+until it finds a live server that has no current backup configured. If
+no live server is available it will wait until the cluster topology
+changes and repeat the process.
+
+> **Note**
+>
+> This is an important distinction from a shared-store backup: if a
+> backup starts and does not find a live server, the server will just
+> activate and start to serve client requests. In the replication case,
+> the backup just keeps waiting for a live server to pair with. Note
+> that in replication the backup server does not know whether any data
+> it might have is up to date, so it really cannot decide to activate
+> automatically. To activate a replicating backup server using the data
+> it has, the administrator must change its configuration to make it a
+> live server by changing `slave` to `master`.
+
+Much like in the shared-store case, when the live server stops or
+crashes, its replicating backup will become active and take over its
+duties. Specifically, the backup will become active when it loses
+connection to its live server. This can be problematic because it can
+also happen as a result of a temporary network problem. In order to
+address this issue, the backup will try to determine whether it still
+can connect to the other servers in the cluster. If it can connect to
+more than half the servers, it will become active; if more than half the
+servers also disappeared with the live, the backup will wait and try
+reconnecting with the live. This avoids a split-brain situation.
+
+### Configuration
+
+To configure the live and backup servers to be a replicating pair,
+configure the live server's `activemq-configuration.xml` to have:
+
+    <ha-policy>
+       <replication>
+          <master/>
+       </replication>
+    </ha-policy>
+    .
+    <cluster-connections>
+       <cluster-connection name="my-cluster">
+          ...
+       </cluster-connection>
+    </cluster-connections>
+                    
+
+The backup server must be similarly configured but as a `slave`:
+
+    <ha-policy>
+       <replication>
+          <slave/>
+       </replication>
+    </ha-policy>
+
+### All Replication Configuration
+
+The following table lists all the `ha-policy` configuration elements for
+HA strategy Replication for `master`:
+
+  name                      Description
+  ------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+  `check-for-live-server`   Whether to check the cluster for a (live) server using our own server ID when starting up. This option is only necessary for performing 'fail-back' on replicating servers.
+  `cluster-name`            Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured then the connector configuration of the cluster configuration with this name will be used when connecting to the cluster to discover if a live server is already running, see `check-for-live-server`. If unset then the default cluster connections configuration is used (the first one configured)
+  `group-name`              If set, backup servers will only pair with live servers with matching group-name
+
+The following table lists all the `ha-policy` configuration elements for
+HA strategy Replication for `slave`:
+
+  name                                   Description
+  -------------------------------------- -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+  `cluster-name`                         Name of the cluster configuration to use for replication. This setting is only necessary if you configure multiple cluster connections. If configured then the connector configuration of the cluster configuration with this name will be used when connecting to the cluster to discover if a live server is already running, see `check-for-live-server`. If unset then the default cluster connections configuration is used (the first one configured)
+  `group-name`                           If set, backup servers will only pair with live servers with matching group-name
+  `max-saved-replicated-journals-size`   This specifies how many times a replicated backup server can restart after moving its files on start. Once there are this many backup journal files the server will stop permanently after it fails back.
+  `allow-failback`                       Whether a server will automatically stop when another server places a request to take over its place. The use case is when the backup has failed over.
+  `failback-delay`                       Delay to wait before fail-back occurs on restart of the failed-over live server.
+
+Shared Store
+------------
+
+When using a shared store, both live and backup servers share the *same*
+entire data directory using a shared file system. This includes the
+paging directory, journal directory, large messages and bindings
+journal.
+
+When failover occurs and a backup server takes over, it will load the
+persistent storage from the shared file system and clients can connect
+to it.
+
+This style of high availability differs from data replication in that it
+requires a shared file system which is accessible by both the live and
+backup nodes. Typically this will be some kind of high performance
+Storage Area Network (SAN). We do not recommend you use Network Attached
+Storage (NAS), e.g. NFS mounts to store any shared journal (NFS is
+slow).
+
+The advantage of shared-store high availability is that no replication
+occurs between the live and backup nodes; this means it does not suffer
+any performance penalties due to the overhead of replication during
+normal operation.
+
+The disadvantage of shared-store high availability is that it requires a
+shared file system, and when the backup server activates it needs to
+load the journal from the shared store which can take some time
+depending on the amount of data in the store.
+
+If you require the highest performance during normal operation, have
+access to a fast SAN and can live with a slightly slower failover
+(depending on the amount of data), shared store high availability is
+recommended.
+
+![ActiveMQ ha-shared-store.png](images/ha-shared-store.png)
+
+### Configuration
+
+To configure the live and backup servers to share their store, configure
+them via the `ha-policy` configuration in `activemq-configuration.xml`:
+
+    <ha-policy>
+       <shared-store>
+          <master/>
+       </shared-store>
+    </ha-policy>
+    .
+    <cluster-connections>
+       <cluster-connection name="my-cluster">
+    ...
+       </cluster-connection>
+    </cluster-connections>
+                   
+
+The backup server must also be configured as a backup.
+
+    <ha-policy>
+       <shared-store>
+          <slave/>
+       </shared-store>
+    </ha-policy>
+                   
+
+In order for live - backup groups to operate properly with a shared
+store, both servers must have the location of the journal directory
+configured to point to the *same shared location* (as explained in ?).
+
+> **Note**
+>
+> todo write something about GFS
+
+Also each node, live and backups, will need to have a cluster connection
+defined even if not part of a cluster. The Cluster Connection info
+defines how backup servers announce their presence to their live server
+or any other nodes in the cluster. Refer to ? for details on how this is
+done.
+
+Failing Back to live Server
+---------------------------
+
+After a live server has failed and a backup has taken over its duties,
+you may want to restart the live server and have clients fail back.
+
+In case of "shared disk", simply restart the original live server and
+kill the new live server by can do this by killing the process itself.
+Alternatively you can set `allow-fail-back` to `true` on the slave
+config which will force the backup that has become live to automatically
+stop. This configuration would look like:
+
+    <ha-policy>
+       <shared-store>
+          <slave>
+             <allow-failback>true</allow-failback>
+             <failback-delay>5000</failback-delay>
+          </slave>
+       </shared-store>
+    </ha-policy>
+               
+
+The `failback-delay` configures how long the backup must wait after
+automatically stopping before it restarts. This gives the live server
+time to start and obtain its lock.
+
+In replication HA mode you need to set an extra property
+`check-for-live-server` to `true` in the `master` configuration. If set
+to true, during start-up a live server will first search the cluster for
+another server using its nodeID. If it finds one, it will contact this
+server and try to "fail-back". Since this is a remote replication
+scenario, the "starting live" will have to synchronize its data with the
+server running with its ID; once they are in sync, it will request the
+other server (which it assumes is a backup that has assumed its duties)
+to shut down so it can take over. This is necessary because otherwise
+the live server has no means to know whether there was a fail-over, and
+if there was, whether the server that took over its duties is still
+running or not. To enable this option, edit your
+`activemq-configuration.xml` configuration file as follows:
+
+    <ha-policy>
+       <replication>
+          <master>
+             <check-for-live-server>true</check-for-live-server>
+          </master>
+       </replication>
+    </ha-policy>
+
+> **Warning**
+>
+> Be aware that if you restart a live server after failover has occurred
+> then this value must be set to `true`. If not, the live server will
+> restart and serve the same messages that the backup has already
+> handled, causing duplicates.
+
+It is also possible, in the case of shared store, to cause failover to
+occur on normal server shutdown. To enable this, set the following
+property to true in the `ha-policy` configuration on either the `master`
+or `slave` like so:
+
+    <ha-policy>
+       <shared-store>
+          <master>
+             <failover-on-shutdown>true</failover-on-shutdown>
+          </master>
+       </shared-store>
+    </ha-policy>
+
+By default this is set to false. If you have left this set to false but
+still want to stop the server normally and cause failover, then you can
+do this by using the management API as explained at ?
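+
+As an illustration only, invoking this from the management API over JMX
+might look like the following sketch. The JMX URL, the MBean `ObjectName`
+and the `forceFailover` operation name are assumptions here; check the
+management chapter for the exact names:
+
+    import javax.management.MBeanServerConnection;
+    import javax.management.ObjectName;
+    import javax.management.remote.JMXConnector;
+    import javax.management.remote.JMXConnectorFactory;
+    import javax.management.remote.JMXServiceURL;
+
+    public class ForceFailover {
+       public static void main(String[] args) throws Exception {
+          // Connect to the live server's JMX endpoint (URL is hypothetical).
+          JMXConnector connector = JMXConnectorFactory.connect(new JMXServiceURL(
+                "service:jmx:rmi:///jndi/rmi://localhost:3000/jmxrmi"));
+          MBeanServerConnection mbsc = connector.getMBeanServerConnection();
+          // The ObjectName and operation name are assumptions, not confirmed API.
+          ObjectName server = ObjectName.getInstance("org.apache.activemq:module=Core,type=Server");
+          mbsc.invoke(server, "forceFailover", null, null);
+          connector.close();
+       }
+    }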
+
+You can also force the running live server to shut down when the old
+live server comes back up, allowing the original live server to take
+over automatically, by setting the following property in the
+`activemq-configuration.xml` configuration file:
+
+    <ha-policy>
+       <shared-store>
+          <slave>
+             <allow-failback>true</allow-failback>
+          </slave>
+       </shared-store>
+    </ha-policy>
+
+### All Shared Store Configuration
+
+The following table lists all the `ha-policy` configuration elements for
+HA strategy shared store for `master`:
+
+  name                            Description
+  ------------------------------- ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+  `failback-delay`                If a backup server is detected as being live, via the lock file, then the live server will announce itself as a backup and wait this amount of time (in ms) before starting as a live server
+  `failover-on-server-shutdown`   If set to true then when this server is stopped normally the backup will become live, assuming failover. If false then the backup server will remain passive. Note that if this is false and you want failover to occur then you can use the management API as explained at ?
+
+The following table lists all the `ha-policy` configuration elements for
+HA strategy Shared Store for `slave`:
+
+  name                            Description
+  ------------------------------- ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
+  `failover-on-server-shutdown`   In the case of a backup that has become live: if set to true then when this server is stopped normally the backup will become live, assuming failover. If false then the backup server will remain passive. Note that if this is false and you want failover to occur then you can use the management API as explained at ?
+  `allow-failback`                Whether a server will automatically stop when another server places a request to take over its place. The use case is when the backup has failed over.
+  `failback-delay`                After failover, when the slave has become live, this is set on the new live server. When starting, if a backup server is detected as being live, via the lock file, then the live server will announce itself as a backup and wait this amount of time (in ms) before starting as a live server; however this is unlikely since this backup has just stopped anyway. It is also used as the delay after failback before this backup will restart (if `allow-failback` is set to true).
+
+Colocated Backup Servers
+------------------------
+
+It is also possible when running standalone to colocate backup servers
+in the same JVM as another live server. Live servers can be configured
+to request another live server in the cluster to start a backup server
+in the same JVM, using either shared store or replication. The new
+backup server will inherit its configuration from the live server
+creating it, apart from its name (which will be set to
+`colocated_backup_n`, where n is the number of backups the server has
+created), its directories, and its connectors and acceptors, which are
+discussed later on in this chapter. A live server can also be configured
+to allow requests from backups and to limit how many backups it can
+start; this way you can evenly distribute backups around the cluster.
+This is configured via the `ha-policy` element in the
+`activemq-configuration.xml` file like so:
+
+    <ha-policy>
+       <replication>
+          <colocated>
+             <request-backup>true</request-backup>
+             <max-backups>1</max-backups>
+             <backup-request-retries>-1</backup-request-retries>
+             <backup-request-retry-interval>5000</backup-request-retry-interval>
+             <master/>
+             <slave/>
+          </colocated>
+       </replication>
+    </ha-policy>
+
+The above example is configured to use replication; in this case the
+`master` and `slave` configurations must match those for normal
+replication as in the previous chapter. `shared-store` is also supported.
+
+![ActiveMQ ha-colocated.png](images/ha-colocated.png)
+
+### Configuring Connectors and Acceptors
+
+If the HA Policy is colocated then connectors and acceptors will be
+inherited from the live server creating it and offset depending on the
+setting of `backup-port-offset` configuration element. If this is set to
+say 100 (which is the default) and a connector is using port 5445 then
+this will be set to 5545 for the first server created, 5645 for the
+second and so on.
+
+> **Note**
+>
+> For INVM connectors and acceptors the id will have
+> `colocated_backup_n` appended, where n is the backup server number.
+
+#### Remote Connectors
+
+It may be that some of the connectors configured are for external
+servers and hence should be excluded from the offset, for instance a
+connector used by the cluster connection to do quorum voting for a
+replicated backup server. These can be omitted from being offset by
+adding them to the `ha-policy` configuration like so:
+
+    <ha-policy>
+       <replication>
+          <colocated>
+             <excludes>
+                <connector-ref>remote-connector</connector-ref>
+             </excludes>
+    ...
+    </ha-policy>
+
+### Configuring Directories
+
+Directories for the journal, large messages and paging will be set
+according to what the HA strategy is. If shared store, the requesting
+server will notify the target server of which directories to use. If
+replication is configured then directories will be inherited from the
+creating server but have the new backup's name appended.
+
+The following table lists all the `ha-policy` configuration elements:
+
+  name                              Description
+  --------------------------------- ---------------------------------------------------------------------------------------
+  `request-backup`                  If true then the server will request a backup on another node
+  `backup-request-retries`          How many times the live server will try to request a backup; -1 means forever.
+  `backup-request-retry-interval`   How long to wait (in ms) between attempts to request a backup server.
+  `max-backups`                     How many backups this live server can start in response to requests from other live servers.
+  `backup-port-offset`              The offset to use for the Connectors and Acceptors when creating a new backup server.
+
+Scaling Down
+============
+
+An alternative to using live/backup groups is to configure scale-down.
+When configured for scale-down, a server can copy all its messages and
+transaction state to another live server. The advantage of this is that
+you don't need full backups to provide some form of HA; however there
+are disadvantages with this approach, the first being that it only deals
+with a server being stopped and not a server crash. The caveat here is
+that you can configure a backup to scale down, which covers the crash
+case as well (see below).
+
+Another disadvantage is that it is possible to lose message ordering.
+This happens in the following scenario: say you have 2 live servers and
+messages are distributed evenly between the servers from a single
+producer. If one of the servers scales down, then the messages sent back
+to the other server will be in the queue after the ones already there;
+so server 1 could have messages 1,3,5,7,9 and server 2 would have
+2,4,6,8,10, and if server 2 scales down the order in server 1 would be
+1,3,5,7,9,2,4,6,8,10.
+
+![ActiveMQ ha-scaledown.png](images/ha-scaledown.png)
+
+The configuration for a live server to scale down would be something
+like:
+
+    <ha-policy>
+       <live-only>
+          <scale-down>
+             <connectors>
+                <connector-ref>server1-connector</connector-ref>
+             </connectors>
+          </scale-down>
+       </live-only>
+    </ha-policy>
+
+In this instance the server is configured to use a specific connector to
+scale down. If a connector is not specified then the first INVM
+connector is chosen; this is to make scale down from a backup server
+easy to configure. It is also possible to use discovery to scale down;
+this would look like:
+
+    <ha-policy>
+       <live-only>
+          <scale-down>
+             <discovery-group>my-discovery-group</discovery-group>
+          </scale-down>
+       </live-only>
+    </ha-policy>
+
+Scale Down with groups
+----------------------
+
+It is also possible to configure servers to only scale down to servers
+that belong to the same group. This is done by configuring the group
+like so:
+
+    <ha-policy>
+       <live-only>
+          <scale-down>
+             ...
+             <group-name>my-group</group-name>
+          </scale-down>
+       </live-only>
+    </ha-policy>
+
+In this scenario only servers that belong to the group `my-group` will
+be scaled down to.
+
+Scale Down and Backups
+----------------------
+
+It is also possible to mix scale down with HA via backup servers. If a
+slave is configured to scale down, then after failover has occurred,
+instead of starting fully, the backup server will immediately scale down
+to another live server. The most appropriate configuration for this is
+the `colocated` approach; it means that as you bring up live servers
+they will automatically be backed up, and as live servers are shut
+down, their messages are made available on another live server. A
+typical configuration would look like:
+
+    <ha-policy>
+       <replication>
+          <colocated>
+             <backup-request-retries>44</backup-request-retries>
+             <backup-request-retry-interval>33</backup-request-retry-interval>
+             <max-backups>3</max-backups>
+             <request-backup>false</request-backup>
+             <backup-port-offset>33</backup-port-offset>
+             <master>
+                <group-name>purple</group-name>
+                <check-for-live-server>true</check-for-live-server>
+                <cluster-name>abcdefg</cluster-name>
+             </master>
+             <slave>
+                <group-name>tiddles</group-name>
+                <max-saved-replicated-journals-size>22</max-saved-replicated-journals-size>
+                <cluster-name>33rrrrr</cluster-name>
+                <restart-backup>false</restart-backup>
+                <scale-down>
+                   <!--a grouping of servers that can be scaled down to-->
+                   <group-name>boo!</group-name>
+                   <!--either a discovery group-->
+                   <discovery-group>wahey</discovery-group>
+                </scale-down>
+             </slave>
+          </colocated>
+       </replication>
+    </ha-policy>
+
+Scale Down and Clients
+----------------------
+
+When a server is stopping and preparing to scale down it will send a
+message to all its clients informing them which server it is scaling
+down to before disconnecting them. At this point the client will
+reconnect; however this will only succeed once the server has completed
+the scale-down. This is to ensure that any state such as queues or
+transactions is there for the client when it reconnects. The normal
+reconnect settings apply when the client is reconnecting, so these
+should be high enough to deal with the time needed to scale down.
+
+Failover Modes
+==============
+
+ActiveMQ defines two types of client failover:
+
+-   Automatic client failover
+
+-   Application-level client failover
+
+ActiveMQ also provides 100% transparent automatic reattachment of
+connections to the same server (e.g. in case of transient network
+problems). This is similar to failover, except it is reconnecting to the
+same server and is discussed in ?
+
+During failover, if the client has consumers on any non-persistent or
+temporary queues, those queues will be automatically recreated during
+failover on the backup node, since the backup node will not have any
+knowledge of non-persistent queues.
+
+Automatic Client Failover
+-------------------------
+
+ActiveMQ clients can be configured to receive knowledge of all live and
+backup servers, so that in event of connection failure at the client -
+live server connection, the client will detect this and reconnect to the
+backup server. The backup server will then automatically recreate any
+sessions and consumers that existed on each connection before failover,
+thus saving the user from having to hand-code manual reconnection logic.
+
+An ActiveMQ client detects connection failure when it has not received
+packets from the server within the time given by
+`client-failure-check-period`, as explained in section ?. If the client
+does not receive data in good time, it will assume the connection has
+failed and attempt failover. Also, if the socket is closed by the OS,
+usually when the server process is killed rather than the machine itself
+crashing, the client will failover straight away.
+
+ActiveMQ clients can be configured to discover the list of live-backup
+server groups in a number of different ways. They can be configured
+explicitly, or, probably the most common way, they can use *server
+discovery* to automatically discover the list. For full details on how
+to configure server discovery, please see ?. Alternatively, the clients
+can explicitly connect to a specific server and download the current
+servers and backups; see ?.
+
+To enable automatic client failover, the client must be configured to
+allow non-zero reconnection attempts (as explained in ?).
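+
+For illustration, here is a minimal sketch of doing this
+programmatically; it assumes the `setReconnectAttempts` and
+`setRetryInterval` setters exist on `ActiveMQConnectionFactory` as
+described in the client reconnection chapter:
+
+    void enableAutomaticFailover(ActiveMQConnectionFactory factory) {
+       // Non-zero reconnection attempts are required for automatic failover.
+       factory.setReconnectAttempts(-1); // -1 means retry forever
+       factory.setRetryInterval(2000);   // wait 2000 ms between attempts
+    }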
+
+By default failover will only occur after at least one connection has
+been made to the live server. In other words, by default, failover will
+not occur if the client fails to make an initial connection to the live
+server - in this case it will simply retry connecting to the live server
+according to the reconnect-attempts property and fail after this number
+of attempts.
+
+### Failing over on the Initial Connection
+
+Since the client does not learn about the full topology until after the
+first connection is made, there is a window where it does not know about
+the backup. If a failure happens at this point the client can only try
+reconnecting to the original live server. To configure how many attempts
+the client will make you can set the property `initialConnectAttempts`
+on the `ClientSessionFactoryImpl` or `ActiveMQConnectionFactory` or
+`initial-connect-attempts` in xml. The default for this is `0`, that is,
+try only once. Once the number of attempts has been made, an exception
+will be thrown.
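+
+Continuing the sketch above (again assuming the setter mirrors the
+`initial-connect-attempts` XML element):
+
+    void allowInitialConnectionRetries(ActiveMQConnectionFactory factory) {
+       // Retry the initial connection a few times so a failure in this
+       // window does not immediately surface as an exception.
+       factory.setInitialConnectAttempts(3); // default is 0: try only once
+    }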
+
+For examples of automatic failover with transacted and non-transacted
+JMS sessions, please see ? and ?.
+
+### A Note on Server Replication
+
+ActiveMQ does not replicate full server state between live and backup
+servers. When the new session is automatically recreated on the backup
+it won't have any knowledge of messages already sent or acknowledged in
+that session. Any in-flight sends or acknowledgements at the time of
+failover might also be lost.
+
+By replicating full server state, theoretically we could provide a 100%
+transparent seamless failover, which would avoid any lost messages or
+acknowledgements, however this comes at a great cost: replicating the
+full server state (including the queues, session, etc.). This would
+require replication of the entire server state machine; every operation
+on the live server would have to be replicated on the replica server(s) in
+the exact same global order to ensure a consistent replica state. This
+is extremely hard to do in a performant and scalable way, especially
+when one considers that multiple threads are changing the live server
+state concurrently.
+
+It is possible to provide full state machine replication using
+techniques such as *virtual synchrony*, but this does not scale well and
+effectively serializes all operations to a single thread, dramatically
+reducing concurrency.
+
+Other techniques for multi-threaded active replication exist such as
+replicating lock states or replicating thread scheduling but this is
+very hard to achieve at a Java level.
+
+Consequently it was decided that it was not worth massively reducing
+performance and concurrency for the sake of 100% transparent failover.
+Even without 100% transparent failover, it is simple to guarantee *once
+and only once* delivery, even in the case of failure, by using a
+combination of duplicate detection and retrying of transactions. However
+this is not 100% transparent to the client code.
+
+### Handling Blocking Calls During Failover
+
+If the client code is in a blocking call to the server, waiting for a
+response to continue its execution, when failover occurs, the new
+session will not have any knowledge of the call that was in progress.
+This call might otherwise hang forever, waiting for a response that
+will never come.
+
+To prevent this, ActiveMQ will unblock any blocking calls that were in
+progress at the time of failover by making them throw a
+`javax.jms.JMSException` (if using JMS), or an `ActiveMQException` with
+error code `ActiveMQException.UNBLOCKED`. It is up to the client code to
+catch this exception and retry any operations if desired.
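+
+A minimal JMS sketch of this pattern follows; whether to retry is an
+application decision, and retries should be combined with duplicate
+detection (described below) if the operation must not be applied twice:
+
+    import javax.jms.JMSException;
+    import javax.jms.Message;
+    import javax.jms.MessageProducer;
+
+    void sendWithRetry(MessageProducer producer, Message message) throws JMSException {
+       try {
+          producer.send(message); // may be unblocked by failover
+       } catch (JMSException e) {
+          // The blocking call was unblocked; the send may or may not have
+          // reached the old live server, so retry it on the new one.
+          producer.send(message);
+       }
+    }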
+
+If the method being unblocked is a call to commit(), or prepare(), then
+the transaction will be automatically rolled back and ActiveMQ will
+throw a `javax.jms.TransactionRolledBackException` (if using JMS), or an
+`ActiveMQException` with error code
+`ActiveMQException.TRANSACTION_ROLLED_BACK` if using the core API.
+
+### Handling Failover With Transactions
+
+If the session is transactional and messages have already been sent or
+acknowledged in the current transaction, then the server cannot be sure
+that messages sent or acknowledgements have not been lost during the
+failover.
+
+Consequently the transaction will be marked as rollback-only, and any
+subsequent attempt to commit it will throw a
+`javax.jms.TransactionRolledBackException` (if using JMS), or an
+`ActiveMQException` with error code
+`ActiveMQException.TRANSACTION_ROLLED_BACK` if using the core API.
+
+> **Warning**
+>
+> The caveat to this rule is when XA is used either via JMS or through
+> the core API. If 2 phase commit is used and prepare has already been
+> called then rolling back could cause a `HeuristicMixedException`.
+> Because of this the commit will throw an `XAException.XA_RETRY`
+> exception. This informs the Transaction Manager that it should retry
+> the commit at some later point in time, a side effect of this is that
+> any non persistent messages will be lost. To avoid this use persistent
+> messages when using XA. With acknowledgements this is not an issue
+> since they are flushed to the server before prepare gets called.
+
+It is up to the user to catch the exception, and perform any client side
+local rollback code as necessary. There is no need to manually rollback
+the session - it is already rolled back. The user can then just retry
+the transactional operations again on the same session.
+
+ActiveMQ ships with a fully functioning example demonstrating how to do
+this; please see ?
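+
+A sketch of this retry pattern for a transacted JMS session (the method
+and variable names are illustrative):
+
+    import java.util.List;
+    import javax.jms.JMSException;
+    import javax.jms.Message;
+    import javax.jms.MessageProducer;
+    import javax.jms.Session;
+    import javax.jms.TransactionRolledBackException;
+
+    void sendBatch(Session session, MessageProducer producer, List<Message> batch)
+          throws JMSException {
+       while (true) {
+          try {
+             for (Message m : batch) {
+                producer.send(m);
+             }
+             session.commit();
+             return;
+          } catch (TransactionRolledBackException e) {
+             // Failover rolled the transaction back. The session itself is
+             // already rolled back, so simply retry the same operations on it.
+          }
+       }
+    }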
+
+If failover occurs when a commit call is being executed, the server, as
+previously described, will unblock the call to prevent a hang, since no
+response will come back. In this case it is not easy for the client to
+determine whether the transaction commit was actually processed on the
+live server before failure occurred.
+
+> **Note**
+>
+> If XA is being used either via JMS or through the core API then an
+> `XAException.XA_RETRY` is thrown. This is to inform Transaction
+> Managers that a retry should occur at some point. At some later point
+> in time the Transaction Manager will retry the commit. If the original
+> commit has not occurred then it will still exist and be committed; if
+> it does not exist then it is assumed to have been committed, although
+> the transaction manager may log a warning.
+
+To remedy this, the client can simply enable duplicate detection (?) in
+the transaction, and retry the transaction operations again after the
+call is unblocked. If the transaction had indeed been committed on the
+live server successfully before failover, then when the transaction is
+retried, duplicate detection will ensure that any durable messages
+resent in the transaction will be ignored on the server to prevent them
+getting sent more than once.
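+
+For example, here is a hedged sketch; the duplicate-ID property name
+`_AMQ_DUPL_ID` is an assumption, so check the duplicate detection
+chapter for the authoritative constant:
+
+    import javax.jms.JMSException;
+    import javax.jms.MessageProducer;
+    import javax.jms.Session;
+    import javax.jms.TextMessage;
+
+    void sendWithDuplicateDetection(Session session, MessageProducer producer,
+                                    String payload, String logicalId) throws JMSException {
+       TextMessage message = session.createTextMessage(payload);
+       // Use a stable ID per logical message and reuse it on any retry, so a
+       // resend after failover is ignored by the server as a duplicate.
+       message.setStringProperty("_AMQ_DUPL_ID", logicalId);
+       producer.send(message);
+    }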
+
+> **Note**
+>
+> By catching the rollback exceptions and retrying, catching unblocked
+> calls and enabling duplicate detection, once and only once delivery
+> guarantees for messages can be provided in the case of failure,
+> guaranteeing 100% no loss or duplication of messages.
+
+### Handling Failover With Non-Transactional Sessions
+
+If the session is non-transactional, messages or acknowledgements can be
+lost in the event of failover.
+
+If you wish to provide *once and only once* delivery guarantees for
+non-transacted sessions too, enable duplicate detection and catch
+unblocked-call exceptions as described in ?
+
+Getting Notified of Connection Failure
+--------------------------------------
+
+JMS provides a standard mechanism for getting notified asynchronously of
+connection failure: `javax.jms.ExceptionListener`. Please consult the JMS
+javadoc or any good JMS tutorial for more information on how to use
+this.
+
+The ActiveMQ core API also provides a similar feature in the form of the
+class `org.apache.activemq.core.client.SessionFailureListener`.
+
+Any ExceptionListener or SessionFailureListener instance will always be
+called by ActiveMQ in the event of connection failure, **irrespective**
+of whether the connection was successfully failed over, reconnected or
+reattached. However, you can find out whether a reconnect or reattach
+has happened, either from the `failedOver` flag passed in on the
+`connectionFailed` call on `SessionFailureListener`, or by inspecting
+the error code on the `javax.jms.JMSException`, which will be one of the
+following:
+
+  error code   Description
+  ------------ ---------------------------------------------------------------------------
+  FAILOVER     Failover has occurred and we have successfully reattached or reconnected.
+  DISCONNECT   No failover has occurred and we are disconnected.
+
+  : JMSException error codes
+
+Application-Level Failover
+--------------------------
+
+In some cases you may not want automatic client failover, and prefer to
+handle any connection failure yourself, and code your own manually
+reconnection logic in your own failure handler. We define this as
+*application-level* failover, since the failover is handled at the user
+application level.
+
+To implement application-level failover, if you're using JMS then you
+need to set an `ExceptionListener` class on the JMS connection. The
+`ExceptionListener` will be called by ActiveMQ in the event that
+connection failure is detected. In your `ExceptionListener`, you would
+close your old JMS connections, potentially look up new connection
+factory instances from JNDI and create new connections. In this case
+you may well be using
+[HA-JNDI](http://www.jboss.org/community/wiki/JBossHAJNDIImpl) to ensure
+that the new connection factory is looked up from a different server.
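+
+A minimal sketch of such a listener; the JNDI name and the recovery
+steps are hypothetical:
+
+    import javax.jms.Connection;
+    import javax.jms.ConnectionFactory;
+    import javax.naming.InitialContext;
+
+    connection.setExceptionListener(exception -> {
+       try {
+          // Tear down the failed connection and rebuild from JNDI, which may
+          // resolve to a different server (e.g. via HA-JNDI).
+          connection.close();
+          InitialContext ctx = new InitialContext();
+          ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
+          Connection newConnection = cf.createConnection();
+          // ... recreate sessions, producers and consumers here ...
+          newConnection.start();
+       } catch (Exception e) {
+          // Log and schedule another recovery attempt as appropriate.
+       }
+    });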
+
+For a working example of application-level failover, please see ?.
+
+If you are using the core API, then the procedure is very similar: you
+would set a `FailureListener` on the core `ClientSession` instances.
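+
+A corresponding core API sketch; the `beforeReconnect` callback and the
+exact signatures are assumptions based on the listener class named
+above:
+
+    session.addFailureListener(new SessionFailureListener() {
+       @Override
+       public void connectionFailed(ActiveMQException exception, boolean failedOver) {
+          // failedOver indicates whether the session reconnected to a backup.
+       }
+
+       @Override
+       public void beforeReconnect(ActiveMQException exception) {
+          // Called before the session is reconnected or reattached.
+       }
+    });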