Posted to commits@activemq.apache.org by bu...@apache.org on 2016/11/18 20:22:25 UTC

svn commit: r1001331 - in /websites/production/activemq/content: cache/main.pageCache leveldb-store.html replicated-leveldb-store.html

Author: buildbot
Date: Fri Nov 18 20:22:25 2016
New Revision: 1001331

Log:
Production update by buildbot for activemq

Modified:
    websites/production/activemq/content/cache/main.pageCache
    websites/production/activemq/content/leveldb-store.html
    websites/production/activemq/content/replicated-leveldb-store.html

Modified: websites/production/activemq/content/cache/main.pageCache
==============================================================================
Binary files - no diff available.

Modified: websites/production/activemq/content/leveldb-store.html
==============================================================================
--- websites/production/activemq/content/leveldb-store.html (original)
+++ websites/production/activemq/content/leveldb-store.html Fri Nov 18 20:22:25 2016
@@ -81,7 +81,7 @@
   <tbody>
         <tr>
         <td valign="top" width="100%">
-<div class="wiki-content maincontent"><div class="confluence-information-macro confluence-information-macro-information"><p class="title">Version Compatibility</p><span class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>Available in ActiveMQ 5.8.0 and newer</p></div></div><p>The LevelDB Store is a file based persistence database that is local to the message broker that is using it. It has been optimized to provide even faster persistence than KahaDB. It's similar to KahahDB but instead of using a custom B-Tree implementation to index the write ahead logs, it uses <a shape="rect" class="external-link" href="https://code.google.com/p/leveldb/" rel="nofollow">LevelDB</a> based indexes which have several nice properties due to the 'append only' files access patterns :</p><ul><li>Fast updates (No need to do random disk updates)</li><li>Concurrent reads</li><li>Fast index snapshots using hard links</
 li></ul><p>Both KahaDB and the LevelDB store have to do periodic garbage collection cycles to determine which log files can deleted. In the case of KahaDB, this can be quite expensive as you increase the amount of data stored and can cause read/write stalls while the collection occurs. The LevelDB store uses a much cheaper algorithm to determine when log files can be collected and avoids those stalls.</p><h2 id="LevelDBStore-Configuration">Configuration</h2><p>You can configure ActiveMQ to use LevelDB for its persistence adapter - like below :</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
+<div class="wiki-content maincontent"><div class="confluence-information-macro confluence-information-macro-warning"><p class="title">Warning</p><span class="aui-icon aui-icon-small aui-iconfont-error confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>The LevelDB store has been deprecated and is no longer supported or recommended for use. The recommended store is <a shape="rect" href="kahadb.html">KahaDB</a></p></div></div><div class="confluence-information-macro confluence-information-macro-information"><p class="title">Version Compatibility</p><span class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>Available in ActiveMQ 5.8.0 and newer</p></div></div><p>The LevelDB Store is a file based persistence database that is local to the message broker that is using it. It has been optimized to provide even faster persistence than KahaDB. It's similar to KahahD
 B but instead of using a custom B-Tree implementation to index the write ahead logs, it uses <a shape="rect" class="external-link" href="https://code.google.com/p/leveldb/" rel="nofollow">LevelDB</a> based indexes which have several nice properties due to the 'append only' files access patterns :</p><ul><li>Fast updates (No need to do random disk updates)</li><li>Concurrent reads</li><li>Fast index snapshots using hard links</li></ul><p>Both KahaDB and the LevelDB store have to do periodic garbage collection cycles to determine which log files can deleted. In the case of KahaDB, this can be quite expensive as you increase the amount of data stored and can cause read/write stalls while the collection occurs. The LevelDB store uses a much cheaper algorithm to determine when log files can be collected and avoids those stalls.</p><h2 id="LevelDBStore-Configuration">Configuration</h2><p>You can configure ActiveMQ to use LevelDB for its persistence adapter - like below :</p><div class="co
 de panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
 <pre class="brush: java; gutter: false; theme: Default" style="font-size:12px;">  &lt;broker brokerName="broker" ... &gt;
     ...
     &lt;persistenceAdapter&gt;

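The configuration snippet in the hunk above is cut off by the diff context. For reference, a minimal sketch of what a complete LevelDB adapter configuration typically looks like; the directory value is illustrative:

  <broker brokerName="broker" ... >
    ...
    <persistenceAdapter>
      <!-- the levelDB element selects the adapter; directory names where its
           index and write-ahead log files are kept -->
      <levelDB directory="activemq-data"/>
    </persistenceAdapter>
    ...
  </broker>
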
Modified: websites/production/activemq/content/replicated-leveldb-store.html
==============================================================================
--- websites/production/activemq/content/replicated-leveldb-store.html (original)
+++ websites/production/activemq/content/replicated-leveldb-store.html Fri Nov 18 20:22:25 2016
@@ -81,7 +81,7 @@
   <tbody>
         <tr>
         <td valign="top" width="100%">
-<div class="wiki-content maincontent"><h2 id="ReplicatedLevelDBStore-Synopsis">Synopsis</h2><p>The Replicated LevelDB Store uses Apache ZooKeeper to pick a master from a set of broker nodes configured to replicate a LevelDB Store. Then synchronizes all slave LevelDB Stores with the master keeps them up to date by replicating all updates from the master.</p><p>The Replicated LevelDB Store uses the same data files as a LevelDB Store, so you can switch a broker configuration between replicated and non replicated whenever you want.</p><div class="confluence-information-macro confluence-information-macro-information"><p class="title">Version Compatibility</p><span class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>Available as of ActiveMQ 5.9.0.</p></div></div><h2 id="ReplicatedLevelDBStore-Howitworks.">How it works.</h2><p><span class="confluence-embedded-file-wrapper"><img class="confluence-embedd
 ed-image" src="replicated-leveldb-store.data/replicated-leveldb-store.png"></span></p><p>It uses <a shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache ZooKeeper</a> to coordinate which node in the cluster becomes the master. The elected master broker node starts and accepts client connections. The other nodes go into slave mode and connect the the master and synchronize their persistent state /w it. The slave nodes do not accept client connections. All persistent operations are replicated to the connected slaves. If the master dies, the slaves with the latest update gets promoted to become the master. The failed node can then be brought back online and it will go into slave mode.</p><p>All messaging operations which require a sync to disk will wait for the update to be replicated to a quorum of the nodes before completing. So if you configure the store with <code>replicas="3"</code> then the quorum size is <code>(3/2+1)=2</code>. The master will store the 
 update locally and wait for 1 other slave to store the update before reporting success. Another way to think about it is that store will do synchronous replication to a quorum of the replication nodes and asynchronous replication replication to any additional nodes.</p><p>When a new master is elected, you also need at least a quorum of nodes online to be able to find a node with the lastest updates. The node with the lastest updates will become the new master. Therefore, it's recommend that you run with at least 3 replica nodes so that you can take one down without suffering a service outage.</p><h3 id="ReplicatedLevelDBStore-DeploymentTips">Deployment Tips</h3><p>Clients should be using the <a shape="rect" href="failover-transport-reference.html">Failover Transport</a> to connect to the broker nodes in the replication cluster. e.g. using a URL something like the following:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
+<div class="wiki-content maincontent"><div class="confluence-information-macro confluence-information-macro-warning"><p class="title">Warning</p><span class="aui-icon aui-icon-small aui-iconfont-error confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>The LevelDB store has been deprecated and is no longer supported or recommended for use. The recommended store is <a shape="rect" href="kahadb.html">KahaDB</a></p></div></div><h2 id="ReplicatedLevelDBStore-Synopsis">Synopsis</h2><p>The Replicated LevelDB Store uses Apache ZooKeeper to pick a master from a set of broker nodes configured to replicate a LevelDB Store. Then synchronizes all slave LevelDB Stores with the master keeps them up to date by replicating all updates from the master.</p><p>The Replicated LevelDB Store uses the same data files as a LevelDB Store, so you can switch a broker configuration between replicated and non replicated whenever you want.</p><div class="confluence-informa
 tion-macro confluence-information-macro-information"><p class="title">Version Compatibility</p><span class="aui-icon aui-icon-small aui-iconfont-info confluence-information-macro-icon"></span><div class="confluence-information-macro-body"><p>Available as of ActiveMQ 5.9.0.</p></div></div><h2 id="ReplicatedLevelDBStore-Howitworks.">How it works.</h2><p><span class="confluence-embedded-file-wrapper"><img class="confluence-embedded-image" src="replicated-leveldb-store.data/replicated-leveldb-store.png"></span></p><p>It uses <a shape="rect" class="external-link" href="http://zookeeper.apache.org/">Apache ZooKeeper</a> to coordinate which node in the cluster becomes the master. The elected master broker node starts and accepts client connections. The other nodes go into slave mode and connect the the master and synchronize their persistent state /w it. The slave nodes do not accept client connections. All persistent operations are replicated to the connected slaves. If the master dies, t
 he slaves with the latest update gets promoted to become the master. The failed node can then be brought back online and it will go into slave mode.</p><p>All messaging operations which require a sync to disk will wait for the update to be replicated to a quorum of the nodes before completing. So if you configure the store with <code>replicas="3"</code> then the quorum size is <code>(3/2+1)=2</code>. The master will store the update locally and wait for 1 other slave to store the update before reporting success. Another way to think about it is that store will do synchronous replication to a quorum of the replication nodes and asynchronous replication replication to any additional nodes.</p><p>When a new master is elected, you also need at least a quorum of nodes online to be able to find a node with the lastest updates. The node with the lastest updates will become the new master. Therefore, it's recommend that you run with at least 3 replica nodes so that you can take one down wit
 hout suffering a service outage.</p><h3 id="ReplicatedLevelDBStore-DeploymentTips">Deployment Tips</h3><p>Clients should be using the <a shape="rect" href="failover-transport-reference.html">Failover Transport</a> to connect to the broker nodes in the replication cluster. e.g. using a URL something like the following:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
 <pre class="brush: java; gutter: false; theme: Default" style="font-size:12px;">failover:(tcp://broker1:61616,tcp://broker2:61616,tcp://broker3:61616)
 </pre>
 </div></div><p>You should run at least 3 ZooKeeper server nodes so that the ZooKeeper service is highly available. Don't overcommit your ZooKeeper servers. An overworked ZooKeeper might start thinking live replication nodes have gone offline due to delays in processing their 'keep-alive' messages.</p><p>For best results, make sure you explicitly configure the hostname attribute with a hostname or IP address that the other cluster members can use to reach the node. The automatically determined hostname is not always reachable by the other cluster members, which prevents slaves from establishing a replication session with the master.</p><h2 id="ReplicatedLevelDBStore-Configuration">Configuration</h2><p>You can configure ActiveMQ to use LevelDB for its persistence adapter, like below:</p><div class="code panel pdl" style="border-width: 1px;"><div class="codeContent panelContent pdl">
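
The configuration example is again cut off by the diff context. A minimal sketch of a replicated adapter configuration follows; the attribute names are the adapter's documented ones, while the directory, ZooKeeper addresses, zkPath, and hostname values are illustrative:

  <persistenceAdapter>
    <!-- replicas counts every node in the replication cluster;
         the write quorum is replicas/2 + 1 (integer division) -->
    <replicatedLevelDB
        directory="activemq-data"
        replicas="3"
        bind="tcp://0.0.0.0:0"
        zkAddress="zoo1:2181,zoo2:2181,zoo3:2181"
        zkPath="/activemq/leveldb-stores"
        hostname="broker1"/>
  </persistenceAdapter>

With replicas="3" the quorum is 3/2+1 = 2, so a synced write completes once the master and one slave have stored it; a five-node cluster would need three.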