Posted to commits@storm.apache.org by sr...@apache.org on 2018/05/15 15:20:56 UTC

[3/9] storm-site git commit: Rebuild site

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html b/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html
index 26027ed..bba5870 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-elasticsearch.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
+<div class="documentation-content"><h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
 
 <p>EsIndexBolt, EsPercolateBolt and EsState allow users to stream data from Storm into Elasticsearch directly.
   For a detailed description, please refer to the following.</p>
@@ -245,7 +245,7 @@ You can refer implementation of DefaultEsTupleMapper to see how to implement you
 <li>Sriharsha Chintalapani (<a href="https://github.com/harshach">@harshach</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html b/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html
index 7f7720d..4fe5c12 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-eventhubs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
+<div class="documentation-content"><p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
@@ -178,7 +178,7 @@ If you want to send messages to all partitions, use &quot;-1&quot; as partitionI
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-hbase.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-hbase.html b/content/releases/2.0.0-SNAPSHOT/storm-hbase.html
index b88d9bc..8337c96 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-hbase.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-hbase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
 
 <h2 id="usage">Usage</h2>
 
@@ -395,7 +395,7 @@ Word: 'watermelon', Count: 6806
         <span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopology</span><span class="o">(</span><span class="n">topoName</span><span class="o">,</span> <span class="n">config</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">createTopology</span><span class="o">());</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html b/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html
index 6219a43..bbe6c8c 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-hdfs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm components for interacting with HDFS file systems</p>
+<div class="documentation-content"><p>Storm components for interacting with HDFS file systems</p>
 
 <h1 id="hdfs-bolt">HDFS Bolt</h1>
 
@@ -745,7 +745,7 @@ However, the later mechanism is deprecated as it does not allow multiple Hdfs sp
 </tbody></table>
 
 <hr>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-hive.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-hive.html b/content/releases/2.0.0-SNAPSHOT/storm-hive.html
index 0b17b56..9f1ab32 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-hive.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-hive.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Hive offers a streaming API that allows data to be written continuously into Hive. The incoming data 
+<div class="documentation-content"><p>Hive offers a streaming API that allows data to be written continuously into Hive. The incoming data 
   can be continuously committed in small batches of records into an existing Hive partition or table. Once the data
   is committed it is immediately visible to all Hive queries. More info on the Hive Streaming API is at 
   <a href="https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest">https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest</a></p>
@@ -303,7 +303,7 @@ User should make sure that Tuple field names are matched to the table column nam
 
    <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
    <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
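
The storm-hive.html page wrapped above describes committing data to Hive in small batches through the Hive Streaming API, and its hunk shows the Trident HiveState wiring. A minimal core-Storm HiveBolt setup could look like the sketch below; the HiveBolt, HiveOptions and DelimitedRecordHiveMapper classes come from the storm-hive module, while the metastore URI, table and column names here are placeholders.

    import org.apache.storm.hive.bolt.HiveBolt;
    import org.apache.storm.hive.bolt.mapper.DelimitedRecordHiveMapper;
    import org.apache.storm.hive.common.HiveOptions;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.tuple.Fields;

    public class HiveBoltSketch {
        public static void main(String[] args) {
            // Map tuple fields onto Hive table columns (field/column names are placeholders).
            DelimitedRecordHiveMapper mapper = new DelimitedRecordHiveMapper()
                    .withColumnFields(new Fields("id", "name", "phone"));

            // Small batches are committed continuously; tune transactions per batch and batch size.
            HiveOptions options = new HiveOptions("thrift://localhost:9083", "default", "contacts", mapper)
                    .withTxnsPerBatch(10)
                    .withBatchSize(1000)
                    .withIdleTimeout(10);

            TopologyBuilder builder = new TopologyBuilder();
            // "source" stands in for a spout that emits tuples with the fields mapped above.
            builder.setBolt("hive-bolt", new HiveBolt(options), 1).shuffleGrouping("source");
        }
    }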

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html b/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html
index 38f940c..60a6072 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-jdbc.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
+<div class="documentation-content"><p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
 to either insert storm tuples in a database table or to execute select queries against a database and enrich tuples 
 in a storm topology.</p>
 
@@ -399,7 +399,7 @@ storm jar org.apache.storm.jdbc.topology.UserPersistanceTopology <dataSourceClas
 <div class="highlight"><pre><code class="language-" data-lang="">select * from user;
 </code></pre></div>
 <p>For trident you can view <code>org.apache.storm.jdbc.topology.UserPersistanceTridentTopology</code>.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html b/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html
index 71148a3..dff0444 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-jms-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
+<div class="documentation-content"><h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
 
 <p>The storm-jms source code contains an example project (in the &quot;examples&quot; directory) that 
 builds a multi-bolt/multi-spout topology (depicted below) that uses the JMS Spout and JMS Bolt components.</p>
@@ -248,7 +248,7 @@ DEBUG (backtype.storm.contrib.jms.example.GenericBolt:75) - [ANOTHER_BOLT] ACKin
 DEBUG (backtype.storm.contrib.jms.spout.JmsSpout:251) - JMS Message acked: ID:budreau.home-60117-1321735025796-0:0:1:1:1
 </code></pre></div>
 <p>The topology will run for 2 minutes, then gracefully shut down.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html b/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html
index 1156492..9867bea 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-jms-spring.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
+<div class="documentation-content"><h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
 
 <p>Create a Spring applicationContext.xml file that defines one or more destination (topic/queue) beans, as well as a connection factory.</p>
 <div class="highlight"><pre><code class="language-" data-lang=""><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
@@ -163,7 +163,7 @@
         <span class="na">brokerURL=</span><span class="s">"tcp://localhost:61616"</span> <span class="nt">/&gt;</span>
 
 <span class="nt">&lt;/beans&gt;</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-jms.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-jms.html b/content/releases/2.0.0-SNAPSHOT/storm-jms.html
index c32754d..d89c265 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-jms.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-jms.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about-storm-jms">About Storm JMS</h2>
+<div class="documentation-content"><h2 id="about-storm-jms">About Storm JMS</h2>
 
 <p>Storm JMS is a generic framework for integrating JMS messaging within the Storm framework.</p>
 
@@ -169,7 +169,7 @@
 <p><a href="storm-jms-example.html">Example Topology</a></p>
 
 <p><a href="storm-jms-spring.html">Using Spring JMS</a></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html b/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html
index da69dea..8d34721 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-kafka-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
+<div class="documentation-content"><h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
 
 <p>This includes the new Apache Kafka consumer API.</p>
 
@@ -530,7 +530,7 @@ and Kafka 0.10.1.0 <a href="https://kafka.apache.org/0101/javadoc/index.html?org
 <td><code>UNCOMMITTED_LATEST</code></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-kafka.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-kafka.html b/content/releases/2.0.0-SNAPSHOT/storm-kafka.html
index f55d072..ce5ad77 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-kafka.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-kafka.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
+<div class="documentation-content"><p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
 
 <h2 id="spouts">Spouts</h2>
 
@@ -504,7 +504,7 @@ Section &quot;Important configuration properties for the producer&quot; for more
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html b/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html
index e0b7868..5f19b6a 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-metrics-profiling-internal-actions.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-metrics-for-profiling-various-storm-internal-actions">Storm Metrics for Profiling Various Storm Internal Actions</h1>
+<div class="documentation-content"><h1 id="storm-metrics-for-profiling-various-storm-internal-actions">Storm Metrics for Profiling Various Storm Internal Actions</h1>
 
 <p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include thrift rpc calls and http requests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
 
@@ -213,7 +213,7 @@ supervisor.childopts: "-Xmx256m -Dcom.sun.management.jmxremote.port=3337 -Dcom.s
 <p>For more information about io.dropwizard.metrics and metrics-clojure packages please reference their original documentation:
 - <a href="https://dropwizard.github.io/metrics/3.1.0/">https://dropwizard.github.io/metrics/3.1.0/</a>
 - <a href="http://metrics-clojure.readthedocs.org/en/latest/">http://metrics-clojure.readthedocs.org/en/latest/</a></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html b/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html
index 1897532..94fa6d0 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-metricstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A metric store (<a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/metricstore/MetricStore.java"><code>MetricStore</code></a>) interface was added 
+<div class="documentation-content"><p>A metric store (<a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/metricstore/MetricStore.java"><code>MetricStore</code></a>) interface was added 
 to Nimbus to allow storing metric information (<a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/metricstore/Metric.java"><code>Metric</code></a>) 
 to a database.  The default implementation 
 (<a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/metricstore/rocksdb/RocksDbStore.java"><code>RocksDbStore</code></a>) is using RocksDB, 
@@ -331,7 +331,7 @@ fields are as follows:</p>
 <td>The sum of the metric values</td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html b/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html
index 6ce63ed..bc74d28 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-mongodb.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology either to insert storm tuples into a database collection or to execute update queries against a database collection in a storm topology.</p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology either to insert storm tuples into a database collection or to execute update queries against a database collection in a storm topology.</p>
 
 <h2 id="insert-into-database">Insert into Database</h2>
 
@@ -417,7 +417,7 @@
 
         <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">state</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span> <span class="k">new</span> <span class="n">MapGet</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"sum"</span><span class="o">))</span>
                 <span class="o">.</span><span class="na">each</span><span class="o">(</span><span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">,</span> <span class="s">"sum"</span><span class="o">),</span> <span class="k">new</span> <span class="n">PrintFunction</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html b/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html
index cf0f260..d68cc5e 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-mqtt.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about">About</h2>
+<div class="documentation-content"><h2 id="about">About</h2>
 
 <p>MQTT is a lightweight publish/subscribe protocol frequently used in IoT applications.</p>
 
@@ -483,7 +483,7 @@ keystore/truststore need to be available on all worker nodes where the spout/bol
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-redis.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-redis.html b/content/releases/2.0.0-SNAPSHOT/storm-redis.html
index e043595..9f88575 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-redis.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-redis.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
 
 <p>Storm-redis uses Jedis as its Redis client.</p>
 
@@ -382,7 +382,7 @@
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">state</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">RedisClusterStateQuerier</span><span class="o">(</span><span class="n">lookupMapper</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">"columnName"</span><span class="o">,</span><span class="s">"columnValue"</span><span class="o">));</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-solr.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-solr.html b/content/releases/2.0.0-SNAPSHOT/storm-solr.html
index 90d52e2..0a32c3c 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-solr.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-solr.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
+<div class="documentation-content"><p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
to stream the contents of storm tuples to index Solr collections.</p>
 
 <h1 id="index-storm-tuples-into-a-solr-collection">Index Storm tuples into a Solr collection</h1>
@@ -308,7 +308,7 @@ and then generate an uber jar with all the dependencies.</p>
 <p>You can also see the results by opening the Apache Solr UI and pasting the <code>id</code> pattern in the <code>q</code> textbox in the queries page</p>
 
 <p><a href="http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query">http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query</a></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html b/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html
index 43fa1db..74214d3 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-sql-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page shows how to use Storm SQL through an example of processing Apache logs. 
+<div class="documentation-content"><p>This page shows how to use Storm SQL through an example of processing Apache logs. 
This page is written in a &quot;how-to&quot; style so you can follow the steps and learn how to utilize Storm SQL step by step. </p>
 
 <h2 id="preparation">Preparation</h2>
@@ -379,7 +379,7 @@ This page assumes that GetTime2 is in classpath, for simplicity.</p>
(You may have noticed that the types of some of the output fields are different from the output table schema.)</p>
 
 <p>Its behavior is subject to change when Storm SQL changes its backend API to the core (tuple-by-tuple, low-level or high-level) one.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html b/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html
index 9d32886..5eb132b 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-sql-internal.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes the design and the implementation of the Storm SQL integration.</p>
+<div class="documentation-content"><p>This page describes the design and the implementation of the Storm SQL integration.</p>
 
 <h2 id="overview">Overview</h2>
 
@@ -195,7 +195,7 @@ You can use <code>--jars</code> or <code>--artifacts</code> option to <code>stor
 (Use <code>--artifacts</code> if your data source JARs are available in Maven repository since it handles transitive dependencies.)</p>
 
 <p>Please refer to the <a href="storm-sql.html">Storm SQL integration</a> page for how to do it.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html b/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html
index d4da8ec..3ba85bc 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-sql-reference.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
+<div class="documentation-content"><p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
Storm SQL also adopts the Rex compiler from Calcite, so Storm SQL is expected to handle the SQL dialect recognized by Calcite&#39;s default SQL parser. </p>
 
 <p>This page is based on the Calcite SQL reference on its website; it removes the areas Storm SQL doesn&#39;t support and adds the areas Storm SQL does support.</p>
@@ -2101,7 +2101,7 @@ You can use below as working reference for <code>--artifacts</code> option, and
 
 <p>Also, HDFS configuration files should be provided.
 You can put the <code>core-site.xml</code> and <code>hdfs-site.xml</code> into the <code>conf</code> directory in the Storm installation directory.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-sql.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-sql.html b/content/releases/2.0.0-SNAPSHOT/storm-sql.html
index 403b662..cfec1ae 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-sql.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-sql.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles on streaming analytics, but it also opens up opportunities to unify batch data processing, like <a href="///hive.apache.org">Apache Hive</a>, with real-time streaming data analytics.</p>
+<div class="documentation-content"><p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles on streaming analytics, but it also opens up opportunities to unify batch data processing, like <a href="///hive.apache.org">Apache Hive</a>, with real-time streaming data analytics.</p>
 
 <p>At a very high level, StormSQL compiles the SQL queries to <a href="Trident-API-Overview.html">Trident</a> topologies and executes them in Storm clusters. This document provides information on how to use StormSQL as an end user. For people who are interested in more details about the design and the implementation of StormSQL, please refer to <a href="storm-sql-internal.html">this</a> page.</p>
 
@@ -284,7 +284,7 @@ LogicalTableModify(table=[[LARGE_ORDERS]], operation=[INSERT], updateColumnList=
 <li>Windowing is yet to be implemented.</li>
 <li>Aggregation and join are not supported (waiting for <code>Streaming SQL</code> to be matured)</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html b/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html
index 9292700..4e03824 100644
--- a/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html
+++ b/content/releases/2.0.0-SNAPSHOT/windows-users-guide.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes how to set up an environment for Apache Storm on Windows.</p>
+<div class="documentation-content"><p>This page describes how to set up an environment for Apache Storm on Windows.</p>
 
 <h2 id="symbolic-link">Symbolic Link</h2>
 
@@ -172,7 +172,7 @@ If you don&#39;t want to execute Storm processes directly (not on command prompt
 on Nimbus and all of the Supervisor nodes.  This will also disable features that require symlinks.  Currently this is only downloading
 dependent blobs, but may change in the future.  Some topologies may rely on symbolic links to resources in the current working directory of the worker that are
created as a convenience, so it is not a 100% backwards compatible change.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Acking-framework-implementation.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Acking-framework-implementation.html b/content/releases/current/Acking-framework-implementation.html
index a9108de..28ec8bc 100644
--- a/content/releases/current/Acking-framework-implementation.html
+++ b/content/releases/current/Acking-framework-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
+<div class="documentation-content"><p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#39;s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
 
 <p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki -- this explains the internal details.</p>
 
@@ -180,7 +180,7 @@
 <p>Internally, it holds several HashMaps (&#39;buckets&#39;) of its own, each holding a cohort of records that will expire at the same time.  Let&#39;s call the longest-lived bucket death row, and the most recent the nursery. Whenever a value is <code>.put()</code> to the RotatingMap, it is relocated to the nursery -- and removed from any other bucket it might have been in (effectively resetting its death clock).</p>
 
 <p>Whenever its owner calls <code>.rotate()</code>, the RotatingMap advances each cohort one step further towards expiration. (Typically, Storm objects call rotate on every receipt of a system tick stream tuple.) If there are any key-value pairs in the former death row bucket, the RotatingMap invokes a callback (given in the constructor) for each key-value pair, letting its owner take appropriate action (e.g., failing a tuple).</p>
-
+</div>
 
 
 	          </div>
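
The Acking-framework-implementation.html hunk above explains that the acker tracks a whole tuple tree with a single XOR checksum. The toy sketch below is not Storm code; it only illustrates why the checksum returns to zero exactly when every emitted tuple id has also been acked.

    import java.util.Random;

    public class XorChecksumSketch {
        public static void main(String[] args) {
            Random random = new Random();
            long checksum = 0L;
            long[] tupleIds = new long[5];

            // Emitting a tuple XORs its random 64-bit id into the checksum once...
            for (int i = 0; i < tupleIds.length; i++) {
                tupleIds[i] = random.nextLong();
                checksum ^= tupleIds[i];
            }

            // ...and acking it XORs the same id in a second time, cancelling it out.
            for (long id : tupleIds) {
                checksum ^= id;
            }

            // Zero means the whole tree was acked; a spurious zero is vanishingly unlikely.
            System.out.println("fully processed: " + (checksum == 0L));
        }
    }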

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Classpath-handling.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Classpath-handling.html b/content/releases/current/Classpath-handling.html
index f68b86b..634a5ee 100644
--- a/content/releases/current/Classpath-handling.html
+++ b/content/releases/current/Classpath-handling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="storm-is-an-application-container">Storm is an Application Container</h3>
+<div class="documentation-content"><h3 id="storm-is-an-application-container">Storm is an Application Container</h3>
 
 <p>Storm provides an application container environment, a la Apache Tomcat, which creates potential for classpath conflicts between Storm and your application.  The most common way of using Storm involves submitting an &quot;uber JAR&quot; containing your application code with all of its dependencies bundled in, and then Storm distributes this JAR to Worker nodes.  Then Storm runs your application within a Storm process called a <code>Worker</code> -- thus the JVM&#39;s classpath contains the dependencies of your JAR as well as whatever dependencies the Worker itself has.  So careful handling of classpaths and dependencies is critical for the correct functioning of Storm.</p>
 
@@ -173,7 +173,7 @@
 <p>When the <code>storm.py</code> script launches a <code>java</code> command, it first constructs the classpath from the optional settings mentioned above, as well as including some default locations such as the <code>${STORM_DIR}/</code>, <code>${STORM_DIR}/lib/</code>, <code>${STORM_DIR}/extlib/</code> and <code>${STORM_DIR}/extlib-daemon/</code> directories.  In past releases, Storm would enumerate all JARs in those directories and then explicitly add all of those JARs into the <code>-cp</code> / <code>--classpath</code> argument to the launched <code>java</code> commands.  As such, the classpath would get so long that the <code>java</code> commands could breach the Linux Kernel process table limit of 4096 bytes for recording commands.  That led to truncated commands in <code>ps</code> output, making it hard to operate Storm clusters because you could not easily differentiate the processes nor easily see from <code>ps</code> which port a worker is listening to.</p>
 
 <p>After Storm dropped support for Java 5, this classpath expansion was no longer necessary, because Java 6 supports classpath wildcards. Classpath wildcards allow you to specify a directory ending with a <code>*</code> element, such as <code>foo/bar/*</code>, and the JVM will automatically expand the classpath to include all <code>.jar</code> files in the wildcard directory.  As of <a href="https://issues.apache.org/jira/browse/STORM-2191">STORM-2191</a> Storm just uses classpath wildcards instead of explicitly listing all JARs, thereby shortening all of the commands and making operating Storm clusters a bit easier.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Clojure-DSL.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Clojure-DSL.html b/content/releases/current/Clojure-DSL.html
index 89fa383..fd2616a 100644
--- a/content/releases/current/Clojure-DSL.html
+++ b/content/releases/current/Clojure-DSL.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
+<div class="documentation-content"><p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#39;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/clojure.clj">org.apache.storm.clojure</a> namespace.</p>
 
 <p>This page outlines all the pieces of the Clojure DSL, including:</p>
 
@@ -371,7 +371,7 @@
 <h3 id="testing-topologies">Testing topologies</h3>
 
 <p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm&#39;s powerful built-in facilities for testing topologies in Clojure.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Command-line-client.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Command-line-client.html b/content/releases/current/Command-line-client.html
index 19e9671..b651b35 100644
--- a/content/releases/current/Command-line-client.html
+++ b/content/releases/current/Command-line-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>. See <a href="Classpath-handling.html">Classpath handling</a> for details on using external libraries in these commands.</p>
+<div class="documentation-content"><p>This page describes all the commands that are possible with the &quot;storm&quot; command line client. To learn how to set up your &quot;storm&quot; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-development-environment.html">Setting up development environment</a>. See <a href="Classpath-handling.html">Classpath handling</a> for details on using external libraries in these commands.</p>
 
 <p>These commands are:</p>
 
@@ -423,7 +423,7 @@ and timeout is integer seconds.</p>
 <p>Syntax: <code>storm help [command]</code></p>
 
 <p>Print one help message or list of available commands</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Common-patterns.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Common-patterns.html b/content/releases/current/Common-patterns.html
index 5460965..5333dd7 100644
--- a/content/releases/current/Common-patterns.html
+++ b/content/releases/current/Common-patterns.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists a variety of common patterns in Storm topologies.</p>
+<div class="documentation-content"><p>This page lists a variety of common patterns in Storm topologies.</p>
 
 <ol>
 <li>Batching</li>
@@ -212,7 +212,7 @@
 <p><code>KeyedFairBolt</code> also wraps the bolt containing your logic and makes sure your topology processes multiple DRPC invocations at the same time, instead of doing them serially one at a time.</p>
 
 <p>See <a href="Distributed-RPC.html">Distributed RPC</a> for more details.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Concepts.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Concepts.html b/content/releases/current/Concepts.html
index 0c5ea0d..bfd8b7a 100644
--- a/content/releases/current/Concepts.html
+++ b/content/releases/current/Concepts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
+<div class="documentation-content"><p>This page lists the main concepts of Storm and links to resources where you can find more information. The concepts discussed are:</p>
 
 <ol>
 <li>Topologies</li>
@@ -268,7 +268,7 @@
 <ul>
 <li><a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_WORKERS">Config.TOPOLOGY_WORKERS</a>: this config sets the number of workers to allocate for executing the topology</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Configuration.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Configuration.html b/content/releases/current/Configuration.html
index fcee36e..6f300d9 100644
--- a/content/releases/current/Configuration.html
+++ b/content/releases/current/Configuration.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
+<div class="documentation-content"><p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
 
 <p>Every configuration has a default value defined in <a href="http://github.com/apache/storm/blob/v1.2.1/conf/defaults.yaml">defaults.yaml</a> in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using <a href="javadocs/org/apache/storm/StormSubmitter.html">StormSubmitter</a>. However, the topology-specific configuration can only override configs prefixed with &quot;TOPOLOGY&quot;.</p>
 
@@ -175,7 +175,7 @@
 <li><a href="Running-topologies-on-a-production-cluster.html">Running topologies on a production cluster</a>: lists useful configurations when running topologies on a cluster</li>
 <li><a href="Local-mode.html">Local mode</a>: lists useful configurations when using local mode</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Contributing-to-Storm.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Contributing-to-Storm.html b/content/releases/current/Contributing-to-Storm.html
index 8badb1c..9fa0bdb 100644
--- a/content/releases/current/Contributing-to-Storm.html
+++ b/content/releases/current/Contributing-to-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="getting-started-with-contributing">Getting started with contributing</h3>
+<div class="documentation-content"><h3 id="getting-started-with-contributing">Getting started with contributing</h3>
 
 <p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the <a href="https://issues.apache.org/jira/browse/STORM-2891?jql=project%20%3D%20STORM%20AND%20status%20%3D%20Open%20AND%20labels%20in%20(newbie%2C%20%22newbie%2B%2B%22)">&quot;Newbie&quot;</a> label. If you&#39;re interested in contributing to Storm but don&#39;t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with learning the codebase because they require learning about only an isolated portion of the codebase and are a relatively small amount of work.</p>
 
@@ -172,7 +172,7 @@
 <h3 id="contributing-documentation">Contributing documentation</h3>
 
 <p>Documentation contributions are very welcome! The best way to send contributions is as emails through the mailing list.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Creating-a-new-Storm-project.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Creating-a-new-Storm-project.html b/content/releases/current/Creating-a-new-Storm-project.html
index e679958..9dc8638 100644
--- a/content/releases/current/Creating-a-new-Storm-project.html
+++ b/content/releases/current/Creating-a-new-Storm-project.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines how to set up a Storm project for development. The steps are:</p>
+<div class="documentation-content"><p>This page outlines how to set up a Storm project for development. The steps are:</p>
 
 <ol>
 <li>Add Storm jars to classpath</li>
@@ -166,7 +166,7 @@
 <p>For more information on writing topologies in other languages, see <a href="Using-non-JVM-languages-with-Storm.html">Using non-JVM languages with Storm</a>.</p>
 
 <p>To test that everything is working in Eclipse, you should now be able to <code>Run</code> the <code>WordCountTopology.java</code> file. You will see messages being emitted at the console for 10 seconds.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/DSLs-and-multilang-adapters.html
----------------------------------------------------------------------
diff --git a/content/releases/current/DSLs-and-multilang-adapters.html b/content/releases/current/DSLs-and-multilang-adapters.html
index 8be8db5..7f10518 100644
--- a/content/releases/current/DSLs-and-multilang-adapters.html
+++ b/content/releases/current/DSLs-and-multilang-adapters.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/redstorm">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/tomdz/storm-esper">Storm/Esper integration</a>: Streaming SQL on top of Storm</li>
 <li><a href="https://github.com/dan-blanchard/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Daemon-Fault-Tolerance.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Daemon-Fault-Tolerance.html b/content/releases/current/Daemon-Fault-Tolerance.html
index 565e12c..8981fb0 100644
--- a/content/releases/current/Daemon-Fault-Tolerance.html
+++ b/content/releases/current/Daemon-Fault-Tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm has several different daemon processes: Nimbus, which schedules workers; supervisors, which launch and kill workers; the log viewer, which gives access to logs; and the UI, which shows the status of a cluster.</p>
+<div class="documentation-content"><p>Storm has several different daemon processes: Nimbus, which schedules workers; supervisors, which launch and kill workers; the log viewer, which gives access to logs; and the UI, which shows the status of a cluster.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html b/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html
index c3fde21..38f9395 100644
--- a/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html
+++ b/content/releases/current/Defining-a-non-jvm-language-dsl-for-storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
+<div class="documentation-content"><p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
@@ -165,7 +165,7 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTop
 ologyException</span> <span class="n">ite</span><span class="o">);</span>
 </code></pre></div>
 <p>Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology).</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Distributed-RPC.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Distributed-RPC.html b/content/releases/current/Distributed-RPC.html
index 73e2569..2baa19b 100644
--- a/content/releases/current/Distributed-RPC.html
+++ b/content/releases/current/Distributed-RPC.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
+<div class="documentation-content"><p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
 
 <p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#39;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#39;s so useful that it&#39;s bundled with Storm.</p>
 
@@ -330,7 +330,7 @@
 <li>KeyedFairBolt for weaving the processing of multiple requests at the same time</li>
 <li>How to use <code>CoordinatedBolt</code> directly</li>
 </ul>
-
+</div>
 
 
 	          </div>
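
The Distributed-RPC.html hunk above describes a topology that takes a stream of function arguments and emits one result per call. A bare-bones sketch of that pattern is shown below, using the classic exclamation example; it assumes the LinearDRPCTopologyBuilder helper and its addBolt/createRemoteTopology methods, which may be deprecated or renamed depending on the Storm version.

    import org.apache.storm.drpc.LinearDRPCTopologyBuilder;
    import org.apache.storm.topology.BasicOutputCollector;
    import org.apache.storm.topology.OutputFieldsDeclarer;
    import org.apache.storm.topology.base.BaseBasicBolt;
    import org.apache.storm.tuple.Fields;
    import org.apache.storm.tuple.Tuple;
    import org.apache.storm.tuple.Values;

    public class DrpcSketch {

        // Appends "!" to each argument; field 0 carries the DRPC request id, field 1 the argument.
        public static class ExclaimBolt extends BaseBasicBolt {
            @Override
            public void execute(Tuple tuple, BasicOutputCollector collector) {
                Object requestId = tuple.getValue(0);
                String argument = tuple.getString(1);
                collector.emit(new Values(requestId, argument + "!"));
            }

            @Override
            public void declareOutputFields(OutputFieldsDeclarer declarer) {
                declarer.declare(new Fields("id", "result"));
            }
        }

        public static void main(String[] args) {
            // The builder wires the DRPC spout (argument stream) and return bolt around the user logic.
            LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("exclamation");
            builder.addBolt(new ExclaimBolt(), 3);
            // builder.createRemoteTopology() would then be submitted with StormSubmitter.
        }
    }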

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Eventlogging.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Eventlogging.html b/content/releases/current/Eventlogging.html
index 8d9a05f..4557c1b 100644
--- a/content/releases/current/Eventlogging.html
+++ b/content/releases/current/Eventlogging.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The topology event inspector provides the ability to view the tuples as they flow through different stages in a storm topology.
 This could be useful for inspecting the tuples emitted at a spout or a bolt in the topology pipeline while the topology is running, without stopping or redeploying the topology. The normal flow of tuples from the spouts to the bolts is not affected by turning on event logging.</p>
@@ -269,7 +269,7 @@ Alternate implementations of the <code>IEventLogger</code> interface can be adde
 
 <p>Please keep in mind that EventLoggerBolt is just a kind of Bolt, so the overall throughput of the topology will go down when the registered event loggers cannot keep up with incoming events; treat it with the same care as any normal Bolt.
 One idea to avoid this is to make your implementation of IEventLogger <code>non-blocking</code>.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/FAQ.html
----------------------------------------------------------------------
diff --git a/content/releases/current/FAQ.html b/content/releases/current/FAQ.html
index 81e8d50..562ee8d 100644
--- a/content/releases/current/FAQ.html
+++ b/content/releases/current/FAQ.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="best-practices">Best Practices</h2>
+<div class="documentation-content"><h2 id="best-practices">Best Practices</h2>
 
 <h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
@@ -276,7 +276,7 @@
 <li>When possible, make your process incremental: each value that comes in makes the answer more and more true. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result (a minimal sketch appears below). This lets the result be cached and serialized to a datastore; if a server drops offline for a day and then comes back with a full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated.</li>
 <li>Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. In the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a &quot;correct&quot; answer.</li>
 </ul>
-
+</div>
 
 
 	          </div>
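
The FAQ.html hunk above mentions the Trident ReducerAggregator, which folds new records into a prior result. A minimal count-style implementation is sketched below; the package names assume the org.apache.storm.trident layout used by recent Storm releases.

    import org.apache.storm.trident.operation.ReducerAggregator;
    import org.apache.storm.trident.tuple.TridentTuple;

    // A running count expressed as a ReducerAggregator: start from a prior result, fold in new tuples.
    public class CountReducer implements ReducerAggregator<Long> {
        @Override
        public Long init() {
            return 0L;                 // used when no prior result exists yet
        }

        @Override
        public Long reduce(Long curr, TridentTuple tuple) {
            return curr + 1;           // each incoming record refines the running answer
        }
    }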

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Fault-tolerance.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Fault-tolerance.html b/content/releases/current/Fault-tolerance.html
index bf71b1a..61cbf6b 100644
--- a/content/releases/current/Fault-tolerance.html
+++ b/content/releases/current/Fault-tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
+<div class="documentation-content"><p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Guaranteeing-message-processing.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Guaranteeing-message-processing.html b/content/releases/current/Guaranteeing-message-processing.html
index fe6aadc..e7a81c4 100644
--- a/content/releases/current/Guaranteeing-message-processing.html
+++ b/content/releases/current/Guaranteeing-message-processing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
+<div class="documentation-content"><p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
 This page describes how Storm can guarantee at least once processing.</p>
 
 <h3 id="what-does-it-mean-for-a-message-to-be-fully-processed">What does it mean for a message to be &quot;fully processed&quot;?</h3>
@@ -301,7 +301,7 @@ This page describes how Storm can guarantee at least once processing.</p>
 <p>The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the <code>SpoutOutputCollector.emit</code> method.</p>
 
 <p>Finally, if you don&#39;t care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they&#39;re not anchored to any spout tuples, they won&#39;t cause any spout tuples to fail if they aren&#39;t acked.</p>
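 <p>A minimal sketch of the options described above (field and variable names are hypothetical):</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">// Spout: emitting without a message id turns off tracking for that tuple.
 collector.emit(new Values(sentence));            // untracked
 collector.emit(new Values(sentence), msgId);     // tracked; ack/fail will be called with msgId
 
 // Bolt: anchoring ties the new tuple to the input; omitting the anchor makes it unanchored.
 collector.emit(input, new Values(word));         // anchored to the spout tuple tree
 collector.emit(new Values(word));                // unanchored; its failure won&#39;t fail any spout tuple
 </code></pre></div>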
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Hooks.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Hooks.html b/content/releases/current/Hooks.html
index 138481a..67e52d3 100644
--- a/content/releases/current/Hooks.html
+++ b/content/releases/current/Hooks.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
+<div class="documentation-content"><p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
 
 <ol>
 <li>In the open method of your spout or the prepare method of your bolt, using the <a href="javadocs/org/apache/storm/task/TopologyContext.html#addTaskHook">TopologyContext</a> addTaskHook method.</li>
 <li>Through the Storm configuration, using the <a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS">&quot;topology.auto.task.hooks&quot;</a> config. These hooks are automatically registered in every spout and bolt, and are useful for doing things like integrating with a custom monitoring system. A sketch of both approaches follows this list.</li>
 </ol>
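 <p>A minimal sketch (the hook class and its behavior are hypothetical) of a task hook and the two registration paths:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">import org.apache.storm.hooks.BaseTaskHook;
 import org.apache.storm.hooks.info.EmitInfo;
 
 public class EmitCountingHook extends BaseTaskHook {
     private long emitted = 0;
 
     @Override
     public void emit(EmitInfo info) {
         emitted++;                       // called for every tuple this task emits
     }
 }
 
 // 1) Register from inside a bolt&#39;s prepare (or a spout&#39;s open):
 //    context.addTaskHook(new EmitCountingHook());
 //
 // 2) Or register automatically in every spout/bolt via the config
 //    (assumes the class is on the worker classpath):
 //    conf.put(Config.TOPOLOGY_AUTO_TASK_HOOKS,
 //             Collections.singletonList(EmitCountingHook.class.getName()));
 </code></pre></div>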
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Implementation-docs.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Implementation-docs.html b/content/releases/current/Implementation-docs.html
index 6dcbf6a..e522728 100644
--- a/content/releases/current/Implementation-docs.html
+++ b/content/releases/current/Implementation-docs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
+<div class="documentation-content"><p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
 
 <ul>
 <li><a href="Structure-of-the-codebase.html">Structure of the codebase</a></li>
@@ -154,7 +154,7 @@
 <li><a href="nimbus-ha-design.html">Nimbus HA</a></li>
 <li><a href="storm-sql-internal.html">Storm SQL</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Installing-native-dependencies.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Installing-native-dependencies.html b/content/releases/current/Installing-native-dependencies.html
index 1371936..b7fee03 100644
--- a/content/releases/current/Installing-native-dependencies.html
+++ b/content/releases/current/Installing-native-dependencies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
+<div class="documentation-content"><p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
 
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
@@ -175,7 +175,7 @@ sudo make install
 </ol>
 
 <p>If you run into any errors when running <code>./configure</code>, <a href="http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx">this thread</a> may provide a solution.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Joins.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Joins.html b/content/releases/current/Joins.html
index b95e985..410e45a 100644
--- a/content/releases/current/Joins.html
+++ b/content/releases/current/Joins.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
+<div class="documentation-content"><p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
 <code>JoinBolt</code> is a Windowed bolt, i.e. it waits for the configured window duration to match up the
 tuples among the streams being joined. This helps align the streams within a Window boundary.</p>
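 <p>A minimal sketch (stream and field names are hypothetical) of joining two streams on a user id within a 10 minute tumbling window:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">import java.util.concurrent.TimeUnit;
 import org.apache.storm.bolt.JoinBolt;
 import org.apache.storm.topology.base.BaseWindowedBolt.Duration;
 
 JoinBolt joiner = new JoinBolt(&quot;purchases&quot;, &quot;userId&quot;)             // first stream and its join field
         .join(&quot;ads&quot;, &quot;userId&quot;, &quot;purchases&quot;)                       // join &quot;ads&quot; with the prior &quot;purchases&quot; stream on userId
         .select(&quot;userId,product,ad&quot;)                              // fields to emit from the joined result
         .withTumblingWindow(new Duration(10, TimeUnit.MINUTES));  // tuples are matched within this window
 </code></pre></div>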
 
@@ -272,7 +272,7 @@ can occur when its value is set to null.</li>
 <li>Lastly, keep the window size to the minimum value necessary for solving the problem at hand.</li>
 </ul></li>
 </ol>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Kestrel-and-Storm.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Kestrel-and-Storm.html b/content/releases/current/Kestrel-and-Storm.html
index c31597d..bd1fb02 100644
--- a/content/releases/current/Kestrel-and-Storm.html
+++ b/content/releases/current/Kestrel-and-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
+<div class="documentation-content"><p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
 Then, wait about 5 seconds in order to avoid a ConnectException.
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
 If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Lifecycle-of-a-topology.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Lifecycle-of-a-topology.html b/content/releases/current/Lifecycle-of-a-topology.html
index 7239101..d91ed32 100644
--- a/content/releases/current/Lifecycle-of-a-topology.html
+++ b/content/releases/current/Lifecycle-of-a-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
+<div class="documentation-content"><p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
 
 <p>This page explains in detail the lifecycle of a topology from running the &quot;storm jar&quot; command to uploading the topology to Nimbus to the supervisors starting/stopping workers to workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shutdown when they are killed.</p>
 
@@ -261,7 +261,7 @@
 <li>Removing a topology cleans out the assignment and static information from ZK <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L116">code</a></li>
 <li>A separate cleanup thread runs the <code>do-cleanup</code> function which will clean up the heartbeat dir and the jars/configs stored locally. <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L577">code</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Local-mode.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Local-mode.html b/content/releases/current/Local-mode.html
index 5149afd..9152f7e 100644
--- a/content/releases/current/Local-mode.html
+++ b/content/releases/current/Local-mode.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
+<div class="documentation-content"><p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
 
 <p>To create an in-process cluster, simply use the <code>LocalCluster</code> class. For example:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.storm.LocalCluster</span><span class="o">;</span>
@@ -164,7 +164,7 @@
 <li><strong>Config.TOPOLOGY_MAX_TASK_PARALLELISM</strong>: This config puts a ceiling on the number of threads spawned for a single component. Oftentimes production topologies have a lot of parallelism (hundreds of threads), which places an unreasonable load when trying to test the topology in local mode. This config lets you easily control that parallelism (as shown in the sketch after this list).</li>
 <li><strong>Config.TOPOLOGY_DEBUG</strong>: When this is set to true, Storm will log a message every time a tuple is emitted from any spout or bolt. This is extremely useful for debugging.</li>
 </ol>
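 <p>For example (a minimal sketch), both settings can be applied through the <code>Config</code> object passed when submitting to the local cluster:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">Config conf = new Config();
 conf.setMaxTaskParallelism(1);   // cap every component at a single thread while testing
 conf.setDebug(true);             // log every emitted tuple
 </code></pre></div>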
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Logs.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Logs.html b/content/releases/current/Logs.html
index 4d8c3af..314eff2 100644
--- a/content/releases/current/Logs.html
+++ b/content/releases/current/Logs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+<div class="documentation-content"><p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
 daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies&#39; workers.</p>
 
 <h3 id="location-of-the-logs">Location of the Logs</h3>
@@ -171,7 +171,7 @@ Log Search supports searching in a certain log file or in all of a topology&#39;
 <p>Search in a topology: a user can also search a string across a certain topology by clicking the magnifying-lens icon at the top right corner of the UI page. The UI will then search all the supervisor nodes in a distributed way to find the matching string in all logs for this topology. The search can cover either normal text log files or rolled zip log files by checking/unchecking the &quot;Search archived logs:&quot; box. The matched results are shown on the UI with url links, directing the user to the corresponding logs on each supervisor node. This powerful feature is very helpful for finding problematic supervisor nodes running this topology.</p>
 
 <p><img src="images/search-a-topology.png" alt="Search in a topology" title="Search in a topology"></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Maven.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Maven.html b/content/releases/current/Maven.html
index 2a9d037..f356085 100644
--- a/content/releases/current/Maven.html
+++ b/content/releases/current/Maven.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
+<div class="documentation-content"><p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
   <span class="nt">&lt;groupId&gt;</span>org.apache.storm<span class="nt">&lt;/groupId&gt;</span>
   <span class="nt">&lt;artifactId&gt;</span>storm-core<span class="nt">&lt;/artifactId&gt;</span>
@@ -157,7 +157,7 @@
 <h3 id="developing-storm">Developing Storm</h3>
 
 <p>Please refer to <a href="http://github.com/apache/storm/blob/v1.2.1/DEVELOPER.md">DEVELOPER.md</a> for more details.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Message-passing-implementation.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Message-passing-implementation.html b/content/releases/current/Message-passing-implementation.html
index 0efb3f1..fc46bb0 100644
--- a/content/releases/current/Message-passing-implementation.html
+++ b/content/releases/current/Message-passing-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
+<div class="documentation-content"><p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
 
 <p>This page walks through how emitting and transferring tuples works in Storm.</p>
 
@@ -186,7 +186,7 @@
 </ul></li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Metrics.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Metrics.html b/content/releases/current/Metrics.html
index 26d2047..94f1e8e 100644
--- a/content/releases/current/Metrics.html
+++ b/content/releases/current/Metrics.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm exposes a metrics interface to report summary statistics across the full topology.
+<div class="documentation-content"><p>Storm exposes a metrics interface to report summary statistics across the full topology.
 The numbers you see on the UI come from some of these built in metrics, but are reported through the worker heartbeats instead of through the IMetricsConsumer described below.</p>
 
 <h3 id="metric-types">Metric Types</h3>
@@ -466,7 +466,7 @@ Prior to STORM-2621 (v1.1.1, v1.2.0, and v2.0.0) these were the rate of entries,
 <li><code>newWorkerEvent</code> is 1 when a worker is first started and 0 all other times.  This can be used to tell when a worker has crashed and is restarted.</li>
 <li><code>startTimeSecs</code> is when the worker started, in seconds since the epoch.</li>
 </ul>
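 <p>As a minimal sketch, built-in and custom topology metrics can be forwarded to a consumer by registering one on the topology config, for example the bundled <code>LoggingMetricsConsumer</code>:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">import org.apache.storm.Config;
 import org.apache.storm.metric.LoggingMetricsConsumer;
 
 Config conf = new Config();
 // One consumer task receives the metrics stream and writes it to the metrics log.
 conf.registerMetricsConsumer(LoggingMetricsConsumer.class, 1);
 </code></pre></div>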
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Multilang-protocol.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Multilang-protocol.html b/content/releases/current/Multilang-protocol.html
index 3f3accd..5b65343 100644
--- a/content/releases/current/Multilang-protocol.html
+++ b/content/releases/current/Multilang-protocol.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
+<div class="documentation-content"><p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -436,7 +436,7 @@ subprocess periodically.  Heartbeat tuple looks like:</p>
 </code></pre></div>
 <p>When the subprocess receives a heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Pacemaker.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Pacemaker.html b/content/releases/current/Pacemaker.html
index 9257f35..7353e9a 100644
--- a/content/releases/current/Pacemaker.html
+++ b/content/releases/current/Pacemaker.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>Pacemaker is a storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to high volumes of writes from workers doing heartbeats. Lots of writes to disk and too much traffic across the network are generated as ZooKeeper tries to maintain consistency.</p>
 
@@ -258,7 +258,7 @@ On Gigabit networking, there is a theoretical limit of about 6000 nodes. However
 On a 270 supervisor cluster, fully scheduled with topologies, Pacemaker resource utilization was 70% of one core and nearly 1GiB of RAM on a machine with 4 <code>Intel(R) Xeon(R) CPU E5530 @ 2.40GHz</code> and 24GiB of RAM.</p>
 
 <p>Pacemaker now supports HA. Multiple Pacemaker instances can be used at once in a storm cluster to allow massive scalability. Just include the names of the Pacemaker hosts in the pacemaker.servers config, and workers and Nimbus will start communicating with them. They&#39;re fault tolerant as well. The system keeps on working as long as there is at least one Pacemaker left running - provided it can handle the load.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Powered-By.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Powered-By.html b/content/releases/current/Powered-By.html
index b939e4f..eeb9eb2 100644
--- a/content/releases/current/Powered-By.html
+++ b/content/releases/current/Powered-By.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
 
 <table>
 
@@ -1169,7 +1169,7 @@ We are using Storm to track internet threats from varied sources around the web.
 
 
 </table>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Project-ideas.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Project-ideas.html b/content/releases/current/Project-ideas.html
index ee22774..625f451 100644
--- a/content/releases/current/Project-ideas.html
+++ b/content/releases/current/Project-ideas.html
@@ -144,12 +144,12 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><strong>DSLs for non-JVM languages:</strong> These DSLs should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are <a href="http://thrift.apache.org/">Thrift</a> structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible.</li>
 <li><strong>Online machine learning algorithms:</strong> Something like <a href="http://mahout.apache.org/">Mahout</a> but for online algorithms</li>
 <li><strong>Suite of performance benchmarks:</strong> These benchmarks should test Storm&#39;s performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority). </li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Rationale.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Rationale.html b/content/releases/current/Rationale.html
index 2fd316d..6dc60f4 100644
--- a/content/releases/current/Rationale.html
+++ b/content/releases/current/Rationale.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
+<div class="documentation-content"><p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
 
 <p>However, realtime data processing at massive scale is becoming more and more of a requirement for businesses. The lack of a &quot;Hadoop of realtime&quot; has become the biggest hole in the data processing ecosystem.</p>
 
@@ -176,7 +176,7 @@
 <li><strong>Fault-tolerant</strong>: If there are faults during execution of your computation, Storm will reassign tasks as necessary. Storm makes sure that a computation can run forever (or until you kill the computation).</li>
 <li><strong>Programming language agnostic</strong>: Robust and scalable realtime processing shouldn&#39;t be limited to a single platform. Storm topologies and processing components can be defined in any language, making Storm accessible to nearly anyone.</li>
 </ol>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Resource_Aware_Scheduler_overview.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Resource_Aware_Scheduler_overview.html b/content/releases/current/Resource_Aware_Scheduler_overview.html
index 2055f21..8c3a5d1 100644
--- a/content/releases/current/Resource_Aware_Scheduler_overview.html
+++ b/content/releases/current/Resource_Aware_Scheduler_overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The purpose of this document is to provide a description of the Resource Aware Scheduler for the Storm distributed real-time computation system.  This document provides a high level description of the resource aware scheduler in Storm.  Some of the benefits of using a resource aware scheduler on top of Storm are outlined in the following presentation at Hadoop Summit 2016:</p>
 
@@ -617,7 +617,7 @@ rack-0 Avail [ CPU 32.78688524590164% MEM 19.51219512195122% Slots 20.0% ] effec
 <td><img src="images/ras_new_strategy_runtime_yahoo.png" alt=""></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Running-topologies-on-a-production-cluster.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Running-topologies-on-a-production-cluster.html b/content/releases/current/Running-topologies-on-a-production-cluster.html
index c49b731..af54a31 100644
--- a/content/releases/current/Running-topologies-on-a-production-cluster.html
+++ b/content/releases/current/Running-topologies-on-a-production-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
+<div class="documentation-content"><p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
 
 <p>1) Define the topology (Use <a href="javadocs/org/apache/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
@@ -212,7 +212,7 @@
 <p>The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.</p>
 
 <p>You can also look at the worker logs on the cluster machines.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/SECURITY.html
----------------------------------------------------------------------
diff --git a/content/releases/current/SECURITY.html b/content/releases/current/SECURITY.html
index 8a6978f..9515823 100644
--- a/content/releases/current/SECURITY.html
+++ b/content/releases/current/SECURITY.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
+<div class="documentation-content"><h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
 
 <p>Apache Storm offers a range of configuration options when trying to secure
 your cluster.  By default all authentication and authorization is disabled but 
@@ -683,7 +683,7 @@ on all possible worker hosts.</p>
  | storm.zookeeper.topology.auth.payload | A string representing the payload for topology Zookeeper authentication. |</p>
 
 <p>Note: If storm.zookeeper.topology.auth.payload isn&#39;t set, Storm will generate a ZooKeeper secret payload for MD5-digest with the generateZookeeperDigestSecretPayload() method.</p>
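 <p>A minimal sketch of supplying an explicit payload in the topology config (the credentials shown are placeholders):</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java">Config conf = new Config();
 conf.put(&quot;storm.zookeeper.topology.auth.scheme&quot;, &quot;digest&quot;);
 conf.put(&quot;storm.zookeeper.topology.auth.payload&quot;, &quot;topo-user:topo-secret&quot;);   // &quot;user:password&quot; form
 </code></pre></div>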
-
+</div>
 
 
 	          </div>