Posted to commits@storm.apache.org by sr...@apache.org on 2018/05/15 15:21:00 UTC

[7/9] storm-site git commit: Rebuild site

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html b/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html
index 5cd4db3..3d9d722 100644
--- a/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html
+++ b/content/releases/1.1.2/Defining-a-non-jvm-language-dsl-for-storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
+<div class="documentation-content"><p>The right place to start to learn how to make a non-JVM DSL for Storm is <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/storm.thrift">storm-core/src/storm.thrift</a>. Since Storm topologies are just Thrift structures, and Nimbus is a Thrift daemon, you can create and submit topologies in any language.</p>
 
 <p>When you create the Thrift structs for spouts and bolts, the code for the spout or bolt is specified in the ComponentObject struct:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">union ComponentObject {
@@ -165,7 +165,7 @@
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kt">void</span> <span class="nf">submitTopology</span><span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">string</span> <span class="n">name</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">string</span> <span class="n">uploadedJarLocation</span><span class="o">,</span> <span class="mi">3</span><span class="o">:</span> <span class="n">string</span> <span class="n">jsonConf</span><span class="o">,</span> <span class="mi">4</span><span class="o">:</span> <span class="n">StormTopology</span> <span class="n">topology</span><span class="o">)</span> <span class="kd">throws</span> <span class="o">(</span><span class="mi">1</span><span class="o">:</span> <span class="n">AlreadyAliveException</span> <span class="n">e</span><span class="o">,</span> <span class="mi">2</span><span class="o">:</span> <span class="n">InvalidTopologyException</span> <span class="n">ite</span><span class="o">);</span>
 </code></pre></div>
 <p>Finally, one of the key things to do in a non-JVM DSL is make it easy to define the entire topology in one file (the bolts, spouts, and the definition of the topology).</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Distributed-RPC.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Distributed-RPC.html b/content/releases/1.1.2/Distributed-RPC.html
index 473fd08..c8b8ab0 100644
--- a/content/releases/1.1.2/Distributed-RPC.html
+++ b/content/releases/1.1.2/Distributed-RPC.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
+<div class="documentation-content"><p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
 
 <p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#39;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#39;s so useful that it&#39;s bundled with Storm.</p>
 
@@ -330,7 +330,7 @@
 <li>KeyedFairBolt for weaving the processing of multiple requests at the same time</li>
 <li>How to use <code>CoordinatedBolt</code> directly</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Eventlogging.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Eventlogging.html b/content/releases/1.1.2/Eventlogging.html
index fbb102b..9cba6ca 100644
--- a/content/releases/1.1.2/Eventlogging.html
+++ b/content/releases/1.1.2/Eventlogging.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>Topology event inspector provides the ability to view the tuples as it flows through different stages in a storm topology.
 This could be useful for inspecting the tuples emitted at a spout or a bolt in the topology pipeline while the topology is running, without stopping or redeploying the topology. The normal flow of tuples from the spouts to the bolts is not affected by turning on event logging.</p>
@@ -247,7 +247,7 @@ clicking &quot;Debug&quot; under component actions.</p>
     */</span>
     <span class="kt">void</span> <span class="nf">close</span><span class="o">();</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/FAQ.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/FAQ.html b/content/releases/1.1.2/FAQ.html
index 0a2f46a..ba2aad6 100644
--- a/content/releases/1.1.2/FAQ.html
+++ b/content/releases/1.1.2/FAQ.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="best-practices">Best Practices</h2>
+<div class="documentation-content"><h2 id="best-practices">Best Practices</h2>
 
 <h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
@@ -276,7 +276,7 @@
 <li>When possible, make your process incremental: each value that comes in makes the answer more an more true. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result. This lets the result be cached and serialized to a datastore; if a server drops off line for a day and then comes back with a full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated.</li>
 <li>Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. in the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a &quot;correct&quot; answer.</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Fault-tolerance.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Fault-tolerance.html b/content/releases/1.1.2/Fault-tolerance.html
index 72d6e9e..d024133 100644
--- a/content/releases/1.1.2/Fault-tolerance.html
+++ b/content/releases/1.1.2/Fault-tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
+<div class="documentation-content"><p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Guaranteeing-message-processing.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Guaranteeing-message-processing.html b/content/releases/1.1.2/Guaranteeing-message-processing.html
index ab97d78..4e2c355 100644
--- a/content/releases/1.1.2/Guaranteeing-message-processing.html
+++ b/content/releases/1.1.2/Guaranteeing-message-processing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
+<div class="documentation-content"><p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
 This page describes how Storm can guarantee at least once processing.</p>
 
 <h3 id="what-does-it-mean-for-a-message-to-be-fully-processed">What does it mean for a message to be &quot;fully processed&quot;?</h3>
@@ -301,7 +301,7 @@ This page describes how Storm can guarantee at least once processing.</p>
 <p>The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the <code>SpoutOutputCollector.emit</code> method.</p>
 
 <p>Finally, if you don&#39;t care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they&#39;re not anchored to any spout tuples, they won&#39;t cause any spout tuples to fail if they aren&#39;t acked.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Hooks.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Hooks.html b/content/releases/1.1.2/Hooks.html
index 12b465f..bea8f20 100644
--- a/content/releases/1.1.2/Hooks.html
+++ b/content/releases/1.1.2/Hooks.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
+<div class="documentation-content"><p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
 
 <ol>
 <li>In the open method of your spout or prepare method of your bolt using the <a href="javadocs/org/apache/storm/task/TopologyContext.html#addTaskHook">TopologyContext</a> method.</li>
 <li>Through the Storm configuration using the <a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS">&quot;topology.auto.task.hooks&quot;</a> config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.</li>
 </ol>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Implementation-docs.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Implementation-docs.html b/content/releases/1.1.2/Implementation-docs.html
index ac18dc9..065f685 100644
--- a/content/releases/1.1.2/Implementation-docs.html
+++ b/content/releases/1.1.2/Implementation-docs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
+<div class="documentation-content"><p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
 
 <ul>
 <li><a href="Structure-of-the-codebase.html">Structure of the codebase</a></li>
@@ -154,7 +154,7 @@
 <li><a href="nimbus-ha-design.html">Nimbus HA</a></li>
 <li><a href="storm-sql-internal.html">Storm SQL</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Installing-native-dependencies.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Installing-native-dependencies.html b/content/releases/1.1.2/Installing-native-dependencies.html
index 0f6116d..6727a90 100644
--- a/content/releases/1.1.2/Installing-native-dependencies.html
+++ b/content/releases/1.1.2/Installing-native-dependencies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
+<div class="documentation-content"><p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
 
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
@@ -175,7 +175,7 @@ sudo make install
 </ol>
 
 <p>If you run into any errors when running <code>./configure</code>, <a href="http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx">this thread</a> may provide a solution.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Joins.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Joins.html b/content/releases/1.1.2/Joins.html
index 805eab6..f4d7887 100644
--- a/content/releases/1.1.2/Joins.html
+++ b/content/releases/1.1.2/Joins.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
+<div class="documentation-content"><p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
 <code>JoinBolt</code> is a Windowed bolt, i.e. it waits for the configured window duration to match up the
 tuples among the streams being joined. This helps align the streams within a Window boundary.</p>
 
@@ -272,7 +272,7 @@ can occur when its value is set to null.</li>
 <li>Lastly, keep the window size to the minimum value necessary for solving the problem at hand.</li>
 </ul></li>
 </ol>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Kestrel-and-Storm.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Kestrel-and-Storm.html b/content/releases/1.1.2/Kestrel-and-Storm.html
index ae4e26d..431068e 100644
--- a/content/releases/1.1.2/Kestrel-and-Storm.html
+++ b/content/releases/1.1.2/Kestrel-and-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
+<div class="documentation-content"><p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -334,7 +334,7 @@ Than, wait about 5 seconds in order to avoid a ConnectException.
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
 If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Lifecycle-of-a-topology.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Lifecycle-of-a-topology.html b/content/releases/1.1.2/Lifecycle-of-a-topology.html
index 15d0690..1066552 100644
--- a/content/releases/1.1.2/Lifecycle-of-a-topology.html
+++ b/content/releases/1.1.2/Lifecycle-of-a-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
+<div class="documentation-content"><p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-core/src</code> rather than <code>src/</code>.)</p>
 
 <p>This page explains in detail the lifecycle of a topology from running the &quot;storm jar&quot; command to uploading the topology to Nimbus to the supervisors starting/stopping workers to workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shutdown when they are killed.</p>
 
@@ -261,7 +261,7 @@
 <li>Removing a topology cleans out the assignment and static information from ZK <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L116">code</a></li>
 <li>A separate cleanup thread runs the <code>do-cleanup</code> function which will clean up the heartbeat dir and the jars/configs stored locally. <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L577">code</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Local-mode.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Local-mode.html b/content/releases/1.1.2/Local-mode.html
index 1ec6bb1..2bc9724 100644
--- a/content/releases/1.1.2/Local-mode.html
+++ b/content/releases/1.1.2/Local-mode.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
+<div class="documentation-content"><p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>. </p>
 
 <p>To create an in-process cluster, simply use the <code>LocalCluster</code> class. For example:</p>
 <div class="highlight"><pre><code class="language-java" data-lang="java"><span class="kn">import</span> <span class="nn">org.apache.storm.LocalCluster</span><span class="o">;</span>
@@ -164,7 +164,7 @@
 <li><strong>Config.TOPOLOGY_MAX_TASK_PARALLELISM</strong>: This config puts a ceiling on the number of threads spawned for a single component. Oftentimes production topologies have a lot of parallelism (hundreds of threads) which places unreasonable load when trying to test the topology in local mode. This config lets you easy control that parallelism.</li>
 <li><strong>Config.TOPOLOGY_DEBUG</strong>: When this is set to true, Storm will log a message every time a tuple is emitted from any spout or bolt. This is extremely useful for debugging.</li>
 </ol>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Logs.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Logs.html b/content/releases/1.1.2/Logs.html
index 929cb21..2096834 100644
--- a/content/releases/1.1.2/Logs.html
+++ b/content/releases/1.1.2/Logs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+<div class="documentation-content"><p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
 daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies&#39; workers.</p>
 
 <h3 id="location-of-the-logs">Location of the Logs</h3>
@@ -171,7 +171,7 @@ Log Search supports searching in a certain log file or in all of a topology&#39;
 <p>Search in a topology: a user can also search a string for a certain topology by clicking the icon of magnifying lens at the top right corner of the UI page. This means the UI will try to search on all the supervisor nodes in a distributed way to find the matched string in all logs for this topology. The search can happen for either normal text log files or rolled zip log files by checking/unchecking the &quot;Search archived logs:&quot; box. Then the matched results can be shown on the UI with url links, directing the user to the certain logs on each supervisor node. This powerful feature is very helpful for users to find certain problematic supervisor nodes running this topology.</p>
 
 <p><img src="images/search-a-topology.png" alt="Search in a topology" title="Search in a topology"></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Maven.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Maven.html b/content/releases/1.1.2/Maven.html
index c00912b..5a63344 100644
--- a/content/releases/1.1.2/Maven.html
+++ b/content/releases/1.1.2/Maven.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
+<div class="documentation-content"><p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
   <span class="nt">&lt;groupId&gt;</span>org.apache.storm<span class="nt">&lt;/groupId&gt;</span>
   <span class="nt">&lt;artifactId&gt;</span>storm-core<span class="nt">&lt;/artifactId&gt;</span>
@@ -157,7 +157,7 @@
 <h3 id="developing-storm">Developing Storm</h3>
 
 <p>Please refer to <a href="http://github.com/apache/storm/blob/v1.1.2/DEVELOPER.md">DEVELOPER.md</a> for more details.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Message-passing-implementation.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Message-passing-implementation.html b/content/releases/1.1.2/Message-passing-implementation.html
index f03886e..f1c9217 100644
--- a/content/releases/1.1.2/Message-passing-implementation.html
+++ b/content/releases/1.1.2/Message-passing-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
+<div class="documentation-content"><p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
 
 <p>This page walks through how emitting and transferring tuples works in Storm.</p>
 
@@ -186,7 +186,7 @@
 </ul></li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Metrics.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Metrics.html b/content/releases/1.1.2/Metrics.html
index d223258..36ec608 100644
--- a/content/releases/1.1.2/Metrics.html
+++ b/content/releases/1.1.2/Metrics.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm exposes a metrics interface to report summary statistics across the full topology.
+<div class="documentation-content"><p>Storm exposes a metrics interface to report summary statistics across the full topology.
 The numbers you see on the UI come from some of these built in metrics, but are reported through the worker heartbeats instead of through the IMetricsConsumer described below.</p>
 
 <h3 id="metric-types">Metric Types</h3>
@@ -466,7 +466,7 @@ Prior to STORM-2621 (v1.1.1, v1.2.0, and v2.0.0) these were the rate of entries,
 <li><code>newWorkerEvent</code> is 1 when a worker is first started and 0 all other times.  This can be used to tell when a worker has crashed and is restarted.</li>
 <li><code>startTimeSecs</code> is when the worker started in seconds since the epoch</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Multilang-protocol.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Multilang-protocol.html b/content/releases/1.1.2/Multilang-protocol.html
index 3b0d91a..95ff0db 100644
--- a/content/releases/1.1.2/Multilang-protocol.html
+++ b/content/releases/1.1.2/Multilang-protocol.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
+<div class="documentation-content"><p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -436,7 +436,7 @@ subprocess periodically.  Heartbeat tuple looks like:</p>
 </code></pre></div>
 <p>When subprocess receives heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Pacemaker.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Pacemaker.html b/content/releases/1.1.2/Pacemaker.html
index 1486feb..504c01b 100644
--- a/content/releases/1.1.2/Pacemaker.html
+++ b/content/releases/1.1.2/Pacemaker.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>Pacemaker is a storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to high volumes of writes from workers doing heartbeats. Lots of writes to disk and too much traffic across the network is generated as ZooKeeper tries to maintain consistency.</p>
 
@@ -258,7 +258,7 @@ On Gigabit networking, there is a theoretical limit of about 6000 nodes. However
 On a 270 supervisor cluster, fully scheduled with topologies, Pacemaker resource utilization was 70% of one core and nearly 1GiB of RAM on a machine with 4 <code>Intel(R) Xeon(R) CPU E5530 @ 2.40GHz</code> and 24GiB of RAM.</p>
 
 <p>Pacemaker now supports HA. Multiple Pacemaker instances can be used at once in a storm cluster to allow massive scalability. Just include the names of the Pacemaker hosts in the pacemaker.servers config and workers and Nimbus will start communicating with them. They&#39;re fault tolerant as well. The system keeps on working as long as there is at least one pacemaker left running - provided it can handle the load.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Powered-By.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Powered-By.html b/content/releases/1.1.2/Powered-By.html
index 2e26265..38a5161 100644
--- a/content/releases/1.1.2/Powered-By.html
+++ b/content/releases/1.1.2/Powered-By.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
 
 <table>
 
@@ -1169,7 +1169,7 @@ We are using Storm to track internet threats from varied sources around the web.
 
 
 </table>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Project-ideas.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Project-ideas.html b/content/releases/1.1.2/Project-ideas.html
index 1e9beec..2178686 100644
--- a/content/releases/1.1.2/Project-ideas.html
+++ b/content/releases/1.1.2/Project-ideas.html
@@ -144,12 +144,12 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><strong>DSLs for non-JVM languages:</strong> These DSL&#39;s should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are <a href="http://thrift.apache.org/">Thrift</a> structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible.</li>
 <li><strong>Online machine learning algorithms:</strong> Something like <a href="http://mahout.apache.org/">Mahout</a> but for online algorithms</li>
 <li><strong>Suite of performance benchmarks:</strong> These benchmarks should test Storm&#39;s performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority). </li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Rationale.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Rationale.html b/content/releases/1.1.2/Rationale.html
index dab35c1..0a0b153 100644
--- a/content/releases/1.1.2/Rationale.html
+++ b/content/releases/1.1.2/Rationale.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
+<div class="documentation-content"><p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
 
 <p>However, realtime data processing at massive scale is becoming more and more of a requirement for businesses. The lack of a &quot;Hadoop of realtime&quot; has become the biggest hole in the data processing ecosystem.</p>
 
@@ -176,7 +176,7 @@
 <li><strong>Fault-tolerant</strong>: If there are faults during execution of your computation, Storm will reassign tasks as necessary. Storm makes sure that a computation can run forever (or until you kill the computation).</li>
 <li><strong>Programming language agnostic</strong>: Robust and scalable realtime processing shouldn&#39;t be limited to a single platform. Storm topologies and processing components can be defined in any language, making Storm accessible to nearly anyone.</li>
 </ol>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html b/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html
index 3cd5533..e20ef70 100644
--- a/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html
+++ b/content/releases/1.1.2/Resource_Aware_Scheduler_overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The purpose of this document is to provide a description of the Resource Aware Scheduler for the Storm distributed real-time computation system.  This document provides a high-level description of the Resource Aware Scheduler in Storm.  Some of the benefits of using a resource-aware scheduler on top of Storm are outlined in the following presentation at Hadoop Summit 2016:</p>
 
@@ -617,7 +617,7 @@ rack-0 Avail [ CPU 32.78688524590164% MEM 19.51219512195122% Slots 20.0% ] effec
 <td><img src="images/ras_new_strategy_runtime_yahoo.png" alt=""></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html b/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html
index 0a04efd..b662a14 100644
--- a/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html
+++ b/content/releases/1.1.2/Running-topologies-on-a-production-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
+<div class="documentation-content"><p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
 
 <p>1) Define the topology (Use <a href="javadocs/org/apache/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
@@ -212,7 +212,7 @@
 <p>The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.</p>
 
 <p>You can also look at the worker logs on the cluster machines.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/SECURITY.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/SECURITY.html b/content/releases/1.1.2/SECURITY.html
index 9de32fd..c671e44 100644
--- a/content/releases/1.1.2/SECURITY.html
+++ b/content/releases/1.1.2/SECURITY.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
+<div class="documentation-content"><h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
 
 <p>Apache Storm offers a range of configuration options when trying to secure
 your cluster.  By default all authentication and authorization is disabled but 
@@ -681,7 +681,7 @@ on all possible worker hosts.</p>
  | storm.zookeeper.topology.auth.payload | A string representing the payload for topology Zookeeper authentication. |</p>
 
 <p>Note: If storm.zookeeper.topology.auth.payload isn&#39;t set, Storm will generate a ZooKeeper secret payload for MD5-digest with the generateZookeeperDigestSecretPayload() method.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/STORM-UI-REST-API.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/STORM-UI-REST-API.html b/content/releases/1.1.2/STORM-UI-REST-API.html
index 1f73954..27c62ed 100644
--- a/content/releases/1.1.2/STORM-UI-REST-API.html
+++ b/content/releases/1.1.2/STORM-UI-REST-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+<div class="documentation-content"><p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
 metrics data and configuration information as well as management operations such as starting or stopping topologies.</p>
 
 <h1 id="data-format">Data format</h1>
@@ -2936,7 +2936,7 @@ daemons.</p>
   </span><span class="s2">"error"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Internal Server Error"</span><span class="p">,</span><span class="w">
   </span><span class="s2">"errorMessage"</span><span class="p">:</span><span class="w"> </span><span class="s2">"java.lang.NullPointerException</span><span class="se">\n\t</span><span class="s2">at clojure.core$name.invoke(core.clj:1505)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$component_page.invoke(core.clj:752)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$fn__7766.invoke(core.clj:782)</span><span class="se">\n\t</span><span class="s2">at compojure.core$make_route$fn__5755.invoke(core.clj:93)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_route$fn__5743.invoke(core.clj:39)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_method$fn__5736.invoke(core.clj:24)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing$fn__5761.invoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.core$some.invoke(core.clj:2443)</span><spa
 n class="se">\n\t</span><span class="s2">at compojure.core$routing.doInvoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.lang.RestFn.applyTo(RestFn.java:139)</span><span class="se">\n\t</span><span class="s2">at clojure.core$apply.invoke(core.clj:619)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routes$fn__5765.invoke(core.clj:111)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.reload$wrap_reload$fn__6880.invoke(reload.clj:14)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$catch_errors$fn__7800.invoke(core.clj:836)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.keyword_params$wrap_keyword_params$fn__6319.invoke(keyword_params.clj:27)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.nested_params$wrap_nested_params$fn__6358.invoke(nested_params.clj:65)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.params$wrap
 _params$fn__6291.invoke(params.clj:55)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.multipart_params$wrap_multipart_params$fn__6386.invoke(multipart_params.clj:103)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.flash$wrap_flash$fn__6675.invoke(flash.clj:14)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.session$wrap_session$fn__6664.invoke(session.clj:43)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.cookies$wrap_cookies$fn__6595.invoke(cookies.clj:160)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty$proxy_handler$fn__6112.invoke(jetty.clj:16)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty.proxy$org.mortbay.jetty.handler.AbstractHandler$0.handle(Unknown Source)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)</span><span class="se">\n\t</span><span class="s2">at
  org.mortbay.jetty.Server.handle(Server.java:326)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)</span><span class="se">\n</span><span class="s2">"</span><span
  class="w">
 </span><span class="p">}</span><span class="w">
-</span></code></pre></div>
+</span></code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Serialization-(prior-to-0.6.0).html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Serialization-(prior-to-0.6.0).html b/content/releases/1.1.2/Serialization-(prior-to-0.6.0).html
index 61de27b..470d891 100644
--- a/content/releases/1.1.2/Serialization-(prior-to-0.6.0).html
+++ b/content/releases/1.1.2/Serialization-(prior-to-0.6.0).html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
+<div class="documentation-content"><p>Tuples can be comprised of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
 
 <h3 id="dynamic-typing">Dynamic typing</h3>
 
@@ -188,7 +188,7 @@
 <p>Storm provides helpers for registering serializers in a topology config. The <a href="javadocs/backtype/storm/Config.html">Config</a> class has a method called <code>addSerialization</code> that takes in a serializer class to add to the config.</p>
 
 <p>There&#39;s an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can&#39;t find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the <code>storm.yaml</code> files.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Serialization.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Serialization.html b/content/releases/1.1.2/Serialization.html
index 22cbee2..b4c4c13 100644
--- a/content/releases/1.1.2/Serialization.html
+++ b/content/releases/1.1.2/Serialization.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
+<div class="documentation-content"><p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
 
 <p>Tuples can be comprised of objects of any type. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
@@ -200,7 +200,7 @@
 <p>When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.</p>
 
 <p>To force a serializer for a particular class if there&#39;s a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Serializers.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Serializers.html b/content/releases/1.1.2/Serializers.html
index ead48ec..35e0a77 100644
--- a/content/releases/1.1.2/Serializers.html
+++ b/content/releases/1.1.2/Serializers.html
@@ -144,10 +144,10 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/rapportive-oss/storm-json">storm-json</a>: Simple JSON serializer for Storm</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Setting-up-a-Storm-cluster.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Setting-up-a-Storm-cluster.html b/content/releases/1.1.2/Setting-up-a-Storm-cluster.html
index dd96538..7dcd2e3 100644
--- a/content/releases/1.1.2/Setting-up-a-Storm-cluster.html
+++ b/content/releases/1.1.2/Setting-up-a-Storm-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
+<div class="documentation-content"><p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
 
 <p>If you run into difficulties with your Storm cluster, first check for a solution in the <a href="Troubleshooting.html">Troubleshooting</a> page. Otherwise, email the mailing list.</p>
 
@@ -246,7 +246,7 @@ The time to allow any given healthcheck script to run before it is marked failed
 </ol>
 
 <p>As you can see, running the daemons is very straightforward. The daemons will log to the logs/ directory wherever you extracted the Storm release.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Setting-up-development-environment.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Setting-up-development-environment.html b/content/releases/1.1.2/Setting-up-development-environment.html
index 7035ae4..60c125c 100644
--- a/content/releases/1.1.2/Setting-up-development-environment.html
+++ b/content/releases/1.1.2/Setting-up-development-environment.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
+<div class="documentation-content"><p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
 
 <ol>
 <li>Download a <a href="..//downloads.html">Storm release</a> , unpack it, and put the unpacked <code>bin/</code> directory on your PATH</li>
@@ -171,7 +171,7 @@
 
 <p>The previous step installed the <code>storm</code> client on your machine which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to. To do this, all you have to do is put the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">nimbus.seeds: ["123.45.678.890"]
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Spout-implementations.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Spout-implementations.html b/content/releases/1.1.2/Spout-implementations.html
index 2f77892..67b5b97 100644
--- a/content/releases/1.1.2/Spout-implementations.html
+++ b/content/releases/1.1.2/Spout-implementations.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>: Adapter to use Kestrel as a spout</li>
 <li><a href="https://github.com/rapportive-oss/storm-amqp-spout">storm-amqp-spout</a>: Adapter to use AMQP source as a spout</li>
 <li><a href="https://github.com/ptgoetz/storm-jms">storm-jms</a>: Adapter to use a JMS source as a spout</li>
 <li><a href="https://github.com/sorenmacbeth/storm-redis-pubsub">storm-redis-pubsub</a>: A spout that subscribes to a Redis pubsub stream</li>
 <li><a href="https://github.com/haitaoyao/storm-beanstalkd-spout">storm-beanstalkd-spout</a>: A spout that subscribes to a beanstalkd queue</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/State-checkpointing.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/State-checkpointing.html b/content/releases/1.1.2/State-checkpointing.html
index c3e696b..9455182 100644
--- a/content/releases/1.1.2/State-checkpointing.html
+++ b/content/releases/1.1.2/State-checkpointing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="state-support-in-core-storm">State support in core storm</h1>
+<div class="documentation-content"><h1 id="state-support-in-core-storm">State support in core storm</h1>
 
 <p>Storm core has abstractions for bolts to save and retrieve the state of its operations. There is a default in-memory
 based state implementation and also a Redis backed implementation that provides state persistence.</p>
@@ -303,7 +303,7 @@ for e.g. the CheckpointSpout to save its state.</p>
 a <code>StateProvider</code> implementation which can load and return the state based on the namespace. Each state belongs to a unique namespace.
 The namespace is typically unique per task so that each task can have its own state. The StateProvider and the corresponding
 State implementation should be available in the class path of Storm (by placing them in the extlib directory).</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Storm-Scheduler.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Storm-Scheduler.html b/content/releases/1.1.2/Storm-Scheduler.html
index 1d22766..a9fb237 100644
--- a/content/releases/1.1.2/Storm-Scheduler.html
+++ b/content/releases/1.1.2/Storm-Scheduler.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
+<div class="documentation-content"><p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
 
 <h2 id="pluggable-scheduler">Pluggable scheduler</h2>
 
@@ -163,7 +163,7 @@
 <p>Any topologies submitted to the cluster not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).</p>
 
 <p>The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &quot;productionized&quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Storm-multi-language-protocol-(versions-0.7.0-and-below).html b/content/releases/1.1.2/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
index 215e239..463fb20 100644
--- a/content/releases/1.1.2/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
+++ b/content/releases/1.1.2/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -253,7 +253,7 @@ file lets the supervisor know the PID so it can shutdown the process later on.</
 <p>Note: This command is not JSON encoded, it is sent as a simple string.</p>
 
 <p>This lets the parent bolt know that the script has finished processing and is ready for another tuple.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Structure-of-the-codebase.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Structure-of-the-codebase.html b/content/releases/1.1.2/Structure-of-the-codebase.html
index 7d8e81c..92dba98 100644
--- a/content/releases/1.1.2/Structure-of-the-codebase.html
+++ b/content/releases/1.1.2/Structure-of-the-codebase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>There are three distinct layers to Storm&#39;s codebase.</p>
+<div class="documentation-content"><p>There are three distinct layers to Storm&#39;s codebase.</p>
 
 <p>First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.</p>
 
@@ -287,7 +287,7 @@
 <p><a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/util.clj">org.apache.storm.util</a>: Contains generic utility functions used throughout the code base.</p>
 
 <p><a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/clj/org/apache/storm/zookeeper.clj">org.apache.storm.zookeeper</a>: Clojure wrapper around the Zookeeper API that implements some &quot;high-level&quot; operations like &quot;mkdirs&quot; and &quot;delete-recursive&quot;.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Support-for-non-java-languages.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Support-for-non-java-languages.html b/content/releases/1.1.2/Support-for-non-java-languages.html
index 20efd49..46a7de2 100644
--- a/content/releases/1.1.2/Support-for-non-java-languages.html
+++ b/content/releases/1.1.2/Support-for-non-java-languages.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/storm-jruby">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/gphat/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Transactional-topologies.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Transactional-topologies.html b/content/releases/1.1.2/Transactional-topologies.html
index 2164c6f..e161f67 100644
--- a/content/releases/1.1.2/Transactional-topologies.html
+++ b/content/releases/1.1.2/Transactional-topologies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
+<div class="documentation-content"><p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
 
 <hr>
 
@@ -510,7 +510,7 @@
 <li>so it can&#39;t call finishBatch until it&#39;s received all tuples from all subscribed components AND it&#39;s received the commit stream tuple (for committers). This ensures that it can&#39;t prematurely call finishBatch</li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Trident-API-Overview.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Trident-API-Overview.html b/content/releases/1.1.2/Trident-API-Overview.html
index 1be4956..f1d5a9c 100644
--- a/content/releases/1.1.2/Trident-API-Overview.html
+++ b/content/releases/1.1.2/Trident-API-Overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
+<div class="documentation-content"><p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
 
 <p>There are five kinds of operations in Trident:</p>
 
@@ -669,7 +669,7 @@ Partition 2:
 <p>You might be wondering – how do you do something like a &quot;windowed join&quot;, where tuples from one side of the join are joined against the last hour of tuples from the other side of the join.</p>
 
 <p>To do this, you would make use of partitionPersist and stateQuery. The last hour of tuples from one side of the join would be stored and rotated in a source of state, keyed by the join field. Then the stateQuery would do lookups by the join field to perform the &quot;join&quot;.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Trident-RAS-API.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Trident-RAS-API.html b/content/releases/1.1.2/Trident-RAS-API.html
index 2ad8546..1700759 100644
--- a/content/releases/1.1.2/Trident-RAS-API.html
+++ b/content/releases/1.1.2/Trident-RAS-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="trident-ras-api">Trident RAS API</h2>
+<div class="documentation-content"><h2 id="trident-ras-api">Trident RAS API</h2>
 
 <p>The Trident RAS (Resource Aware Scheduler) API provides a mechanism to allow users to specify the resource consumption of a Trident topology. The API looks exactly like the base RAS API, only it is called on Trident Streams instead of Bolts and Spouts.</p>
 
@@ -192,7 +192,7 @@ Operations that are combined by Trident into single Bolts will have their resour
 <p>Resource declarations may be called after any operation. The operations without explicit resources will get the defaults. If you choose to set resources for only some operations, defaults must be declared, or topology submission will fail.
 Resource declarations have the same <em>boundaries</em> as parallelism hints. They don&#39;t cross any groupings, shufflings, or any other kind of repartitioning.
 Resources are declared per operation, but get combined within boundaries.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Trident-spouts.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Trident-spouts.html b/content/releases/1.1.2/Trident-spouts.html
index 9eac47c..7f62f1a 100644
--- a/content/releases/1.1.2/Trident-spouts.html
+++ b/content/releases/1.1.2/Trident-spouts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="trident-spouts">Trident spouts</h1>
+<div class="documentation-content"><h1 id="trident-spouts">Trident spouts</h1>
 
 <p>Like in the vanilla Storm API, spouts are the source of streams in a Trident topology. On top of the vanilla Storm spouts, Trident exposes additional APIs for more sophisticated spouts.</p>
 
@@ -182,7 +182,7 @@
 </ol>
 
 <p>And, like mentioned in the beginning of this tutorial, you can use regular IRichSpout&#39;s as well.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Trident-state.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Trident-state.html b/content/releases/1.1.2/Trident-state.html
index 325c150..0bf639c 100644
--- a/content/releases/1.1.2/Trident-state.html
+++ b/content/releases/1.1.2/Trident-state.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
+<div class="documentation-content"><p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
 
 <p>Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures. This lets you reason about Trident topologies as if each message were processed exactly-once.</p>
 
@@ -415,7 +415,7 @@ apple =&gt; [count=10, txid=2]
 <p>Finally, Trident provides the <a href="http://github.com/apache/storm/blob/v1.1.2/storm-core/src/jvm/org/apache/storm/trident/state/map/SnapshottableMap.java">SnapshottableMap</a> class that turns a MapState into a Snapshottable object, by storing global aggregations into a fixed key.</p>
 
 <p>Take a look at the implementation of <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> to see how all these utilities can be put together to make a high performance MapState implementation. MemcachedState allows you to choose between opaque transactional, transactional, and non-transactional semantics.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Trident-tutorial.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Trident-tutorial.html b/content/releases/1.1.2/Trident-tutorial.html
index ca8a161..7ac0c37 100644
--- a/content/releases/1.1.2/Trident-tutorial.html
+++ b/content/releases/1.1.2/Trident-tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
+<div class="documentation-content"><p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
 
 <h2 id="illustrative-example">Illustrative example</h2>
 
@@ -356,7 +356,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>Trident makes realtime computation elegant. You&#39;ve seen how high throughput stream processing, state manipulation, and low-latency querying can be seamlessly intermixed via Trident&#39;s API. Trident lets you express your realtime computations in a natural way while still getting maximal performance.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Troubleshooting.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Troubleshooting.html b/content/releases/1.1.2/Troubleshooting.html
index e7d877a..1b386ad 100644
--- a/content/releases/1.1.2/Troubleshooting.html
+++ b/content/releases/1.1.2/Troubleshooting.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists issues people have run into when using Storm along with their solutions.</p>
+<div class="documentation-content"><p>This page lists issues people have run into when using Storm along with their solutions.</p>
 
 <h3 id="worker-processes-are-crashing-on-startup-with-no-stack-trace">Worker processes are crashing on startup with no stack trace</h3>
 
@@ -279,7 +279,7 @@ Caused by: java.util.ConcurrentModificationException
 <ul>
 <li>This means that you&#39;re emitting a mutable object as an output tuple. Everything you emit into the output collector must be immutable. What&#39;s happening is that your bolt is modifying the object while it is being serialized to be sent over the network.</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Tutorial.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Tutorial.html b/content/releases/1.1.2/Tutorial.html
index 87e55dc..a776b00 100644
--- a/content/releases/1.1.2/Tutorial.html
+++ b/content/releases/1.1.2/Tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
+<div class="documentation-content"><p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -428,7 +428,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html b/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html
index 0d8e717..789c697 100644
--- a/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/content/releases/1.1.2/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<div class="documentation-content"><h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -274,7 +274,7 @@ $ storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
 <li><a href="Tutorial.html">Tutorial</a></li>
 <li><a href="javadocs/">Storm API documentation</a>, most notably the class <code>Config</code></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html b/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html
index 5de792e..cebdacd 100644
--- a/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html
+++ b/content/releases/1.1.2/Using-non-JVM-languages-with-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li>two pieces: creating topologies and implementing spouts and bolts in other languages</li>
 <li>creating topologies in another language is easy since topologies are just thrift structures (link to storm.thrift)</li>
 <li>implementing spouts and bolts in another language is called a &quot;multilang components&quot; or &quot;shelling&quot;
@@ -198,7 +198,7 @@
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/Windowing.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/Windowing.html b/content/releases/1.1.2/Windowing.html
index b5f3bc9..ff97307 100644
--- a/content/releases/1.1.2/Windowing.html
+++ b/content/releases/1.1.2/Windowing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
+<div class="documentation-content"><p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
 following two parameters,</p>
 
 <ol>
@@ -380,7 +380,7 @@ tuples can be received within the timeout period.</p>
 
 <p>An example topology, <code>SlidingWindowTopology</code>, shows how to use the APIs to compute a sliding window sum and a tumbling window 
 average.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/distcache-blobstore.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/distcache-blobstore.html b/content/releases/1.1.2/distcache-blobstore.html
index d7cb463..b53790f 100644
--- a/content/releases/1.1.2/distcache-blobstore.html
+++ b/content/releases/1.1.2/distcache-blobstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
+<div class="documentation-content"><h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
 
 <p>The distributed cache feature in storm is used to efficiently distribute files
 (or blobs, which is the equivalent terminology for a file in the distributed
@@ -799,7 +799,7 @@ struct BeginDownloadResult {
  2: required string session;
  3: optional i64 data_size;
 }
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/dynamic-log-level-settings.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/dynamic-log-level-settings.html b/content/releases/1.1.2/dynamic-log-level-settings.html
index 0b24d50..f5d4a50 100644
--- a/content/releases/1.1.2/dynamic-log-level-settings.html
+++ b/content/releases/1.1.2/dynamic-log-level-settings.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
+<div class="documentation-content"><p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
 
 <p>The log level settings apply the same way as you&#39;d expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the children loggers start using that level (unless the children have a more restrictive level already). A timeout can optionally be provided (except for DEBUG mode, where it’s required in the UI), if workers should reset log levels automatically.</p>
 
@@ -179,7 +179,7 @@
 <p><code>./bin/storm set_log_level my_topology -r ROOT</code></p>
 
 <p>Clears the ROOT logger dynamic log level, resetting it to its original value.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/dynamic-worker-profiling.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/dynamic-worker-profiling.html b/content/releases/1.1.2/dynamic-worker-profiling.html
index c2b58ed..7b0a298 100644
--- a/content/releases/1.1.2/dynamic-worker-profiling.html
+++ b/content/releases/1.1.2/dynamic-worker-profiling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In multi-tenant mode, storm launches long-running JVMs across the cluster without giving users sudo access. Self-service Java heap dumps, jstacks, and Java profiling of these JVMs improve users&#39; ability to analyze and debug issues while actively monitoring them.</p>
+<div class="documentation-content"><p>In multi-tenant mode, storm launches long-running JVMs across the cluster without giving users sudo access. Self-service Java heap dumps, jstacks, and Java profiling of these JVMs improve users&#39; ability to analyze and debug issues while actively monitoring them.</p>
 
 <p>The storm dynamic profiler lets you dynamically take heap dumps, jprofiles, or jstacks for a worker JVM running on a stock cluster. It lets users download these dumps from the browser and analyze them with their favorite tools. The UI component page lists the workers for the component along with action buttons, and the logviewer lets you download the dumps they generate. Please see the screenshots for more information.</p>
 
@@ -171,7 +171,7 @@
 <h2 id="configuration">Configuration</h2>
 
 <p>The &quot;worker.profiler.command&quot; setting can be configured to point to a specific pluggable profiler or heap-dump command. The &quot;worker.profiler.enabled&quot; setting can be disabled if the plugin is not available or the JDK does not support JProfile flight recording, so that the worker JVM options will not include &quot;worker.profiler.childopts&quot;. To use a different profiler plugin, you can change these configurations.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/flux.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/flux.html b/content/releases/1.1.2/flux.html
index 78f6e4e..f97ee3f 100644
--- a/content/releases/1.1.2/flux.html
+++ b/content/releases/1.1.2/flux.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
+<div class="documentation-content"><p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
 
 <h2 id="definition">Definition</h2>
 
@@ -908,7 +908,7 @@ same file. Includes may be either files, or classpath resources.</p>
   <span class="na">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
   <span class="c1"># Flux will look for "getTopology", this will override that.</span>
   <span class="na">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/index.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/index.html b/content/releases/1.1.2/index.html
index 27992a8..66752c4 100644
--- a/content/releases/1.1.2/index.html
+++ b/content/releases/1.1.2/index.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<blockquote>
+<div class="documentation-content"><blockquote>
 <h4 id="note">NOTE</h4>
 
 <p>In the latest version, the class packages have been changed from &quot;backtype.storm&quot; to &quot;org.apache.storm&quot;, so topology code compiled with an older version won&#39;t run on Storm 1.0.0 as-is. Backward compatibility is available through the following configuration </p>
@@ -284,7 +284,7 @@ But small change will not affect the user experience. We will notify the user wh
 <li><a href="Multilang-protocol.html">Multilang protocol</a> (how to provide support for another language)</li>
 <li><a href="Implementation-docs.html">Implementation docs</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/nimbus-ha-design.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/nimbus-ha-design.html b/content/releases/1.1.2/nimbus-ha-design.html
index 9755cf5..75d5b37 100644
--- a/content/releases/1.1.2/nimbus-ha-design.html
+++ b/content/releases/1.1.2/nimbus-ha-design.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="problem-statement">Problem Statement:</h2>
+<div class="documentation-content"><h2 id="problem-statement">Problem Statement:</h2>
 
 <p>Currently the storm master, aka nimbus, is a process that runs on a single machine under supervision. In most cases the 
 nimbus failure is transient and it is restarted by the supervisor. However sometimes when disks fail and networks 
@@ -361,7 +361,7 @@ The default is 60 seconds, a value of -1 indicates to wait for ever.
 <p>Note: Even though all nimbus hosts have watchers on zookeeper to be notified immediately as soon as a new topology is available for code
 download, the callback pretty much never results in code download. In practice we have observed that the desired replication is only achieved once the background-thread runs. 
 So you should expect your topology submission time to be somewhere between 0 to (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count &gt; 1.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/storm-cassandra.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/storm-cassandra.html b/content/releases/1.1.2/storm-cassandra.html
index f22f5c8..e879609 100644
--- a/content/releases/1.1.2/storm-cassandra.html
+++ b/content/releases/1.1.2/storm-cassandra.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
+<div class="documentation-content"><h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
 
 <p>This library provides a core storm bolt on top of Apache Cassandra.
 It provides a simple DSL to map a storm <em>Tuple</em> to a Cassandra Query Language <em>Statement</em>.</p>
@@ -373,7 +373,7 @@ The stream is partitioned among the bolt&#39;s tasks based on the specified row
         <span class="n">CassandraStateFactory</span> <span class="n">selectWeatherStationStateFactory</span> <span class="o">=</span> <span class="n">getSelectWeatherStationStateFactory</span><span class="o">();</span>
         <span class="n">TridentState</span> <span class="n">selectState</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="n">selectWeatherStationStateFactory</span><span class="o">);</span>
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">selectState</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"weather_station_id"</span><span class="o">),</span> <span class="k">new</span> <span class="n">CassandraQuery</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"name"</span><span class="o">));</span>         
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/storm-elasticsearch.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/storm-elasticsearch.html b/content/releases/1.1.2/storm-elasticsearch.html
index bf4253d..c335318 100644
--- a/content/releases/1.1.2/storm-elasticsearch.html
+++ b/content/releases/1.1.2/storm-elasticsearch.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
+<div class="documentation-content"><h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
 
 <p>EsIndexBolt, EsPercolateBolt and EsState allow users to stream data from storm into Elasticsearch directly.
   For a detailed description, please refer to the following.</p>
@@ -245,7 +245,7 @@ You can refer implementation of DefaultEsTupleMapper to see how to implement you
 <li>Sriharsha Chintalapani (<a href="https://github.com/harshach">@harshach</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/storm-eventhubs.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/storm-eventhubs.html b/content/releases/1.1.2/storm-eventhubs.html
index f1fed63..27752c6 100644
--- a/content/releases/1.1.2/storm-eventhubs.html
+++ b/content/releases/1.1.2/storm-eventhubs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
+<div class="documentation-content"><p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
@@ -178,7 +178,7 @@ If you want to send messages to all partitions, use &quot;-1&quot; as partitionI
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/1.1.2/storm-hbase.html
----------------------------------------------------------------------
diff --git a/content/releases/1.1.2/storm-hbase.html b/content/releases/1.1.2/storm-hbase.html
index d9dd98c..6e21b15 100644
--- a/content/releases/1.1.2/storm-hbase.html
+++ b/content/releases/1.1.2/storm-hbase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
 
 <h2 id="usage">Usage</h2>
 
@@ -359,7 +359,7 @@ Word: 'watermelon', Count: 6806
         <span class="o">}</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>