Posted to commits@storm.apache.org by sr...@apache.org on 2018/05/15 15:20:57 UTC

[4/9] storm-site git commit: Rebuild site

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html b/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html
index bd26e03..a2a2198 100644
--- a/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html
+++ b/content/releases/2.0.0-SNAPSHOT/Distributed-RPC.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
+<div class="documentation-content"><p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
 
 <p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#39;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#39;s so useful that it&#39;s bundled with Storm.</p>
 
@@ -347,7 +347,7 @@ try (DRPCClient drpc = DRPCClient.getConfiguredClient(conf)) {
 <li>KeyedFairBolt for weaving the processing of multiple requests at the same time</li>
 <li>How to use <code>CoordinatedBolt</code> directly</li>
 </ul>
-
+</div>
 
 
 	          </div>
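The Distributed-RPC page in the hunk above describes a topology that consumes a stream of (request id, argument) pairs and emits one result per request. A minimal standalone sketch of that pattern — plain Java, no Storm APIs, with `reverse` standing in for the "really intense" function and all names hypothetical:

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.function.Function;

// Toy model of the DRPC pattern: each request carries an id so the DRPC
// server can route the computed result back to the waiting client.
public class DrpcPatternDemo {
    // Consume a stream of (requestId -> argument) and emit (requestId -> result).
    public static Map<Long, String> runTopology(Map<Long, String> requests,
                                                Function<String, String> intenseFn) {
        Map<Long, String> results = new LinkedHashMap<>();
        for (Map.Entry<Long, String> req : requests.entrySet()) {
            results.put(req.getKey(), intenseFn.apply(req.getValue()));
        }
        return results;
    }

    public static void main(String[] args) {
        Map<Long, String> requests = new LinkedHashMap<>();
        requests.put(1L, "hello");
        requests.put(2L, "storm");
        // reverse() stands in for an arbitrary expensive computation
        Map<Long, String> results =
            runTopology(requests, s -> new StringBuilder(s).reverse().toString());
        assert results.get(1L).equals("olleh");
        assert results.get(2L).equals("mrots");
        System.out.println(results);
    }
}
```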

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Eventlogging.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Eventlogging.html b/content/releases/2.0.0-SNAPSHOT/Eventlogging.html
index a6e5945..e6564ce 100644
--- a/content/releases/2.0.0-SNAPSHOT/Eventlogging.html
+++ b/content/releases/2.0.0-SNAPSHOT/Eventlogging.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The topology event inspector provides the ability to view tuples as they flow through the different stages of a storm topology.
 This can be useful for inspecting the tuples emitted at a spout or a bolt in the topology pipeline while the topology is running, without stopping or redeploying it. The normal flow of tuples from the spouts to the bolts is not affected by turning on event logging.</p>
@@ -269,7 +269,7 @@ Alternate implementations of the <code>IEventLogger</code> interface can be adde
 
 <p>Keep in mind that EventLoggerBolt is just another kind of bolt, so the overall throughput of the topology will drop when the registered event loggers cannot keep up with the incoming events; treat it with the same care as any other bolt.
 One way to avoid this is to make your implementation of IEventLogger <code>non-blocking</code>.</p>
-
+</div>
 
 
 	          </div>
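The Eventlogging page above suggests a non-blocking IEventLogger implementation to keep slow loggers from dragging down topology throughput. A hypothetical sketch of that idea (not the Storm API): `offer()` on a bounded queue drops the event when full instead of stalling the caller, and a separate consumer would drain the queue to do the slow write.

```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

// Hypothetical sketch: an event logger that never blocks the calling bolt.
// Events that do not fit in the bounded queue are counted and dropped.
public class NonBlockingEventLogger {
    private final BlockingQueue<String> queue;
    private long dropped = 0;

    public NonBlockingEventLogger(int capacity) {
        queue = new ArrayBlockingQueue<>(capacity);
    }

    /** Returns true if the event was queued, false if it was dropped. */
    public boolean log(String event) {
        boolean queued = queue.offer(event);   // offer() never blocks
        if (!queued) dropped++;
        return queued;
    }

    public long droppedCount() { return dropped; }

    /** A background writer thread would call this to consume events. */
    public String drainOne() { return queue.poll(); }

    public static void main(String[] args) {
        NonBlockingEventLogger logger = new NonBlockingEventLogger(1);
        assert logger.log("tuple-1");          // fits
        assert !logger.log("tuple-2");         // queue full: dropped, not blocked
        System.out.println("dropped: " + logger.droppedCount());
    }
}
```

Dropping events is an acceptable trade here because event logging is a debugging aid, not part of the guaranteed data flow.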

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/FAQ.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/FAQ.html b/content/releases/2.0.0-SNAPSHOT/FAQ.html
index e2639e0..66a996e 100644
--- a/content/releases/2.0.0-SNAPSHOT/FAQ.html
+++ b/content/releases/2.0.0-SNAPSHOT/FAQ.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="best-practices">Best Practices</h2>
+<div class="documentation-content"><h2 id="best-practices">Best Practices</h2>
 
 <h3 id="what-rules-of-thumb-can-you-give-me-for-configuring-storm-trident">What rules of thumb can you give me for configuring Storm+Trident?</h3>
 
@@ -276,7 +276,7 @@
 <li>When possible, make your process incremental: each value that comes in makes the answer more and more true. A Trident ReducerAggregator is an operator that takes a prior result and a set of new records and returns a new result. This lets the result be cached and serialized to a datastore; if a server drops off line for a day and then comes back with a full day&#39;s worth of data in a rush, the old results will be calmly retrieved and updated.</li>
 <li>Lambda architecture: Record all events into an archival store (S3, HBase, HDFS) on receipt. in the fast layer, once the time window is clear, process the bucket to get an actionable answer, and ignore everything older than the time window. Periodically run a global aggregation to calculate a &quot;correct&quot; answer.</li>
 </ul>
-
+</div>
 
 
 	          </div>
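The FAQ's advice above — "make your process incremental: each value that comes in makes the answer more and more true" — can be sketched standalone (this is the reducer idea in plain Java, not Trident's actual ReducerAggregator interface):

```java
// Toy model of the reducer-aggregator idea: the prior result is read back
// (e.g. from a datastore), folded with a batch of new records, and the new
// result is written out. A late batch simply repeats the same fold against
// the stored result, which is why a server coming back after a day offline
// can be absorbed calmly.
public class IncrementalCount {
    public static long reduce(long priorResult, long[] newRecords) {
        long result = priorResult;
        for (long r : newRecords) {
            result += r;   // each value makes the answer "more true"
        }
        return result;
    }

    public static void main(String[] args) {
        long stored = reduce(0L, new long[]{3, 4});    // first batch
        long updated = reduce(stored, new long[]{5});  // late batch arrives
        assert updated == 12L;
        System.out.println(updated);
    }
}
```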

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html b/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html
index f691c2c..adcfd30 100644
--- a/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html
+++ b/content/releases/2.0.0-SNAPSHOT/Fault-tolerance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
+<div class="documentation-content"><p>This page explains the design details of Storm that make it a fault-tolerant system.</p>
 
 <h2 id="what-happens-when-a-worker-dies">What happens when a worker dies?</h2>
 
@@ -169,7 +169,7 @@
 <h2 id="how-does-storm-guarantee-data-processing">How does Storm guarantee data processing?</h2>
 
 <p>Storm provides mechanisms to guarantee data processing even if nodes die or messages are lost. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for the details.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html b/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html
index 503abfc..655cf8c 100644
--- a/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html
+++ b/content/releases/2.0.0-SNAPSHOT/Guaranteeing-message-processing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
+<div class="documentation-content"><p>Storm offers several different levels of guaranteed message processing, including best effort, at least once, and exactly once through <a href="Trident-tutorial.html">Trident</a>.
 This page describes how Storm can guarantee at least once processing.</p>
 
 <h3 id="what-does-it-mean-for-a-message-to-be-fully-processed">What does it mean for a message to be &quot;fully processed&quot;?</h3>
@@ -301,7 +301,7 @@ This page describes how Storm can guarantee at least once processing.</p>
 <p>The second way is to remove reliability on a message by message basis. You can turn off tracking for an individual spout tuple by omitting a message id in the <code>SpoutOutputCollector.emit</code> method.</p>
 
 <p>Finally, if you don&#39;t care if a particular subset of the tuples downstream in the topology fail to be processed, you can emit them as unanchored tuples. Since they&#39;re not anchored to any spout tuples, they won&#39;t cause any spout tuples to fail if they aren&#39;t acked.</p>
-
+</div>
 
 
 	          </div>
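The at-least-once guarantee covered by the page above rests on Storm's XOR-based acker bookkeeping: each tuple's 64-bit id is XORed into a per-spout-tuple "ack val" once when the tuple is anchored and once when it is acked, so the tree is fully processed exactly when the value returns to zero. A standalone sketch of that arithmetic (class and method names are mine, not Storm's):

```java
import java.util.Random;

// Toy model of the acker's bookkeeping for one spout tuple.
public class AckValDemo {
    private long ackVal = 0L;

    public void anchored(long tupleId) { ackVal ^= tupleId; }  // tuple created
    public void acked(long tupleId)    { ackVal ^= tupleId; }  // tuple acked
    public boolean fullyProcessed()    { return ackVal == 0L; }

    public static void main(String[] args) {
        Random rng = new Random(42);
        AckValDemo acker = new AckValDemo();
        long root = rng.nextLong();
        acker.anchored(root);             // spout emits the root tuple
        long child1 = rng.nextLong();
        long child2 = rng.nextLong();
        acker.anchored(child1);           // bolt emits two anchored tuples...
        acker.anchored(child2);
        acker.acked(root);                // ...and acks its input
        assert !acker.fullyProcessed();   // children still pending
        acker.acked(child1);
        acker.acked(child2);
        assert acker.fullyProcessed();    // every id XORed twice: back to 0
        System.out.println("tuple tree fully processed");
    }
}
```

Because XOR is order-independent, acks can arrive in any order, and random 64-bit ids make an accidental early zero vanishingly unlikely.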

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Hooks.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Hooks.html b/content/releases/2.0.0-SNAPSHOT/Hooks.html
index 20ac5c7..7b71432 100644
--- a/content/releases/2.0.0-SNAPSHOT/Hooks.html
+++ b/content/releases/2.0.0-SNAPSHOT/Hooks.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
+<div class="documentation-content"><p>Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the <a href="javadocs/org/apache/storm/hooks/BaseTaskHook.html">BaseTaskHook</a> class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:</p>
 
 <ol>
 <li>In the open method of your spout or prepare method of your bolt using the <a href="javadocs/org/apache/storm/task/TopologyContext.html#addTaskHook">TopologyContext</a> method.</li>
 <li>Through the Storm configuration using the <a href="javadocs/org/apache/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS">&quot;topology.auto.task.hooks&quot;</a> config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.</li>
 </ol>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html b/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html
index 366d728..e9de91f 100644
--- a/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html
+++ b/content/releases/2.0.0-SNAPSHOT/IConfigLoader.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>IConfigLoader is an interface designed to allow dynamic loading of scheduler resource constraints. Currently, the MultiTenant scheduler uses this interface to dynamically load the number of isolated nodes a given user has been guaranteed, and the ResourceAwareScheduler uses the interface to dynamically load per-user resource guarantees.</p>
 
@@ -195,7 +195,7 @@ For <code>FileConfigLoader</code>, this is the URI pointing to a file.</li>
 <li>scheduler.config.loader.polltime.secs: Currently only used in <code>ArtifactoryConfigLoader</code>. It is the frequency at which the plugin will call out to artifactory instead of returning the most recently cached result. The default is 600 seconds.</li>
 <li>scheduler.config.loader.artifactory.base.directory: Only used in <code>ArtifactoryConfigLoader</code>. It is the part of the uri, configurable in Artifactory, which represents the top of the directory tree. It defaults to &quot;/artifactory&quot;.</li>
 </ul>
-
+</div>
 
 
 	          </div>
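The `scheduler.config.loader.polltime.secs` behaviour described above — return the cached result until the poll interval elapses, then call out to the backing store again — can be sketched like this (hypothetical class; a `Supplier` stands in for the Artifactory or file fetch):

```java
import java.util.function.Supplier;

// Hypothetical sketch of polltime-based caching: load() hands back the
// cached config and only refetches once pollMillis has elapsed.
public class CachingConfigLoader {
    private final Supplier<String> fetch;
    private final long pollMillis;
    private String cached;
    private long lastFetch;

    public CachingConfigLoader(Supplier<String> fetch, long pollMillis) {
        this.fetch = fetch;
        this.pollMillis = pollMillis;
    }

    public String load(long nowMillis) {
        if (cached == null || nowMillis - lastFetch >= pollMillis) {
            cached = fetch.get();       // call out to the backing store
            lastFetch = nowMillis;
        }
        return cached;                  // otherwise: most recently cached result
    }

    public static void main(String[] args) {
        final int[] calls = {0};
        CachingConfigLoader loader = new CachingConfigLoader(
            () -> { calls[0]++; return "config-v" + calls[0]; }, 600_000L);
        assert loader.load(0L).equals("config-v1");       // first call fetches
        assert loader.load(1_000L).equals("config-v1");   // within polltime: cached
        assert loader.load(600_000L).equals("config-v2"); // polltime elapsed: refetch
        System.out.println("fetches: " + calls[0]);
    }
}
```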

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html b/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html
index 6d9dae9..d5d6d11 100644
--- a/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html
+++ b/content/releases/2.0.0-SNAPSHOT/Implementation-docs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
+<div class="documentation-content"><p>This section of the wiki is dedicated to explaining how Storm is implemented. You should have a good grasp of how to use Storm before reading these sections. </p>
 
 <ul>
 <li><a href="Structure-of-the-codebase.html">Structure of the codebase</a></li>
@@ -155,7 +155,7 @@
 <li><a href="nimbus-ha-design.html">Nimbus HA</a></li>
 <li><a href="storm-sql-internal.html">Storm SQL</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html b/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html
index fb68946..abfce3d 100644
--- a/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html
+++ b/content/releases/2.0.0-SNAPSHOT/Installing-native-dependencies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
+<div class="documentation-content"><p>The native dependencies are only needed on actual Storm clusters. When running Storm in local mode, Storm uses a pure Java messaging system so that you don&#39;t need to install native dependencies on your development machine.</p>
 
 <p>Installing ZeroMQ and JZMQ is usually straightforward. Sometimes, however, people run into issues with autoconf and get strange errors. If you run into any issues, please email the <a href="http://groups.google.com/group/storm-user">Storm mailing list</a> or come get help in the #storm-user room on freenode. </p>
 
@@ -175,7 +175,7 @@ sudo make install
 </ol>
 
 <p>If you run into any errors when running <code>./configure</code>, <a href="http://stackoverflow.com/questions/3522248/how-do-i-compile-jzmq-for-zeromq-on-osx">this thread</a> may provide a solution.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Joins.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Joins.html b/content/releases/2.0.0-SNAPSHOT/Joins.html
index 809ab78..96b615f 100644
--- a/content/releases/2.0.0-SNAPSHOT/Joins.html
+++ b/content/releases/2.0.0-SNAPSHOT/Joins.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
+<div class="documentation-content"><p>Storm core supports joining multiple data streams into one with the help of <code>JoinBolt</code>.
 <code>JoinBolt</code> is a Windowed bolt, i.e. it waits for the configured window duration to match up the
 tuples among the streams being joined. This helps align the streams within a Window boundary.</p>
 
@@ -272,7 +272,7 @@ can occur when its value is set to null.</li>
 <li>Lastly, keep the window size to the minimum value necessary for solving the problem at hand.</li>
 </ul></li>
 </ol>
-
+</div>
 
 
 	          </div>
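The Joins page above explains that `JoinBolt` buffers tuples for the window duration and matches them up across streams when the window closes. A toy standalone model of one window's worth of that matching (plain Java, not the JoinBolt API; join-key maps stand in for the buffered streams):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Toy model of one window of a stream join: tuples buffered from two
// streams are matched on the join key when the window closes. A tuple
// whose partner never arrived inside the window is simply dropped, which
// is why the page recommends a window comfortably larger than the
// expected arrival skew between streams.
public class WindowJoinDemo {
    public static List<String> joinWindow(Map<String, String> streamA,
                                          Map<String, String> streamB) {
        List<String> joined = new ArrayList<>();
        for (Map.Entry<String, String> a : streamA.entrySet()) {
            String b = streamB.get(a.getKey());
            if (b != null) {                       // inner-join semantics
                joined.add(a.getKey() + ":" + a.getValue() + "|" + b);
            }
        }
        return joined;
    }

    public static void main(String[] args) {
        Map<String, String> orders = new TreeMap<>();
        orders.put("k1", "order-42");
        orders.put("k2", "order-43");              // partner never arrives
        Map<String, String> shipments = new TreeMap<>();
        shipments.put("k1", "shipped");
        List<String> out = joinWindow(orders, shipments);
        assert out.size() == 1;
        System.out.println(out);
    }
}
```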

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html b/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html
index 6658e34..6167aec 100644
--- a/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html
+++ b/content/releases/2.0.0-SNAPSHOT/Kestrel-and-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
+<div class="documentation-content"><p>This page explains how to use Storm to consume items from a Kestrel cluster.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -334,7 +334,7 @@ Then, wait about 5 seconds in order to avoid a ConnectException.
 Now execute the program to add items to the queue and launch the Storm topology. The order in which you launch the programs is of no importance.
 
 If you run the topology with TOPOLOGY_DEBUG you should see tuples being emitted in the topology.
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html b/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html
index eebe42d..d2765a6 100644
--- a/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html
+++ b/content/releases/2.0.0-SNAPSHOT/Lifecycle-of-a-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-client/src</code> rather than <code>src/</code>.)</p>
+<div class="documentation-content"><p>(<strong>NOTE</strong>: this page is based on the 0.7.1 code; many things have changed since then, including a split between tasks and executors, and a reorganization of the code under <code>storm-client/src</code> rather than <code>src/</code>.)</p>
 
 <p>This page explains in detail the lifecycle of a topology from running the &quot;storm jar&quot; command to uploading the topology to Nimbus to the supervisors starting/stopping workers to workers and tasks setting themselves up. It also explains how Nimbus monitors topologies and how topologies are shutdown when they are killed.</p>
 
@@ -261,7 +261,7 @@
 <li>Removing a topology cleans out the assignment and static information from ZK <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L116">code</a></li>
 <li>A separate cleanup thread runs the <code>do-cleanup</code> function which will clean up the heartbeat dir and the jars/configs stored locally. <a href="https://github.com/apache/storm/blob/0.7.1/src/clj/org/apache/storm/daemon/nimbus.clj#L577">code</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Local-mode.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Local-mode.html b/content/releases/2.0.0-SNAPSHOT/Local-mode.html
index fef6c8b..f306942 100644
--- a/content/releases/2.0.0-SNAPSHOT/Local-mode.html
+++ b/content/releases/2.0.0-SNAPSHOT/Local-mode.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>.</p>
+<div class="documentation-content"><p>Local mode simulates a Storm cluster in process and is useful for developing and testing topologies. Running topologies in local mode is similar to running topologies <a href="Running-topologies-on-a-production-cluster.html">on a cluster</a>.</p>
 
 <p>To run a topology in local mode you have two options.  The most common option is to run your topology with <code>storm local</code> instead of <code>storm jar</code></p>
 
@@ -213,7 +213,7 @@
 
 <p>These, like all other configs, can be set on the command line when launching your topology with the <code>-c</code> flag.  The flag is of the form <code>-c &lt;conf_name&gt;=&lt;JSON_VALUE&gt;</code>, so to enable debugging when launching your topology in local mode you could run</p>
 <div class="highlight"><pre><code class="language-" data-lang="">storm local topology.jar &lt;MY_MAIN_CLASS&gt; -c topology.debug=true
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
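The `-c <conf_name>=<JSON_VALUE>` convention shown on the Local-mode page above can be sketched as a small argument scanner (hypothetical helper, not Storm's actual flag parser; real values would be parsed as JSON rather than kept as strings):

```java
import java.util.HashMap;
import java.util.Map;

// Hypothetical sketch of collecting "-c name=value" overrides from a
// command line, mirroring the flag format described in the docs.
public class ConfOverrides {
    public static Map<String, String> parse(String[] args) {
        Map<String, String> conf = new HashMap<>();
        for (int i = 0; i < args.length - 1; i++) {
            if ("-c".equals(args[i])) {
                String kv = args[++i];               // the name=value token
                int eq = kv.indexOf('=');
                if (eq > 0) {
                    conf.put(kv.substring(0, eq), kv.substring(eq + 1));
                }
            }
        }
        return conf;
    }

    public static void main(String[] args) {
        String[] cmd = {"local", "topology.jar", "MyMainClass",
                        "-c", "topology.debug=true"};
        Map<String, String> conf = parse(cmd);
        assert conf.get("topology.debug").equals("true");
        System.out.println(conf);
    }
}
```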

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Logs.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Logs.html b/content/releases/2.0.0-SNAPSHOT/Logs.html
index 4d4e70f..ad0e965 100644
--- a/content/releases/2.0.0-SNAPSHOT/Logs.html
+++ b/content/releases/2.0.0-SNAPSHOT/Logs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
+<div class="documentation-content"><p>Logs in Storm are essential for tracking the status, operations, error messages and debug information for all the 
 daemons (e.g., nimbus, supervisor, logviewer, drpc, ui, pacemaker) and topologies&#39; workers.</p>
 
 <h3 id="location-of-the-logs">Location of the Logs</h3>
@@ -171,7 +171,7 @@ Log Search supports searching in a certain log file or in all of a topology&#39;
 <p>Search in a topology: a user can also search a string for a certain topology by clicking the icon of magnifying lens at the top right corner of the UI page. This means the UI will try to search on all the supervisor nodes in a distributed way to find the matched string in all logs for this topology. The search can happen for either normal text log files or rolled zip log files by checking/unchecking the &quot;Search archived logs:&quot; box. Then the matched results can be shown on the UI with url links, directing the user to the certain logs on each supervisor node. This powerful feature is very helpful for users to find certain problematic supervisor nodes running this topology.</p>
 
 <p><img src="images/search-a-topology.png" alt="Search in a topology" title="Search in a topology"></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Maven.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Maven.html b/content/releases/2.0.0-SNAPSHOT/Maven.html
index ce7b7a2..a4be378 100644
--- a/content/releases/2.0.0-SNAPSHOT/Maven.html
+++ b/content/releases/2.0.0-SNAPSHOT/Maven.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
+<div class="documentation-content"><p>To develop topologies, you&#39;ll need the Storm jars on your classpath. You should either include the unpacked jars in the classpath for your project or use Maven to include Storm as a development dependency. Storm is hosted on Maven Central. To include Storm in your project as a development dependency, add the following to your pom.xml:</p>
 <div class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt">&lt;dependency&gt;</span>
   <span class="nt">&lt;groupId&gt;</span>org.apache.storm<span class="nt">&lt;/groupId&gt;</span>
   <span class="nt">&lt;artifactId&gt;</span>storm-client<span class="nt">&lt;/artifactId&gt;</span>
@@ -157,7 +157,7 @@
 <h3 id="developing-storm">Developing Storm</h3>
 
 <p>Please refer to <a href="http://github.com/apache/storm/blob/master/DEVELOPER.md">DEVELOPER.md</a> for more details.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html b/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html
index a598345..3bc6c66 100644
--- a/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html
+++ b/content/releases/2.0.0-SNAPSHOT/Message-passing-implementation.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
+<div class="documentation-content"><p>(Note: this walkthrough is out of date as of 0.8.0. 0.8.0 revamped the message passing infrastructure to be based on the Disruptor)</p>
 
 <p>This page walks through how emitting and transferring tuples works in Storm.</p>
 
@@ -186,7 +186,7 @@
 </ul></li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Metrics.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Metrics.html b/content/releases/2.0.0-SNAPSHOT/Metrics.html
index 576c702..248755b 100644
--- a/content/releases/2.0.0-SNAPSHOT/Metrics.html
+++ b/content/releases/2.0.0-SNAPSHOT/Metrics.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm exposes a metrics interface to report summary statistics across the full topology.
+<div class="documentation-content"><p>Storm exposes a metrics interface to report summary statistics across the full topology.
 The numbers you see on the UI come from some of these built in metrics, but are reported through the worker heartbeats instead of through the IMetricsConsumer described below.</p>
 
 <h3 id="metric-types">Metric Types</h3>
@@ -474,7 +474,7 @@ Prior to STORM-2621 (v1.1.1, v1.2.0, and v2.0.0) these were the rate of entries,
 <li><code>newWorkerEvent</code> is 1 when a worker is first started and 0 all other times.  This can be used to tell when a worker has crashed and is restarted.</li>
 <li><code>startTimeSecs</code> is when the worker started in seconds since the epoch</li>
 </ul>
-
+</div>
 
 
 	          </div>
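The metrics interface described above reports values in periodic buckets: a metric accumulates within the bucket and hands its value to the metrics consumer at each interval, then starts fresh. A minimal model of that get-and-reset contract (method names follow Storm's CountMetric for familiarity, but this is a standalone sketch, not the real class):

```java
// Minimal model of a bucketed count metric: incr()/incrBy() accumulate
// within the current bucket; getValueAndReset() is what the metrics
// framework would call once per bucket interval.
public class CountMetricDemo {
    private long value = 0;

    public void incr()          { value++; }
    public void incrBy(long n)  { value += n; }

    public long getValueAndReset() {
        long v = value;
        value = 0;          // next bucket starts empty
        return v;
    }

    public static void main(String[] args) {
        CountMetricDemo metric = new CountMetricDemo();
        metric.incr();
        metric.incrBy(4);
        assert metric.getValueAndReset() == 5L;   // bucket reported
        assert metric.getValueAndReset() == 0L;   // fresh bucket
        System.out.println("bucketed metric ok");
    }
}
```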

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html b/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html
index aebc4c4..91f34d4 100644
--- a/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html
+++ b/content/releases/2.0.0-SNAPSHOT/Multilang-protocol.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
+<div class="documentation-content"><p>This page explains the multilang protocol as of Storm 0.7.1. Versions prior to 0.7.1 used a somewhat different protocol, documented [here](Storm-multi-language-protocol-(versions-0.7.0-and-below).html).</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -436,7 +436,7 @@ subprocess periodically.  Heartbeat tuple looks like:</p>
 </code></pre></div>
 <p>When subprocess receives heartbeat tuple, it must send a <code>sync</code> command back to
 ShellBolt.</p>
-
+</div>
 
 
 	          </div>
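The heartbeat rule quoted above — the subprocess must answer a heartbeat tuple with a `sync` command — can be reduced to a tiny dispatch sketch (hypothetical; the real protocol exchanges JSON messages over stdin/stdout, which this collapses to plain command strings):

```java
// Hypothetical reduction of the multilang heartbeat exchange: the shell
// component replies "sync" to a heartbeat from its parent ShellBolt, and
// (in this sketch) replies nothing to anything else.
public class HeartbeatResponder {
    /** Returns the command to send back, or null when no reply is required. */
    public static String respond(String incomingCommand) {
        return "heartbeat".equals(incomingCommand) ? "sync" : null;
    }

    public static void main(String[] args) {
        assert respond("heartbeat").equals("sync");
        assert respond("emit") == null;
        System.out.println("heartbeat answered with sync");
    }
}
```

Missing this reply is a common cause of ShellBolt declaring the subprocess dead, since the parent uses the `sync` as proof of liveness.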

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Pacemaker.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Pacemaker.html b/content/releases/2.0.0-SNAPSHOT/Pacemaker.html
index 0412e35..4011335 100644
--- a/content/releases/2.0.0-SNAPSHOT/Pacemaker.html
+++ b/content/releases/2.0.0-SNAPSHOT/Pacemaker.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="introduction">Introduction</h3>
+<div class="documentation-content"><h3 id="introduction">Introduction</h3>
 
 <p>Pacemaker is a storm daemon designed to process heartbeats from workers. As Storm is scaled up, ZooKeeper begins to become a bottleneck due to high volumes of writes from workers doing heartbeats. Lots of writes to disk and too much traffic across the network is generated as ZooKeeper tries to maintain consistency.</p>
 
@@ -253,7 +253,7 @@ On Gigabit networking, there is a theoretical limit of about 6000 nodes. However
 On a 270 supervisor cluster, fully scheduled with topologies, Pacemaker resource utilization was 70% of one core and nearly 1GiB of RAM on a machine with 4 <code>Intel(R) Xeon(R) CPU E5530 @ 2.40GHz</code> and 24GiB of RAM.</p>
 
 <p>Pacemaker now supports HA. Multiple Pacemaker instances can be used at once in a storm cluster to allow massive scalability. Just include the names of the Pacemaker hosts in the pacemaker.servers config, and workers and Nimbus will start communicating with them. They&#39;re fault tolerant as well: the system keeps working as long as at least one Pacemaker is left running - provided it can handle the load.</p>
-
+</div>
 
 
 	          </div>
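The HA behaviour described above — the cluster keeps working as long as one Pacemaker host from `pacemaker.servers` is reachable — amounts to simple failover across the configured list. A hypothetical sketch (a `Predicate` stands in for a real connectivity check):

```java
import java.util.List;
import java.util.function.Predicate;

// Hypothetical sketch of pacemaker.servers failover: walk the configured
// host list and use the first server that responds.
public class PacemakerFailover {
    public static String pickServer(List<String> servers, Predicate<String> isUp) {
        for (String server : servers) {
            if (isUp.test(server)) {
                return server;          // first reachable host wins
            }
        }
        throw new IllegalStateException("no pacemaker server reachable");
    }

    public static void main(String[] args) {
        List<String> configured = List.of("pacemaker1", "pacemaker2");
        // pacemaker1 is down; the client falls through to pacemaker2
        String chosen = pickServer(configured, s -> s.equals("pacemaker2"));
        assert chosen.equals("pacemaker2");
        System.out.println("using " + chosen);
    }
}
```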

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Performance.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Performance.html b/content/releases/2.0.0-SNAPSHOT/Performance.html
index 2224555..ba1089d 100644
--- a/content/releases/2.0.0-SNAPSHOT/Performance.html
+++ b/content/releases/2.0.0-SNAPSHOT/Performance.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Latency, throughput and resource consumption are the three key dimensions involved in performance tuning.
+<div class="documentation-content"><p>Latency, throughput and resource consumption are the three key dimensions involved in performance tuning.
 In the following sections we discuss the settings that can be used to tune along these dimensions and the trade-offs involved.</p>
 
 <p>It is important to understand that these settings can vary depending on the topology, the type of hardware and the number of hosts used by the topology.</p>
@@ -311,7 +311,7 @@ Executors that are not expected to be busy can be allocated a smaller fraction o
 core for executors that are not likely to saturate the CPU.</p>
 
 <p>The <em>system bolt</em> generally processes very few messages per second, and so requires very little cpu (typically less than 10% of a physical core).</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Powered-By.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Powered-By.html b/content/releases/2.0.0-SNAPSHOT/Powered-By.html
index f7851d4..06e6898 100644
--- a/content/releases/2.0.0-SNAPSHOT/Powered-By.html
+++ b/content/releases/2.0.0-SNAPSHOT/Powered-By.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
+<div class="documentation-content"><p>Want to be added to this page? Send an email <a href="mailto:nathan.marz@gmail.com">here</a>.</p>
 
 <table>
 
@@ -1179,7 +1179,7 @@ We are using Storm to track internet threats from varied sources around the web.
 
 
 </table>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Project-ideas.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Project-ideas.html b/content/releases/2.0.0-SNAPSHOT/Project-ideas.html
index 0149c2a..b6a3e04 100644
--- a/content/releases/2.0.0-SNAPSHOT/Project-ideas.html
+++ b/content/releases/2.0.0-SNAPSHOT/Project-ideas.html
@@ -144,12 +144,12 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><strong>DSLs for non-JVM languages:</strong> These DSLs should be all-inclusive and not require any Java for the creation of topologies, spouts, or bolts. Since topologies are <a href="http://thrift.apache.org/">Thrift</a> structs, Nimbus is a Thrift service, and bolts can be written in any language, this is possible.</li>
 <li><strong>Online machine learning algorithms:</strong> Something like <a href="http://mahout.apache.org/">Mahout</a> but for online algorithms</li>
 <li><strong>Suite of performance benchmarks:</strong> These benchmarks should test Storm&#39;s performance on CPU and IO intensive workloads. There should be benchmarks for different classes of applications, such as stream processing (where throughput is the priority) and distributed RPC (where latency is the priority). </li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Rationale.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Rationale.html b/content/releases/2.0.0-SNAPSHOT/Rationale.html
index 6afc3b6..9005e8d 100644
--- a/content/releases/2.0.0-SNAPSHOT/Rationale.html
+++ b/content/releases/2.0.0-SNAPSHOT/Rationale.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
+<div class="documentation-content"><p>The past decade has seen a revolution in data processing. MapReduce, Hadoop, and related technologies have made it possible to store and process data at scales previously unthinkable. Unfortunately, these data processing technologies are not realtime systems, nor are they meant to be. There&#39;s no hack that will turn Hadoop into a realtime system; realtime data processing has a fundamentally different set of requirements than batch processing.</p>
 
 <p>However, realtime data processing at massive scale is becoming more and more of a requirement for businesses. The lack of a &quot;Hadoop of realtime&quot; has become the biggest hole in the data processing ecosystem.</p>
 
@@ -176,7 +176,7 @@
 <li><strong>Fault-tolerant</strong>: If there are faults during execution of your computation, Storm will reassign tasks as necessary. Storm makes sure that a computation can run forever (or until you kill the computation).</li>
 <li><strong>Programming language agnostic</strong>: Robust and scalable realtime processing shouldn&#39;t be limited to a single platform. Storm topologies and processing components can be defined in any language, making Storm accessible to nearly anyone.</li>
 </ol>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html b/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html
index 814b5e2..a1a4bff 100644
--- a/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html
+++ b/content/releases/2.0.0-SNAPSHOT/Resource_Aware_Scheduler_overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="introduction">Introduction</h1>
+<div class="documentation-content"><h1 id="introduction">Introduction</h1>
 
 <p>The purpose of this document is to provide a description of the Resource Aware Scheduler for the Storm distributed real-time computation system.  This document provides a high-level description of the Resource Aware Scheduler in Storm.  Some of the benefits of using a resource aware scheduler on top of Storm are outlined in the following presentation at Hadoop Summit 2016:</p>
 
@@ -691,7 +691,7 @@ The effective resource of a rack, which is also the subordinate resource, is com
 <td><img src="images/ras_new_strategy_runtime_yahoo.png" alt=""></td>
 </tr>
 </tbody></table>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html b/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html
index 0cfa1a1..e978c7c 100644
--- a/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html
+++ b/content/releases/2.0.0-SNAPSHOT/Running-topologies-on-a-production-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
+<div class="documentation-content"><p>Running topologies on a production cluster is similar to running in <a href="Local-mode.html">Local mode</a>. Here are the steps:</p>
 
 <p>1) Define the topology (Use <a href="javadocs/org/apache/storm/topology/TopologyBuilder.html">TopologyBuilder</a> if defining using Java)</p>
 
@@ -212,7 +212,7 @@
 <p>The best place to monitor a topology is using the Storm UI. The Storm UI provides information about errors happening in tasks and fine-grained stats on the throughput and latency performance of each component of each running topology.</p>
 
 <p>You can also look at the worker logs on the cluster machines.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/SECURITY.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/SECURITY.html b/content/releases/2.0.0-SNAPSHOT/SECURITY.html
index 4ceb90e..6770736 100644
--- a/content/releases/2.0.0-SNAPSHOT/SECURITY.html
+++ b/content/releases/2.0.0-SNAPSHOT/SECURITY.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
+<div class="documentation-content"><h1 id="running-apache-storm-securely">Running Apache Storm Securely</h1>
 
 <p>Apache Storm offers a range of configuration options when trying to secure
 your cluster.  By default all authentication and authorization is disabled but 
@@ -709,7 +709,7 @@ on all possible worker hosts.</p>
  | storm.zookeeper.topology.auth.payload | A string representing the payload for topology Zookeeper authentication. |</p>
 
 <p>Note: If storm.zookeeper.topology.auth.payload isn&#39;t set, Storm will generate a ZooKeeper secret payload for MD5-digest with the generateZookeeperDigestSecretPayload() method.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html b/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html
index f014c10..e196203 100644
--- a/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html
+++ b/content/releases/2.0.0-SNAPSHOT/STORM-UI-REST-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+<div class="documentation-content"><p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
 metrics data and configuration information as well as management operations such as starting or stopping topologies.</p>
 
 <h1 id="data-format">Data format</h1>
@@ -3118,7 +3118,7 @@ to use the POST option instead.</p>
 <h3 id="drpc-func-get">/drpc/:func (GET)</h3>
 
 <p>In some rare cases <code>:args</code> may not be needed by the DRPC command.  If no <code>:args</code> section is given in the DRPC request, an empty string <code>&quot;&quot;</code> will be used for the arguments.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Serialization-(prior-to-0.6.0).html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Serialization-(prior-to-0.6.0).html b/content/releases/2.0.0-SNAPSHOT/Serialization-(prior-to-0.6.0).html
index a7a1baf..56a6d2f 100644
--- a/content/releases/2.0.0-SNAPSHOT/Serialization-(prior-to-0.6.0).html
+++ b/content/releases/2.0.0-SNAPSHOT/Serialization-(prior-to-0.6.0).html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
+<div class="documentation-content"><p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
 
 <h3 id="dynamic-typing">Dynamic typing</h3>
 
@@ -188,7 +188,7 @@
 <p>Storm provides helpers for registering serializers in a topology config. The <a href="javadocs/backtype/storm/Config.html">Config</a> class has a method called <code>addSerialization</code> that takes in a serializer class to add to the config.</p>
 
 <p>There&#39;s an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can&#39;t find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the <code>storm.yaml</code> files.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Serialization.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Serialization.html b/content/releases/2.0.0-SNAPSHOT/Serialization.html
index ba39709..40fad17 100644
--- a/content/releases/2.0.0-SNAPSHOT/Serialization.html
+++ b/content/releases/2.0.0-SNAPSHOT/Serialization.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
+<div class="documentation-content"><p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
 
 <p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
@@ -206,7 +206,7 @@
 <p>When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.</p>
 
 <p>To force a serializer for a particular class if there&#39;s a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Serializers.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Serializers.html b/content/releases/2.0.0-SNAPSHOT/Serializers.html
index 43c4c5c..46a75c0 100644
--- a/content/releases/2.0.0-SNAPSHOT/Serializers.html
+++ b/content/releases/2.0.0-SNAPSHOT/Serializers.html
@@ -144,10 +144,10 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/rapportive-oss/storm-json">storm-json</a>: Simple JSON serializer for Storm</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html b/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html
index b3cb6a0..a3892eb 100644
--- a/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html
+++ b/content/releases/2.0.0-SNAPSHOT/Setting-up-a-Storm-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
+<div class="documentation-content"><p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
 
 <p>If you run into difficulties with your Storm cluster, first check for a solution in the <a href="Troubleshooting.html">Troubleshooting</a> page. Otherwise, email the mailing list.</p>
 
@@ -260,7 +260,7 @@ The time to allow any given healthcheck script to run before it is marked failed
 <p>DRPC optionally offers a REST API as well.  To enable this, set the config <code>drpc.http.port</code> to the port you want to run on before launching the DRPC server. See the <a href="STORM-UI-REST-API.html">REST documentation</a> for more information on how to use it.</p>
 
 <p>It also supports SSL by setting <code>drpc.https.port</code> along with the keystore and optional truststore similar to how you would configure the UI.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html b/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html
index 3ebcf1c..0823b69 100644
--- a/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html
+++ b/content/releases/2.0.0-SNAPSHOT/Setting-up-development-environment.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
+<div class="documentation-content"><p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
 
 <ol>
 <li>Download a <a href="..//downloads.html">Storm release</a> , unpack it, and put the unpacked <code>bin/</code> directory on your PATH</li>
@@ -171,7 +171,7 @@
 
 <p>The previous step installed the <code>storm</code> client on your machine which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to. To do this, all you have to do is put the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">nimbus.seeds: ["123.45.678.890"]
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html b/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html
index 1a7fe46..8aebe85 100644
--- a/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html
+++ b/content/releases/2.0.0-SNAPSHOT/Spout-implementations.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>: Adapter to use Kestrel as a spout</li>
 <li><a href="https://github.com/rapportive-oss/storm-amqp-spout">storm-amqp-spout</a>: Adapter to use AMQP source as a spout</li>
 <li><a href="https://github.com/ptgoetz/storm-jms">storm-jms</a>: Adapter to use a JMS source as a spout</li>
 <li><a href="https://github.com/sorenmacbeth/storm-redis-pubsub">storm-redis-pubsub</a>: A spout that subscribes to a Redis pubsub stream</li>
 <li><a href="https://github.com/haitaoyao/storm-beanstalkd-spout">storm-beanstalkd-spout</a>: A spout that subscribes to a beanstalkd queue</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html b/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html
index 5a22519..f28830c 100644
--- a/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html
+++ b/content/releases/2.0.0-SNAPSHOT/State-checkpointing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="state-support-in-core-storm">State support in core storm</h1>
+<div class="documentation-content"><h1 id="state-support-in-core-storm">State support in core storm</h1>
 
 <p>Storm core has abstractions for bolts to save and retrieve the state of its operations. There is a default in-memory
 based state implementation and also a Redis backed implementation that provides state persistence.</p>
@@ -421,7 +421,7 @@ Even if worker crashes at commit phase, after restart it will read pending-commi
 </ul>
 
 <p><code>org.apache.storm:storm-hbase:&lt;storm-version&gt;</code></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html b/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html
index 278dc3e..feadece 100644
--- a/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html
+++ b/content/releases/2.0.0-SNAPSHOT/Storm-Scheduler.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/DefaultScheduler.java">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/IsolationScheduler.java">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
+<div class="documentation-content"><p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/DefaultScheduler.java">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/IsolationScheduler.java">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/master/storm-server/src/main/java/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
 
 <h2 id="pluggable-scheduler">Pluggable scheduler</h2>
 
@@ -163,7 +163,7 @@
 <p>Any topologies submitted to the cluster not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).</p>
 
 <p>The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &quot;productionized&quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-(versions-0.7.0-and-below).html b/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
index 53222cd..30c07b4 100644
--- a/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
+++ b/content/releases/2.0.0-SNAPSHOT/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -253,7 +253,7 @@ file lets the supervisor know the PID so it can shutdown the process later on.</
 <p>Note: This command is not JSON encoded, it is sent as a simple string.</p>
 
 <p>This lets the parent bolt know that the script has finished processing and is ready for another tuple.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Stream-API.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Stream-API.html b/content/releases/2.0.0-SNAPSHOT/Stream-API.html
index 630b79e..082c1bb 100644
--- a/content/releases/2.0.0-SNAPSHOT/Stream-API.html
+++ b/content/releases/2.0.0-SNAPSHOT/Stream-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="#concepts">Concepts</a>
 
 <ul>
@@ -565,7 +565,7 @@ via <code>build()</code> and submit it like a normal storm topology via <code>St
   <span class="n">StormSubmitter</span><span class="o">.</span><span class="na">submitTopologyWithProgressBar</span><span class="o">(</span><span class="s">"topology-name"</span><span class="o">,</span> <span class="n">config</span><span class="o">,</span> <span class="n">builder</span><span class="o">.</span><span class="na">build</span><span class="o">());</span>
 </code></pre></div>
 <p>More examples are available under <a href="../examples/storm-starter/src/jvm/org/apache/storm/starter/streams">storm-starter</a> which will help you get started.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html b/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html
index 157e82b..4a5be6e 100644
--- a/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html
+++ b/content/releases/2.0.0-SNAPSHOT/Structure-of-the-codebase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>There are three distinct layers to Storm&#39;s codebase.</p>
+<div class="documentation-content"><p>There are three distinct layers to Storm&#39;s codebase.</p>
 
 <p>First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.</p>
 
@@ -276,7 +276,7 @@
 <p><a href="http://github.com/apache/storm/blob/master/storm-clojure/src/clj/org/apache/storm/testing.clj">org.apache.storm.testing</a>: Implementation of facilities used to test Storm topologies. Includes time simulation, <code>complete-topology</code> for running a fixed set of tuples through a topology and capturing the output, tracker topologies for having fine grained control over detecting when a cluster is &quot;idle&quot;, and other utilities.</p>
 
 <p><a href="http://github.com/apache/storm/blob/master/storm-core/src/clj/org/apache/storm/ui">org.apache.storm.ui.*</a>: Implementation of Storm UI. Completely independent from rest of code base and uses the Nimbus Thrift API to get data.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html b/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html
index a105966..49ada0a 100644
--- a/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html
+++ b/content/releases/2.0.0-SNAPSHOT/Support-for-non-java-languages.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/storm-jruby">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/gphat/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html b/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html
index 02f5400..83708e3 100644
--- a/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html
+++ b/content/releases/2.0.0-SNAPSHOT/Transactional-topologies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
+<div class="documentation-content"><p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
 
 <hr>
 
@@ -510,7 +510,7 @@
 <li>so it can&#39;t call finishBatch until it&#39;s received all tuples from all subscribed components AND it&#39;s received the commit stream tuple (for committers). This ensures that it can&#39;t prematurely call finishBatch</li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html b/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html
index f150428..4b74eff 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-API-Overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
+<div class="documentation-content"><p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
 
 <p>There are five kinds of operations in Trident:</p>
 
@@ -669,7 +669,7 @@ Partition 2:
 <p>You might be wondering – how do you do something like a &quot;windowed join&quot;, where tuples from one side of the join are joined against the last hour of tuples from the other side of the join.</p>
 
 <p>To do this, you would make use of partitionPersist and stateQuery. The last hour of tuples from one side of the join would be stored and rotated in a source of state, keyed by the join field. Then the stateQuery would do lookups by the join field to perform the &quot;join&quot;.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html b/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html
index c48ea3f..8a29333 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-RAS-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="trident-ras-api">Trident RAS API</h2>
+<div class="documentation-content"><h2 id="trident-ras-api">Trident RAS API</h2>
 
 <p>The Trident RAS (Resource Aware Scheduler) API provides a mechanism to allow users to specify the resource consumption of a Trident topology. The API looks exactly like the base RAS API, only it is called on Trident Streams instead of Bolts and Spouts.</p>
 
@@ -192,7 +192,7 @@ Operations that are combined by Trident into single Bolts will have their resour
 <p>Resource declarations may be called after any operation. The operations without explicit resources will get the defaults. If you choose to set resources for only some operations, defaults must be declared, or topology submission will fail.
 Resource declarations have the same <em>boundaries</em> as parallelism hints. They don&#39;t cross any groupings, shufflings, or any other kind of repartitioning.
 Resources are declared per operation, but get combined within boundaries.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html b/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html
index 75ef12b..5ee9589 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-spouts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="trident-spouts">Trident spouts</h1>
+<div class="documentation-content"><h1 id="trident-spouts">Trident spouts</h1>
 
 <p>Like in the vanilla Storm API, spouts are the source of streams in a Trident topology. On top of the vanilla Storm spouts, Trident exposes additional APIs for more sophisticated spouts.</p>
 
@@ -182,7 +182,7 @@
 </ol>
 
 <p>And, like mentioned in the beginning of this tutorial, you can use regular IRichSpout&#39;s as well.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Trident-state.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-state.html b/content/releases/2.0.0-SNAPSHOT/Trident-state.html
index 05479b6..dd816e1 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-state.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-state.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
+<div class="documentation-content"><p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
 
 <p>Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures. This lets you reason about Trident topologies as if each message were processed exactly-once.</p>
 
@@ -415,7 +415,7 @@ apple =&gt; [count=10, txid=2]
 <p>Finally, Trident provides the <a href="http://github.com/apache/storm/blob/master/storm-client/src/jvm/org/apache/storm/trident/state/map/SnapshottableMap.java">SnapshottableMap</a> class that turns a MapState into a Snapshottable object, by storing global aggregations into a fixed key.</p>
 
 <p>Take a look at the implementation of <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> to see how all these utilities can be put together to make a high performance MapState implementation. MemcachedState allows you to choose between opaque transactional, transactional, and non-transactional semantics.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html b/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html
index 54c0c45..010901c 100644
--- a/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html
+++ b/content/releases/2.0.0-SNAPSHOT/Trident-tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
+<div class="documentation-content"><p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
 
 <h2 id="illustrative-example">Illustrative example</h2>
 
@@ -356,7 +356,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>Trident makes realtime computation elegant. You&#39;ve seen how high throughput stream processing, state manipulation, and low-latency querying can be seamlessly intermixed via Trident&#39;s API. Trident lets you express your realtime computations in a natural way while still getting maximal performance.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html b/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html
index 0064fe9..cbd7795 100644
--- a/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html
+++ b/content/releases/2.0.0-SNAPSHOT/Troubleshooting.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists issues people have run into when using Storm along with their solutions.</p>
+<div class="documentation-content"><p>This page lists issues people have run into when using Storm along with their solutions.</p>
 
 <h3 id="worker-processes-are-crashing-on-startup-with-no-stack-trace">Worker processes are crashing on startup with no stack trace</h3>
 
@@ -279,7 +279,7 @@ Caused by: java.util.ConcurrentModificationException
 <ul>
 <li>This means that you&#39;re emitting a mutable object as an output tuple. Everything you emit into the output collector must be immutable. What&#39;s happening is that your bolt is modifying the object while it is being serialized to be sent over the network.</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Tutorial.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Tutorial.html b/content/releases/2.0.0-SNAPSHOT/Tutorial.html
index e742464..f173466 100644
--- a/content/releases/2.0.0-SNAPSHOT/Tutorial.html
+++ b/content/releases/2.0.0-SNAPSHOT/Tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
+<div class="documentation-content"><p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -402,7 +402,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html b/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html
index a63e6b1..76054be 100644
--- a/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/content/releases/2.0.0-SNAPSHOT/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<div class="documentation-content"><h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -274,7 +274,7 @@ $ storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
 <li><a href="Tutorial.html">Tutorial</a></li>
 <li><a href="javadocs/">Storm API documentation</a>, most notably the class <code>Config</code></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html b/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html
index e58522d..91972de 100644
--- a/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html
+++ b/content/releases/2.0.0-SNAPSHOT/Using-non-JVM-languages-with-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li>two pieces: creating topologies and implementing spouts and bolts in other languages</li>
 <li>creating topologies in another language is easy since topologies are just thrift structures (link to storm.thrift)</li>
 <li>implementing spouts and bolts in another language is called a &quot;multilang components&quot; or &quot;shelling&quot;
@@ -198,7 +198,7 @@
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/Windowing.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/Windowing.html b/content/releases/2.0.0-SNAPSHOT/Windowing.html
index 4d6c90b..24fd4ef 100644
--- a/content/releases/2.0.0-SNAPSHOT/Windowing.html
+++ b/content/releases/2.0.0-SNAPSHOT/Windowing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
+<div class="documentation-content"><p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
 following two parameters,</p>
 
 <ol>
@@ -479,7 +479,7 @@ and will throw an <code>UnsupportedOperationException</code>.</p>
 
 <p>For more details take a look at the sample topology in storm-starter <a href="../examples/storm-starter/src/jvm/org/apache/storm/starter/PersistentWindowingTopology.java">PersistentWindowingTopology</a>
 which will help you get started.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html b/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html
index e243ecb..a0d4b23 100644
--- a/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html
+++ b/content/releases/2.0.0-SNAPSHOT/cgroups_in_storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="cgroups-in-storm">CGroups in Storm</h1>
+<div class="documentation-content"><h1 id="cgroups-in-storm">CGroups in Storm</h1>
 
 <p>CGroups are used by Storm to limit the resource usage of workers to guarantee fairness and QOS.  </p>
 
@@ -315,7 +315,7 @@ out.</p>
 <h2 id="future-work">Future Work</h2>
 
 <p>There is a lot of work on adding in elasticity to storm.  Eventually we hope to be able to do all of the above analysis for you and grow/shrink your topology on demand.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html b/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html
index a1769be..9af143f 100644
--- a/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html
+++ b/content/releases/2.0.0-SNAPSHOT/distcache-blobstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
+<div class="documentation-content"><h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
 
 <p>The distributed cache feature in storm is used to efficiently distribute files
 (or blobs, which is the equivalent terminology for a file in the distributed
@@ -799,7 +799,7 @@ struct BeginDownloadResult {
  2: required string session;
  3: optional i64 data_size;
 }
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html b/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html
index edb2513..73baee2 100644
--- a/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html
+++ b/content/releases/2.0.0-SNAPSHOT/dynamic-log-level-settings.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
+<div class="documentation-content"><p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
 
 <p>The log level settings apply the same way as you&#39;d expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the children loggers start using that level (unless the children have a more restrictive level already). A timeout can optionally be provided (except for DEBUG mode, where it’s required in the UI), if workers should reset log levels automatically.</p>
 
@@ -179,7 +179,7 @@
 <p><code>./bin/storm set_log_level my_topology -r ROOT</code></p>
 
 <p>Clears the ROOT logger dynamic log level, resetting it to its original value.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html b/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html
index fb18c59..9f9eb4a 100644
--- a/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html
+++ b/content/releases/2.0.0-SNAPSHOT/dynamic-worker-profiling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
+<div class="documentation-content"><p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
 
 <p>The storm dynamic profiler lets you dynamically take heap-dumps, jprofile or jstack for a worker jvm running on stock cluster. It let user download these dumps from the browser and use your favorite tools to analyze it  The UI component page provides list workers for the component and action buttons. The logviewer lets you download the dumps generated by these logs. Please see the screenshots for more information.</p>
 
@@ -171,7 +171,7 @@
 <h2 id="configuration">Configuration</h2>
 
 <p>The &quot;worker.profiler.command&quot; can be configured to point to specific pluggable profiler, heapdump commands. The &quot;worker.profiler.enabled&quot; can be disabled if plugin is not available or jdk does not support Jprofile flight recording so that worker JVM options will not have &quot;worker.profiler.childopts&quot;. To use different profiler plugin, you can change these configuration.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/flux.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/flux.html b/content/releases/2.0.0-SNAPSHOT/flux.html
index 3de3d98..879519c 100644
--- a/content/releases/2.0.0-SNAPSHOT/flux.html
+++ b/content/releases/2.0.0-SNAPSHOT/flux.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
+<div class="documentation-content"><p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
 
 <h2 id="definition">Definition</h2>
 
@@ -886,7 +886,7 @@ same file. Includes may be either files, or classpath resources.</p>
   <span class="na">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
   <span class="c1"># Flux will look for "getTopology", this will override that.</span>
   <span class="na">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/index.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/index.html b/content/releases/2.0.0-SNAPSHOT/index.html
index 18c8d5d..5ee043b 100644
--- a/content/releases/2.0.0-SNAPSHOT/index.html
+++ b/content/releases/2.0.0-SNAPSHOT/index.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="basics-of-storm">Basics of Storm</h3>
+<div class="documentation-content"><h3 id="basics-of-storm">Basics of Storm</h3>
 
 <ul>
 <li><a href="javadocs/index.html">Javadoc</a></li>
@@ -289,7 +289,7 @@ But small change will not affect the user experience. We will notify the user wh
 <li><a href="Implementation-docs.html">Implementation docs</a></li>
 <li><a href="storm-metricstore.html">Storm Metricstore</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/metrics_v2.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/metrics_v2.html b/content/releases/2.0.0-SNAPSHOT/metrics_v2.html
index 04c988a..d345b35 100644
--- a/content/releases/2.0.0-SNAPSHOT/metrics_v2.html
+++ b/content/releases/2.0.0-SNAPSHOT/metrics_v2.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Apache Storm version 1.2 introduced a new metrics system for reporting
+<div class="documentation-content"><p>Apache Storm version 1.2 introduced a new metrics system for reporting
 internal statistics (e.g. acked, failed, emitted, transferred, disruptor queue metrics, etc.) as well as a 
 new API for user defined metrics.</p>
 
@@ -274,7 +274,7 @@ interface:</p>
     <span class="kt">boolean</span> <span class="nf">matches</span><span class="o">(</span><span class="n">String</span> <span class="n">name</span><span class="o">,</span> <span class="n">Metric</span> <span class="n">metric</span><span class="o">);</span>
 
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html b/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html
index 0310222..17a9a28 100644
--- a/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html
+++ b/content/releases/2.0.0-SNAPSHOT/nimbus-ha-design.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="problem-statement">Problem Statement:</h2>
+<div class="documentation-content"><h2 id="problem-statement">Problem Statement:</h2>
 
 <p>Currently the storm master aka nimbus, is a process that runs on a single machine under supervision. In most cases the 
 nimbus failure is transient and it is restarted by the supervisor. However sometimes when disks fail and networks 
@@ -361,7 +361,7 @@ The default is 60 seconds, a value of -1 indicates to wait for ever.
 <p>Note: Even though all nimbus hosts have watchers on zookeeper to be notified immediately as soon as a new topology is available for code
 download, the callback pretty much never results in code download. In practice we have observed that the desired replication is only achieved once the background-thread runs. 
 So you should expect your topology submission time to be somewhere between 0 to (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count &gt; 1.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html
----------------------------------------------------------------------
diff --git a/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html b/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html
index 162bff7..76a35fb 100644
--- a/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html
+++ b/content/releases/2.0.0-SNAPSHOT/storm-cassandra.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
+<div class="documentation-content"><h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
 
 <p>This library provides core storm bolt on top of Apache Cassandra.
 Provides simple DSL to map storm <em>Tuple</em> to Cassandra Query Language <em>Statement</em>.</p>
@@ -373,7 +373,7 @@ The stream is partitioned among the bolt&#39;s tasks based on the specified row
         <span class="n">CassandraStateFactory</span> <span class="n">selectWeatherStationStateFactory</span> <span class="o">=</span> <span class="n">getSelectWeatherStationStateFactory</span><span class="o">();</span>
         <span class="n">TridentState</span> <span class="n">selectState</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="n">selectWeatherStationStateFactory</span><span class="o">);</span>
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">selectState</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"weather_station_id"</span><span class="o">),</span> <span class="k">new</span> <span class="n">CassandraQuery</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"name"</span><span class="o">));</span>         
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>