Posted to commits@storm.apache.org by sr...@apache.org on 2018/05/15 15:20:55 UTC

[2/9] storm-site git commit: Rebuild site

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/STORM-UI-REST-API.html
----------------------------------------------------------------------
diff --git a/content/releases/current/STORM-UI-REST-API.html b/content/releases/current/STORM-UI-REST-API.html
index 92aca68..12e9159 100644
--- a/content/releases/current/STORM-UI-REST-API.html
+++ b/content/releases/current/STORM-UI-REST-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
+<div class="documentation-content"><p>The Storm UI daemon provides a REST API that allows you to interact with a Storm cluster, which includes retrieving
 metrics data and configuration information as well as management operations such as starting or stopping topologies.</p>
 
 <h1 id="data-format">Data format</h1>
@@ -2936,7 +2936,7 @@ daemons.</p>
   </span><span class="s2">"error"</span><span class="p">:</span><span class="w"> </span><span class="s2">"Internal Server Error"</span><span class="p">,</span><span class="w">
  </span><span class="s2">"errorMessage"</span><span class="p">:</span><span class="w"> </span><span class="s2">"java.lang.NullPointerException</span><span class="se">\n\t</span><span class="s2">at clojure.core$name.invoke(core.clj:1505)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$component_page.invoke(core.clj:752)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$fn__7766.invoke(core.clj:782)</span><span class="se">\n\t</span><span class="s2">at compojure.core$make_route$fn__5755.invoke(core.clj:93)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_route$fn__5743.invoke(core.clj:39)</span><span class="se">\n\t</span><span class="s2">at compojure.core$if_method$fn__5736.invoke(core.clj:24)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing$fn__5761.invoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.core$some.invoke(core.clj:2443)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routing.doInvoke(core.clj:106)</span><span class="se">\n\t</span><span class="s2">at clojure.lang.RestFn.applyTo(RestFn.java:139)</span><span class="se">\n\t</span><span class="s2">at clojure.core$apply.invoke(core.clj:619)</span><span class="se">\n\t</span><span class="s2">at compojure.core$routes$fn__5765.invoke(core.clj:111)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.reload$wrap_reload$fn__6880.invoke(reload.clj:14)</span><span class="se">\n\t</span><span class="s2">at org.apache.storm.ui.core$catch_errors$fn__7800.invoke(core.clj:836)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.keyword_params$wrap_keyword_params$fn__6319.invoke(keyword_params.clj:27)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.nested_params$wrap_nested_params$fn__6358.invoke(nested_params.clj:65)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.params$wrap_params$fn__6291.invoke(params.clj:55)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.multipart_params$wrap_multipart_params$fn__6386.invoke(multipart_params.clj:103)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.flash$wrap_flash$fn__6675.invoke(flash.clj:14)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.session$wrap_session$fn__6664.invoke(session.clj:43)</span><span class="se">\n\t</span><span class="s2">at ring.middleware.cookies$wrap_cookies$fn__6595.invoke(cookies.clj:160)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty$proxy_handler$fn__6112.invoke(jetty.clj:16)</span><span class="se">\n\t</span><span class="s2">at ring.adapter.jetty.proxy$org.mortbay.jetty.handler.AbstractHandler$0.handle(Unknown Source)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.handler.HandlerWrapper.handle(HandlerWrapper.java:152)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.Server.handle(Server.java:326)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handleRequest(HttpConnection.java:542)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection$RequestHandler.headerComplete(HttpConnection.java:928)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseNext(HttpParser.java:549)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpParser.parseAvailable(HttpParser.java:212)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.HttpConnection.handle(HttpConnection.java:404)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.jetty.bio.SocketConnector$Connection.run(SocketConnector.java:228)</span><span class="se">\n\t</span><span class="s2">at org.mortbay.thread.QueuedThreadPool$PoolThread.run(QueuedThreadPool.java:582)</span><span class="se">\n</span><span class="s2">"</span><span class="w">
 </span><span class="p">}</span><span class="w">
-</span></code></pre></div>
+</span></code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Serialization-(prior-to-0.6.0).html
----------------------------------------------------------------------
diff --git a/content/releases/current/Serialization-(prior-to-0.6.0).html b/content/releases/current/Serialization-(prior-to-0.6.0).html
index dab36c9..8b1b245 100644
--- a/content/releases/current/Serialization-(prior-to-0.6.0).html
+++ b/content/releases/current/Serialization-(prior-to-0.6.0).html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
+<div class="documentation-content"><p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks. By default Storm can serialize ints, shorts, longs, floats, doubles, bools, bytes, strings, and byte arrays, but if you want to use another type in your tuples, you&#39;ll need to implement a custom serializer.</p>
 
 <h3 id="dynamic-typing">Dynamic typing</h3>
 
@@ -188,7 +188,7 @@
 <p>Storm provides helpers for registering serializers in a topology config. The <a href="javadocs/backtype/storm/Config.html">Config</a> class has a method called <code>addSerialization</code> that takes in a serializer class to add to the config.</p>
 
 <p>There&#39;s an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can&#39;t find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the <code>storm.yaml</code> files.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Serialization.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Serialization.html b/content/releases/current/Serialization.html
index b52937a..a79aeed 100644
--- a/content/releases/current/Serialization.html
+++ b/content/releases/current/Serialization.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
+<div class="documentation-content"><p>This page is about how the serialization system in Storm works for versions 0.6.0 and onwards. Storm used a different serialization system prior to 0.6.0 which is documented on <a href="Serialization-(prior-to-0.6.0).html">Serialization (prior to 0.6.0)</a>. </p>
 
 <p>Tuples can be comprised of objects of any types. Since Storm is a distributed system, it needs to know how to serialize and deserialize objects when they&#39;re passed between tasks.</p>
 
@@ -200,7 +200,7 @@
 <p>When a topology is submitted, a single set of serializations is chosen to be used by all components in the topology for sending messages. This is done by merging the component-specific serializer registrations with the regular set of serialization registrations. If two components define serializers for the same class, one of the serializers is chosen arbitrarily.</p>
 
 <p>To force a serializer for a particular class if there&#39;s a conflict between two component-specific registrations, just define the serializer you want to use in the topology-specific configuration. The topology-specific configuration has precedence over component-specific configurations for serialization registrations.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Serializers.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Serializers.html b/content/releases/current/Serializers.html
index 200c717..f2d3acb 100644
--- a/content/releases/current/Serializers.html
+++ b/content/releases/current/Serializers.html
@@ -144,10 +144,10 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/rapportive-oss/storm-json">storm-json</a>: Simple JSON serializer for Storm</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Setting-up-a-Storm-cluster.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Setting-up-a-Storm-cluster.html b/content/releases/current/Setting-up-a-Storm-cluster.html
index 2fcab0c..0592dd3 100644
--- a/content/releases/current/Setting-up-a-Storm-cluster.html
+++ b/content/releases/current/Setting-up-a-Storm-cluster.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
+<div class="documentation-content"><p>This page outlines the steps for getting a Storm cluster up and running. If you&#39;re on AWS, you should check out the <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> project. <a href="https://github.com/nathanmarz/storm-deploy/wiki">storm-deploy</a> completely automates the provisioning, configuration, and installation of Storm clusters on EC2. It also sets up Ganglia for you so you can monitor CPU, disk, and network usage.</p>
 
 <p>If you run into difficulties with your Storm cluster, first check for a solution is in the <a href="Troubleshooting.html">Troubleshooting</a> page. Otherwise, email the mailing list.</p>
 
@@ -246,7 +246,7 @@ The time to allow any given healthcheck script to run before it is marked failed
 </ol>
 
 <p>As you can see, running the daemons is very straightforward. The daemons will log to the logs/ directory in wherever you extracted the Storm release.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Setting-up-development-environment.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Setting-up-development-environment.html b/content/releases/current/Setting-up-development-environment.html
index 73bbd95..5e8e70d 100644
--- a/content/releases/current/Setting-up-development-environment.html
+++ b/content/releases/current/Setting-up-development-environment.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
+<div class="documentation-content"><p>This page outlines what you need to do to get a Storm development environment set up. In summary, the steps are:</p>
 
 <ol>
 <li>Download a <a href="..//downloads.html">Storm release</a> , unpack it, and put the unpacked <code>bin/</code> directory on your PATH</li>
@@ -171,7 +171,7 @@
 
 <p>The previous step installed the <code>storm</code> client on your machine which is used to communicate with remote Storm clusters. Now all you have to do is tell the client which Storm cluster to talk to. To do this, all you have to do is put the host address of the master in the <code>~/.storm/storm.yaml</code> file. It should look something like this:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">nimbus.seeds: ["123.45.678.890"]
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Spout-implementations.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Spout-implementations.html b/content/releases/current/Spout-implementations.html
index 64223b1..ad75ae1 100644
--- a/content/releases/current/Spout-implementations.html
+++ b/content/releases/current/Spout-implementations.html
@@ -144,14 +144,14 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/nathanmarz/storm-kestrel">storm-kestrel</a>: Adapter to use Kestrel as a spout</li>
 <li><a href="https://github.com/rapportive-oss/storm-amqp-spout">storm-amqp-spout</a>: Adapter to use AMQP source as a spout</li>
 <li><a href="https://github.com/ptgoetz/storm-jms">storm-jms</a>: Adapter to use a JMS source as a spout</li>
 <li><a href="https://github.com/sorenmacbeth/storm-redis-pubsub">storm-redis-pubsub</a>: A spout that subscribes to a Redis pubsub stream</li>
 <li><a href="https://github.com/haitaoyao/storm-beanstalkd-spout">storm-beanstalkd-spout</a>: A spout that subscribes to a beanstalkd queue</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/State-checkpointing.html
----------------------------------------------------------------------
diff --git a/content/releases/current/State-checkpointing.html b/content/releases/current/State-checkpointing.html
index 458070b..1425498 100644
--- a/content/releases/current/State-checkpointing.html
+++ b/content/releases/current/State-checkpointing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="state-support-in-core-storm">State support in core storm</h1>
+<div class="documentation-content"><h1 id="state-support-in-core-storm">State support in core storm</h1>
 
 <p>Storm core has abstractions for bolts to save and retrieve the state of its operations. There is a default in-memory
 based state implementation and also a Redis backed implementation that provides state persistence.</p>
@@ -419,7 +419,7 @@ Even if worker crashes at commit phase, after restart it will read pending-commi
 </ul>
 
 <p><code>org.apache.storm:storm-hbase:&lt;storm-version&gt;</code></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Storm-Scheduler.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Storm-Scheduler.html b/content/releases/current/Storm-Scheduler.html
index ca72cc0..805fac2 100644
--- a/content/releases/current/Storm-Scheduler.html
+++ b/content/releases/current/Storm-Scheduler.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
+<div class="documentation-content"><p>Storm now has 4 kinds of built-in schedulers: <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/DefaultScheduler.clj">DefaultScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/scheduler/IsolationScheduler.clj">IsolationScheduler</a>, <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/scheduler/multitenant/MultitenantScheduler.java">MultitenantScheduler</a>, <a href="Resource_Aware_Scheduler_overview.html">ResourceAwareScheduler</a>. </p>
 
 <h2 id="pluggable-scheduler">Pluggable scheduler</h2>
 
@@ -163,7 +163,7 @@
 <p>Any topologies submitted to the cluster not listed there will not be isolated. Note that there is no way for a user of Storm to affect their isolation settings – this is only allowed by the administrator of the cluster (this is very much intentional).</p>
 
 <p>The isolation scheduler solves the multi-tenancy problem – avoiding resource contention between topologies – by providing full isolation between topologies. The intention is that &quot;productionized&quot; topologies should be listed in the isolation config, and test or in-development topologies should not. The remaining machines on the cluster serve the dual role of failover for isolated topologies and for running the non-isolated topologies.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
----------------------------------------------------------------------
diff --git a/content/releases/current/Storm-multi-language-protocol-(versions-0.7.0-and-below).html b/content/releases/current/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
index 1c41348..d9df735 100644
--- a/content/releases/current/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
+++ b/content/releases/current/Storm-multi-language-protocol-(versions-0.7.0-and-below).html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
+<div class="documentation-content"><p>This page explains the multilang protocol for versions 0.7.0 and below. The protocol changed in version 0.7.1.</p>
 
 <h1 id="storm-multi-language-protocol">Storm Multi-Language Protocol</h1>
 
@@ -253,7 +253,7 @@ file lets the supervisor know the PID so it can shutdown the process later on.</
 <p>Note: This command is not JSON encoded, it is sent as a simple string.</p>
 
 <p>This lets the parent bolt know that the script has finished processing and is ready for another tuple.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Structure-of-the-codebase.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Structure-of-the-codebase.html b/content/releases/current/Structure-of-the-codebase.html
index f095080..ffe035b 100644
--- a/content/releases/current/Structure-of-the-codebase.html
+++ b/content/releases/current/Structure-of-the-codebase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>There are three distinct layers to Storm&#39;s codebase.</p>
+<div class="documentation-content"><p>There are three distinct layers to Storm&#39;s codebase.</p>
 
 <p>First, Storm was designed from the very beginning to be compatible with multiple languages. Nimbus is a Thrift service and topologies are defined as Thrift structures. The usage of Thrift allows Storm to be used from any language.</p>
 
@@ -287,7 +287,7 @@
 <p><a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/util.clj">org.apache.storm.util</a>: Contains generic utility functions used throughout the code base.</p>
 
 <p><a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/clj/org/apache/storm/zookeeper.clj">org.apache.storm.zookeeper</a>: Clojure wrapper around the Zookeeper API and implements some &quot;high-level&quot; stuff like &quot;mkdirs&quot; and &quot;delete-recursive&quot;.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Support-for-non-java-languages.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Support-for-non-java-languages.html b/content/releases/current/Support-for-non-java-languages.html
index ab0c42b..e7bce3a 100644
--- a/content/releases/current/Support-for-non-java-languages.html
+++ b/content/releases/current/Support-for-non-java-languages.html
@@ -144,13 +144,13 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li><a href="https://github.com/velvia/ScalaStorm">Scala DSL</a></li>
 <li><a href="https://github.com/colinsurprenant/storm-jruby">JRuby DSL</a></li>
 <li><a href="Clojure-DSL.html">Clojure DSL</a></li>
 <li><a href="https://github.com/gphat/io-storm">io-storm</a>: Perl multilang adapter</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Transactional-topologies.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Transactional-topologies.html b/content/releases/current/Transactional-topologies.html
index 37b4863..36b65bf 100644
--- a/content/releases/current/Transactional-topologies.html
+++ b/content/releases/current/Transactional-topologies.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
+<div class="documentation-content"><p><strong>NOTE</strong>: Transactional topologies have been deprecated -- use the <a href="Trident-tutorial.html">Trident</a> framework instead.</p>
 
 <hr>
 
@@ -510,7 +510,7 @@
 <li>so it can&#39;t call finishbatch until it&#39;s received all tuples from all subscribed components AND its received the commit stream tuple (for committers). this ensures that it can&#39;t prematurely call finishBatch</li>
 </ul></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Trident-API-Overview.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Trident-API-Overview.html b/content/releases/current/Trident-API-Overview.html
index 36dff27..eb5cdf5 100644
--- a/content/releases/current/Trident-API-Overview.html
+++ b/content/releases/current/Trident-API-Overview.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
+<div class="documentation-content"><p>The core data model in Trident is the &quot;Stream&quot;, processed as a series of batches. A stream is partitioned among the nodes in the cluster, and operations applied to a stream are applied in parallel across each partition.</p>
 
 <p>There are five kinds of operations in Trident:</p>
 
@@ -669,7 +669,7 @@ Partition 2:
 <p>You might be wondering – how do you do something like a &quot;windowed join&quot;, where tuples from one side of the join are joined against the last hour of tuples from the other side of the join.</p>
 
 <p>To do this, you would make use of partitionPersist and stateQuery. The last hour of tuples from one side of the join would be stored and rotated in a source of state, keyed by the join field. Then the stateQuery would do lookups by the join field to perform the &quot;join&quot;.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Trident-RAS-API.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Trident-RAS-API.html b/content/releases/current/Trident-RAS-API.html
index 428dd6f..d18217c 100644
--- a/content/releases/current/Trident-RAS-API.html
+++ b/content/releases/current/Trident-RAS-API.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="trident-ras-api">Trident RAS API</h2>
+<div class="documentation-content"><h2 id="trident-ras-api">Trident RAS API</h2>
 
 <p>The Trident RAS (Resource Aware Scheduler) API provides a mechanism to allow users to specify the resource consumption of a Trident topology. The API looks exactly like the base RAS API, only it is called on Trident Streams instead of Bolts and Spouts.</p>
 
@@ -192,7 +192,7 @@ Operations that are combined by Trident into single Bolts will have their resour
 <p>Resource declarations may be called after any operation. The operations without explicit resources will get the defaults. If you choose to set resources for only some operations, defaults must be declared, or topology submission will fail.
 Resource declarations have the same <em>boundaries</em> as parallelism hints. They don&#39;t cross any groupings, shufflings, or any other kind of repartitioning.
 Resources are declared per operation, but get combined within boundaries.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Trident-spouts.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Trident-spouts.html b/content/releases/current/Trident-spouts.html
index d08a745..e0b736d 100644
--- a/content/releases/current/Trident-spouts.html
+++ b/content/releases/current/Trident-spouts.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="trident-spouts">Trident spouts</h1>
+<div class="documentation-content"><h1 id="trident-spouts">Trident spouts</h1>
 
 <p>Like in the vanilla Storm API, spouts are the source of streams in a Trident topology. On top of the vanilla Storm spouts, Trident exposes additional APIs for more sophisticated spouts.</p>
 
@@ -182,7 +182,7 @@
 </ol>
 
 <p>And, like mentioned in the beginning of this tutorial, you can use regular IRichSpout&#39;s as well.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Trident-state.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Trident-state.html b/content/releases/current/Trident-state.html
index a174820..2c9e059 100644
--- a/content/releases/current/Trident-state.html
+++ b/content/releases/current/Trident-state.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
+<div class="documentation-content"><p>Trident has first-class abstractions for reading from and writing to stateful sources. The state can either be internal to the topology – e.g., kept in-memory and backed by HDFS – or externally stored in a database like Memcached or Cassandra. There&#39;s no difference in the Trident API for either case.</p>
 
 <p>Trident manages state in a fault-tolerant way so that state updates are idempotent in the face of retries and failures. This lets you reason about Trident topologies as if each message were processed exactly-once.</p>
 
@@ -415,7 +415,7 @@ apple =&gt; [count=10, txid=2]
 <p>Finally, Trident provides the <a href="http://github.com/apache/storm/blob/v1.2.1/storm-core/src/jvm/org/apache/storm/trident/state/map/SnapshottableMap.java">SnapshottableMap</a> class that turns a MapState into a Snapshottable object, by storing global aggregations into a fixed key.</p>
 
 <p>Take a look at the implementation of <a href="https://github.com/nathanmarz/trident-memcached/blob/master/src/jvm/trident/memcached/MemcachedState.java">MemcachedState</a> to see how all these utilities can be put together to make a high performance MapState implementation. MemcachedState allows you to choose between opaque transactional, transactional, and non-transactional semantics.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Trident-tutorial.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Trident-tutorial.html b/content/releases/current/Trident-tutorial.html
index 4403c50..4d2bbbb 100644
--- a/content/releases/current/Trident-tutorial.html
+++ b/content/releases/current/Trident-tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
+<div class="documentation-content"><p>Trident is a high-level abstraction for doing realtime computing on top of Storm. It allows you to seamlessly intermix high throughput (millions of messages per second), stateful stream processing with low latency distributed querying. If you&#39;re familiar with high level batch processing tools like Pig or Cascading, the concepts of Trident will be very familiar – Trident has joins, aggregations, grouping, functions, and filters. In addition to these, Trident adds primitives for doing stateful, incremental processing on top of any database or persistence store. Trident has consistent, exactly-once semantics, so it is easy to reason about Trident topologies.</p>
 
 <h2 id="illustrative-example">Illustrative example</h2>
 
@@ -356,7 +356,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>Trident makes realtime computation elegant. You&#39;ve seen how high throughput stream processing, state manipulation, and low-latency querying can be seamlessly intermixed via Trident&#39;s API. Trident lets you express your realtime computations in a natural way while still getting maximal performance.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Troubleshooting.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Troubleshooting.html b/content/releases/current/Troubleshooting.html
index 721c844..8ed7a9b 100644
--- a/content/releases/current/Troubleshooting.html
+++ b/content/releases/current/Troubleshooting.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page lists issues people have run into when using Storm along with their solutions.</p>
+<div class="documentation-content"><p>This page lists issues people have run into when using Storm along with their solutions.</p>
 
 <h3 id="worker-processes-are-crashing-on-startup-with-no-stack-trace">Worker processes are crashing on startup with no stack trace</h3>
 
@@ -279,7 +279,7 @@ Caused by: java.util.ConcurrentModificationException
 <ul>
 <li>This means that you&#39;re emitting a mutable object as an output tuple. Everything you emit into the output collector must be immutable. What&#39;s happening is that your bolt is modifying the object while it is being serialized to be sent over the network.</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Tutorial.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Tutorial.html b/content/releases/current/Tutorial.html
index ecf28c1..45eb3cd 100644
--- a/content/releases/current/Tutorial.html
+++ b/content/releases/current/Tutorial.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
+<div class="documentation-content"><p>In this tutorial, you&#39;ll learn how to create Storm topologies and deploy them to a Storm cluster. Java will be the main language used, but a few examples will use Python to illustrate Storm&#39;s multi-language capabilities.</p>
 
 <h2 id="preliminaries">Preliminaries</h2>
 
@@ -428,7 +428,7 @@
 <h2 id="conclusion">Conclusion</h2>
 
 <p>This tutorial gave a broad overview of developing, testing, and deploying Storm topologies. The rest of the documentation dives deeper into all the aspects of using Storm.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html b/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html
index d337ef5..b965f89 100644
--- a/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html
+++ b/content/releases/current/Understanding-the-parallelism-of-a-Storm-topology.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
+<div class="documentation-content"><h2 id="what-makes-a-running-topology-worker-processes-executors-and-tasks">What makes a running topology: worker processes, executors and tasks</h2>
 
 <p>Storm distinguishes between the following three main entities that are used to actually run a topology in a Storm cluster:</p>
 
@@ -274,7 +274,7 @@ $ storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
 <li><a href="Tutorial.html">Tutorial</a></li>
 <li><a href="javadocs/">Storm API documentation</a>, most notably the class <code>Config</code></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Using-non-JVM-languages-with-Storm.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Using-non-JVM-languages-with-Storm.html b/content/releases/current/Using-non-JVM-languages-with-Storm.html
index 59f7a38..23253db 100644
--- a/content/releases/current/Using-non-JVM-languages-with-Storm.html
+++ b/content/releases/current/Using-non-JVM-languages-with-Storm.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<ul>
+<div class="documentation-content"><ul>
 <li>two pieces: creating topologies and implementing spouts and bolts in other languages</li>
 <li>creating topologies in another language is easy since topologies are just thrift structures (link to storm.thrift)</li>
 <li>implementing spouts and bolts in another language is called a &quot;multilang components&quot; or &quot;shelling&quot;
@@ -198,7 +198,7 @@
 <p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#39;s the submitTopology definition:</p>
 <div class="highlight"><pre><code class="language-" data-lang="">void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology)
     throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/Windowing.html
----------------------------------------------------------------------
diff --git a/content/releases/current/Windowing.html b/content/releases/current/Windowing.html
index 68428f2..939177f 100644
--- a/content/releases/current/Windowing.html
+++ b/content/releases/current/Windowing.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
+<div class="documentation-content"><p>Storm core has support for processing a group of tuples that falls within a window. Windows are specified with the 
 following two parameters,</p>
 
 <ol>
@@ -380,7 +380,7 @@ tuples can be received within the timeout period.</p>
 
 <p>An example toplogy <code>SlidingWindowTopology</code> shows how to use the apis to compute a sliding window sum and a tumbling window 
 average.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/distcache-blobstore.html
----------------------------------------------------------------------
diff --git a/content/releases/current/distcache-blobstore.html b/content/releases/current/distcache-blobstore.html
index 7a03da4..b359881 100644
--- a/content/releases/current/distcache-blobstore.html
+++ b/content/releases/current/distcache-blobstore.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
+<div class="documentation-content"><h1 id="storm-distributed-cache-api">Storm Distributed Cache API</h1>
 
 <p>The distributed cache feature in storm is used to efficiently distribute files
 (or blobs, which is the equivalent terminology for a file in the distributed
@@ -799,7 +799,7 @@ struct BeginDownloadResult {
  2: required string session;
  3: optional i64 data_size;
 }
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/dynamic-log-level-settings.html
----------------------------------------------------------------------
diff --git a/content/releases/current/dynamic-log-level-settings.html b/content/releases/current/dynamic-log-level-settings.html
index c26d773..82f8a9b 100644
--- a/content/releases/current/dynamic-log-level-settings.html
+++ b/content/releases/current/dynamic-log-level-settings.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
+<div class="documentation-content"><p>We have added the ability to set log level settings for a running topology using the Storm UI and the Storm CLI. </p>
 
 <p>The log level settings apply the same way as you&#39;d expect from log4j, as all we are doing is telling log4j to set the level of the logger you provide. If you set the log level of a parent logger, the children loggers start using that level (unless the children have a more restrictive level already). A timeout can optionally be provided (except for DEBUG mode, where it’s required in the UI), if workers should reset log levels automatically.</p>
 
@@ -179,7 +179,7 @@
 <p><code>./bin/storm set_log_level my_topology -r ROOT</code></p>
 
 <p>Clears the ROOT logger dynamic log level, resetting it to its original value.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/dynamic-worker-profiling.html
----------------------------------------------------------------------
diff --git a/content/releases/current/dynamic-worker-profiling.html b/content/releases/current/dynamic-worker-profiling.html
index eb939d3..e915903 100644
--- a/content/releases/current/dynamic-worker-profiling.html
+++ b/content/releases/current/dynamic-worker-profiling.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
+<div class="documentation-content"><p>In multi-tenant mode, storm launches long-running JVMs across cluster without sudo access to user. Self-serving of Java heap-dumps, jstacks and java profiling of these JVMs would improve users&#39; ability to analyze and debug issues when monitoring it actively.</p>
 
 <p>The storm dynamic profiler lets you dynamically take heap-dumps, jprofile or jstack for a worker jvm running on stock cluster. It let user download these dumps from the browser and use your favorite tools to analyze it  The UI component page provides list workers for the component and action buttons. The logviewer lets you download the dumps generated by these logs. Please see the screenshots for more information.</p>
 
@@ -171,7 +171,7 @@
 <h2 id="configuration">Configuration</h2>
 
 <p>The &quot;worker.profiler.command&quot; can be configured to point to specific pluggable profiler, heapdump commands. The &quot;worker.profiler.enabled&quot; can be disabled if plugin is not available or jdk does not support Jprofile flight recording so that worker JVM options will not have &quot;worker.profiler.childopts&quot;. To use different profiler plugin, you can change these configuration.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/flux.html
----------------------------------------------------------------------
diff --git a/content/releases/current/flux.html b/content/releases/current/flux.html
index a3afd83..e43b36a 100644
--- a/content/releases/current/flux.html
+++ b/content/releases/current/flux.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
+<div class="documentation-content"><p>A framework for creating and deploying Apache Storm streaming computations with less friction.</p>
 
 <h2 id="definition">Definition</h2>
 
@@ -908,7 +908,7 @@ same file. Includes may be either files, or classpath resources.</p>
   <span class="na">className</span><span class="pi">:</span> <span class="s2">"</span><span class="s">org.apache.storm.flux.test.TridentTopologySource"</span>
   <span class="c1"># Flux will look for "getTopology", this will override that.</span>
   <span class="na">methodName</span><span class="pi">:</span> <span class="s2">"</span><span class="s">getTopologyWithDifferentMethodName"</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/index.html
----------------------------------------------------------------------
diff --git a/content/releases/current/index.html b/content/releases/current/index.html
index 860b688..93d1cea 100644
--- a/content/releases/current/index.html
+++ b/content/releases/current/index.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<blockquote>
+<div class="documentation-content"><blockquote>
 <h4 id="note">NOTE</h4>
 
 <p>In the latest version, the class packages have been changed from &quot;backtype.storm&quot; to &quot;org.apache.storm&quot; so the topology code compiled with older version won&#39;t run on the Storm 1.0.0 just like that. Backward compatibility is available through following configuration </p>
@@ -286,7 +286,7 @@ But small change will not affect the user experience. We will notify the user wh
 <li><a href="Multilang-protocol.html">Multilang protocol</a> (how to provide support for another language)</li>
 <li><a href="Implementation-docs.html">Implementation docs</a></li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/metrics_v2.html
----------------------------------------------------------------------
diff --git a/content/releases/current/metrics_v2.html b/content/releases/current/metrics_v2.html
index 7e1cba5..47f8f10 100644
--- a/content/releases/current/metrics_v2.html
+++ b/content/releases/current/metrics_v2.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Apache Storm version 1.2 introduces a new metrics system for reporting
+<div class="documentation-content"><p>Apache Storm version 1.2 introduces a new metrics system for reporting
 internal statistics (e.g. acked, failed, emitted, transferred, disruptor queue metrics, etc.) as well as a 
 new API for user defined metrics.</p>
 
@@ -274,7 +274,7 @@ interface:</p>
     <span class="kt">boolean</span> <span class="nf">matches</span><span class="o">(</span><span class="n">String</span> <span class="n">name</span><span class="o">,</span> <span class="n">Metric</span> <span class="n">metric</span><span class="o">);</span>
 
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/nimbus-ha-design.html
----------------------------------------------------------------------
diff --git a/content/releases/current/nimbus-ha-design.html b/content/releases/current/nimbus-ha-design.html
index 7bd56b1..4ee5b46 100644
--- a/content/releases/current/nimbus-ha-design.html
+++ b/content/releases/current/nimbus-ha-design.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="problem-statement">Problem Statement:</h2>
+<div class="documentation-content"><h2 id="problem-statement">Problem Statement:</h2>
 
 <p>Currently the storm master aka nimbus, is a process that runs on a single machine under supervision. In most cases the 
 nimbus failure is transient and it is restarted by the supervisor. However sometimes when disks fail and networks 
@@ -361,7 +361,7 @@ The default is 60 seconds, a value of -1 indicates to wait for ever.
 <p>Note: Even though all nimbus hosts have watchers on zookeeper to be notified immediately as soon as a new topology is available for code
 download, the callback pretty much never results in code download. In practice we have observed that the desired replication is only achieved once the background-thread runs. 
 So you should expect your topology submission time to be somewhere between 0 to (2 * nimbus.code.sync.freq.secs) for any nimbus.min.replication.count &gt; 1.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-cassandra.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-cassandra.html b/content/releases/current/storm-cassandra.html
index d0f47e4..ec5bc9d 100644
--- a/content/releases/current/storm-cassandra.html
+++ b/content/releases/current/storm-cassandra.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
+<div class="documentation-content"><h3 id="bolt-api-implementation-for-apache-cassandra">Bolt API implementation for Apache Cassandra</h3>
 
 <p>This library provides core storm bolt on top of Apache Cassandra.
 Provides simple DSL to map storm <em>Tuple</em> to Cassandra Query Language <em>Statement</em>.</p>
@@ -373,7 +373,7 @@ The stream is partitioned among the bolt&#39;s tasks based on the specified row
         <span class="n">CassandraStateFactory</span> <span class="n">selectWeatherStationStateFactory</span> <span class="o">=</span> <span class="n">getSelectWeatherStationStateFactory</span><span class="o">();</span>
         <span class="n">TridentState</span> <span class="n">selectState</span> <span class="o">=</span> <span class="n">topology</span><span class="o">.</span><span class="na">newStaticState</span><span class="o">(</span><span class="n">selectWeatherStationStateFactory</span><span class="o">);</span>
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">selectState</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"weather_station_id"</span><span class="o">),</span> <span class="k">new</span> <span class="n">CassandraQuery</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"name"</span><span class="o">));</span>         
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-elasticsearch.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-elasticsearch.html b/content/releases/current/storm-elasticsearch.html
index 9477383..3696122 100644
--- a/content/releases/current/storm-elasticsearch.html
+++ b/content/releases/current/storm-elasticsearch.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
+<div class="documentation-content"><h1 id="storm-elasticsearch-bolt-trident-state">Storm Elasticsearch Bolt &amp; Trident State</h1>
 
 <p>EsIndexBolt, EsPercolateBolt and EsState allows users to stream data from storm into Elasticsearch directly.
   For detailed description, please refer to the following.</p>
@@ -245,7 +245,7 @@ You can refer implementation of DefaultEsTupleMapper to see how to implement you
 <li>Sriharsha Chintalapani (<a href="https://github.com/harshach">@harshach</a>)</li>
 <li>Jungtaek Lim (<a href="https://github.com/HeartSaVioR">@HeartSaVioR</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-eventhubs.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-eventhubs.html b/content/releases/current/storm-eventhubs.html
index dd8e158..4f0ac92 100644
--- a/content/releases/current/storm-eventhubs.html
+++ b/content/releases/current/storm-eventhubs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
+<div class="documentation-content"><p>Storm spout and bolt implementation for Microsoft Azure Eventhubs</p>
 
 <h3 id="build">build</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">mvn clean package
@@ -178,7 +178,7 @@ If you want to send messages to all partitions, use &quot;-1&quot; as partitionI
 
 <h3 id="windows-azure-eventhubs">Windows Azure Eventhubs</h3>
 <div class="highlight"><pre><code class="language-" data-lang="">http://azure.microsoft.com/en-us/services/event-hubs/
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-hbase.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-hbase.html b/content/releases/current/storm-hbase.html
index 87e2e25..3cb5653 100644
--- a/content/releases/current/storm-hbase.html
+++ b/content/releases/current/storm-hbase.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://hbase.apache.org">Apache HBase</a></p>
 
 <h2 id="usage">Usage</h2>
 
@@ -368,7 +368,7 @@ Word: 'watermelon', Count: 6806
         <span class="o">}</span>
     <span class="o">}</span>
 <span class="o">}</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-hdfs.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-hdfs.html b/content/releases/current/storm-hdfs.html
index d0c0266..86f3d5c 100644
--- a/content/releases/current/storm-hdfs.html
+++ b/content/releases/current/storm-hdfs.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm components for interacting with HDFS file systems</p>
+<div class="documentation-content"><p>Storm components for interacting with HDFS file systems</p>
 
 <h2 id="usage">Usage</h2>
 
@@ -469,7 +469,7 @@ hdfs.kerberos.principal: &quot;<a href="mailto:user@EXAMPLE.com">user@EXAMPLE.co
 <p>On worker hosts the bolt/trident-state code will use the keytab file with principal provided in the config to authenticate with 
 Namenode. This method is little dangerous as you need to ensure all workers have the keytab file at the same location and you need
 to remember this as you bring up new hosts in the cluster.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-hive.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-hive.html b/content/releases/current/storm-hive.html
index c86291b..e78f9e8 100644
--- a/content/releases/current/storm-hive.html
+++ b/content/releases/current/storm-hive.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Hive offers streaming API that allows data to be written continuously into Hive. The incoming data 
+<div class="documentation-content"><p>Hive offers streaming API that allows data to be written continuously into Hive. The incoming data 
   can be continuously committed in small batches of records into existing Hive partition or table. Once the data
   is committed its immediately visible to all hive queries. More info on Hive Streaming API 
   <a href="https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest">https://cwiki.apache.org/confluence/display/Hive/Streaming+Data+Ingest</a></p>
@@ -303,7 +303,7 @@ User should make sure that Tuple field names are matched to the table column nam
 
    <span class="n">StateFactory</span> <span class="n">factory</span> <span class="o">=</span> <span class="k">new</span> <span class="n">HiveStateFactory</span><span class="o">().</span><span class="na">withOptions</span><span class="o">(</span><span class="n">hiveOptions</span><span class="o">);</span>
    <span class="n">TridentState</span> <span class="n">state</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">partitionPersist</span><span class="o">(</span><span class="n">factory</span><span class="o">,</span> <span class="n">hiveFields</span><span class="o">,</span> <span class="k">new</span> <span class="n">HiveUpdater</span><span class="o">(),</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">());</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-jdbc.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-jdbc.html b/content/releases/current/storm-jdbc.html
index 99f7562..2e0f874 100644
--- a/content/releases/current/storm-jdbc.html
+++ b/content/releases/current/storm-jdbc.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
+<div class="documentation-content"><p>Storm/Trident integration for JDBC. This package includes the core bolts and trident states that allow a storm topology
 to either insert storm tuples into a database table or to execute select queries against a database and enrich tuples 
 in a storm topology.</p>
 
@@ -399,7 +399,7 @@ storm jar org.apache.storm.jdbc.topology.UserPersistanceTopology <dataSourceClas
 <div class="highlight"><pre><code class="language-" data-lang="">select * from user;
 </code></pre></div>
 <p>For trident you can view <code>org.apache.storm.jdbc.topology.UserPersistanceTridentTopology</code>.</p>
-
+</div>
 
 
 	          </div>
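
A minimal sketch of the insert path described above, following the storm-jdbc examples (the data source properties and the "user" table are placeholders):

    import java.util.HashMap;
    import java.util.Map;

    import org.apache.storm.jdbc.bolt.JdbcInsertBolt;
    import org.apache.storm.jdbc.common.ConnectionProvider;
    import org.apache.storm.jdbc.common.HikariCPConnectionProvider;
    import org.apache.storm.jdbc.mapper.JdbcMapper;
    import org.apache.storm.jdbc.mapper.SimpleJdbcMapper;

    Map<String, Object> hikariConfigMap = new HashMap<>();
    hikariConfigMap.put("dataSourceClassName", "com.mysql.jdbc.jdbc2.optional.MysqlDataSource");
    hikariConfigMap.put("dataSource.url", "jdbc:mysql://localhost/test");
    hikariConfigMap.put("dataSource.user", "root");
    hikariConfigMap.put("dataSource.password", "password");
    ConnectionProvider connectionProvider = new HikariCPConnectionProvider(hikariConfigMap);

    // SimpleJdbcMapper reads the table's column metadata, so tuple field names must match column names.
    JdbcMapper simpleJdbcMapper = new SimpleJdbcMapper("user", connectionProvider);

    JdbcInsertBolt userPersistanceBolt = new JdbcInsertBolt(connectionProvider, simpleJdbcMapper)
            .withTableName("user")
            .withQueryTimeoutSecs(30);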

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-jms-example.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-jms-example.html b/content/releases/current/storm-jms-example.html
index 6a31fda..3920121 100644
--- a/content/releases/current/storm-jms-example.html
+++ b/content/releases/current/storm-jms-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
+<div class="documentation-content"><h2 id="example-storm-jms-topology">Example Storm JMS Topology</h2>
 
 <p>The storm-jms source code contains an example project (in the &quot;examples&quot; directory) 
 that builds a multi-bolt/multi-spout topology (depicted below) that uses the JMS Spout and JMS Bolt components.</p>
@@ -248,7 +248,7 @@ DEBUG (backtype.storm.contrib.jms.example.GenericBolt:75) - [ANOTHER_BOLT] ACKin
 DEBUG (backtype.storm.contrib.jms.spout.JmsSpout:251) - JMS Message acked: ID:budreau.home-60117-1321735025796-0:0:1:1:1
 </code></pre></div>
 <p>The topology will run for 2 minutes, then gracefully shut down.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-jms-spring.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-jms-spring.html b/content/releases/current/storm-jms-spring.html
index 16e54b9..c18c253 100644
--- a/content/releases/current/storm-jms-spring.html
+++ b/content/releases/current/storm-jms-spring.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
+<div class="documentation-content"><h3 id="connecting-to-jms-using-springs-jms-support">Connecting to JMS Using Spring&#39;s JMS Support</h3>
 
 <p>Create a Spring applicationContext.xml file that defines one or more destination (topic/queue) beans, as well as a connection factory.</p>
 <div class="highlight"><pre><code class="language-" data-lang=""><span class="cp">&lt;?xml version="1.0" encoding="UTF-8"?&gt;</span>
@@ -163,7 +163,7 @@
         <span class="na">brokerURL=</span><span class="s">"tcp://localhost:61616"</span> <span class="nt">/&gt;</span>
 
 <span class="nt">&lt;/beans&gt;</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
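
A minimal sketch of consuming a Spring context like the one above from a topology; the bean IDs, file name, and tuple producer are placeholders, and the package layout assumes the org.apache.storm.jms module of the current release, so verify the names against your version:

    import javax.jms.Session;

    import org.apache.storm.jms.JmsProvider;
    import org.apache.storm.jms.JmsTupleProducer;
    import org.apache.storm.jms.spout.JmsSpout;
    import org.apache.storm.jms.spring.SpringJmsProvider;

    // Resolve the connection factory and destination beans defined in the Spring XML file.
    JmsProvider jmsQueueProvider = new SpringJmsProvider(
            "applicationContext.xml",   // Spring context on the classpath
            "jmsConnectionFactory",     // bean ID of the connection factory
            "notificationQueue");       // bean ID of the destination

    // MyTupleProducer is a hypothetical, user-supplied JmsTupleProducer that turns each JMS message into a tuple.
    JmsTupleProducer producer = new MyTupleProducer();

    JmsSpout queueSpout = new JmsSpout();
    queueSpout.setJmsProvider(jmsQueueProvider);
    queueSpout.setJmsTupleProducer(producer);
    queueSpout.setJmsAcknowledgeMode(Session.CLIENT_ACKNOWLEDGE);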

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-jms.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-jms.html b/content/releases/current/storm-jms.html
index 887e058..0cd88e6 100644
--- a/content/releases/current/storm-jms.html
+++ b/content/releases/current/storm-jms.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about-storm-jms">About Storm JMS</h2>
+<div class="documentation-content"><h2 id="about-storm-jms">About Storm JMS</h2>
 
 <p>Storm JMS is a generic framework for integrating JMS messaging within the Storm framework.</p>
 
@@ -169,7 +169,7 @@
 <p><a href="storm-jms-example.html">Example Topology</a></p>
 
 <p><a href="storm-jms-spring.html">Using Spring JMS</a></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-kafka-client.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-kafka-client.html b/content/releases/current/storm-kafka-client.html
index 9644458..e71ffa2 100644
--- a/content/releases/current/storm-kafka-client.html
+++ b/content/releases/current/storm-kafka-client.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
+<div class="documentation-content"><h1 id="storm-apache-kafka-integration-using-the-kafka-client-jar">Storm Apache Kafka integration using the kafka-client jar</h1>
 
 <p>This includes the new Apache Kafka consumer API.</p>
 
@@ -476,7 +476,7 @@ or enabling backpressure with Config.TOPOLOGY_MAX_SPOUT_PENDING.</p>
   <span class="o">.</span><span class="na">setTupleTrackingEnforced</span><span class="o">(</span><span class="kc">true</span><span class="o">)</span>
 </code></pre></div>
 <p>Note: This setting has no effect with the AT_LEAST_ONCE processing guarantee, where tuple tracking is required and therefore always enabled.</p>
-
+</div>
 
 
 	          </div>
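
A minimal sketch of building the consumer-API spout around the setTupleTrackingEnforced option shown in the hunk above (bootstrap servers, topic, and group ID are placeholders; builder methods follow the storm-kafka-client page):

    import org.apache.storm.kafka.spout.KafkaSpout;
    import org.apache.storm.kafka.spout.KafkaSpoutConfig;
    import org.apache.storm.topology.TopologyBuilder;

    KafkaSpoutConfig<String, String> spoutConfig =
            KafkaSpoutConfig.builder("broker1:9092,broker2:9092", "my-topic")
                    .setProp("group.id", "my-consumer-group")  // plain Kafka consumer property
                    .setTupleTrackingEnforced(true)            // no effect with AT_LEAST_ONCE, where tracking is always on
                    .build();

    TopologyBuilder builder = new TopologyBuilder();
    builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 1);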

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-kafka.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-kafka.html b/content/releases/current/storm-kafka.html
index e08e547..4062063 100644
--- a/content/releases/current/storm-kafka.html
+++ b/content/releases/current/storm-kafka.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
+<div class="documentation-content"><p>Provides core Storm and Trident spout implementations for consuming data from Apache Kafka 0.8.x.</p>
 
 <h2 id="spouts">Spouts</h2>
 
@@ -498,7 +498,7 @@ Section &quot;Important configuration properties for the producer&quot; for more
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
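
For the older simple-consumer spout covered on this page, a minimal sketch (ZooKeeper address, topic, offset root, and spout ID are placeholders; class names follow the storm-kafka page):

    import org.apache.storm.kafka.BrokerHosts;
    import org.apache.storm.kafka.KafkaSpout;
    import org.apache.storm.kafka.SpoutConfig;
    import org.apache.storm.kafka.StringScheme;
    import org.apache.storm.kafka.ZkHosts;
    import org.apache.storm.spout.SchemeAsMultiScheme;

    BrokerHosts hosts = new ZkHosts("zookeeper1:2181");
    // Consumer offsets are stored in ZooKeeper under <zkRoot>/<spout-id>.
    SpoutConfig spoutConfig = new SpoutConfig(hosts, "my-topic", "/kafka-offsets", "my-spout-id");
    spoutConfig.scheme = new SchemeAsMultiScheme(new StringScheme());

    KafkaSpout kafkaSpout = new KafkaSpout(spoutConfig);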

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-metrics-profiling-internal-actions.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-metrics-profiling-internal-actions.html b/content/releases/current/storm-metrics-profiling-internal-actions.html
index 6d977ca..ec4add3 100644
--- a/content/releases/current/storm-metrics-profiling-internal-actions.html
+++ b/content/releases/current/storm-metrics-profiling-internal-actions.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include thrift rpc calls and http requests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
+<div class="documentation-content"><p>With the addition of these metrics, Storm users can collect, view, and analyze the performance of various internal actions.  The actions that are profiled include thrift rpc calls and http requests within the storm daemons. For instance, in the Storm Nimbus daemon, the following thrift calls defined in the Nimbus$Iface are profiled:</p>
 
 <ul>
 <li>submitTopology</li>
@@ -211,7 +211,7 @@ supervisor.childopts: "-Xmx256m -Dcom.sun.management.jmxremote.port=3337 -Dcom.s
 <p>For more information about the io.dropwizard.metrics and metrics-clojure packages, please refer to their original documentation:
 - <a href="https://dropwizard.github.io/metrics/3.1.0/">https://dropwizard.github.io/metrics/3.1.0/</a>
 - <a href="http://metrics-clojure.readthedocs.org/en/latest/">http://metrics-clojure.readthedocs.org/en/latest/</a></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-mongodb.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-mongodb.html b/content/releases/current/storm-mongodb.html
index 6deafa6..1a3caee 100644
--- a/content/releases/current/storm-mongodb.html
+++ b/content/releases/current/storm-mongodb.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology to either insert storm tuples into a database collection or to execute update queries against a database collection in a storm topology.</p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="https://www.mongodb.org/">MongoDB</a>. This package includes the core bolts and trident states that allow a storm topology to either insert storm tuples into a database collection or to execute update queries against a database collection in a storm topology.</p>
 
 <h2 id="insert-into-database">Insert into Database</h2>
 
@@ -298,7 +298,7 @@
 
         <span class="c1">//if a new document should be inserted if there are no matches to the query filter</span>
         <span class="c1">//updateBolt.withUpsert(true);</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
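
A minimal sketch of the insert path described above, plus the upsert toggle mentioned in the hunk (the connection URL, collection, and field names are placeholders; class names follow the storm-mongodb page):

    import org.apache.storm.mongodb.bolt.MongoInsertBolt;
    import org.apache.storm.mongodb.common.mapper.MongoMapper;
    import org.apache.storm.mongodb.common.mapper.SimpleMongoMapper;

    String url = "mongodb://127.0.0.1:27017/test";
    String collectionName = "wordcount";

    // Copy the "word" and "count" tuple fields straight into document fields.
    MongoMapper mapper = new SimpleMongoMapper().withFields("word", "count");
    MongoInsertBolt insertBolt = new MongoInsertBolt(url, collectionName, mapper);

    // For the update bolt on this page, upserting can be enabled as in the hunk above:
    // updateBolt.withUpsert(true);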

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-mqtt.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-mqtt.html b/content/releases/current/storm-mqtt.html
index 6de7bf0..2f71f28 100644
--- a/content/releases/current/storm-mqtt.html
+++ b/content/releases/current/storm-mqtt.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<h2 id="about">About</h2>
+<div class="documentation-content"><h2 id="about">About</h2>
 
 <p>MQTT is a lightweight publish/subscribe protocol frequently used in IoT applications.</p>
 
@@ -483,7 +483,7 @@ keystore/truststore need to be available on all worker nodes where the spout/bol
 <ul>
 <li>P. Taylor Goetz (<a href="mailto:ptgoetz@apache.org">ptgoetz@apache.org</a>)</li>
 </ul>
-
+</div>
 
 
 	          </div>
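
A minimal spout sketch for the MQTT source described above. The class names, setters, and broker URL below are assumptions drawn from the storm-mqtt examples and should be verified against your release:

    import java.util.Arrays;

    import org.apache.storm.mqtt.common.MqttOptions;
    import org.apache.storm.mqtt.mappers.StringMessageMapper;
    import org.apache.storm.mqtt.spout.MqttSpout;

    MqttOptions options = new MqttOptions();
    options.setUrl("tcp://localhost:1883");                     // assumed broker URL
    options.setTopics(Arrays.asList("/users/mikey/sensors/#")); // MQTT topic filter(s)
    options.setCleanConnection(false);                          // keep a durable session for reliable delivery

    // StringMessageMapper emits each message as a (topic, message) tuple.
    MqttSpout spout = new MqttSpout(new StringMessageMapper(), options);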

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-redis.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-redis.html b/content/releases/current/storm-redis.html
index 038df9a..cbad490 100644
--- a/content/releases/current/storm-redis.html
+++ b/content/releases/current/storm-redis.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
+<div class="documentation-content"><p>Storm/Trident integration for <a href="http://redis.io/">Redis</a></p>
 
 <p>Storm-redis uses Jedis as its Redis client.</p>
 
@@ -382,7 +382,7 @@
         <span class="n">stream</span> <span class="o">=</span> <span class="n">stream</span><span class="o">.</span><span class="na">stateQuery</span><span class="o">(</span><span class="n">state</span><span class="o">,</span> <span class="k">new</span> <span class="n">Fields</span><span class="o">(</span><span class="s">"word"</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">RedisClusterStateQuerier</span><span class="o">(</span><span class="n">lookupMapper</span><span class="o">),</span>
                                 <span class="k">new</span> <span class="nf">Fields</span><span class="o">(</span><span class="s">"columnName"</span><span class="o">,</span><span class="s">"columnValue"</span><span class="o">));</span>
-</code></pre></div>
+</code></pre></div></div>
 
 
 	          </div>
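
A minimal sketch of the non-Trident write path (a Jedis pool config plus a user-defined store mapper, following the storm-redis page; the host, port, and WordCountStoreMapper class are placeholders):

    import org.apache.storm.redis.bolt.RedisStoreBolt;
    import org.apache.storm.redis.common.config.JedisPoolConfig;
    import org.apache.storm.redis.common.mapper.RedisStoreMapper;

    JedisPoolConfig poolConfig = new JedisPoolConfig.Builder()
            .setHost("127.0.0.1")
            .setPort(6379)
            .build();

    // WordCountStoreMapper is a user-defined RedisStoreMapper that picks the
    // Redis key ("word") and value ("count") out of each tuple.
    RedisStoreMapper storeMapper = new WordCountStoreMapper();
    RedisStoreBolt storeBolt = new RedisStoreBolt(poolConfig, storeMapper);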

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-solr.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-solr.html b/content/releases/current/storm-solr.html
index 65b2527..3f5e133 100644
--- a/content/releases/current/storm-solr.html
+++ b/content/releases/current/storm-solr.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
+<div class="documentation-content"><p>Storm and Trident integration for Apache Solr. This package includes a bolt and a trident state that enable a Storm topology
 to stream the contents of storm tuples to index Solr collections.</p>
 
 <h1 id="index-storm-tuples-into-a-solr-collection">Index Storm tuples into a Solr collection</h1>
@@ -308,7 +308,7 @@ and then generate an uber jar with all the dependencies.</p>
 <p>You can also see the results by opening the Apache Solr UI and pasting the <code>id</code> pattern into the <code>q</code> textbox on the queries page.</p>
 
 <p><a href="http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query">http://localhost:8983/solr/#/gettingstarted_shard1_replica2/query</a></p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-sql-example.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-sql-example.html b/content/releases/current/storm-sql-example.html
index 29f249e..280626f 100644
--- a/content/releases/current/storm-sql-example.html
+++ b/content/releases/current/storm-sql-example.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page shows how to use Storm SQL through an example of processing Apache logs. 
+<div class="documentation-content"><p>This page shows how to use Storm SQL through an example of processing Apache logs. 
 It is written in a &quot;how-to&quot; style so you can follow the steps and learn how to utilize Storm SQL step by step.</p>
 
 <h2 id="preparation">Preparation</h2>
@@ -379,7 +379,7 @@ This page assumes that GetTime2 is in classpath, for simplicity.</p>
 (You may have noticed that the types of some output fields differ from the output table schema.)</p>
 
 <p>Its behavior is subject to change when Storm SQL changes its backend API to a core (tuple-by-tuple, low-level or high-level) one.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-sql-internal.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-sql-internal.html b/content/releases/current/storm-sql-internal.html
index 97f809b..959eb6a 100644
--- a/content/releases/current/storm-sql-internal.html
+++ b/content/releases/current/storm-sql-internal.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page describes the design and the implementation of the Storm SQL integration.</p>
+<div class="documentation-content"><p>This page describes the design and the implementation of the Storm SQL integration.</p>
 
 <h2 id="overview">Overview</h2>
 
@@ -195,7 +195,7 @@ You can use <code>--jars</code> or <code>--artifacts</code> option to <code>stor
 (Use <code>--artifacts</code> if your data source JARs are available in a Maven repository, since it handles transitive dependencies.)</p>
 
 <p>Please refer to the <a href="storm-sql.html">Storm SQL integration</a> page for how to do it.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-sql-reference.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-sql-reference.html b/content/releases/current/storm-sql-reference.html
index 5221649..e26b0e1 100644
--- a/content/releases/current/storm-sql-reference.html
+++ b/content/releases/current/storm-sql-reference.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
+<div class="documentation-content"><p>Storm SQL uses Apache Calcite to parse and evaluate the SQL statements. 
 Storm SQL also adopts the Rex compiler from Calcite, so Storm SQL is expected to handle the SQL dialect recognized by Calcite&#39;s default SQL parser.</p>
 
 <p>This page is based on the Calcite SQL reference on its website; it removes the areas Storm SQL doesn&#39;t support and adds the areas Storm SQL does support.</p>
@@ -2101,7 +2101,7 @@ You can use below as working reference for <code>--artifacts</code> option, and
 
 <p>Also, HDFS configuration files should be provided.
 You can put the <code>core-site.xml</code> and <code>hdfs-site.xml</code> into the <code>conf</code> directory, which is in the Storm installation directory.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/storm-sql.html
----------------------------------------------------------------------
diff --git a/content/releases/current/storm-sql.html b/content/releases/current/storm-sql.html
index 42effbd..a161fc9 100644
--- a/content/releases/current/storm-sql.html
+++ b/content/releases/current/storm-sql.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles for streaming analytics, but it also opens up opportunities to unify batch data processing, such as <a href="///hive.apache.org">Apache Hive</a>, with real-time streaming data analytics.</p>
+<div class="documentation-content"><p>The Storm SQL integration allows users to run SQL queries over streaming data in Storm. Not only does the SQL interface allow faster development cycles for streaming analytics, but it also opens up opportunities to unify batch data processing, such as <a href="///hive.apache.org">Apache Hive</a>, with real-time streaming data analytics.</p>
 
 <p>At a very high level, StormSQL compiles the SQL queries to <a href="Trident-API-Overview.html">Trident</a> topologies and executes them in Storm clusters. This document provides information on how to use StormSQL as an end user. For people interested in more details about the design and implementation of StormSQL, please refer to <a href="storm-sql-internal.html">this</a> page.</p>
 
@@ -284,7 +284,7 @@ LogicalTableModify(table=[[LARGE_ORDERS]], operation=[INSERT], updateColumnList=
 <li>Windowing is yet to be implemented.</li>
 <li>Aggregation and join are not supported (waiting for <code>Streaming SQL</code> to mature)</li>
 </ul>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/releases/current/windows-users-guide.html
----------------------------------------------------------------------
diff --git a/content/releases/current/windows-users-guide.html b/content/releases/current/windows-users-guide.html
index bd83020..752551f 100644
--- a/content/releases/current/windows-users-guide.html
+++ b/content/releases/current/windows-users-guide.html
@@ -144,7 +144,7 @@
 
 <p class="post-meta"></p>
 
-<p>This page explains how to set up an environment on Windows for Apache Storm.</p>
+<div class="documentation-content"><p>This page explains how to set up an environment on Windows for Apache Storm.</p>
 
 <h2 id="symbolic-link">Symbolic Link</h2>
 
@@ -172,7 +172,7 @@ If you don&#39;t want to execute Storm processes directly (not on command prompt
 on Nimbus and all of the Supervisor nodes.  This will also disable features that require symlinks.  Currently this only affects downloading
 dependent blobs, but that may change in the future.  Some topologies may rely on symbolic links to resources in the current working directory of the worker that are
 created as a convenience, so it is not a 100% backwards compatible change.</p>
-
+</div>
 
 
 	          </div>

http://git-wip-us.apache.org/repos/asf/storm-site/blob/df5612be/content/talksAndVideos.html
----------------------------------------------------------------------
diff --git a/content/talksAndVideos.html b/content/talksAndVideos.html
index fa7736d..88bf847 100644
--- a/content/talksAndVideos.html
+++ b/content/talksAndVideos.html
@@ -142,7 +142,7 @@
 
 <p class="post-meta"></p>
 
-<div class="row">
+<div class="documentation-content"><div class="row">
     <div class="col-md-12"> 
         <div class="resources">
             <ul class="nav nav-tabs" role="tablist">
@@ -566,7 +566,7 @@
         </div>
     </div>
 </div>
-
+</div>
 
 
 	          </div>