Posted to commits@storm.apache.org by pt...@apache.org on 2014/05/27 20:39:09 UTC

svn commit: r1597847 [2/6] - in /incubator/storm/site: _posts/ publish/ publish/2012/08/02/ publish/2012/09/06/ publish/2013/01/11/ publish/2013/12/08/ publish/2014/04/10/ publish/2014/04/17/ publish/2014/04/19/ publish/2014/04/21/ publish/2014/04/22/ ...

Added: incubator/storm/site/publish/2014/05/27/round1-results.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/2014/05/27/round1-results.html?rev=1597847&view=auto
==============================================================================
--- incubator/storm/site/publish/2014/05/27/round1-results.html (added)
+++ incubator/storm/site/publish/2014/05/27/round1-results.html Tue May 27 18:39:07 2014
@@ -0,0 +1,152 @@
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
+	
+<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
+<head>
+   <meta http-equiv="content-type" content="text/html;charset=utf-8" />
+	<meta name="description" content="Storm is a distributed and fault-tolerant realtime computation system. Similar to how Hadoop provides a set of general primitives for doing batch processing, Storm provides a set of general primitives for doing realtime computation. Storm is simple, can be used with any programming language, and is a lot of fun to use!" />
+	<meta name="keywords" content="storm, hadoop, realtime, stream, mapreduce, java, computation, scalability, clojure, scala, jvm, processing" />
+	<title>Logo Contest - Round 1 Results</title>
+	<link rel="stylesheet" type="text/css" href="/css/style.css" media="screen" />
+<script type="text/javascript">
+
+  var _gaq = _gaq || [];
+  _gaq.push(['_setAccount', 'UA-32530285-1']);
+  _gaq.push(['_trackPageview']);
+
+  (function() {
+    var ga = document.createElement('script'); ga.type = 'text/javascript'; ga.async = true;
+    ga.src = ('https:' == document.location.protocol ? 'https://ssl' : 'http://www') + '.google-analytics.com/ga.js';
+    var s = document.getElementsByTagName('script')[0]; s.parentNode.insertBefore(ga, s);
+  })();
+
+</script>
+</head>
+
+<body>
+<div id="wrap">
+<div id="top">
+<div id="projecttitle">
+<h2><a href="/" title="Back to main page">Storm</a></h2>
+<p id="slogan">Distributed and fault-tolerant realtime computation</p>
+</div>
+<div id="menu">
+<ul>
+<li><a href="/about/integrates.html">about</a></li>
+
+<!--
+<li><a href="/blog.html">blog</a></li>
+-->
+
+<li><a href="/documentation/Home.html">documentation</a></li>
+<li><a href="/blog.html">blog</a></li>
+<li><a href="/downloads.html">downloads</a></li>
+<li><a href="/community.html">community</a></li>
+</ul>
+</div>
+</div>
+<div id="content">
+<h1 id="logo-contest---round-1-results">Logo Contest - Round 1 Results</h1>
+
+<p>Round one of the Apache Storm logo contest is now complete and was a great success. We received votes from 7 PPMC members as well as 46 votes from the greater Storm community.</p>
+
+<p>We would like to extend a very special thanks to all those who took the time and effort to create and submit a logo proposal. We would also like to thank everyone who voted.</p>
+
+<h2 id="results">Results</h2>
+<p>The Storm PPMC and community votes were closely aligned, with the community vote resolving a PPMC tie for the 3rd finalist, as shown in the results table below.</p>
+
+<p>The three finalists entering the final round are:</p>
+
+<ul>
+  <li><a href="/2014/04/23/logo-abartos.html">No. 6 - Alec Bartos</a></li>
+  <li><a href="/2014/04/29/logo-jlee1.html">No. 9 - Jennifer Lee</a></li>
+  <li><a href="/2014/04/29/logo-jlee2.html">No. 10 - Jennifer Lee</a></li>
+</ul>
+
+<p>Congratulations Alec and Jennifer!</p>
+
+<p>Stay tuned for the final vote, which will be announced shortly.</p>
+
+<hr />
+
+<table>
+  <thead>
+    <tr>
+      <th style="text-align: left">Entry</th>
+      <th style="text-align: right">PPMC</th>
+      <th style="text-align: right">Community</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td style="text-align: left">1 - Patricia Forrest</td>
+      <td style="text-align: right">2</td>
+      <td style="text-align: right">23</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">2 - Samuel Quiñones</td>
+      <td style="text-align: right">3</td>
+      <td style="text-align: right">6</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">3- Shaan Shiv Suleman</td>
+      <td style="text-align: right">0</td>
+      <td style="text-align: right">7</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">4 - Richard Brownlie-Marshall</td>
+      <td style="text-align: right">0</td>
+      <td style="text-align: right">6</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">5 - Ziba Sayari</td>
+      <td style="text-align: right">0</td>
+      <td style="text-align: right">9</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">6 - Alec Bartos</td>
+      <td style="text-align: right">3</td>
+      <td style="text-align: right">32</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">7 - Calum Boustead</td>
+      <td style="text-align: right">0</td>
+      <td style="text-align: right">0</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">8 - Stefano Asili</td>
+      <td style="text-align: right">0</td>
+      <td style="text-align: right">2</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">9 - Jennifer Lee</td>
+      <td style="text-align: right">9</td>
+      <td style="text-align: right">63</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">10 - Jennifer Lee</td>
+      <td style="text-align: right">18</td>
+      <td style="text-align: right">64</td>
+    </tr>
+    <tr>
+      <td style="text-align: left">11 - Jennifer Lee</td>
+      <td style="text-align: right">0</td>
+      <td style="text-align: right">18</td>
+    </tr>
+  </tbody>
+</table>
+
+</div>
+
+<div id="clear"></div></div>
+<div id="footer">
+	<p>
+	  Apache Storm is an effort undergoing incubation at The Apache Software Foundation.
+	  <a href="http://incubator.apache.org/" style="border: none;">
+	    <img style="vertical-align: middle; float: right; margin-bottom: 15px;"
+	        src="/images/incubator-logo.png" alt="Apache Incubator" title="Apache Incubator" />
+	  </a>  </p>
+</div>
+</div>
+
+</body>
+</html>
\ No newline at end of file

Modified: incubator/storm/site/publish/about/deployment.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/about/deployment.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/about/deployment.html (original)
+++ incubator/storm/site/publish/about/deployment.html Tue May 27 18:39:07 2014
@@ -67,9 +67,9 @@
   </ul>
 </div>
 <div id="aboutcontent">
-<p>Storm clusters are easy to deploy, requiring a minimum of setup and configuration to get up and running. Storm&#8217;s out of the box configurations are suitable for production. Read more about how to deploy a Storm cluster <a href="/documentation/Setting-up-a-Storm-cluster.html">here</a>.</p>
+<p>Storm clusters are easy to deploy, requiring a minimum of setup and configuration to get up and running. Storm’s out of the box configurations are suitable for production. Read more about how to deploy a Storm cluster <a href="/documentation/Setting-up-a-Storm-cluster.html">here</a>.</p>
 
-<p>If you&#8217;re on EC2, the <a href="https://github.com/nathanmarz/storm-deploy">storm-deploy</a> project can provision, configure, and install a Storm cluster from scratch at just the click of a button.</p>
+<p>If you’re on EC2, the <a href="https://github.com/nathanmarz/storm-deploy">storm-deploy</a> project can provision, configure, and install a Storm cluster from scratch at just the click of a button.</p>
 
 <p>Additionally, Storm is easy to operate once deployed. Storm has been designed to be <a href="/about/fault-tolerant.html">extremely robust</a> – the cluster will just keep on running, month after month.</p>
 

Modified: incubator/storm/site/publish/about/fault-tolerant.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/about/fault-tolerant.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/about/fault-tolerant.html (original)
+++ incubator/storm/site/publish/about/fault-tolerant.html Tue May 27 18:39:07 2014
@@ -71,7 +71,7 @@
 
 <p>The Storm daemons, Nimbus and the Supervisors, are designed to be stateless and fail-fast. So if they die, they will restart like nothing happened. This means you can <em>kill -9</em> the Storm daemons without affecting the health of the cluster or your topologies.</p>
 
-<p>Read more about Storm&#8217;s fault-tolerance <a href="/documentation/Fault-tolerance.html">on the manual</a>.</p>
+<p>Read more about Storm’s fault-tolerance <a href="/documentation/Fault-tolerance.html">on the manual</a>.</p>
 
 </div>
 </div>

Modified: incubator/storm/site/publish/about/free-and-open-source.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/about/free-and-open-source.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/about/free-and-open-source.html (original)
+++ incubator/storm/site/publish/about/free-and-open-source.html Tue May 27 18:39:07 2014
@@ -74,7 +74,7 @@
 <ol>
   <li><em>Spouts</em>: These spouts integrate with queueing systems such as JMS, Kafka, Redis pub/sub, and more.</li>
   <li><em>storm-state</em>: storm-state makes it easy to manage large amounts of in-memory state in your computations in a reliable way by using a distributed filesystem for persistence.</li>
-  <li><em>Database integrations</em>: There are helper bolts for integrating with various databases, such as MongoDB, RDBMS&#8217;s, Cassandra, and more.</li>
+  <li><em>Database integrations</em>: There are helper bolts for integrating with various databases, such as MongoDB, RDBMS’s, Cassandra, and more.</li>
   <li>Other miscellaneous utilities</li>
 </ol>
 

Modified: incubator/storm/site/publish/about/guarantees-data-processing.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/about/guarantees-data-processing.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/about/guarantees-data-processing.html (original)
+++ incubator/storm/site/publish/about/guarantees-data-processing.html Tue May 27 18:39:07 2014
@@ -67,11 +67,11 @@
   </ul>
 </div>
 <div id="aboutcontent">
-<p>Storm guarantees every tuple will be fully processed. One of Storm&#8217;s core mechanisms is the ability to track the lineage of a tuple as it makes its way through the topology in an extremely efficient way. Read more about how this works <a href="/documentation/Guaranteeing-message-processing.html">here</a>.</p>
+<p>Storm guarantees every tuple will be fully processed. One of Storm’s core mechanisms is the ability to track the lineage of a tuple as it makes its way through the topology in an extremely efficient way. Read more about how this works <a href="/documentation/Guaranteeing-message-processing.html">here</a>.</p>
 
-<p>Storm&#8217;s basic abstractions provide an at-least-once processing guarantee, the same guarantee you get when using a queueing system. Messages are only replayed when there are failures.</p>
+<p>Storm’s basic abstractions provide an at-least-once processing guarantee, the same guarantee you get when using a queueing system. Messages are only replayed when there are failures.</p>
 
-<p>Using <a href="/documentation/Trident-tutorial.html">Trident</a>, a higher level abstraction over Storm&#8217;s basic abstractions, you can achieve exactly-once processing semantics.</p>
+<p>Using <a href="/documentation/Trident-tutorial.html">Trident</a>, a higher level abstraction over Storm’s basic abstractions, you can achieve exactly-once processing semantics.</p>
 
 
 </div>

Modified: incubator/storm/site/publish/about/integrates.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/about/integrates.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/about/integrates.html (original)
+++ incubator/storm/site/publish/about/integrates.html Tue May 27 18:39:07 2014
@@ -67,7 +67,7 @@
   </ul>
 </div>
 <div id="aboutcontent">
-<p>Storm integrates with any queueing system and any database system. Storm&#8217;s <a href="/apidocs/backtype/storm/spout/ISpout.html">spout</a> abstraction makes it easy to integrate a new queuing system. Example queue integrations include:</p>
+<p>Storm integrates with any queueing system and any database system. Storm’s <a href="/apidocs/backtype/storm/spout/ISpout.html">spout</a> abstraction makes it easy to integrate a new queuing system. Example queue integrations include:</p>
 
 <ol>
   <li><a href="https://github.com/nathanmarz/storm-kestrel">Kestrel</a></li>

Modified: incubator/storm/site/publish/about/scalable.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/about/scalable.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/about/scalable.html (original)
+++ incubator/storm/site/publish/about/scalable.html Tue May 27 18:39:07 2014
@@ -67,9 +67,9 @@
   </ul>
 </div>
 <div id="aboutcontent">
-<p>Storm topologies are inherently parallel and run across a cluster of machines. Different parts of the topology can be scaled individually by tweaking their parallelism. The &#8220;rebalance&#8221; command of the &#8220;storm&#8221; command line client can adjust the parallelism of running topologies on the fly. </p>
+<p>Storm topologies are inherently parallel and run across a cluster of machines. Different parts of the topology can be scaled individually by tweaking their parallelism. The “rebalance” command of the “storm” command line client can adjust the parallelism of running topologies on the fly. </p>
 
-<p>Storm&#8217;s inherent parallelism means it can process very high throughputs of messages with very low latency. Storm was benchmarked at processing <strong>one million 100 byte messages per second per node</strong> on hardware with the following specs:</p>
+<p>Storm’s inherent parallelism means it can process very high throughputs of messages with very low latency. Storm was benchmarked at processing <strong>one million 100 byte messages per second per node</strong> on hardware with the following specs:</p>
 
 <ul>
   <li><strong>Processor:</strong> 2x Intel E5645@2.4Ghz </li>

Modified: incubator/storm/site/publish/about/simple-api.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/about/simple-api.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/about/simple-api.html (original)
+++ incubator/storm/site/publish/about/simple-api.html Tue May 27 18:39:07 2014
@@ -67,7 +67,7 @@
   </ul>
 </div>
 <div id="aboutcontent">
-<p>Storm has a simple and easy to use API. When programming on Storm, you manipulate and transform streams of tuples, and a tuple is a named list of values. Tuples can contain objects of any type; if you want to use a type Storm doesn&#8217;t know about it&#8217;s <a href="/documentation/Serialization.html">very easy</a> to register a serializer for that type.</p>
+<p>Storm has a simple and easy to use API. When programming on Storm, you manipulate and transform streams of tuples, and a tuple is a named list of values. Tuples can contain objects of any type; if you want to use a type Storm doesn’t know about it’s <a href="/documentation/Serialization.html">very easy</a> to register a serializer for that type.</p>
 
 <p>There are just three abstractions in Storm: spouts, bolts, and topologies. A <strong>spout</strong> is a source of streams in a computation. Typically a spout reads from a queueing broker such as Kestrel, RabbitMQ, or Kafka, but a spout can also generate its own stream or read from somewhere like the Twitter streaming API. Spout implementations already exist for most queueing systems.</p>
 
@@ -75,7 +75,7 @@
 
 <p>A <strong>topology</strong> is a network of spouts and bolts, with each edge in the network representing a bolt subscribing to the output stream of some other spout or bolt. A topology is an arbitrarily complex multi-stage stream computation. Topologies run indefinitely when deployed.</p>
 
-<p>Storm has a &#8220;local mode&#8221; where a Storm cluster is simulated in-process. This is useful for development and testing. The &#8220;storm&#8221; command line client is used when ready to submit a topology for execution on an actual cluster.</p>
+<p>Storm has a “local mode” where a Storm cluster is simulated in-process. This is useful for development and testing. The “storm” command line client is used when ready to submit a topology for execution on an actual cluster.</p>
 
 <p>The <a href="https://github.com/nathanmarz/storm-starter">storm-starter</a> project contains example topologies for learning the basics of Storm. Learn more about how to use Storm by reading the <a href="/documentation/Tutorial.html">tutorial</a> and the <a href="/documentation/Home.html">documentation</a>.</p>
 

Modified: incubator/storm/site/publish/blog.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/blog.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/blog.html (original)
+++ incubator/storm/site/publish/blog.html Tue May 27 18:39:07 2014
@@ -47,6 +47,8 @@
 <div id="content">
   <ul class="posts">
     
+      <li><span>27 May 2014</span> &raquo; <a href="/2014/05/27/round1-results.html">Logo Contest - Round 1 Results</a></li>
+    
       <li><span>29 Apr 2014</span> &raquo; <a href="/2014/04/29/logo-jlee3.html">Logo Entry No. 11 - Jennifer Lee</a></li>
     
       <li><span>29 Apr 2014</span> &raquo; <a href="/2014/04/29/logo-jlee2.html">Logo Entry No. 10 - Jennifer Lee</a></li>

Modified: incubator/storm/site/publish/documentation/Acking-framework-implementation.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Acking-framework-implementation.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Acking-framework-implementation.html (original)
+++ incubator/storm/site/publish/documentation/Acking-framework-implementation.html Tue May 27 18:39:07 2014
@@ -65,20 +65,20 @@
   </ul>
 </div>
 <div id="aboutcontent">
-<p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm&#8217;s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
+<p><a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L28">Storm’s acker</a> tracks completion of each tupletree with a checksum hash: each time a tuple is sent, its value is XORed into the checksum, and each time a tuple is acked its value is XORed in again. If all tuples have been successfully acked, the checksum will be zero (the odds that the checksum will be zero otherwise are vanishingly small).</p>
 
-<p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki &#8211; this explains the internal details.</p>
+<p>You can read a bit more about the <a href="Guaranteeing-message-processing.html#what-is-storms-reliability-api">reliability mechanism</a> elsewhere on the wiki – this explains the internal details.</p>
 
 <h3 id="acker-execute">acker <code>execute()</code></h3>
 
+<p>The acker is actually a regular bolt, with its <a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L36">execute method</a> defined within <code>mk-acker-bolt</code>. When a new tupletree is born, the spout sends the XORed edge-ids of each tuple recipient, which the acker records in its <code>pending</code> ledger. Every time an executor acks a tuple, the acker receives a partial checksum that is the XOR of the tuple’s own edge-id (clearing it from the ledger) and the edge-id of each downstream tuple the executor emitted (thus entering them into the ledger).</p>
+<p>The acker is actually a regular bolt, with its  <a href="https://github.com/apache/incubator-storm/blob/46c3ba7/storm-core/src/clj/backtype/storm/daemon/acker.clj#L36">execute method</a> defined withing <code>mk-acker-bolt</code>.  When a new tupletree is born, the spout sends the XORed edge-ids of each tuple recipient, which the acker records in its <code>pending</code> ledger. Every time an executor acks a tuple, the acker receives a partial checksum that is the XOR of the tuple’s own edge-id (clearing it from the ledger) and the edge-id of each downstream tuple the executor emitted (thus entering them into the ledger).</p>
 
 <p>This is accomplished as follows.</p>
 
 <p>On a tick tuple, just advance pending tupletree checksums towards death and return. Otherwise, update or create the record for this tupletree:</p>
 
 <ul>
-  <li>on init: initialize with the given checksum value, and record the spout&#8217;s id for later.</li>
+  <li>on init: initialize with the given checksum value, and record the spout’s id for later.</li>
   <li>on ack:  xor the partial checksum into the existing checksum value</li>
   <li>on fail: just mark it as failed</li>
 </ul>
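
[Editorial aside] The XOR bookkeeping described above can be sketched in a few lines of stand-alone Java. This is an illustrative toy model only, not Storm's actual acker: the class and method names are invented, and for simplicity each edge-id is XORed in individually rather than arriving as a pre-combined partial checksum.

```java
import java.util.HashMap;
import java.util.Map;

// Toy model of the acker's checksum trick: each edge id is XORed into a
// per-tree checksum once when the tuple is emitted and once when it is
// acked, so a fully acked tuple tree leaves a checksum of zero.
public class AckerSketch {
    private final Map<Long, Long> pending = new HashMap<>(); // tree id -> checksum

    public void emitted(long treeId, long edgeId) {
        pending.merge(treeId, edgeId, (a, b) -> a ^ b);
    }

    public void acked(long treeId, long edgeId) {
        pending.merge(treeId, edgeId, (a, b) -> a ^ b); // XOR the same id again
    }

    public boolean fullyAcked(long treeId) {
        return pending.getOrDefault(treeId, 0L) == 0L;
    }

    public static void main(String[] args) {
        AckerSketch acker = new AckerSketch();
        acker.emitted(1L, 0x1234L);               // spout emits the root tuple
        acker.emitted(1L, 0xABCDL);               // a bolt emits a child tuple...
        acker.acked(1L, 0x1234L);                 // ...and acks the root
        System.out.println(acker.fullyAcked(1L)); // false: child still pending
        acker.acked(1L, 0xABCDL);
        System.out.println(acker.fullyAcked(1L)); // true: checksum back to zero
    }
}
```

Because XOR is associative, commutative, and self-inverse, arrival order is irrelevant: the checksum returns to zero exactly when every emitted edge-id has also been acked (up to the vanishingly unlikely collisions noted above).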
@@ -98,7 +98,7 @@
 
 <p>The RotatingMap behaves as a HashMap, and offers the same O(1) access guarantees.</p>
 
-<p>Internally, it holds several HashMaps (&#8216;buckets&#8217;) of its own, each holding a cohort of records that will expire at the same time.  Let&#8217;s call the longest-lived bucket death row, and the most recent the nursery. Whenever a value is <code>.put()</code> to the RotatingMap, it is relocated to the nursery &#8211; and removed from any other bucket it might have been in (effectively resetting its death clock).</p>
+<p>Internally, it holds several HashMaps (‘buckets’) of its own, each holding a cohort of records that will expire at the same time.  Let’s call the longest-lived bucket death row, and the most recent the nursery. Whenever a value is <code>.put()</code> to the RotatingMap, it is relocated to the nursery – and removed from any other bucket it might have been in (effectively resetting its death clock).</p>
 
 <p>Whenever its owner calls <code>.rotate()</code>, the RotatingMap advances each cohort one step further towards expiration. (Typically, Storm objects call rotate on every receipt of a system tick stream tuple.) If there are any key-value pairs in the former death row bucket, the RotatingMap invokes a callback (given in the constructor) for each key-value pair, letting its owner take appropriate action (e.g., failing a tuple).</p>
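
[Editorial aside] The bucket-rotation mechanics described above can be modeled in stand-alone Java. This is a simplified sketch, not Storm's actual RotatingMap; the class name and callback interface are invented for the example.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashMap;
import java.util.Map;

// Simplified model of a rotating map: several HashMap "buckets", newest
// first. put() always lands in the nursery (front bucket) and removes the
// key from any older bucket; rotate() drops the death-row (back) bucket,
// invoking a callback for each evicted entry.
public class RotatingMapSketch<K, V> {
    public interface ExpiredCallback<K, V> { void expire(K key, V value); }

    private final Deque<Map<K, V>> buckets = new ArrayDeque<>();
    private final ExpiredCallback<K, V> callback;

    public RotatingMapSketch(int numBuckets, ExpiredCallback<K, V> callback) {
        for (int i = 0; i < numBuckets; i++) buckets.addLast(new HashMap<>());
        this.callback = callback;
    }

    public void put(K key, V value) {
        for (Map<K, V> b : buckets) b.remove(key); // reset the "death clock"
        buckets.peekFirst().put(key, value);       // into the nursery
    }

    public void rotate() {
        Map<K, V> deathRow = buckets.removeLast();
        buckets.addFirst(new HashMap<>());
        deathRow.forEach(callback::expire);        // e.g. fail the tuple
    }

    public static void main(String[] args) {
        RotatingMapSketch<String, Integer> m =
            new RotatingMapSketch<>(3, (k, v) -> System.out.println("expired: " + k));
        m.put("tuple-1", 42);
        m.rotate(); m.rotate(); m.rotate(); // third rotation expires tuple-1
    }
}
```

In this model, an entry is expired on exactly the k-th <code>rotate()</code> after its last <code>put()</code> (for k buckets), since every <code>put()</code> moves it back to the nursery.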
 

Modified: incubator/storm/site/publish/documentation/Clojure-DSL.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Clojure-DSL.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Clojure-DSL.html (original)
+++ incubator/storm/site/publish/documentation/Clojure-DSL.html Tue May 27 18:39:07 2014
@@ -65,7 +65,7 @@
   </ul>
 </div>
 <div id="aboutcontent">
-<p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you&#8217;re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="https://github.com/apache/incubator-storm/blob/0.5.3/src/clj/backtype/storm/clojure.clj">backtype.storm.clojure</a> namespace.</p>
+<p>Storm comes with a Clojure DSL for defining spouts, bolts, and topologies. The Clojure DSL has access to everything the Java API exposes, so if you’re a Clojure user you can code Storm topologies without touching Java at all. The Clojure DSL is defined in the source in the <a href="https://github.com/apache/incubator-storm/blob/0.5.3/src/clj/backtype/storm/clojure.clj">backtype.storm.clojure</a> namespace.</p>
 
 <p>This page outlines all the pieces of the Clojure DSL, including:</p>
 
@@ -79,9 +79,9 @@
 
 <h3 id="defining-topologies">Defining topologies</h3>
 
-<p>To define a topology, use the <code>topology</code> function. <code>topology</code> takes in two arguments: a map of &#8220;spout specs&#8221; and a map of &#8220;bolt specs&#8221;. Each spout and bolt spec wires the code for the component into the topology by specifying things like inputs and parallelism.</p>
+<p>To define a topology, use the <code>topology</code> function. <code>topology</code> takes in two arguments: a map of “spout specs” and a map of “bolt specs”. Each spout and bolt spec wires the code for the component into the topology by specifying things like inputs and parallelism.</p>
 
-<p>Let&#8217;s take a look at an example topology definition <a href="https://github.com/nathanmarz/storm-starter/blob/master/src/clj/storm/starter/clj/word_count.clj">from the storm-starter project</a>:</p>
+<p>Let’s take a look at an example topology definition <a href="https://github.com/nathanmarz/storm-starter/blob/master/src/clj/storm/starter/clj/word_count.clj">from the storm-starter project</a>:</p>
 
 <p><code>clojure
 (topology
@@ -125,7 +125,7 @@
   <li><code>:direct</code>: subscribes with a direct grouping</li>
 </ol>
 
-<p>See <a href="Concepts.html">Concepts</a> for more info on stream groupings. Here&#8217;s an example input declaration showcasing the various ways to declare inputs:</p>
+<p>See <a href="Concepts.html">Concepts</a> for more info on stream groupings. Here’s an example input declaration showcasing the various ways to declare inputs:</p>
 
 <p><code>clojure
 {["2" "1"] :shuffle
@@ -133,7 +133,7 @@
  ["4" "2"] :global}
 </code></p>
 
-<p>This input declaration subscribes to three streams total. It subscribes to stream &#8220;1&#8221; on component &#8220;2&#8221; with a shuffle grouping, subscribes to the default stream on component &#8220;3&#8221; with a fields grouping on the fields &#8220;field1&#8221; and &#8220;field2&#8221;, and subscribes to stream &#8220;2&#8221; on component &#8220;4&#8221; with a global grouping.</p>
+<p>This input declaration subscribes to three streams total. It subscribes to stream “1” on component “2” with a shuffle grouping, subscribes to the default stream on component “3” with a fields grouping on the fields “field1” and “field2”, and subscribes to stream “2” on component “4” with a global grouping.</p>
 
 <p>Like <code>spout-spec</code>, the only current supported keyword argument for <code>bolt-spec</code> is <code>:p</code> which specifies the parallelism for the bolt.</p>
 
@@ -141,7 +141,7 @@
 
 <p><code>shell-bolt-spec</code> is used for defining bolts that are implemented in a non-JVM language. It takes as arguments the input declaration, the command line program to run, the name of the file implementing the bolt, an output specification, and then the same keyword arguments that <code>bolt-spec</code> accepts.</p>
 
-<p>Here&#8217;s an example <code>shell-bolt-spec</code>:</p>
+<p>Here’s an example <code>shell-bolt-spec</code>:</p>
 
 <p><code>clojure
 (shell-bolt-spec {"1" :shuffle "2" ["id"]}
@@ -155,9 +155,9 @@
 
 <h3 id="defbolt">defbolt</h3>
 
-<p><code>defbolt</code> is used for defining bolts in Clojure. Bolts have the constraint that they must be serializable, and this is why you can&#8217;t just reify <code>IRichBolt</code> to implement a bolt (closures aren&#8217;t serializable). <code>defbolt</code> works around this restriction and provides a nicer syntax for defining bolts than just implementing a Java interface.</p>
+<p><code>defbolt</code> is used for defining bolts in Clojure. Bolts have the constraint that they must be serializable, and this is why you can’t just reify <code>IRichBolt</code> to implement a bolt (closures aren’t serializable). <code>defbolt</code> works around this restriction and provides a nicer syntax for defining bolts than just implementing a Java interface.</p>
 
-<p>At its fullest expressiveness, <code>defbolt</code> supports parameterized bolts and maintaining state in a closure around the bolt implementation. It also provides shortcuts for defining bolts that don&#8217;t need this extra functionality. The signature for <code>defbolt</code> looks like the following:</p>
+<p>At its fullest expressiveness, <code>defbolt</code> supports parameterized bolts and maintaining state in a closure around the bolt implementation. It also provides shortcuts for defining bolts that don’t need this extra functionality. The signature for <code>defbolt</code> looks like the following:</p>
 
 <p>(defbolt <em>name</em> <em>output-declaration</em> *<em>option-map</em> &amp; <em>impl</em>)</p>
 
@@ -165,7 +165,7 @@
 
 <h4 id="simple-bolts">Simple bolts</h4>
 
-<p>Let&#8217;s start with the simplest form of <code>defbolt</code>. Here&#8217;s an example bolt that splits a tuple containing a sentence into a tuple for each word:</p>
+<p>Let’s start with the simplest form of <code>defbolt</code>. Here’s an example bolt that splits a tuple containing a sentence into a tuple for each word:</p>
 
 <p><code>clojure
 (defbolt split-sentence ["word"] [tuple collector]
@@ -176,7 +176,7 @@
     ))
 </code></p>
 
-<p>Since the option map is omitted, this is a non-prepared bolt. The DSL simply expects an implementation for the <code>execute</code> method of <code>IRichBolt</code>. The implementation takes two parameters, the tuple and the <code>OutputCollector</code>, and is followed by the body of the <code>execute</code> function. The DSL automatically type-hints the parameters for you so you don&#8217;t need to worry about reflection if you use Java interop.</p>
+<p>Since the option map is omitted, this is a non-prepared bolt. The DSL simply expects an implementation for the <code>execute</code> method of <code>IRichBolt</code>. The implementation takes two parameters, the tuple and the <code>OutputCollector</code>, and is followed by the body of the <code>execute</code> function. The DSL automatically type-hints the parameters for you so you don’t need to worry about reflection if you use Java interop.</p>
 
 <p>This implementation binds <code>split-sentence</code> to an actual <code>IRichBolt</code> object that you can use in topologies, like so:</p>
 
@@ -188,7 +188,7 @@
 
 <h4 id="parameterized-bolts">Parameterized bolts</h4>
 
-<p>Many times you want to parameterize your bolts with other arguments. For example, let&#8217;s say you wanted to have a bolt that appends a suffix to every input string it receives, and you want that suffix to be set at runtime. You do this with <code>defbolt</code> by including a <code>:params</code> option in the option map, like so:</p>
+<p>Many times you want to parameterize your bolts with other arguments. For example, let’s say you wanted to have a bolt that appends a suffix to every input string it receives, and you want that suffix to be set at runtime. You do this with <code>defbolt</code> by including a <code>:params</code> option in the option map, like so:</p>
 
 <p><code>clojure
 (defbolt suffix-appender ["word"] {:params [suffix]}
@@ -247,7 +247,7 @@
 <p><code>clojure
 ["word" "count"]
 </code>
-This declares the output of the bolt as the fields [&#8220;word&#8221; &#8220;count&#8221;] on the default stream id.</p>
+This declares the output of the bolt as the fields [“word” “count”] on the default stream id.</p>
 
 <h4 id="emitting-acking-and-failing">Emitting, acking, and failing</h4>
 
@@ -264,7 +264,7 @@ This declares the output of the bolt as 
 
 <h3 id="defspout">defspout</h3>
 
-<p><code>defspout</code> is used for defining spouts in Clojure. Like bolts, spouts must be serializable so you can&#8217;t just reify <code>IRichSpout</code> to do spout implementations in Clojure. <code>defspout</code> works around this restriction and provides a nicer syntax for defining spouts than just implementing a Java interface.</p>
+<p><code>defspout</code> is used for defining spouts in Clojure. Like bolts, spouts must be serializable so you can’t just reify <code>IRichSpout</code> to do spout implementations in Clojure. <code>defspout</code> works around this restriction and provides a nicer syntax for defining spouts than just implementing a Java interface.</p>
 
 <p>The signature for <code>defspout</code> looks like the following:</p>
 
@@ -272,7 +272,7 @@ This declares the output of the bolt as 
 
 <p>If you leave out the option map, it defaults to {:prepare true}. The output declaration for <code>defspout</code> has the same syntax as <code>defbolt</code>.</p>
 
-<p>Here&#8217;s an example <code>defspout</code> implementation from <a href="https://github.com/nathanmarz/storm-starter/blob/master/src/clj/storm/starter/clj/word_count.clj">storm-starter</a>:</p>
+<p>Here’s an example <code>defspout</code> implementation from <a href="https://github.com/nathanmarz/storm-starter/blob/master/src/clj/storm/starter/clj/word_count.clj">storm-starter</a>:</p>
 
 <p><code>clojure
 (defspout sentence-spout ["sentence"]
@@ -295,7 +295,7 @@ This declares the output of the bolt as 
 
 <p>The implementation takes in as input the topology config, <code>TopologyContext</code>, and <code>SpoutOutputCollector</code>. The implementation returns an <code>ISpout</code> object. Here, the <code>nextTuple</code> function emits a random sentence from <code>sentences</code>. </p>
 
-<p>This spout isn&#8217;t reliable, so the <code>ack</code> and <code>fail</code> methods will never be called. A reliable spout will add a message id when emitting tuples, and then <code>ack</code> or <code>fail</code> will be called when the tuple is completed or failed respectively. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for more info on how reliability works within Storm.</p>
+<p>This spout isn’t reliable, so the <code>ack</code> and <code>fail</code> methods will never be called. A reliable spout will add a message id when emitting tuples, and then <code>ack</code> or <code>fail</code> will be called when the tuple is completed or failed respectively. See <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a> for more info on how reliability works within Storm.</p>
 
 <p><code>emit-spout!</code> takes in as parameters the <code>SpoutOutputCollector</code> and the new tuple to be emitted, and accepts as keyword arguments <code>:stream</code> and <code>:id</code>. <code>:stream</code> specifies the stream to emit to, and <code>:id</code> specifies a message id for the tuple (used in the <code>ack</code> and <code>fail</code> callbacks). Omitting these arguments emits an unanchored tuple to the default output stream.</p>
 
@@ -321,9 +321,9 @@ This declares the output of the bolt as 
 
 <h3 id="running-topologies-in-local-mode-or-on-a-cluster">Running topologies in local mode or on a cluster</h3>
 
-<p>That&#8217;s all there is to the Clojure DSL. To submit topologies in remote mode or local mode, just use the <code>StormSubmitter</code> or <code>LocalCluster</code> classes just like you would from Java.</p>
+<p>That’s all there is to the Clojure DSL. To submit topologies in remote mode or local mode, just use the <code>StormSubmitter</code> or <code>LocalCluster</code> classes just like you would from Java.</p>
 
-<p>To create topology configs, it&#8217;s easiest to use the <code>backtype.storm.config</code> namespace which defines constants for all of the possible configs. The constants are the same as the static constants in the <code>Config</code> class, except with dashes instead of underscores. For example, here&#8217;s a topology config that sets the number of workers to 15 and configures the topology in debug mode:</p>
+<p>To create topology configs, it’s easiest to use the <code>backtype.storm.config</code> namespace which defines constants for all of the possible configs. The constants are the same as the static constants in the <code>Config</code> class, except with dashes instead of underscores. For example, here’s a topology config that sets the number of workers to 15 and configures the topology in debug mode:</p>
 
 <p><code>clojure
 {TOPOLOGY-DEBUG true
@@ -332,7 +332,7 @@ This declares the output of the bolt as 
 
 <h3 id="testing-topologies">Testing topologies</h3>
 
-<p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm&#8217;s powerful built-in facilities for testing topologies in Clojure.</p>
+<p><a href="http://www.pixelmachine.org/2011/12/17/Testing-Storm-Topologies.html">This blog post</a> and its <a href="http://www.pixelmachine.org/2011/12/21/Testing-Storm-Topologies-Part-2.html">follow-up</a> give a good overview of Storm’s powerful built-in facilities for testing topologies in Clojure.</p>
 
 </div>
 </div>
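For readers coming from Java, the parameterized bolt described in the Clojure DSL diff above has a direct analogue: the `:params` values behave like constructor arguments captured when the topology is built. Below is a minimal plain-Java sketch of that idea; the class name is hypothetical and the Storm `IRichBolt` wiring is deliberately omitted.

```java
// Sketch only: models the :params option as constructor state that is fixed
// when the topology is assembled, mirroring the suffix-appender example.
// No Storm API involved; illustrative, not the DSL's actual expansion.
class SuffixAppender {
    private final String suffix; // bound once, like a :params value

    SuffixAppender(String suffix) {
        this.suffix = suffix;
    }

    // Stands in for the body of execute(): transform one input word.
    String apply(String word) {
        return word + suffix;
    }

    public static void main(String[] args) {
        SuffixAppender appender = new SuffixAppender("-suffix");
        System.out.println(appender.apply("hello")); // hello-suffix
    }
}
```

The point of the pattern is the same in either language: the parameter travels with the (serialized) bolt, so it is available on every worker without any shared state.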

Modified: incubator/storm/site/publish/documentation/Command-line-client.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Command-line-client.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Command-line-client.html (original)
+++ incubator/storm/site/publish/documentation/Command-line-client.html Tue May 27 18:39:07 2014
@@ -65,7 +65,7 @@
   </ul>
 </div>
 <div id="aboutcontent">
-<p>This page describes all the commands that are possible with the &#8220;storm&#8221; command line client. To learn how to set up your &#8220;storm&#8221; client to talk to a remote cluster, follow the instructions in <a href="Setting-up-a-development-environment.html">Setting up development environment</a>.</p>
+<p>This page describes all the commands that are possible with the “storm” command line client. To learn how to set up your “storm” client to talk to a remote cluster, follow the instructions in <a href="Setting-up-a-development-environment.html">Setting up development environment</a>.</p>
 
 <p>These commands are:</p>
 
@@ -95,25 +95,25 @@
 
 <p>Syntax: <code>storm kill topology-name [-w wait-time-secs]</code></p>
 
-<p>Kills the topology with the name <code>topology-name</code>. Storm will first deactivate the topology&#8217;s spouts for the duration of the topology&#8217;s message timeout to allow all messages currently being processed to finish processing. Storm will then shutdown the workers and clean up their state. You can override the length of time Storm waits between deactivation and shutdown with the -w flag.</p>
+<p>Kills the topology with the name <code>topology-name</code>. Storm will first deactivate the topology’s spouts for the duration of the topology’s message timeout to allow all messages currently being processed to finish processing. Storm will then shutdown the workers and clean up their state. You can override the length of time Storm waits between deactivation and shutdown with the -w flag.</p>
 
 <h3 id="activate">activate</h3>
 
 <p>Syntax: <code>storm activate topology-name</code></p>
 
-<p>Activates the specified topology&#8217;s spouts.</p>
+<p>Activates the specified topology’s spouts.</p>
 
 <h3 id="deactivate">deactivate</h3>
 
 <p>Syntax: <code>storm deactivate topology-name</code></p>
 
-<p>Deactivates the specified topology&#8217;s spouts.</p>
+<p>Deactivates the specified topology’s spouts.</p>
 
 <h3 id="rebalance">rebalance</h3>
 
 <p>Syntax: <code>storm rebalance topology-name [-w wait-time-secs]</code></p>
 
-<p>Sometimes you may wish to spread out where the workers for a topology are running. For example, let&#8217;s say you have a 10 node cluster running 4 workers per node, and then let&#8217;s say you add another 10 nodes to the cluster. You may wish to have Storm spread out the workers for the running topology so that each node runs 2 workers. One way to do this is to kill the topology and resubmit it, but Storm provides a &#8220;rebalance&#8221; command that provides an easier way to do this. </p>
+<p>Sometimes you may wish to spread out where the workers for a topology are running. For example, let’s say you have a 10 node cluster running 4 workers per node, and then let’s say you add another 10 nodes to the cluster. You may wish to have Storm spread out the workers for the running topology so that each node runs 2 workers. One way to do this is to kill the topology and resubmit it, but Storm provides a “rebalance” command that makes this easier. </p>
 
 <p>Rebalance will first deactivate the topology for the duration of the message timeout (overridable with the -w flag) and then redistribute the workers evenly around the cluster. The topology will then return to its previous state of activation (so a deactivated topology will still be deactivated and an activated topology will go back to being activated).</p>
 
@@ -139,7 +139,7 @@
 
 <p>Syntax: <code>storm remoteconfvalue conf-name</code></p>
 
-<p>Prints out the value for <code>conf-name</code> in the cluster&#8217;s Storm configs. The cluster&#8217;s Storm configs are the ones in <code>$STORM-PATH/conf/storm.yaml</code> merged in with the configs in <code>defaults.yaml</code>. This command must be run on a cluster machine.</p>
+<p>Prints out the value for <code>conf-name</code> in the cluster’s Storm configs. The cluster’s Storm configs are the ones in <code>$STORM-PATH/conf/storm.yaml</code> merged in with the configs in <code>defaults.yaml</code>. This command must be run on a cluster machine.</p>
 
 <h3 id="nimbus">nimbus</h3>
 

Modified: incubator/storm/site/publish/documentation/Common-patterns.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Common-patterns.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Common-patterns.html (original)
+++ incubator/storm/site/publish/documentation/Common-patterns.html Tue May 27 18:39:07 2014
@@ -90,7 +90,7 @@ builder.setBolt("join", new MyJoiner(), 
   .fieldsGrouping("3", new Fields("joinfield1", "joinfield2"));
 </code></p>
 
-<p>The different streams don&#8217;t have to have the same field names, of course.</p>
+<p>The different streams don’t have to have the same field names, of course.</p>
 
 <h3 id="batching">Batching</h3>
 
@@ -105,7 +105,7 @@ builder.setBolt("join", new MyJoiner(), 
 
 <h3 id="in-memory-caching--fields-grouping-combo">In-memory caching + fields grouping combo</h3>
 
-<p>It&#8217;s common to keep caches in-memory in Storm bolts. Caching becomes particularly powerful when you combine it with a fields grouping. For example, suppose you have a bolt that expands short URLs (like bit.ly, t.co, etc.) into long URLs. You can increase performance by keeping an LRU cache of short URL to long URL expansions to avoid doing the same HTTP requests over and over. Suppose component &#8220;urls&#8221; emits short URLS, and component &#8220;expand&#8221; expands short URLs into long URLs and keeps a cache internally. Consider the difference between the two following snippets of code:</p>
+<p>It’s common to keep caches in-memory in Storm bolts. Caching becomes particularly powerful when you combine it with a fields grouping. For example, suppose you have a bolt that expands short URLs (like bit.ly, t.co, etc.) into long URLs. You can increase performance by keeping an LRU cache of short URL to long URL expansions to avoid doing the same HTTP requests over and over. Suppose component “urls” emits short URLs, and component “expand” expands short URLs into long URLs and keeps a cache internally. Consider the difference between the two following snippets of code:</p>
 
 <p><code>java
 builder.setBolt("expand", new ExpandUrl(), parallelism)
@@ -121,9 +121,9 @@ builder.setBolt("expand", new ExpandUrl(
 
 <h3 id="streaming-top-n">Streaming top N</h3>
 
-<p>A common continuous computation done on Storm is a &#8220;streaming top N&#8221; of some sort. Suppose you have a bolt that emits tuples of the form [&#8220;value&#8221;, &#8220;count&#8221;] and you want a bolt that emits the top N tuples based on count. The simplest way to do this is to have a bolt that does a global grouping on the stream and maintains a list in memory of the top N items.</p>
+<p>A common continuous computation done on Storm is a “streaming top N” of some sort. Suppose you have a bolt that emits tuples of the form [“value”, “count”] and you want a bolt that emits the top N tuples based on count. The simplest way to do this is to have a bolt that does a global grouping on the stream and maintains a list in memory of the top N items.</p>
 
-<p>This approach obviously doesn&#8217;t scale to large streams since the entire stream has to go through one task. A better way to do the computation is to do many top N&#8217;s in parallel across partitions of the stream, and then merge those top N&#8217;s together to get the global top N. The pattern looks like this:</p>
+<p>This approach obviously doesn’t scale to large streams since the entire stream has to go through one task. A better way to do the computation is to do many top N’s in parallel across partitions of the stream, and then merge those top N’s together to get the global top N. The pattern looks like this:</p>
 
 <p><code>java
 builder.setBolt("rank", new RankObjects(), parallellism)
@@ -136,11 +136,11 @@ builder.setBolt("merge", new MergeObject
 
 <h3 id="timecachemap-for-efficiently-keeping-a-cache-of-things-that-have-been-recently-updated">TimeCacheMap for efficiently keeping a cache of things that have been recently updated</h3>
 
-<p>You sometimes want to keep a cache in memory of items that have been recently &#8220;active&#8221; and have items that have been inactive for some time be automatically expires. <a href="/apidocs/backtype/storm/utils/TimeCacheMap.html">TimeCacheMap</a> is an efficient data structure for doing this and provides hooks so you can insert callbacks whenever an item is expired.</p>
+<p>You sometimes want to keep a cache in memory of items that have been recently “active” and have items that have been inactive for some time expire automatically. <a href="/apidocs/backtype/storm/utils/TimeCacheMap.html">TimeCacheMap</a> is an efficient data structure for doing this and provides hooks so you can insert callbacks whenever an item is expired.</p>
 
 <h3 id="coordinatedbolt-and-keyedfairbolt-for-distributed-rpc">CoordinatedBolt and KeyedFairBolt for Distributed RPC</h3>
 
-<p>When building distributed RPC applications on top of Storm, there are two common patterns that are usually needed. These are encapsulated by <a href="/apidocs/backtype/storm/task/CoordinatedBolt.html">CoordinatedBolt</a> and <a href="/apidocs/backtype/storm/task/KeyedFairBolt.html">KeyedFairBolt</a> which are part of the &#8220;standard library&#8221; that ships with the Storm codebase.</p>
+<p>When building distributed RPC applications on top of Storm, there are two common patterns that are usually needed. These are encapsulated by <a href="/apidocs/backtype/storm/task/CoordinatedBolt.html">CoordinatedBolt</a> and <a href="/apidocs/backtype/storm/task/KeyedFairBolt.html">KeyedFairBolt</a> which are part of the “standard library” that ships with the Storm codebase.</p>
 
 <p><code>CoordinatedBolt</code> wraps the bolt containing your logic and figures out when your bolt has received all the tuples for any given request. It makes heavy use of direct streams to do this.</p>
 

Modified: incubator/storm/site/publish/documentation/Concepts.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Concepts.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Concepts.html (original)
+++ incubator/storm/site/publish/documentation/Concepts.html Tue May 27 18:39:07 2014
@@ -92,16 +92,16 @@
 
 <h3 id="streams">Streams</h3>
 
-<p>The stream is the core abstraction in Storm. A stream is an unbounded sequence of tuples that is processed and created in parallel in a distributed fashion. Streams are defined with a schema that names the fields in the stream&#8217;s tuples. By default, tuples can contain integers, longs, shorts, bytes, strings, doubles, floats, booleans, and byte arrays. You can also define your own serializers so that custom types can be used natively within tuples.</p>
+<p>The stream is the core abstraction in Storm. A stream is an unbounded sequence of tuples that is processed and created in parallel in a distributed fashion. Streams are defined with a schema that names the fields in the stream’s tuples. By default, tuples can contain integers, longs, shorts, bytes, strings, doubles, floats, booleans, and byte arrays. You can also define your own serializers so that custom types can be used natively within tuples.</p>
 
-<p>Every stream is given an id when declared. Since single-stream spouts and bolts are so common, <a href="/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html">OutputFieldsDeclarer</a> has convenience methods for declaring a single stream without specifying an id. In this case, the stream is given the default id of &#8220;default&#8221;.</p>
+<p>Every stream is given an id when declared. Since single-stream spouts and bolts are so common, <a href="/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html">OutputFieldsDeclarer</a> has convenience methods for declaring a single stream without specifying an id. In this case, the stream is given the default id of “default”.</p>
 
 <p><strong>Resources:</strong></p>
 
 <ul>
   <li><a href="/apidocs/backtype/storm/tuple/Tuple.html">Tuple</a>: streams are composed of tuples</li>
   <li><a href="/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html">OutputFieldsDeclarer</a>: used to declare streams and their schemas</li>
-  <li><a href="Serialization.html">Serialization</a>: Information about Storm&#8217;s dynamic typing of tuples and declaring custom serializations</li>
+  <li><a href="Serialization.html">Serialization</a>: Information about Storm’s dynamic typing of tuples and declaring custom serializations</li>
   <li><a href="/apidocs/backtype/storm/serialization/ISerialization.html">ISerialization</a>: custom serializers must implement this interface</li>
   <li><a href="/apidocs/backtype/storm/Config.html#TOPOLOGY_SERIALIZATIONS">CONFIG.TOPOLOGY_SERIALIZATIONS</a>: custom serializers can be registered using this configuration</li>
 </ul>
@@ -131,7 +131,7 @@
 
 <p>Bolts can emit more than one stream. To do so, declare multiple streams using the <code>declareStream</code> method of <a href="/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html">OutputFieldsDeclarer</a> and specify the stream to emit to when using the <code>emit</code> method on <a href="/apidocs/backtype/storm/task/OutputCollector.html">OutputCollector</a>.</p>
 
-<p>When you declare a bolt&#8217;s input streams, you always subscribe to specific streams of another component. If you want to subscribe to all the streams of another component, you have to subscribe to each one individually. <a href="/apidocs/backtype/storm/topology/InputDeclarer.html">InputDeclarer</a> has syntactic sugar for subscribing to streams declared on the default stream id. Saying <code>declarer.shuffleGrouping("1")</code> subscribes to the default stream on component &#8220;1&#8221; and is equivalent to <code>declarer.shuffleGrouping("1", DEFAULT_STREAM_ID)</code>. </p>
+<p>When you declare a bolt’s input streams, you always subscribe to specific streams of another component. If you want to subscribe to all the streams of another component, you have to subscribe to each one individually. <a href="/apidocs/backtype/storm/topology/InputDeclarer.html">InputDeclarer</a> has syntactic sugar for subscribing to streams declared on the default stream id. Saying <code>declarer.shuffleGrouping("1")</code> subscribes to the default stream on component “1” and is equivalent to <code>declarer.shuffleGrouping("1", DEFAULT_STREAM_ID)</code>. </p>
 
 <p>The main method in bolts is the <code>execute</code> method which takes in as input a new tuple. Bolts emit new tuples using the <a href="/apidocs/backtype/storm/task/OutputCollector.html">OutputCollector</a> object. Bolts must call the <code>ack</code> method on the <code>OutputCollector</code> for every tuple they process so that Storm knows when tuples are completed (and can eventually determine that it’s safe to ack the original spout tuples). For the common case of processing an input tuple, emitting 0 or more tuples based on that tuple, and then acking the input tuple, Storm provides an <a href="/apidocs/backtype/storm/topology/IBasicBolt.html">IBasicBolt</a> interface which does the acking automatically.</p>
 
@@ -148,16 +148,16 @@
 
 <h3 id="stream-groupings">Stream groupings</h3>
 
-<p>Part of defining a topology is specifying for each bolt which streams it should receive as input. A stream grouping defines how that stream should be partitioned among the bolt&#8217;s tasks.</p>
+<p>Part of defining a topology is specifying for each bolt which streams it should receive as input. A stream grouping defines how that stream should be partitioned among the bolt’s tasks.</p>
 
 <p>There are seven built-in stream groupings in Storm, and you can implement a custom stream grouping by implementing the <a href="/apidocs/backtype/storm/grouping/CustomStreamGrouping.html">CustomStreamGrouping</a> interface:</p>
 
 <ol>
-  <li><strong>Shuffle grouping</strong>: Tuples are randomly distributed across the bolt&#8217;s tasks in a way such that each bolt is guaranteed to get an equal number of tuples.</li>
-  <li><strong>Fields grouping</strong>: The stream is partitioned by the fields specified in the grouping. For example, if the stream is grouped by the &#8220;user-id&#8221; field, tuples with the same &#8220;user-id&#8221; will always go to the same task, but tuples with different &#8220;user-id&#8221;&#8217;s may go to different tasks.</li>
-  <li><strong>All grouping</strong>: The stream is replicated across all the bolt&#8217;s tasks. Use this grouping with care.</li>
-  <li><strong>Global grouping</strong>: The entire stream goes to a single one of the bolt&#8217;s tasks. Specifically, it goes to the task with the lowest id.</li>
-  <li><strong>None grouping</strong>: This grouping specifies that you don&#8217;t care how the stream is grouped. Currently, none groupings are equivalent to shuffle groupings. Eventually though, Storm will push down bolts with none groupings to execute in the same thread as the bolt or spout they subscribe from (when possible).</li>
+  <li><strong>Shuffle grouping</strong>: Tuples are randomly distributed across the bolt’s tasks in a way such that each bolt is guaranteed to get an equal number of tuples.</li>
+  <li><strong>Fields grouping</strong>: The stream is partitioned by the fields specified in the grouping. For example, if the stream is grouped by the “user-id” field, tuples with the same “user-id” will always go to the same task, but tuples with different “user-id”’s may go to different tasks.</li>
+  <li><strong>All grouping</strong>: The stream is replicated across all the bolt’s tasks. Use this grouping with care.</li>
+  <li><strong>Global grouping</strong>: The entire stream goes to a single one of the bolt’s tasks. Specifically, it goes to the task with the lowest id.</li>
+  <li><strong>None grouping</strong>: This grouping specifies that you don’t care how the stream is grouped. Currently, none groupings are equivalent to shuffle groupings. Eventually though, Storm will push down bolts with none groupings to execute in the same thread as the bolt or spout they subscribe from (when possible).</li>
   <li><strong>Direct grouping</strong>: This is a special kind of grouping. A stream grouped this way means that the <strong>producer</strong> of the tuple decides which task of the consumer will receive this tuple. Direct groupings can only be declared on streams that have been declared as direct streams. Tuples emitted to a direct stream must be emitted using one of the <a href="/apidocs/backtype/storm/task/OutputCollector.html#emitDirect(int, int, java.util.List)">emitDirect</a> methods. A bolt can get the task ids of its consumers by either using the provided <a href="/apidocs/backtype/storm/task/TopologyContext.html">TopologyContext</a> or by keeping track of the output of the <code>emit</code> method in <a href="/apidocs/backtype/storm/task/OutputCollector.html">OutputCollector</a> (which returns the task ids that the tuple was sent to).  </li>
   <li><strong>Local or shuffle grouping</strong>: If the target bolt has one or more tasks in the same worker process, tuples will be shuffled to just those in-process tasks. Otherwise, this acts like a normal shuffle grouping.</li>
 </ol>
@@ -166,15 +166,15 @@
 
 <ul>
   <li><a href="/apidocs/backtype/storm/topology/TopologyBuilder.html">TopologyBuilder</a>: use this class to define topologies</li>
-  <li><a href="/apidocs/backtype/storm/topology/InputDeclarer.html">InputDeclarer</a>: this object is returned whenever <code>setBolt</code> is called on <code>TopologyBuilder</code> and is used for declaring a bolt&#8217;s input streams and how those streams should be grouped</li>
+  <li><a href="/apidocs/backtype/storm/topology/InputDeclarer.html">InputDeclarer</a>: this object is returned whenever <code>setBolt</code> is called on <code>TopologyBuilder</code> and is used for declaring a bolt’s input streams and how those streams should be grouped</li>
   <li><a href="/apidocs/backtype/storm/task/CoordinatedBolt.html">CoordinatedBolt</a>: this bolt is useful for distributed RPC topologies and makes heavy use of direct streams and direct groupings</li>
 </ul>
 
 <h3 id="reliability">Reliability</h3>
 
-<p>Storm guarantees that every spout tuple will be fully processed by the topology. It does this by tracking the tree of tuples triggered by every spout tuple and determining when that tree of tuples has been successfully completed. Every topology has a &#8220;message timeout&#8221; associated with it. If Storm fails to detect that a spout tuple has been completed within that timeout, then it fails the tuple and replays it later. </p>
+<p>Storm guarantees that every spout tuple will be fully processed by the topology. It does this by tracking the tree of tuples triggered by every spout tuple and determining when that tree of tuples has been successfully completed. Every topology has a “message timeout” associated with it. If Storm fails to detect that a spout tuple has been completed within that timeout, then it fails the tuple and replays it later. </p>
 
-<p>To take advantage of Storm&#8217;s reliability capabilities, you must tell Storm when new edges in a tuple tree are being created and tell Storm whenever you&#8217;ve finished processing an individual tuple. These are done using the <a href="/apidocs/backtype/storm/task/OutputCollector.html">OutputCollector</a> object that bolts use to emit tuples. Anchoring is done in the <code>emit</code> method, and you declare that you&#8217;re finished with a tuple using the <code>ack</code> method.</p>
+<p>To take advantage of Storm’s reliability capabilities, you must tell Storm when new edges in a tuple tree are being created and tell Storm whenever you’ve finished processing an individual tuple. These are done using the <a href="/apidocs/backtype/storm/task/OutputCollector.html">OutputCollector</a> object that bolts use to emit tuples. Anchoring is done in the <code>emit</code> method, and you declare that you’re finished with a tuple using the <code>ack</code> method.</p>
 
 <p>This is all explained in much more detail in <a href="Guaranteeing-message-processing.html">Guaranteeing message processing</a>. </p>
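The fields-grouping guarantee described in the Concepts diff above (tuples with the same “user-id” always reach the same task) comes down to hashing the grouping-field values modulo the number of target tasks. Below is a minimal sketch of that invariant; Storm’s actual partitioner differs in detail, and the method name here is illustrative.

```java
import java.util.List;

// Sketch of how a fields grouping routes tuples: hash the grouping-field
// values and take the result modulo the number of target tasks, so equal
// field values always land on the same task. Illustrative only; this is
// not Storm's actual partitioning code.
class FieldsGroupingSketch {
    static int chooseTask(List<?> groupingValues, int numTasks) {
        // floorMod keeps the index non-negative even when hashCode() is negative
        return Math.floorMod(groupingValues.hashCode(), numTasks);
    }

    public static void main(String[] args) {
        int t1 = chooseTask(List.of("user-42"), 8);
        int t2 = chooseTask(List.of("user-42"), 8);
        System.out.println(t1 == t2); // true: same user-id, same task
    }
}
```

This is also why a fields grouping enables per-key in-memory state (like the short-URL cache pattern): all tuples for one key are processed by one task, so its local cache sees every request for that key.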
 

Modified: incubator/storm/site/publish/documentation/Configuration.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Configuration.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Configuration.html (original)
+++ incubator/storm/site/publish/documentation/Configuration.html Tue May 27 18:39:07 2014
@@ -67,15 +67,15 @@
 <div id="aboutcontent">
 <p>Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. </p>
 
-<p>Every configuration has a default value defined in <a href="https://github.com/apache/incubator-storm/blob/master/conf/defaults.yaml">defaults.yaml</a> in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using <a href="/apidocs/backtype/storm/StormSubmitter.html">StormSubmitter</a>. However, the topology-specific configuration can only override configs prefixed with &#8220;TOPOLOGY&#8221;.</p>
+<p>Every configuration has a default value defined in <a href="https://github.com/apache/incubator-storm/blob/master/conf/defaults.yaml">defaults.yaml</a> in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using <a href="/apidocs/backtype/storm/StormSubmitter.html">StormSubmitter</a>. However, the topology-specific configuration can only override configs prefixed with “TOPOLOGY”.</p>
 
 <p>Storm 0.7.0 and onwards lets you override configuration on a per-bolt/per-spout basis. The only configurations that can be overridden this way are:</p>
 
 <ol>
-  <li>&#8220;topology.debug&#8221;</li>
-  <li>&#8220;topology.max.spout.pending&#8221;</li>
-  <li>&#8220;topology.max.task.parallelism&#8221;</li>
-  <li>&#8220;topology.kryo.register&#8221;: This works a little bit differently than the other ones, since the serializations will be available to all components in the topology. More details on <a href="Serialization.html">Serialization</a>. </li>
+  <li>“topology.debug”</li>
+  <li>“topology.max.spout.pending”</li>
+  <li>“topology.max.task.parallelism”</li>
+  <li>“topology.kryo.register”: This works a little bit differently than the other ones, since the serializations will be available to all components in the topology. More details on <a href="Serialization.html">Serialization</a>. </li>
 </ol>
 
 <p>The Java API lets you specify component specific configurations in two ways:</p>
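The config layering described in the Configuration diff above can be pictured as a guarded map merge: storm.yaml overrides defaults.yaml, and the per-topology config may only override “TOPOLOGY”-prefixed keys. The sketch below assumes lowercase yaml-style key names and is illustrative, not Storm’s actual loading code.

```java
import java.util.HashMap;
import java.util.Map;

// Sketch of Storm's config precedence: cluster config (storm.yaml) overrides
// defaults.yaml, and the per-topology config may only override keys with the
// topology prefix. Illustrative only; not the actual loader.
class ConfigMergeSketch {
    static Map<String, Object> merge(Map<String, Object> defaults,
                                     Map<String, Object> clusterYaml,
                                     Map<String, Object> topologyConf) {
        Map<String, Object> merged = new HashMap<>(defaults);
        merged.putAll(clusterYaml); // storm.yaml wins over defaults.yaml
        topologyConf.forEach((k, v) -> {
            if (k.startsWith("topology.")) { // only TOPOLOGY-prefixed keys apply
                merged.put(k, v);
            }
        });
        return merged;
    }

    public static void main(String[] args) {
        Map<String, Object> merged = merge(
                Map.of("topology.debug", false, "nimbus.host", "localhost"),
                Map.of("nimbus.host", "nimbus.example.com"),
                Map.of("topology.debug", true, "nimbus.host", "ignored"));
        // contains topology.debug=true and nimbus.host=nimbus.example.com;
        // the topology's attempt to override nimbus.host is dropped
        System.out.println(merged);
    }
}
```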

Modified: incubator/storm/site/publish/documentation/Contributing-to-Storm.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Contributing-to-Storm.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Contributing-to-Storm.html (original)
+++ incubator/storm/site/publish/documentation/Contributing-to-Storm.html Tue May 27 18:39:07 2014
@@ -67,7 +67,7 @@
 <div id="aboutcontent">
 <h3 id="getting-started-with-contributing">Getting started with contributing</h3>
 
-<p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the &#8220;Newbie&#8221; label. If you&#8217;re interesting in contributing to Storm but don&#8217;t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with learning the codebase because they require learning about only an isolated portion of the codebase and are a relatively small amount of work.</p>
+<p>Some of the issues on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> are marked with the “Newbie” label. If you’re interested in contributing to Storm but don’t know where to begin, these are good issues to start with. These issues are a great way to get your feet wet with the codebase because they require learning about only an isolated portion of it and are a relatively small amount of work.</p>
 
 <h3 id="learning-the-codebase">Learning the codebase</h3>
 
@@ -75,20 +75,20 @@
 
 <h3 id="contribution-process">Contribution process</h3>
 
-<p>Contributions to the Storm codebase should be sent as GitHub pull requests. If there&#8217;s any problems to the pull request we can iterate on it using GitHub&#8217;s commenting features.</p>
+<p>Contributions to the Storm codebase should be sent as GitHub pull requests. If there are any problems with the pull request, we can iterate on it using GitHub’s commenting features.</p>
 
 <p>For small patches, feel free to submit pull requests directly for them. For larger contributions, please use the following process. The idea behind this process is to prevent any wasted work and catch design issues early on:</p>
 
 <ol>
-  <li>Open an issue on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> if one doesn&#8217;t exist already</li>
-  <li>Comment on the issue with your plan for implementing the issue. Explain what pieces of the codebase you&#8217;re going to touch and how everything is going to fit together.</li>
-  <li>Storm committers will iterate with you on the design to make sure you&#8217;re on the right track</li>
+  <li>Open an issue on the <a href="https://issues.apache.org/jira/browse/STORM">issue tracker</a> if one doesn’t exist already</li>
+  <li>Comment on the issue with your plan for implementing the issue. Explain what pieces of the codebase you’re going to touch and how everything is going to fit together.</li>
+  <li>Storm committers will iterate with you on the design to make sure you’re on the right track</li>
   <li>Implement your issue, submit a pull request, and iterate from there.</li>
 </ol>
 
 <h3 id="modules-built-on-top-of-storm">Modules built on top of Storm</h3>
 
-<p>Modules built on top of Storm (like spouts, bolts, etc) that aren&#8217;t appropriate for Storm core can be done as your own project or as part of <a href="https://github.com/stormprocessor">@stormprocessor</a>. To be part of @stormprocessor put your project on your own Github and then send an email to the mailing list proposing to make it part of @stormprocessor. Then the community can discuss whether it&#8217;s useful enough to be part of @stormprocessor. Then you&#8217;ll be added to the @stormprocessor organization and can maintain your project there. The advantage of hosting your module in @stormprocessor is that it will be easier for potential users to find your project.</p>
+<p>Modules built on top of Storm (like spouts, bolts, etc) that aren’t appropriate for Storm core can be done as your own project or as part of <a href="https://github.com/stormprocessor">@stormprocessor</a>. To be part of @stormprocessor, put your project on your own GitHub and then send an email to the mailing list proposing to make it part of @stormprocessor. The community can then discuss whether it’s useful enough to be included; if so, you’ll be added to the @stormprocessor organization and can maintain your project there. The advantage of hosting your module in @stormprocessor is that it will be easier for potential users to find your project.</p>
 
 <h3 id="contributing-documentation">Contributing documentation</h3>
 

Modified: incubator/storm/site/publish/documentation/Creating-a-new-Storm-project.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Creating-a-new-Storm-project.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Creating-a-new-Storm-project.html (original)
+++ incubator/storm/site/publish/documentation/Creating-a-new-Storm-project.html Tue May 27 18:39:07 2014
@@ -76,7 +76,7 @@
 
 <h3 id="add-storm-jars-to-classpath">Add Storm jars to classpath</h3>
 
-<p>You&#8217;ll need the Storm jars on your classpath to develop Storm topologies. Using <a href="Maven.html">Maven</a> is highly recommended. <a href="https://github.com/nathanmarz/storm-starter/blob/master/m2-pom.xml">Here&#8217;s an example</a> of how to setup your pom.xml for a Storm project. If you don&#8217;t want to use Maven, you can include the jars from the Storm release on your classpath. </p>
+<p>You’ll need the Storm jars on your classpath to develop Storm topologies. Using <a href="Maven.html">Maven</a> is highly recommended. <a href="https://github.com/nathanmarz/storm-starter/blob/master/m2-pom.xml">Here’s an example</a> of how to set up your pom.xml for a Storm project. If you don’t want to use Maven, you can include the jars from the Storm release on your classpath.</p>
 
 <p><a href="http://github.com/nathanmarz/storm-starter">storm-starter</a> uses <a href="http://github.com/technomancy/leiningen">Leiningen</a> for build and dependency resolution. You can install leiningen by downloading <a href="https://raw.github.com/technomancy/leiningen/stable/bin/lein">this script</a>, placing it on your path, and making it executable. To retrieve the dependencies for Storm, simply run <code>lein deps</code> in the project root.</p>
 

Modified: incubator/storm/site/publish/documentation/Defining-a-non-jvm-language-dsl-for-storm.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Defining-a-non-jvm-language-dsl-for-storm.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Defining-a-non-jvm-language-dsl-for-storm.html (original)
+++ incubator/storm/site/publish/documentation/Defining-a-non-jvm-language-dsl-for-storm.html Tue May 27 18:39:07 2014
@@ -77,9 +77,9 @@ union ComponentObject {
 }
 </code></p>
 
-<p>For a Python DSL, you would want to make use of &#8220;2&#8221; and &#8220;3&#8221;. ShellComponent lets you specify a script to run that component (e.g., your python code). And JavaObject lets you specify native java spouts and bolts for the component (and Storm will use reflection to create that spout or bolt).</p>
+<p>For a Python DSL, you would want to make use of “2” and “3”. ShellComponent lets you specify a script to run that component (e.g., your python code). And JavaObject lets you specify native java spouts and bolts for the component (and Storm will use reflection to create that spout or bolt).</p>
 
-<p>There&#8217;s a &#8220;storm shell&#8221; command that will help with submitting a topology. Its usage is like this:</p>
+<p>There’s a “storm shell” command that will help with submitting a topology. Its usage is like this:</p>
 
 <p><code>
 storm shell resources/ python topology.py arg1 arg2
@@ -91,7 +91,7 @@ storm shell resources/ python topology.p
 python topology.py arg1 arg2 {nimbus-host} {nimbus-port} {uploaded-jar-location}
 </code></p>
 
-<p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here&#8217;s the submitTopology definition:</p>
+<p>Then you can connect to Nimbus using the Thrift API and submit the topology, passing {uploaded-jar-location} into the submitTopology method. For reference, here’s the submitTopology definition:</p>
 
 <p><code>java
 void submitTopology(1: string name, 2: string uploadedJarLocation, 3: string jsonConf, 4: StormTopology topology) throws (1: AlreadyAliveException e, 2: InvalidTopologyException ite);

Modified: incubator/storm/site/publish/documentation/Distributed-RPC.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Distributed-RPC.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Distributed-RPC.html (original)
+++ incubator/storm/site/publish/documentation/Distributed-RPC.html Tue May 27 18:39:07 2014
@@ -67,11 +67,11 @@
 <div id="aboutcontent">
 <p>The idea behind distributed RPC (DRPC) is to parallelize the computation of really intense functions on the fly using Storm. The Storm topology takes in as input a stream of function arguments, and it emits an output stream of the results for each of those function calls. </p>
 
-<p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm&#8217;s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it&#8217;s so useful that it&#8217;s bundled with Storm.</p>
+<p>DRPC is not so much a feature of Storm as it is a pattern expressed from Storm’s primitives of streams, spouts, bolts, and topologies. DRPC could have been packaged as a separate library from Storm, but it’s so useful that it’s bundled with Storm.</p>
 
 <h3 id="high-level-overview">High level overview</h3>
 
-<p>Distributed RPC is coordinated by a &#8220;DRPC server&#8221; (Storm comes packaged with an implementation of this). The DRPC server coordinates receiving an RPC request, sending the request to the Storm topology, receiving the results from the Storm topology, and sending the results back to the waiting client. From a client&#8217;s perspective, a distributed RPC call looks just like a regular RPC call. For example, here&#8217;s how a client would compute the results for the &#8220;reach&#8221; function with the argument &#8220;http://twitter.com&#8221;:</p>
+<p>Distributed RPC is coordinated by a “DRPC server” (Storm comes packaged with an implementation of this). The DRPC server coordinates receiving an RPC request, sending the request to the Storm topology, receiving the results from the Storm topology, and sending the results back to the waiting client. From a client’s perspective, a distributed RPC call looks just like a regular RPC call. For example, here’s how a client would compute the results for the “reach” function with the argument “http://twitter.com”:</p>
 
 <p><code>java
 DRPCClient client = new DRPCClient("drpc-host", 3772);
@@ -94,13 +94,13 @@ String result = client.execute("reach", 
   <li>Providing functionality to bolts for doing finite aggregations over groups of tuples</li>
 </ol>
 
-<p>Let&#8217;s look at a simple example. Here&#8217;s the implementation of a DRPC topology that returns its input argument with a &#8220;!&#8221; appended:</p>
+<p>Let’s look at a simple example. Here’s the implementation of a DRPC topology that returns its input argument with a “!” appended:</p>
 
 <p>```java
 public static class ExclaimBolt extends BaseBasicBolt {
     public void execute(Tuple tuple, BasicOutputCollector collector) {
         String input = tuple.getString(1);
-        collector.emit(new Values(tuple.getValue(0), input + &#8220;!&#8221;));
+        collector.emit(new Values(tuple.getValue(0), input + "!"));
     }</p>
 
 <pre><code>public void declareOutputFields(OutputFieldsDeclarer declarer) {
@@ -109,27 +109,27 @@ public static class ExclaimBolt extends 
 </code></pre>
 
 <p>public static void main(String[] args) throws Exception {
-    LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder(&#8220;exclamation&#8221;);
+    LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("exclamation");
     builder.addBolt(new ExclaimBolt(), 3);
-    // &#8230;
+    // …
 }
 ```</p>
 
-<p>As you can see, there&#8217;s very little to it. When creating the <code>LinearDRPCTopologyBuilder</code>, you tell it the name of the DRPC function for the topology. A single DRPC server can coordinate many functions, and the function name distinguishes the functions from one another. The first bolt you declare will take in as input 2-tuples, where the first field is the request id and the second field is the arguments for that request. <code>LinearDRPCTopologyBuilder</code> expects the last bolt to emit an output stream containing 2-tuples of the form [id, result]. Finally, all intermediate tuples must contain the request id as the first field.</p>
+<p>As you can see, there’s very little to it. When creating the <code>LinearDRPCTopologyBuilder</code>, you tell it the name of the DRPC function for the topology. A single DRPC server can coordinate many functions, and the function name distinguishes the functions from one another. The first bolt you declare will take in as input 2-tuples, where the first field is the request id and the second field is the arguments for that request. <code>LinearDRPCTopologyBuilder</code> expects the last bolt to emit an output stream containing 2-tuples of the form [id, result]. Finally, all intermediate tuples must contain the request id as the first field.</p>
 
-<p>In this example, <code>ExclaimBolt</code> simply appends a &#8220;!&#8221; to the second field of the tuple. <code>LinearDRPCTopologyBuilder</code> handles the rest of the coordination of connecting to the DRPC server and sending results back.</p>
+<p>In this example, <code>ExclaimBolt</code> simply appends a “!” to the second field of the tuple. <code>LinearDRPCTopologyBuilder</code> handles the rest of the coordination of connecting to the DRPC server and sending results back.</p>
 
 <h3 id="local-mode-drpc">Local mode DRPC</h3>
 
-<p>DRPC can be run in local mode. Here&#8217;s how to run the above example in local mode:</p>
+<p>DRPC can be run in local mode. Here’s how to run the above example in local mode:</p>
 
 <p>```java
 LocalDRPC drpc = new LocalDRPC();
 LocalCluster cluster = new LocalCluster();</p>
 
-<p>cluster.submitTopology(&#8220;drpc-demo&#8221;, conf, builder.createLocalTopology(drpc));</p>
+<p>cluster.submitTopology("drpc-demo", conf, builder.createLocalTopology(drpc));</p>
 
-<p>System.out.println(&#8220;Results for &#8216;hello&#8217;:&#8221; + drpc.execute(&#8220;exclamation&#8221;, &#8220;hello&#8221;));</p>
+<p>System.out.println("Results for 'hello':" + drpc.execute("exclamation", "hello"));</p>
 
 <p>cluster.shutdown();
 drpc.shutdown();
@@ -141,7 +141,7 @@ drpc.shutdown();
 
 <h3 id="remote-mode-drpc">Remote mode DRPC</h3>
 
-<p>Using DRPC on an actual cluster is also straightforward. There&#8217;s three steps:</p>
+<p>Using DRPC on an actual cluster is also straightforward. There are three steps:</p>
 
 <ol>
   <li>Launch DRPC server(s)</li>
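One of the remote-mode steps is telling the cluster where the DRPC servers run, which is done in storm.yaml via the drpc.servers key. A hedged sketch — the hostnames below are placeholders:

```yaml
# Hypothetical storm.yaml fragment pointing workers at the DRPC servers.
# Hostnames are illustrative placeholders.
drpc.servers:
  - "drpc1.example.com"
  - "drpc2.example.com"
```

With this in place, topologies submitted with builder.createRemoteTopology will locate the DRPC servers from the cluster configuration rather than needing them hard-coded.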
@@ -173,7 +173,7 @@ StormSubmitter.submitTopology("exclamati
 
 <h3 id="a-more-complex-example">A more complex example</h3>
 
-<p>The exclamation DRPC example was a toy example for illustrating the concepts of DRPC. Let&#8217;s look at a more complex example which really needs the parallelism a Storm cluster provides for computing the DRPC function. The example we&#8217;ll look at is computing the reach of a URL on Twitter.</p>
+<p>The exclamation DRPC example was a toy example for illustrating the concepts of DRPC. Let’s look at a more complex example which really needs the parallelism a Storm cluster provides for computing the DRPC function. The example we’ll look at is computing the reach of a URL on Twitter.</p>
 
 <p>The reach of a URL is the number of unique people exposed to a URL on Twitter. To compute reach, you need to:</p>
 
@@ -184,9 +184,9 @@ StormSubmitter.submitTopology("exclamati
   <li>Count the unique set of followers</li>
 </ol>
 
-<p>A single reach computation can involve thousands of database calls and tens of millions of follower records during the computation. It&#8217;s a really, really intense computation. As you&#8217;re about to see, implementing this function on top of Storm is dead simple. On a single machine, reach can take minutes to compute; on a Storm cluster, you can compute reach for even the hardest URLs in a couple seconds.</p>
+<p>A single reach computation can involve thousands of database calls and tens of millions of follower records during the computation. It’s a really, really intense computation. As you’re about to see, implementing this function on top of Storm is dead simple. On a single machine, reach can take minutes to compute; on a Storm cluster, you can compute reach for even the hardest URLs in a couple seconds.</p>
 
-<p>A sample reach topology is defined in storm-starter <a href="https://github.com/nathanmarz/storm-starter/blob/master/src/jvm/storm/starter/ReachTopology.java">here</a>. Here&#8217;s how you define the reach topology:</p>
+<p>A sample reach topology is defined in storm-starter <a href="https://github.com/nathanmarz/storm-starter/blob/master/src/jvm/storm/starter/ReachTopology.java">here</a>. Here’s how you define the reach topology:</p>
 
 <p><code>java
 LinearDRPCTopologyBuilder builder = new LinearDRPCTopologyBuilder("reach");
@@ -208,7 +208,7 @@ builder.addBolt(new CountAggregator(), 2
   <li>Finally, <code>CountAggregator</code> receives the partial counts from each of the <code>PartialUniquer</code> tasks and sums them up to complete the reach computation.</li>
 </ol>
 
-<p>Let&#8217;s take a look at the <code>PartialUniquer</code> bolt:</p>
+<p>Let’s take a look at the <code>PartialUniquer</code> bolt:</p>
 
 <p>```java
 public class PartialUniquer extends BaseBatchBolt {
@@ -250,7 +250,7 @@ public void declareOutputFields(OutputFi
 
 <h3 id="non-linear-drpc-topologies">Non-linear DRPC topologies</h3>
 
-<p><code>LinearDRPCTopologyBuilder</code> only handles &#8220;linear&#8221; DRPC topologies, where the computation is expressed as a sequence of steps (like reach). It&#8217;s not hard to imagine functions that would require a more complicated topology with branching and merging of the bolts. For now, to do this you&#8217;ll need to drop down into using <code>CoordinatedBolt</code> directly. Be sure to talk about your use case for non-linear DRPC topologies on the mailing list to inform the construction of more general abstractions for DRPC topologies.</p>
+<p><code>LinearDRPCTopologyBuilder</code> only handles “linear” DRPC topologies, where the computation is expressed as a sequence of steps (like reach). It’s not hard to imagine functions that would require a more complicated topology with branching and merging of the bolts. For now, to do this you’ll need to drop down into using <code>CoordinatedBolt</code> directly. Be sure to talk about your use case for non-linear DRPC topologies on the mailing list to inform the construction of more general abstractions for DRPC topologies.</p>
 
 <h3 id="how-lineardrpctopologybuilder-works">How LinearDRPCTopologyBuilder works</h3>
 
@@ -265,7 +265,7 @@ public void declareOutputFields(OutputFi
       <li>ReturnResult (connects to the DRPC server and returns the result)</li>
     </ul>
   </li>
-  <li>LinearDRPCTopologyBuilder is a good example of a higher level abstraction built on top of Storm&#8217;s primitives</li>
+  <li>LinearDRPCTopologyBuilder is a good example of a higher level abstraction built on top of Storm’s primitives</li>
 </ul>
 
 <h3 id="advanced">Advanced</h3>

Modified: incubator/storm/site/publish/documentation/Documentation.html
URL: http://svn.apache.org/viewvc/incubator/storm/site/publish/documentation/Documentation.html?rev=1597847&r1=1597846&r2=1597847&view=diff
==============================================================================
--- incubator/storm/site/publish/documentation/Documentation.html (original)
+++ incubator/storm/site/publish/documentation/Documentation.html Tue May 27 18:39:07 2014
@@ -80,13 +80,13 @@
 
 <h3 id="trident">Trident</h3>
 
-<p>Trident is an alternative interface to Storm. It provides exactly-once processing, &#8220;transactional&#8221; datastore persistence, and a set of common stream analytics operations.</p>
+<p>Trident is an alternative interface to Storm. It provides exactly-once processing, “transactional” datastore persistence, and a set of common stream analytics operations.</p>
 
 <ul>
-  <li><a href="Trident-tutorial.html">Trident Tutorial</a>     &#8211; basic concepts and walkthrough</li>
-  <li><a href="Trident-API-Overview.html">Trident API Overview</a> &#8211; operations for transforming and orchestrating data</li>
-  <li><a href="Trident-state.html">Trident State</a>        &#8211; exactly-once processing and fast, persistent aggregation</li>
-  <li><a href="Trident-spouts.html">Trident spouts</a>       &#8211; transactional and non-transactional data intake</li>
+  <li><a href="Trident-tutorial.html">Trident Tutorial</a>     – basic concepts and walkthrough</li>
+  <li><a href="Trident-API-Overview.html">Trident API Overview</a> – operations for transforming and orchestrating data</li>
+  <li><a href="Trident-state.html">Trident State</a>        – exactly-once processing and fast, persistent aggregation</li>
+  <li><a href="Trident-spouts.html">Trident spouts</a>       – transactional and non-transactional data intake</li>
 </ul>
 
 <h3 id="setup-and-deploying">Setup and deploying</h3>