Posted to commits@storm.apache.org by bo...@apache.org on 2015/03/11 21:54:05 UTC

[1/3] storm git commit: [STORM-669] Replace links with ones to latest api document

Repository: storm
Updated Branches:
  refs/heads/master 8c39e4c49 -> 43fe5135a


[STORM-669] Replace links with ones to latest api document


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/2be8acd6
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/2be8acd6
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/2be8acd6

Branch: refs/heads/master
Commit: 2be8acd69b6bd89425539231b4b79a10e831e2d7
Parents: 8036109
Author: lewuathe <le...@me.com>
Authored: Tue Feb 10 22:40:21 2015 +0900
Committer: lewuathe <le...@me.com>
Committed: Tue Feb 10 22:40:21 2015 +0900

----------------------------------------------------------------------
 docs/documentation/Clojure-DSL.md               |  4 +-
 docs/documentation/Command-line-client.md       |  2 +-
 docs/documentation/Common-patterns.md           |  6 +--
 docs/documentation/Concepts.md                  | 48 ++++++++++----------
 docs/documentation/Configuration.md             |  4 +-
 docs/documentation/Distributed-RPC.md           |  2 +-
 .../Guaranteeing-message-processing.md          |  6 +--
 docs/documentation/Hooks.md                     |  6 +--
 docs/documentation/Local-mode.md                |  4 +-
 ...unning-topologies-on-a-production-cluster.md |  6 +--
 .../Serialization-(prior-to-0.6.0).md           |  4 +-
 docs/documentation/Serialization.md             |  2 +-
 docs/documentation/Structure-of-the-codebase.md |  8 ++--
 docs/documentation/Transactional-topologies.md  | 18 ++++----
 docs/documentation/Tutorial.md                  |  8 ++--
 ...nding-the-parallelism-of-a-Storm-topology.md | 16 +++----
 16 files changed, 72 insertions(+), 72 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Clojure-DSL.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Clojure-DSL.md b/docs/documentation/Clojure-DSL.md
index 5c8b65f..fcbbce4 100644
--- a/docs/documentation/Clojure-DSL.md
+++ b/docs/documentation/Clojure-DSL.md
@@ -38,11 +38,11 @@ The maps of spout and bolt specs are maps from the component id to the correspon
 
 #### spout-spec
 
-`spout-spec` takes as arguments the spout implementation (an object that implements [IRichSpout](/apidocs/backtype/storm/topology/IRichSpout.html)) and optional keyword arguments. The only option that exists currently is the `:p` option, which specifies the parallelism for the spout. If you omit `:p`, the spout will execute as a single task.
+`spout-spec` takes as arguments the spout implementation (an object that implements [IRichSpout](/javadoc/apidocs/backtype/storm/topology/IRichSpout.html)) and optional keyword arguments. The only option that exists currently is the `:p` option, which specifies the parallelism for the spout. If you omit `:p`, the spout will execute as a single task.
 
 #### bolt-spec
 
-`bolt-spec` takes as arguments the input declaration for the bolt, the bolt implementation (an object that implements [IRichBolt](/apidocs/backtype/storm/topology/IRichBolt.html)), and optional keyword arguments.
+`bolt-spec` takes as arguments the input declaration for the bolt, the bolt implementation (an object that implements [IRichBolt](/javadoc/apidocs/backtype/storm/topology/IRichBolt.html)), and optional keyword arguments.
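For readers mapping this onto the Java API: the `:p` option corresponds to the parallelism hint passed to `TopologyBuilder`. A rough sketch of the equivalent Java wiring (`MySpout` and `MyBolt` are hypothetical implementations):

```java
import backtype.storm.topology.TopologyBuilder;

// Roughly equivalent to (spout-spec my-spout :p 3) and
// (bolt-spec {"sentences" :shuffle} my-bolt :p 5) in the Clojure DSL.
TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("sentences", new MySpout(), 3);   // :p 3
builder.setBolt("splitter", new MyBolt(), 5)       // :p 5
       .shuffleGrouping("sentences");              // {"sentences" :shuffle}
```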
 
 The input declaration is a map from stream ids to stream groupings. A stream id can have one of two forms:
 

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Command-line-client.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Command-line-client.md b/docs/documentation/Command-line-client.md
index fb6a77a..67928a5 100644
--- a/docs/documentation/Command-line-client.md
+++ b/docs/documentation/Command-line-client.md
@@ -25,7 +25,7 @@ These commands are:
 
 Syntax: `storm jar topology-jar-path class ...`
 
-Runs the main method of `class` with the specified arguments. The storm jars and configs in `~/.storm` are put on the classpath. The process is configured so that [StormSubmitter](/apidocs/backtype/storm/StormSubmitter.html) will upload the jar at `topology-jar-path` when the topology is submitted.
+Runs the main method of `class` with the specified arguments. The storm jars and configs in `~/.storm` are put on the classpath. The process is configured so that [StormSubmitter](/javadoc/apidocs/backtype/storm/StormSubmitter.html) will upload the jar at `topology-jar-path` when the topology is submitted.
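In other words, `class` is expected to submit the topology itself. A minimal sketch of such a main class (class and topology names hypothetical):

```java
import backtype.storm.Config;
import backtype.storm.StormSubmitter;
import backtype.storm.topology.TopologyBuilder;

// Hypothetical entry point, run via:
//   storm jar my-topology.jar com.example.Main my-topology-name
public class Main {
    public static void main(String[] args) throws Exception {
        TopologyBuilder builder = new TopologyBuilder();
        // ... setSpout / setBolt calls ...
        StormSubmitter.submitTopology(args[0], new Config(), builder.createTopology());
    }
}
```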
 
 ### kill
 

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Common-patterns.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Common-patterns.md b/docs/documentation/Common-patterns.md
index c11f5e9..7b15cfb 100644
--- a/docs/documentation/Common-patterns.md
+++ b/docs/documentation/Common-patterns.md
@@ -39,7 +39,7 @@ If you want reliability in your data processing, the right way to do this is to
 If the bolt emits tuples, then you may want to use multi-anchoring to ensure reliability. It all depends on the specific application. See [Guaranteeing message processing](Guaranteeing-message-processing.html) for more details on how reliability works.
 
 ### BasicBolt
-Many bolts follow a similar pattern of reading an input tuple, emitting zero or more tuples based on that input tuple, and then acking that input tuple immediately at the end of the execute method. Bolts that match this pattern are things like functions and filters. This is such a common pattern that Storm exposes an interface called [IBasicBolt](/apidocs/backtype/storm/topology/IBasicBolt.html) that automates this pattern for you. See [Guaranteeing message processing](Guaranteeing-message-processing.html) for more information.
+Many bolts follow a similar pattern of reading an input tuple, emitting zero or more tuples based on that input tuple, and then acking that input tuple immediately at the end of the execute method. Bolts that match this pattern are things like functions and filters. This is such a common pattern that Storm exposes an interface called [IBasicBolt](/javadoc/apidocs/backtype/storm/topology/IBasicBolt.html) that automates this pattern for you. See [Guaranteeing message processing](Guaranteeing-message-processing.html) for more information.
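As a sketch of the pattern `IBasicBolt` automates -- read a tuple, emit zero or more tuples, auto-ack -- here is a hypothetical filter bolt built on the `BaseBasicBolt` base class:

```java
import backtype.storm.topology.BasicOutputCollector;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseBasicBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;

// Hypothetical filter: passes through non-empty words.
public class NonEmptyFilterBolt extends BaseBasicBolt {
    @Override
    public void execute(Tuple input, BasicOutputCollector collector) {
        String word = input.getString(0);
        if (word != null && !word.isEmpty()) {
            collector.emit(new Values(word)); // emitted tuples are auto-anchored
        }
        // no explicit ack needed -- the basic-bolt executor acks the input tuple
    }

    @Override
    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
```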
 
 ### In-memory caching + fields grouping combo
 
@@ -87,11 +87,11 @@ The topology needs an extra layer of processing to aggregate the partial counts
 
 ### TimeCacheMap for efficiently keeping a cache of things that have been recently updated
 
-You sometimes want to keep a cache in memory of items that have been recently "active" and have items that have been inactive for some time be automatically expires. [TimeCacheMap](/apidocs/backtype/storm/utils/TimeCacheMap.html) is an efficient data structure for doing this and provides hooks so you can insert callbacks whenever an item is expired.
+You sometimes want to keep an in-memory cache of items that have been recently "active", with items that have been inactive for some time expiring automatically. [TimeCacheMap](/javadoc/apidocs/backtype/storm/utils/TimeCacheMap.html) is an efficient data structure for doing this and provides hooks so you can insert callbacks whenever an item is expired.
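A sketch of how this might look (constructor and callback shapes as assumed here; consult the linked Javadoc for the exact signatures in your Storm version):

```java
import backtype.storm.utils.TimeCacheMap;

// Entries expire roughly 60 seconds after their last update; the callback
// fires once per expired entry.
TimeCacheMap<String, Long> recentlyActive = new TimeCacheMap<String, Long>(
    60,
    new TimeCacheMap.ExpiredCallback<String, Long>() {
        public void expire(String key, Long lastSeen) {
            System.out.println("Expired: " + key);
        }
    });

recentlyActive.put("user-42", System.currentTimeMillis());
```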
 
 ### CoordinatedBolt and KeyedFairBolt for Distributed RPC
 
-When building distributed RPC applications on top of Storm, there are two common patterns that are usually needed. These are encapsulated by [CoordinatedBolt](/apidocs/backtype/storm/task/CoordinatedBolt.html) and [KeyedFairBolt](/apidocs/backtype/storm/task/KeyedFairBolt.html) which are part of the "standard library" that ships with the Storm codebase.
+When building distributed RPC applications on top of Storm, there are two common patterns that are usually needed. These are encapsulated by [CoordinatedBolt](/javadoc/apidocs/backtype/storm/task/CoordinatedBolt.html) and [KeyedFairBolt](/javadoc/apidocs/backtype/storm/task/KeyedFairBolt.html) which are part of the "standard library" that ships with the Storm codebase.
 
 `CoordinatedBolt` wraps the bolt containing your logic and figures out when your bolt has received all the tuples for any given request. It makes heavy use of direct streams to do this.
 

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Concepts.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Concepts.md b/docs/documentation/Concepts.md
index 827bb3a..9c02560 100644
--- a/docs/documentation/Concepts.md
+++ b/docs/documentation/Concepts.md
@@ -21,7 +21,7 @@ The logic for a realtime application is packaged into a Storm topology. A Storm
 
 **Resources:**
 
-* [TopologyBuilder](/apidocs/backtype/storm/topology/TopologyBuilder.html): use this class to construct topologies in Java
+* [TopologyBuilder](/javadoc/apidocs/backtype/storm/topology/TopologyBuilder.html): use this class to construct topologies in Java
 * [Running topologies on a production cluster](Running-topologies-on-a-production-cluster.html)
 * [Local mode](Local-mode.html): Read this to learn how to develop and test topologies in local mode.
 
@@ -29,30 +29,30 @@ The logic for a realtime application is packaged into a Storm topology. A Storm
 
 The stream is the core abstraction in Storm. A stream is an unbounded sequence of tuples that is processed and created in parallel in a distributed fashion. Streams are defined with a schema that names the fields in the stream's tuples. By default, tuples can contain integers, longs, shorts, bytes, strings, doubles, floats, booleans, and byte arrays. You can also define your own serializers so that custom types can be used natively within tuples.
 
-Every stream is given an id when declared. Since single-stream spouts and bolts are so common, [OutputFieldsDeclarer](/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html) has convenience methods for declaring a single stream without specifying an id. In this case, the stream is given the default id of "default".
+Every stream is given an id when declared. Since single-stream spouts and bolts are so common, [OutputFieldsDeclarer](/javadoc/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html) has convenience methods for declaring a single stream without specifying an id. In this case, the stream is given the default id of "default".
 
 
 **Resources:**
 
-* [Tuple](/apidocs/backtype/storm/tuple/Tuple.html): streams are composed of tuples
-* [OutputFieldsDeclarer](/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html): used to declare streams and their schemas
+* [Tuple](/javadoc/apidocs/backtype/storm/tuple/Tuple.html): streams are composed of tuples
+* [OutputFieldsDeclarer](/javadoc/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html): used to declare streams and their schemas
 * [Serialization](Serialization.html): Information about Storm's dynamic typing of tuples and declaring custom serializations
-* [ISerialization](/apidocs/backtype/storm/serialization/ISerialization.html): custom serializers must implement this interface
-* [CONFIG.TOPOLOGY_SERIALIZATIONS](/apidocs/backtype/storm/Config.html#TOPOLOGY_SERIALIZATIONS): custom serializers can be registered using this configuration
+* [ISerialization](/javadoc/apidocs/backtype/storm/serialization/ISerialization.html): custom serializers must implement this interface
+* [CONFIG.TOPOLOGY_SERIALIZATIONS](/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_SERIALIZATIONS): custom serializers can be registered using this configuration
 
 ### Spouts
 
 A spout is a source of streams in a topology. Generally spouts will read tuples from an external source and emit them into the topology (e.g. a Kestrel queue or the Twitter API). Spouts can either be __reliable__ or __unreliable__. A reliable spout is capable of replaying a tuple if it failed to be processed by Storm, whereas an unreliable spout forgets about the tuple as soon as it is emitted.
 
-Spouts can emit more than one stream. To do so, declare multiple streams using the `declareStream` method of [OutputFieldsDeclarer](/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html) and specify the stream to emit to when using the `emit` method on [SpoutOutputCollector](/apidocs/backtype/storm/spout/SpoutOutputCollector.html). 
+Spouts can emit more than one stream. To do so, declare multiple streams using the `declareStream` method of [OutputFieldsDeclarer](/javadoc/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html) and specify the stream to emit to when using the `emit` method on [SpoutOutputCollector](/javadoc/apidocs/backtype/storm/spout/SpoutOutputCollector.html).
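A sketch of a multi-stream spout (stream and field names hypothetical):

```java
import backtype.storm.spout.SpoutOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichSpout;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Values;
import java.util.Map;

// Hypothetical spout that splits its output across two streams.
public class TwoStreamSpout extends BaseRichSpout {
    private SpoutOutputCollector collector;

    public void open(Map conf, TopologyContext context, SpoutOutputCollector collector) {
        this.collector = collector;
    }

    public void nextTuple() {
        // Choose the target stream per emit; must not block.
        collector.emit("events", new Values("some-event"));
        collector.emit("errors", new Values("some-error"));
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declareStream("events", new Fields("event"));
        declarer.declareStream("errors", new Fields("error"));
    }
}
```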
 
 The main method on spouts is `nextTuple`. `nextTuple` either emits a new tuple into the topology or simply returns if there are no new tuples to emit. It is imperative that `nextTuple` does not block for any spout implementation, because Storm calls all the spout methods on the same thread.
 
-The other main methods on spouts are `ack` and `fail`. These are called when Storm detects that a tuple emitted from the spout either successfully completed through the topology or failed to be completed. `ack` and `fail` are only called for reliable spouts. See [the Javadoc](/apidocs/backtype/storm/spout/ISpout.html) for more information.
+The other main methods on spouts are `ack` and `fail`. These are called when Storm detects that a tuple emitted from the spout either successfully completed through the topology or failed to be completed. `ack` and `fail` are only called for reliable spouts. See [the Javadoc](/javadoc/apidocs/backtype/storm/spout/ISpout.html) for more information.
 
 **Resources:**
 
-* [IRichSpout](/apidocs/backtype/storm/topology/IRichSpout.html): this is the interface that spouts must implement. 
+* [IRichSpout](/javadoc/apidocs/backtype/storm/topology/IRichSpout.html): this is the interface that spouts must implement.
 * [Guaranteeing message processing](Guaranteeing-message-processing.html)
 
 ### Bolts
@@ -61,26 +61,26 @@ All processing in topologies is done in bolts. Bolts can do anything from filter
 
 Bolts can do simple stream transformations. Doing complex stream transformations often requires multiple steps and thus multiple bolts. For example, transforming a stream of tweets into a stream of trending images requires at least two steps: a bolt to do a rolling count of retweets for each image, and one or more bolts to stream out the top X images (you can do this particular stream transformation in a more scalable way with three bolts than with two). 
 
-Bolts can emit more than one stream. To do so, declare multiple streams using the `declareStream` method of [OutputFieldsDeclarer](/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html) and specify the stream to emit to when using the `emit` method on [OutputCollector](/apidocs/backtype/storm/task/OutputCollector.html).
+Bolts can emit more than one stream. To do so, declare multiple streams using the `declareStream` method of [OutputFieldsDeclarer](/javadoc/apidocs/backtype/storm/topology/OutputFieldsDeclarer.html) and specify the stream to emit to when using the `emit` method on [OutputCollector](/javadoc/apidocs/backtype/storm/task/OutputCollector.html).
 
-When you declare a bolt's input streams, you always subscribe to specific streams of another component. If you want to subscribe to all the streams of another component, you have to subscribe to each one individually. [InputDeclarer](/apidocs/backtype/storm/topology/InputDeclarer.html) has syntactic sugar for subscribing to streams declared on the default stream id. Saying `declarer.shuffleGrouping("1")` subscribes to the default stream on component "1" and is equivalent to `declarer.shuffleGrouping("1", DEFAULT_STREAM_ID)`. 
+When you declare a bolt's input streams, you always subscribe to specific streams of another component. If you want to subscribe to all the streams of another component, you have to subscribe to each one individually. [InputDeclarer](/javadoc/apidocs/backtype/storm/topology/InputDeclarer.html) has syntactic sugar for subscribing to streams declared on the default stream id. Saying `declarer.shuffleGrouping("1")` subscribes to the default stream on component "1" and is equivalent to `declarer.shuffleGrouping("1", DEFAULT_STREAM_ID)`.
 
-The main method in bolts is the `execute` method which takes in as input a new tuple. Bolts emit new tuples using the [OutputCollector](/apidocs/backtype/storm/task/OutputCollector.html) object. Bolts must call the `ack` method on the `OutputCollector` for every tuple they process so that Storm knows when tuples are completed (and can eventually determine that its safe to ack the original spout tuples). For the common case of processing an input tuple, emitting 0 or more tuples based on that tuple, and then acking the input tuple, Storm provides an [IBasicBolt](/apidocs/backtype/storm/topology/IBasicBolt.html) interface which does the acking automatically.
+The main method in bolts is the `execute` method, which takes a new tuple as input. Bolts emit new tuples using the [OutputCollector](/javadoc/apidocs/backtype/storm/task/OutputCollector.html) object. Bolts must call the `ack` method on the `OutputCollector` for every tuple they process so that Storm knows when tuples are completed (and can eventually determine that it's safe to ack the original spout tuples). For the common case of processing an input tuple, emitting 0 or more tuples based on that tuple, and then acking the input tuple, Storm provides an [IBasicBolt](/javadoc/apidocs/backtype/storm/topology/IBasicBolt.html) interface which does the acking automatically.
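A sketch of the anchor-then-ack pattern in a plain rich bolt (the bolt itself is hypothetical):

```java
import backtype.storm.task.OutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseRichBolt;
import backtype.storm.tuple.Fields;
import backtype.storm.tuple.Tuple;
import backtype.storm.tuple.Values;
import java.util.Map;

// Hypothetical bolt showing the anchor-then-ack pattern described above.
public class UppercaseBolt extends BaseRichBolt {
    private OutputCollector collector;

    public void prepare(Map conf, TopologyContext context, OutputCollector collector) {
        this.collector = collector;
    }

    public void execute(Tuple input) {
        // Anchor the new tuple to the input so the tuple tree is tracked...
        collector.emit(input, new Values(input.getString(0).toUpperCase()));
        // ...and ack the input so Storm knows this step is done.
        collector.ack(input);
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        declarer.declare(new Fields("word"));
    }
}
```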
 
-Its perfectly fine to launch new threads in bolts that do processing asynchronously. [OutputCollector](/apidocs/backtype/storm/task/OutputCollector.html) is thread-safe and can be called at any time.
+It's perfectly fine to launch new threads in bolts that do processing asynchronously. [OutputCollector](/javadoc/apidocs/backtype/storm/task/OutputCollector.html) is thread-safe and can be called at any time.
 
 **Resources:**
 
-* [IRichBolt](/apidocs/backtype/storm/topology/IRichBolt.html): this is general interface for bolts.
-* [IBasicBolt](/apidocs/backtype/storm/topology/IBasicBolt.html): this is a convenience interface for defining bolts that do filtering or simple functions.
-* [OutputCollector](/apidocs/backtype/storm/task/OutputCollector.html): bolts emit tuples to their output streams using an instance of this class
+* [IRichBolt](/javadoc/apidocs/backtype/storm/topology/IRichBolt.html): this is the general interface for bolts.
+* [IBasicBolt](/javadoc/apidocs/backtype/storm/topology/IBasicBolt.html): this is a convenience interface for defining bolts that do filtering or simple functions.
+* [OutputCollector](/javadoc/apidocs/backtype/storm/task/OutputCollector.html): bolts emit tuples to their output streams using an instance of this class
 * [Guaranteeing message processing](Guaranteeing-message-processing.html)
 
 ### Stream groupings
 
 Part of defining a topology is specifying for each bolt which streams it should receive as input. A stream grouping defines how that stream should be partitioned among the bolt's tasks.
 
-There are seven built-in stream groupings in Storm, and you can implement a custom stream grouping by implementing the [CustomStreamGrouping](/apidocs/backtype/storm/grouping/CustomStreamGrouping.html) interface:
+There are seven built-in stream groupings in Storm, and you can implement a custom stream grouping by implementing the [CustomStreamGrouping](/javadoc/apidocs/backtype/storm/grouping/CustomStreamGrouping.html) interface:
 
 1. **Shuffle grouping**: Tuples are randomly distributed across the bolt's tasks in a way such that each bolt is guaranteed to get an equal number of tuples.
 2. **Fields grouping**: The stream is partitioned by the fields specified in the grouping. For example, if the stream is grouped by the "user-id" field, tuples with the same "user-id" will always go to the same task, but tuples with different "user-id"'s may go to different tasks.
@@ -88,26 +88,26 @@ There are seven built-in stream groupings in Storm, and you can implement a cust
 4. **All grouping**: The stream is replicated across all the bolt's tasks. Use this grouping with care.
 5. **Global grouping**: The entire stream goes to a single one of the bolt's tasks. Specifically, it goes to the task with the lowest id.
 6. **None grouping**: This grouping specifies that you don't care how the stream is grouped. Currently, none groupings are equivalent to shuffle groupings. Eventually though, Storm will push down bolts with none groupings to execute in the same thread as the bolt or spout they subscribe from (when possible).
-7. **Direct grouping**: This is a special kind of grouping. A stream grouped this way means that the __producer__ of the tuple decides which task of the consumer will receive this tuple. Direct groupings can only be declared on streams that have been declared as direct streams. Tuples emitted to a direct stream must be emitted using one of the [emitDirect](/apidocs/backtype/storm/task/OutputCollector.html#emitDirect(int, int, java.util.List) methods. A bolt can get the task ids of its consumers by either using the provided [TopologyContext](/apidocs/backtype/storm/task/TopologyContext.html) or by keeping track of the output of the `emit` method in [OutputCollector](/apidocs/backtype/storm/task/OutputCollector.html) (which returns the task ids that the tuple was sent to).  
+7. **Direct grouping**: This is a special kind of grouping. A stream grouped this way means that the __producer__ of the tuple decides which task of the consumer will receive this tuple. Direct groupings can only be declared on streams that have been declared as direct streams. Tuples emitted to a direct stream must be emitted using one of the [emitDirect](/javadoc/apidocs/backtype/storm/task/OutputCollector.html#emitDirect(int, int, java.util.List)) methods. A bolt can get the task ids of its consumers by either using the provided [TopologyContext](/javadoc/apidocs/backtype/storm/task/TopologyContext.html) or by keeping track of the output of the `emit` method in [OutputCollector](/javadoc/apidocs/backtype/storm/task/OutputCollector.html) (which returns the task ids that the tuple was sent to).
 8. **Local or shuffle grouping**: If the target bolt has one or more tasks in the same worker process, tuples will be shuffled to just those in-process tasks. Otherwise, this acts like a normal shuffle grouping.
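Groupings are declared on the `InputDeclarer` returned by `setBolt`; a sketch using two of the groupings above (component names and spout/bolt classes hypothetical):

```java
import backtype.storm.topology.TopologyBuilder;
import backtype.storm.tuple.Fields;

TopologyBuilder builder = new TopologyBuilder();
builder.setSpout("words", new WordSpout());
builder.setBolt("counter", new CountBolt(), 8)
       .fieldsGrouping("words", new Fields("user-id")); // fields grouping (2)
builder.setBolt("merger", new MergeBolt())
       .globalGrouping("counter");                      // global grouping (5)
```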
 
 **Resources:**
 
-* [TopologyBuilder](/apidocs/backtype/storm/topology/TopologyBuilder.html): use this class to define topologies
-* [InputDeclarer](/apidocs/backtype/storm/topology/InputDeclarer.html): this object is returned whenever `setBolt` is called on `TopologyBuilder` and is used for declaring a bolt's input streams and how those streams should be grouped
-* [CoordinatedBolt](/apidocs/backtype/storm/task/CoordinatedBolt.html): this bolt is useful for distributed RPC topologies and makes heavy use of direct streams and direct groupings
+* [TopologyBuilder](/javadoc/apidocs/backtype/storm/topology/TopologyBuilder.html): use this class to define topologies
+* [InputDeclarer](/javadoc/apidocs/backtype/storm/topology/InputDeclarer.html): this object is returned whenever `setBolt` is called on `TopologyBuilder` and is used for declaring a bolt's input streams and how those streams should be grouped
+* [CoordinatedBolt](/javadoc/apidocs/backtype/storm/task/CoordinatedBolt.html): this bolt is useful for distributed RPC topologies and makes heavy use of direct streams and direct groupings
 
 ### Reliability
 
 Storm guarantees that every spout tuple will be fully processed by the topology. It does this by tracking the tree of tuples triggered by every spout tuple and determining when that tree of tuples has been successfully completed. Every topology has a "message timeout" associated with it. If Storm fails to detect that a spout tuple has been completed within that timeout, then it fails the tuple and replays it later. 
 
-To take advantage of Storm's reliability capabilities, you must tell Storm when new edges in a tuple tree are being created and tell Storm whenever you've finished processing an individual tuple. These are done using the [OutputCollector](/apidocs/backtype/storm/task/OutputCollector.html) object that bolts use to emit tuples. Anchoring is done in the `emit` method, and you declare that you're finished with a tuple using the `ack` method.
+To take advantage of Storm's reliability capabilities, you must tell Storm when new edges in a tuple tree are being created and tell Storm whenever you've finished processing an individual tuple. These are done using the [OutputCollector](/javadoc/apidocs/backtype/storm/task/OutputCollector.html) object that bolts use to emit tuples. Anchoring is done in the `emit` method, and you declare that you're finished with a tuple using the `ack` method.
 
 This is all explained in much more detail in [Guaranteeing message processing](Guaranteeing-message-processing.html). 
 
 ### Tasks
 
-Each spout or bolt executes as many tasks across the cluster. Each task corresponds to one thread of execution, and stream groupings define how to send tuples from one set of tasks to another set of tasks. You set the parallelism for each spout or bolt in the `setSpout` and `setBolt` methods of [TopologyBuilder](/apidocs/backtype/storm/topology/TopologyBuilder.html). 
+Each spout or bolt executes as one or more tasks across the cluster. Each task corresponds to one thread of execution, and stream groupings define how to send tuples from one set of tasks to another set of tasks. You set the parallelism for each spout or bolt in the `setSpout` and `setBolt` methods of [TopologyBuilder](/javadoc/apidocs/backtype/storm/topology/TopologyBuilder.html).
 
 ### Workers
 
@@ -115,4 +115,4 @@ Topologies execute across one or more worker processes. Each worker process is a
 
 **Resources:**
 
-* [Config.TOPOLOGY_WORKERS](/apidocs/backtype/storm/Config.html#TOPOLOGY_WORKERS): this config sets the number of workers to allocate for executing the topology
+* [Config.TOPOLOGY_WORKERS](/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_WORKERS): this config sets the number of workers to allocate for executing the topology

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Configuration.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Configuration.md b/docs/documentation/Configuration.md
index 01c136e..6d85aa1 100644
--- a/docs/documentation/Configuration.md
+++ b/docs/documentation/Configuration.md
@@ -5,7 +5,7 @@ documentation: true
 ---
 Storm has a variety of configurations for tweaking the behavior of nimbus, supervisors, and running topologies. Some configurations are system configurations and cannot be modified on a topology by topology basis, whereas other configurations can be modified per topology. 
 
-Every configuration has a default value defined in [defaults.yaml](https://github.com/apache/storm/blob/master/conf/defaults.yaml) in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using [StormSubmitter](/apidocs/backtype/storm/StormSubmitter.html). However, the topology-specific configuration can only override configs prefixed with "TOPOLOGY".
+Every configuration has a default value defined in [defaults.yaml](https://github.com/apache/storm/blob/master/conf/defaults.yaml) in the Storm codebase. You can override these configurations by defining a storm.yaml in the classpath of Nimbus and the supervisors. Finally, you can define a topology-specific configuration that you submit along with your topology when using [StormSubmitter](/javadoc/apidocs/backtype/storm/StormSubmitter.html). However, the topology-specific configuration can only override configs prefixed with "TOPOLOGY".
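A sketch of such a topology-specific configuration (recall that only TOPOLOGY-prefixed settings can be overridden this way):

```java
import backtype.storm.Config;

Config conf = new Config();
conf.setNumWorkers(4);                  // Config.TOPOLOGY_WORKERS
conf.put(Config.TOPOLOGY_DEBUG, true);  // a raw put on the config map works too
```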
 
 Storm 0.7.0 and onwards lets you override configuration on a per-bolt/per-spout basis. The only configurations that can be overriden this way are:
 
@@ -24,7 +24,7 @@ The preference order for configuration values is defaults.yaml < storm.yaml < to
 
 **Resources:**
 
-* [Config](/apidocs/backtype/storm/Config.html): a listing of all configurations as well as a helper class for creating topology specific configurations
+* [Config](/javadoc/apidocs/backtype/storm/Config.html): a listing of all configurations as well as a helper class for creating topology specific configurations
 * [defaults.yaml](https://github.com/apache/storm/blob/master/conf/defaults.yaml): the default values for all configurations
 * [Setting up a Storm cluster](Setting-up-a-Storm-cluster.html): explains how to create and configure a Storm cluster
 * [Running topologies on a production cluster](Running-topologies-on-a-production-cluster.html): lists useful configurations when running topologies on a cluster

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Distributed-RPC.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Distributed-RPC.md b/docs/documentation/Distributed-RPC.md
index b4707e3..484178b 100644
--- a/docs/documentation/Distributed-RPC.md
+++ b/docs/documentation/Distributed-RPC.md
@@ -24,7 +24,7 @@ A client sends the DRPC server the name of the function to execute and the argum
 
 ### LinearDRPCTopologyBuilder
 
-Storm comes with a topology builder called [LinearDRPCTopologyBuilder](/apidocs/backtype/storm/drpc/LinearDRPCTopologyBuilder.html) that automates almost all the steps involved for doing DRPC. These include:
+Storm comes with a topology builder called [LinearDRPCTopologyBuilder](/javadoc/apidocs/backtype/storm/drpc/LinearDRPCTopologyBuilder.html) that automates almost all the steps involved for doing DRPC. These include:
 
 1. Setting up the spout
 2. Returning the results to the DRPC server

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Guaranteeing-message-processing.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Guaranteeing-message-processing.md b/docs/documentation/Guaranteeing-message-processing.md
index ed623d5..4ecc620 100644
--- a/docs/documentation/Guaranteeing-message-processing.md
+++ b/docs/documentation/Guaranteeing-message-processing.md
@@ -25,11 +25,11 @@ This topology reads sentences off of a Kestrel queue, splits the sentences into
 
 ![Tuple tree](images/tuple_tree.png)
 
-Storm considers a tuple coming off a spout "fully processed" when the tuple tree has been exhausted and every message in the tree has been processed. A tuple is considered failed when its tree of messages fails to be fully processed within a specified timeout. This timeout can be configured on a topology-specific basis using the [Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS](/apidocs/backtype/storm/Config.html#TOPOLOGY_MESSAGE_TIMEOUT_SECS) configuration and defaults to 30 seconds.
+Storm considers a tuple coming off a spout "fully processed" when the tuple tree has been exhausted and every message in the tree has been processed. A tuple is considered failed when its tree of messages fails to be fully processed within a specified timeout. This timeout can be configured on a topology-specific basis using the [Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS](/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_MESSAGE_TIMEOUT_SECS) configuration and defaults to 30 seconds.
 
 ### What happens if a message is fully processed or fails to be fully processed?
 
-To understand this question, let's take a look at the lifecycle of a tuple coming off of a spout. For reference, here is the interface that spouts implement (see the [Javadoc](/apidocs/backtype/storm/spout/ISpout.html) for more information):
+To understand this question, let's take a look at the lifecycle of a tuple coming off of a spout. For reference, here is the interface that spouts implement (see the [Javadoc](/javadoc/apidocs/backtype/storm/spout/ISpout.html) for more information):
 
 ```java
 public interface ISpout extends Serializable {
@@ -136,7 +136,7 @@ As always in software design, the answer is "it depends." Storm 0.7.0 introduced
 
 ### How does Storm implement reliability in an efficient way?
 
-A Storm topology has a set of special "acker" tasks that track the DAG of tuples for every spout tuple. When an acker sees that a DAG is complete, it sends a message to the spout task that created the spout tuple to ack the message. You can set the number of acker tasks for a topology in the topology configuration using [Config.TOPOLOGY_ACKERS](/apidocs/backtype/storm/Config.html#TOPOLOGY_ACKERS). Storm defaults TOPOLOGY_ACKERS to one task -- you will need to increase this number for topologies processing large amounts of messages. 
+A Storm topology has a set of special "acker" tasks that track the DAG of tuples for every spout tuple. When an acker sees that a DAG is complete, it sends a message to the spout task that created the spout tuple to ack the message. You can set the number of acker tasks for a topology in the topology configuration using [Config.TOPOLOGY_ACKERS](/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_ACKERS). Storm defaults TOPOLOGY_ACKERS to one task -- you will need to increase this number for topologies processing large amounts of messages.
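Both the acker count and the message timeout mentioned earlier are set on the topology config; a sketch:

```java
import backtype.storm.Config;

Config conf = new Config();
conf.setNumAckers(4);             // Config.TOPOLOGY_ACKERS, defaults to 1
conf.setMessageTimeoutSecs(60);   // Config.TOPOLOGY_MESSAGE_TIMEOUT_SECS, defaults to 30
```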
 
 The best way to understand Storm's reliability implementation is to look at the lifecycle of tuples and tuple DAGs. When a tuple is created in a topology, whether in a spout or a bolt, it is given a random 64 bit id. These ids are used by ackers to track the tuple DAG for every spout tuple.
 

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Hooks.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Hooks.md b/docs/documentation/Hooks.md
index 19c3e98..e923348 100644
--- a/docs/documentation/Hooks.md
+++ b/docs/documentation/Hooks.md
@@ -3,7 +3,7 @@ title: Hooks
 layout: documentation
 documentation: true
 ---
-Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the [BaseTaskHook](/apidocs/backtype/storm/hooks/BaseTaskHook.html) class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:
+Storm provides hooks with which you can insert custom code to run on any number of events within Storm. You create a hook by extending the [BaseTaskHook](/javadoc/apidocs/backtype/storm/hooks/BaseTaskHook.html) class and overriding the appropriate method for the event you want to catch. There are two ways to register your hook:
 
-1. In the open method of your spout or prepare method of your bolt using the [TopologyContext#addTaskHook](/apidocs/backtype/storm/task/TopologyContext.html) method.
-2. Through the Storm configuration using the ["topology.auto.task.hooks"](/apidocs/backtype/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS) config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.
\ No newline at end of file
+1. In the open method of your spout or prepare method of your bolt using the [TopologyContext#addTaskHook](/javadoc/apidocs/backtype/storm/task/TopologyContext.html) method.
+2. Through the Storm configuration using the ["topology.auto.task.hooks"](/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_AUTO_TASK_HOOKS) config. These hooks are automatically registered in every spout or bolt, and are useful for doing things like integrating with a custom monitoring system.
\ No newline at end of file
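A sketch of a hook registered the first way (the hook and its use of the latency field are assumptions; check the `backtype.storm.hooks.info` Javadoc for the exact info-object fields):

```java
import backtype.storm.hooks.BaseTaskHook;
import backtype.storm.hooks.info.BoltAckInfo;

// Hypothetical hook that logs the processing latency of each acked tuple.
public class AckLatencyHook extends BaseTaskHook {
    @Override
    public void boltAck(BoltAckInfo info) {
        System.out.println("acked after " + info.processLatencyMs + " ms");
    }
}

// Registered from a bolt's prepare (or a spout's open) method:
//   context.addTaskHook(new AckLatencyHook());
```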

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Local-mode.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Local-mode.md b/docs/documentation/Local-mode.md
index bb8c7de..5ffd365 100644
--- a/docs/documentation/Local-mode.md
+++ b/docs/documentation/Local-mode.md
@@ -13,7 +13,7 @@ import backtype.storm.LocalCluster;
 LocalCluster cluster = new LocalCluster();
 ```
 
-You can then submit topologies using the `submitTopology` method on the `LocalCluster` object. Just like the corresponding method on [StormSubmitter](/apidocs/backtype/storm/StormSubmitter.html), `submitTopology` takes a name, a topology configuration, and the topology object. You can then kill a topology using the `killTopology` method which takes the topology name as an argument.
+You can then submit topologies using the `submitTopology` method on the `LocalCluster` object. Just like the corresponding method on [StormSubmitter](/javadoc/apidocs/backtype/storm/StormSubmitter.html), `submitTopology` takes a name, a topology configuration, and the topology object. You can then kill a topology using the `killTopology` method which takes the topology name as an argument.
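Putting the pieces together, a minimal local-mode run might look like this (topology construction elided; `builder` is assumed to be an already configured `TopologyBuilder`):

```java
import backtype.storm.Config;
import backtype.storm.LocalCluster;
import backtype.storm.utils.Utils;

Config conf = new Config();
conf.setDebug(true);

LocalCluster cluster = new LocalCluster();
cluster.submitTopology("test", conf, builder.createTopology());
Utils.sleep(10000);            // let the topology run for ten seconds
cluster.killTopology("test");
cluster.shutdown();
```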
 
 To shut down a local cluster, simply call:
 
@@ -23,7 +23,7 @@ cluster.shutdown();
 
 ### Common configurations for local mode
 
-You can see a full list of configurations [here](/apidocs/backtype/storm/Config.html).
+You can see a full list of configurations [here](/javadoc/apidocs/backtype/storm/Config.html).
 
 1. **Config.TOPOLOGY_MAX_TASK_PARALLELISM**: This config puts a ceiling on the number of threads spawned for a single component. Oftentimes production topologies have a lot of parallelism (hundreds of threads) which places unreasonable load when trying to test the topology in local mode. This config lets you easily control that parallelism.
 2. **Config.TOPOLOGY_DEBUG**: When this is set to true, Storm will log a message every time a tuple is emitted from any spout or bolt. This is extremely useful for debugging.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Running-topologies-on-a-production-cluster.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Running-topologies-on-a-production-cluster.md b/docs/documentation/Running-topologies-on-a-production-cluster.md
index 47fabb4..ebba452 100644
--- a/docs/documentation/Running-topologies-on-a-production-cluster.md
+++ b/docs/documentation/Running-topologies-on-a-production-cluster.md
@@ -5,9 +5,9 @@ documentation: true
 ---
 Running topologies on a production cluster is similar to running in [Local mode](Local-mode.html). Here are the steps:
 
-1) Define the topology (Use [TopologyBuilder](/apidocs/backtype/storm/topology/TopologyBuilder.html) if defining using Java)
+1) Define the topology (Use [TopologyBuilder](/javadoc/apidocs/backtype/storm/topology/TopologyBuilder.html) if defining using Java)
 
-2) Use [StormSubmitter](/apidocs/backtype/storm/StormSubmitter.html) to submit the topology to the cluster. `StormSubmitter` takes as input the name of the topology, a configuration for the topology, and the topology itself. For example:
+2) Use [StormSubmitter](/javadoc/apidocs/backtype/storm/StormSubmitter.html) to submit the topology to the cluster. `StormSubmitter` takes as input the name of the topology, a configuration for the topology, and the topology itself. For example:
 
 ```java
 Config conf = new Config();
@@ -47,7 +47,7 @@ You can find out how to configure your `storm` client to talk to a Storm cluster
 
 ### Common configurations
 
-There are a variety of configurations you can set per topology. A list of all the configurations you can set can be found [here](/apidocs/backtype/storm/Config.html). The ones prefixed with "TOPOLOGY" can be overridden on a topology-specific basis (the other ones are cluster configurations and cannot be overridden). Here are some common ones that are set for a topology:
+There are a variety of configurations you can set per topology. A list of all the configurations you can set can be found [here](/javadoc/apidocs/backtype/storm/Config.html). The ones prefixed with "TOPOLOGY" can be overridden on a topology-specific basis (the other ones are cluster configurations and cannot be overridden). Here are some common ones that are set for a topology:
 
 1. **Config.TOPOLOGY_WORKERS**: This sets the number of worker processes to use to execute the topology. For example, if you set this to 25, there will be 25 Java processes across the cluster executing all the tasks. If you had a combined 150 parallelism across all components in the topology, each worker process will have 6 tasks running within it as threads.
 2. **Config.TOPOLOGY_ACKERS**: This sets the number of tasks that will track tuple trees and detect when a spout tuple has been fully processed. Ackers are an integral part of Storm's reliability model and you can read more about them on [Guaranteeing message processing](Guaranteeing-message-processing.html).

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Serialization-(prior-to-0.6.0).md
----------------------------------------------------------------------
diff --git a/docs/documentation/Serialization-(prior-to-0.6.0).md b/docs/documentation/Serialization-(prior-to-0.6.0).md
index 71037a8..9ef2fdf 100644
--- a/docs/documentation/Serialization-(prior-to-0.6.0).md
+++ b/docs/documentation/Serialization-(prior-to-0.6.0).md
@@ -21,7 +21,7 @@ Let's dive into Storm's API for defining custom serializations. There are two st
 
 #### Creating a serializer
 
-Custom serializers implement the [ISerialization](/apidocs/backtype/storm/serialization/ISerialization.html) interface. Implementations specify how to serialize and deserialize types into a binary format.
+Custom serializers implement the [ISerialization](/javadoc/apidocs/backtype/storm/serialization/ISerialization.html) interface. Implementations specify how to serialize and deserialize types into a binary format.
 
 The interface looks like this:
 
@@ -47,6 +47,6 @@ Once you create a serializer, you need to tell Storm it exists. This is done thr
 
 Serializer registrations are done through the Config.TOPOLOGY_SERIALIZATIONS config, which is simply a list of serialization class names.
 
-Storm provides helpers for registering serializers in a topology config. The [Config](/apidocs/backtype/storm/Config.html) class has a method called `addSerialization` that takes in a serializer class to add to the config.
+Storm provides helpers for registering serializers in a topology config. The [Config](/javadoc/apidocs/backtype/storm/Config.html) class has a method called `addSerialization` that takes in a serializer class to add to the config.
 
 There's an advanced config called Config.TOPOLOGY_SKIP_MISSING_SERIALIZATIONS. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can't find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the `storm.yaml` files.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Serialization.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Serialization.md b/docs/documentation/Serialization.md
index e185b6d..fb86161 100644
--- a/docs/documentation/Serialization.md
+++ b/docs/documentation/Serialization.md
@@ -41,7 +41,7 @@ topology.kryo.register:
 
 `com.mycompany.CustomType1` and `com.mycompany.CustomType3` will use the `FieldsSerializer`, whereas `com.mycompany.CustomType2` will use `com.mycompany.serializer.CustomType2Serializer` for serialization.
 
-Storm provides helpers for registering serializers in a topology config. The [Config](/apidocs/backtype/storm/Config.html) class has a method called `registerSerialization` that takes in a registration to add to the config.
+Storm provides helpers for registering serializers in a topology config. The [Config](/javadoc/apidocs/backtype/storm/Config.html) class has a method called `registerSerialization` that takes in a registration to add to the config.
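The programmatic equivalent of the YAML registrations above might look like this (the `com.mycompany` types are the doc's own placeholders):

```java
import backtype.storm.Config;
import com.mycompany.CustomType1;
import com.mycompany.CustomType2;
import com.mycompany.serializer.CustomType2Serializer;

Config conf = new Config();
conf.registerSerialization(CustomType1.class);                              // uses FieldsSerializer
conf.registerSerialization(CustomType2.class, CustomType2Serializer.class); // custom serializer
```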
 
 There's an advanced config called `Config.TOPOLOGY_SKIP_MISSING_KRYO_REGISTRATIONS`. If you set this to true, Storm will ignore any serializations that are registered but do not have their code available on the classpath. Otherwise, Storm will throw errors when it can't find a serialization. This is useful if you run many topologies on a cluster that each have different serializations, but you want to declare all the serializations across all topologies in the `storm.yaml` files.
 

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Structure-of-the-codebase.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Structure-of-the-codebase.md b/docs/documentation/Structure-of-the-codebase.md
index 44fafe7..2980dd9 100644
--- a/docs/documentation/Structure-of-the-codebase.md
+++ b/docs/documentation/Structure-of-the-codebase.md
@@ -42,16 +42,16 @@ Note that the structure spouts also have a `ComponentCommon` field, and so spout
 
 The interfaces for Storm are generally specified as Java interfaces. The main interfaces are:
 
-1. [IRichBolt](/apidocs/backtype/storm/topology/IRichBolt.html)
-2. [IRichSpout](/apidocs/backtype/storm/topology/IRichSpout.html)
-3. [TopologyBuilder](/apidocs/backtype/storm/topology/TopologyBuilder.html)
+1. [IRichBolt](/javadoc/apidocs/backtype/storm/topology/IRichBolt.html)
+2. [IRichSpout](/javadoc/apidocs/backtype/storm/topology/IRichSpout.html)
+3. [TopologyBuilder](/javadoc/apidocs/backtype/storm/topology/TopologyBuilder.html)
 
 The strategy for the majority of the interfaces is to:
 
 1. Specify the interface using a Java interface
 2. Provide a base class that provides default implementations when appropriate
 
-You can see this strategy at work with the [BaseRichSpout](/apidocs/backtype/storm/topology/base/BaseRichSpout.html) class. 
+You can see this strategy at work with the [BaseRichSpout](/javadoc/apidocs/backtype/storm/topology/base/BaseRichSpout.html) class.
 
 Spouts and bolts are serialized into the Thrift definition of the topology as described above. 
 

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Transactional-topologies.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Transactional-topologies.md b/docs/documentation/Transactional-topologies.md
index 9eb6f7c..8c999e7 100644
--- a/docs/documentation/Transactional-topologies.md
+++ b/docs/documentation/Transactional-topologies.md
@@ -81,7 +81,7 @@ Finally, another thing to note is that transactional topologies require a source
 
 ## The basics through example
 
-You build transactional topologies by using [TransactionalTopologyBuilder](/apidocs/backtype/storm/transactional/TransactionalTopologyBuilder.html). Here's the transactional topology definition for a topology that computes the global count of tuples from the input stream. This code comes from [TransactionalGlobalCount](https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/TransactionalGlobalCount.java) in storm-starter.
+You build transactional topologies by using [TransactionalTopologyBuilder](/javadoc/apidocs/backtype/storm/transactional/TransactionalTopologyBuilder.html). Here's the transactional topology definition for a topology that computes the global count of tuples from the input stream. This code comes from [TransactionalGlobalCount](https://github.com/apache/storm/blob/master/examples/storm-starter/src/jvm/storm/starter/TransactionalGlobalCount.java) in storm-starter.
 
 ```java
 MemoryTransactionalSpout spout = new MemoryTransactionalSpout(DATA, new Fields("word"), PARTITION_TAKE_PER_BATCH);
@@ -132,7 +132,7 @@ public static class BatchCount extends BaseBatchBolt {
 
 A new instance of this object is created for every batch that's being processed. The actual bolt this runs within is called [BatchBoltExecutor](https://github.com/apache/storm/blob/0.7.0/src/jvm/backtype/storm/coordination/BatchBoltExecutor.java) and manages the creation and cleanup for these objects.
 
-The `prepare` method parameterizes this batch bolt with the Storm config, the topology context, an output collector, and the id for this batch of tuples. In the case of transactional topologies, the id will be a [TransactionAttempt](/apidocs/backtype/storm/transactional/TransactionAttempt.html) object. The batch bolt abstraction can be used in Distributed RPC as well which uses a different type of id for the batches. `BatchBolt` can actually be parameterized with the type of the id, so if you only intend to use the batch bolt for transactional topologies, you can extend `BaseTransactionalBolt` which has this definition:
+The `prepare` method parameterizes this batch bolt with the Storm config, the topology context, an output collector, and the id for this batch of tuples. In the case of transactional topologies, the id will be a [TransactionAttempt](/javadoc/apidocs/backtype/storm/transactional/TransactionAttempt.html) object. The batch bolt abstraction can be used in Distributed RPC as well, which uses a different type of id for the batches. `BatchBolt` can actually be parameterized with the type of the id, so if you only intend to use the batch bolt for transactional topologies, you can extend `BaseTransactionalBolt` which has this definition:
 
 ```java
 public abstract class BaseTransactionalBolt extends BaseBatchBolt<TransactionAttempt> {
@@ -211,9 +211,9 @@ This section outlines the different pieces of the transactional topology API.
 
 There are three kinds of bolts possible in a transactional topology:
 
-1. [BasicBolt](/apidocs/backtype/storm/topology/base/BaseBasicBolt.html): This bolt doesn't deal with batches of tuples and just emits tuples based on a single tuple of input.
-2. [BatchBolt](/apidocs/backtype/storm/topology/base/BaseBatchBolt.html): This bolt processes batches of tuples. `execute` is called for each tuple, and `finishBatch` is called when the batch is complete.
-3. BatchBolt's that are marked as committers: The only difference between this bolt and a regular batch bolt is when `finishBatch` is called. A committer bolt has `finishedBatch` called during the commit phase. The commit phase is guaranteed to occur only after all prior batches have successfully committed, and it will be retried until all bolts in the topology succeed the commit for the batch. There are two ways to make a `BatchBolt` a committer, by having the `BatchBolt` implement the [ICommitter](/apidocs/backtype/storm/transactional/ICommitter.html) marker interface, or by using the `setCommiterBolt` method in `TransactionalTopologyBuilder`.
+1. [BasicBolt](/javadoc/apidocs/backtype/storm/topology/base/BaseBasicBolt.html): This bolt doesn't deal with batches of tuples and just emits tuples based on a single tuple of input.
+2. [BatchBolt](/javadoc/apidocs/backtype/storm/topology/base/BaseBatchBolt.html): This bolt processes batches of tuples. `execute` is called for each tuple, and `finishBatch` is called when the batch is complete.
+3. BatchBolts that are marked as committers: The only difference between this bolt and a regular batch bolt is when `finishBatch` is called. A committer bolt has `finishBatch` called during the commit phase. The commit phase is guaranteed to occur only after all prior batches have successfully committed, and it will be retried until all bolts in the topology succeed the commit for the batch. There are two ways to make a `BatchBolt` a committer: by having the `BatchBolt` implement the [ICommitter](/javadoc/apidocs/backtype/storm/transactional/ICommitter.html) marker interface, or by using the `setCommitterBolt` method in `TransactionalTopologyBuilder`.
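A sketch of a committer using the marker-interface route (the bolt and its storage logic are hypothetical):

```java
import backtype.storm.coordination.BatchOutputCollector;
import backtype.storm.task.TopologyContext;
import backtype.storm.topology.OutputFieldsDeclarer;
import backtype.storm.topology.base.BaseTransactionalBolt;
import backtype.storm.transactional.ICommitter;
import backtype.storm.transactional.TransactionAttempt;
import backtype.storm.tuple.Tuple;
import java.util.Map;

// Hypothetical committer: counts tuples in the batch, then writes the count
// idempotently during the commit phase.
public class CommitCountBolt extends BaseTransactionalBolt implements ICommitter {
    TransactionAttempt attempt;
    int count = 0;

    public void prepare(Map conf, TopologyContext context,
                        BatchOutputCollector collector, TransactionAttempt attempt) {
        this.attempt = attempt;
    }

    public void execute(Tuple tuple) {
        count++;
    }

    public void finishBatch() {
        // Runs in the commit phase: store count together with
        // attempt.getTransactionId() idempotently, e.g. skip the write
        // if this txid has already been stored.
    }

    public void declareOutputFields(OutputFieldsDeclarer declarer) {
        // no output streams in this sketch
    }
}
```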
 
 #### Processing phase vs. commit phase in bolts
 
@@ -237,7 +237,7 @@ Notice that you don't have to do any acking or anchoring when working with trans
 
 #### Failing a transaction
 
-When using regular bolts, you can call the `fail` method on `OutputCollector` to fail the tuple trees of which that tuple is a member. Since transactional topologies hide the acking framework from you, they provide a different mechanism to fail a batch (and cause the batch to be replayed). Just throw a [FailedException](/apidocs/backtype/storm/topology/FailedException.html). Unlike regular exceptions, this will only cause that particular batch to replay and will not crash the process.
+When using regular bolts, you can call the `fail` method on `OutputCollector` to fail the tuple trees of which that tuple is a member. Since transactional topologies hide the acking framework from you, they provide a different mechanism to fail a batch (and cause the batch to be replayed). Just throw a [FailedException](/javadoc/apidocs/backtype/storm/topology/FailedException.html). Unlike regular exceptions, this will only cause that particular batch to replay and will not crash the process.
 
 ### Transactional spout
 
@@ -251,11 +251,11 @@ The coordinator on the left is a regular Storm spout that emits a tuple whenever
 
 The need to be idempotent with respect to the tuples it emits requires a `TransactionalSpout` to store a small amount of state. The state is stored in Zookeeper.
 
-The details of implementing a `TransactionalSpout` are in [the Javadoc](/apidocs/backtype/storm/transactional/ITransactionalSpout.html).
+The details of implementing a `TransactionalSpout` are in [the Javadoc](/javadoc/apidocs/backtype/storm/transactional/ITransactionalSpout.html).
 
 #### Partitioned Transactional Spout
 
-A common kind of transactional spout is one that reads the batches from a set of partitions across many queue brokers. For example, this is how [TransactionalKafkaSpout](https://github.com/apache/storm/tree/master/external/storm-kafka/src/jvm/storm/kafka/TransactionalKafkaSpout.java) works. An `IPartitionedTransactionalSpout` automates the bookkeeping work of managing the state for each partition to ensure idempotent replayability. See [the Javadoc](/apidocs/backtype/storm/transactional/partitioned/IPartitionedTransactionalSpout.html) for more details.
+A common kind of transactional spout is one that reads the batches from a set of partitions across many queue brokers. For example, this is how [TransactionalKafkaSpout](https://github.com/apache/storm/tree/master/external/storm-kafka/src/jvm/storm/kafka/TransactionalKafkaSpout.java) works. An `IPartitionedTransactionalSpout` automates the bookkeeping work of managing the state for each partition to ensure idempotent replayability. See [the Javadoc](/javadoc/apidocs/backtype/storm/transactional/partitioned/IPartitionedTransactionalSpout.html) for more details.
 
 ### Configuration
 
@@ -325,7 +325,7 @@ In this scenario, tuples 41-50 are skipped. By failing all subsequent transactio
 
 By failing all subsequent transactions on failure, no tuples are skipped. This also shows that a requirement of transactional spouts is that they always emit where the last transaction left off.
 
-A non-idempotent transactional spout is more concisely referred to as an "OpaqueTransactionalSpout" (opaque is the opposite of idempotent). [IOpaquePartitionedTransactionalSpout](/apidocs/backtype/storm/transactional/partitioned/IOpaquePartitionedTransactionalSpout.html) is an interface for implementing opaque partitioned transactional spouts, of which [OpaqueTransactionalKafkaSpout](https://github.com/apache/storm/tree/master/external/storm-kafka/src/jvm/storm/kafka/OpaqueTransactionalKafkaSpout.java) is an example. `OpaqueTransactionalKafkaSpout` can withstand losing individual Kafka nodes without sacrificing accuracy as long as you use the update strategy as explained in this section.
+A non-idempotent transactional spout is more concisely referred to as an "OpaqueTransactionalSpout" (opaque is the opposite of idempotent). [IOpaquePartitionedTransactionalSpout](/javadoc/apidocs/backtype/storm/transactional/partitioned/IOpaquePartitionedTransactionalSpout.html) is an interface for implementing opaque partitioned transactional spouts, of which [OpaqueTransactionalKafkaSpout](https://github.com/apache/storm/tree/master/external/storm-kafka/src/jvm/storm/kafka/OpaqueTransactionalKafkaSpout.java) is an example. `OpaqueTransactionalKafkaSpout` can withstand losing individual Kafka nodes without sacrificing accuracy as long as you use the update strategy as explained in this section.
 
 ## Implementation
 

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Tutorial.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Tutorial.md b/docs/documentation/Tutorial.md
index b7c4569..3e1c016 100644
--- a/docs/documentation/Tutorial.md
+++ b/docs/documentation/Tutorial.md
@@ -103,11 +103,11 @@ This topology contains a spout and two bolts. The spout emits words, and each bo
 
 This code defines the nodes using the `setSpout` and `setBolt` methods. These methods take as input a user-specified id, an object containing the processing logic, and the amount of parallelism you want for the node. In this example, the spout is given id "words" and the bolts are given ids "exclaim1" and "exclaim2". 
 
-The object containing the processing logic implements the [IRichSpout](/apidocs/backtype/storm/topology/IRichSpout.html) interface for spouts and the [IRichBolt](/apidocs/backtype/storm/topology/IRichBolt.html) interface for bolts. 
+The object containing the processing logic implements the [IRichSpout](/javadoc/apidocs/backtype/storm/topology/IRichSpout.html) interface for spouts and the [IRichBolt](/javadoc/apidocs/backtype/storm/topology/IRichBolt.html) interface for bolts.
 
 The last parameter, how much parallelism you want for the node, is optional. It indicates how many threads should execute that component across the cluster. If you omit it, Storm will only allocate one thread for that node.
 
-`setBolt` returns an [InputDeclarer](/apidocs/backtype/storm/topology/InputDeclarer.html) object that is used to define the inputs to the Bolt. Here, component "exclaim1" declares that it wants to read all the tuples emitted by component "words" using a shuffle grouping, and component "exclaim2" declares that it wants to read all the tuples emitted by component "exclaim1" using a shuffle grouping. "shuffle grouping" means that tuples should be randomly distributed from the input tasks to the bolt's tasks. There are many ways to group data between components. These will be explained in a few sections.
+`setBolt` returns an [InputDeclarer](/javadoc/apidocs/backtype/storm/topology/InputDeclarer.html) object that is used to define the inputs to the Bolt. Here, component "exclaim1" declares that it wants to read all the tuples emitted by component "words" using a shuffle grouping, and component "exclaim2" declares that it wants to read all the tuples emitted by component "exclaim1" using a shuffle grouping. "shuffle grouping" means that tuples should be randomly distributed from the input tasks to the bolt's tasks. There are many ways to group data between components. These will be explained in a few sections.
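
The wiring this paragraph describes is elided by the diff context; it looks along these lines in the tutorial's topology, where `TestWordSpout` and `ExclamationBolt` are the tutorial's components:

```java
TopologyBuilder builder = new TopologyBuilder();

builder.setSpout("words", new TestWordSpout(), 10);
builder.setBolt("exclaim1", new ExclamationBolt(), 3)
       .shuffleGrouping("words");       // read "words" output, randomly distributed
builder.setBolt("exclaim2", new ExclamationBolt(), 2)
       .shuffleGrouping("exclaim1");    // read "exclaim1" output
```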
 
 If you wanted component "exclaim2" to read all the tuples emitted by both component "words" and component "exclaim1", you would write component "exclaim2"'s definition like this:
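
A sketch of that two-source subscription, chaining two groupings on the same `InputDeclarer` returned by `setBolt`:

```java
builder.setBolt("exclaim2", new ExclamationBolt(), 5)
       .shuffleGrouping("words")
       .shuffleGrouping("exclaim1");
```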
 
@@ -163,7 +163,7 @@ public static class ExclamationBolt implements IRichBolt {
 
 The `prepare` method provides the bolt with an `OutputCollector` that is used for emitting tuples from this bolt. Tuples can be emitted at any time from the bolt -- in the `prepare`, `execute`, or `cleanup` methods, or even asynchronously in another thread. This `prepare` implementation simply saves the `OutputCollector` as an instance variable to be used later on in the `execute` method.
 
-The `execute` method receives a tuple from one of the bolt's inputs. The `ExclamationBolt` grabs the first field from the tuple and emits a new tuple with the string "!!!" appended to it. If you implement a bolt that subscribes to multiple input sources, you can find out which component the [Tuple](/apidocs/backtype/storm/tuple/Tuple.html) came from by using the `Tuple#getSourceComponent` method.
+The `execute` method receives a tuple from one of the bolt's inputs. The `ExclamationBolt` grabs the first field from the tuple and emits a new tuple with the string "!!!" appended to it. If you implement a bolt that subscribes to multiple input sources, you can find out which component the [Tuple](/javadoc/apidocs/backtype/storm/tuple/Tuple.html) came from by using the `Tuple#getSourceComponent` method.
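
A minimal sketch of such a multi-input `execute`, assuming the bolt also subscribes to a hypothetical "signals" component, keeps the collector from `prepare` in `_collector`, and uses a hypothetical `handleSignal` helper:

```java
public void execute(Tuple tuple) {
    if ("signals".equals(tuple.getSourceComponent())) {
        // Illustrative control-path handling for the hypothetical second input.
        handleSignal(tuple);
    } else {
        // Anchor the new tuple to the input, then ack it (Storm's reliability API).
        _collector.emit(tuple, new Values(tuple.getString(0) + "!!!"));
        _collector.ack(tuple);
    }
}
```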
 
 There are a few other things going on in the `execute` method: the input tuple is passed as the first argument to `emit`, and the input tuple is acked on the final line. These are part of Storm's reliability API for guaranteeing no data loss and will be explained later in this tutorial.
 
@@ -225,7 +225,7 @@ The configuration is used to tune various aspects of the running topology. The t
 1. **TOPOLOGY_WORKERS** (set with `setNumWorkers`) specifies how many _processes_ you want allocated around the cluster to execute the topology. Each component in the topology executes as some number of _threads_. The number of threads allocated to a given component is configured through the `setBolt` and `setSpout` methods. Those _threads_ exist within worker _processes_. Each worker _process_ contains within it some number of _threads_ for some number of components. For instance, you may have 300 threads specified across all your components and 50 worker processes specified in your config. Each worker process will execute 6 threads, each of which could belong to a different component. You tune the performance of Storm topologies by tweaking the parallelism for each component and the number of worker processes those threads should run within (see the sketch after this list).
 2. **TOPOLOGY_DEBUG** (set with `setDebug`), when set to true, tells Storm to log every message emitted by a component. This is useful in local mode when testing topologies, but you probably want to keep this turned off when running topologies on the cluster.
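
A short sketch of these two settings (the 50-worker figure mirrors the arithmetic above):

```java
import backtype.storm.Config;

Config conf = new Config();
conf.setNumWorkers(50);  // 50 worker processes; with 300 component threads
                         // in total, each worker averages 6 executor threads
conf.setDebug(false);    // leave per-message logging off outside local testing
```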
 
-There's many other configurations you can set for the topology. The various configurations are detailed on [the Javadoc for Config](/apidocs/backtype/storm/Config.html).
+There are many other configurations you can set for the topology. The various configurations are detailed in [the Javadoc for Config](/javadoc/apidocs/backtype/storm/Config.html).
 
 To learn about how to set up your development environment so that you can run topologies in local mode (such as in Eclipse), see [Creating a new Storm project](Creating-a-new-Storm-project.html).
 

http://git-wip-us.apache.org/repos/asf/storm/blob/2be8acd6/docs/documentation/Understanding-the-parallelism-of-a-Storm-topology.md
----------------------------------------------------------------------
diff --git a/docs/documentation/Understanding-the-parallelism-of-a-Storm-topology.md b/docs/documentation/Understanding-the-parallelism-of-a-Storm-topology.md
index c780b6f..efa6a9b 100644
--- a/docs/documentation/Understanding-the-parallelism-of-a-Storm-topology.md
+++ b/docs/documentation/Understanding-the-parallelism-of-a-Storm-topology.md
@@ -30,25 +30,25 @@ The following sections give an overview of the various configuration options and
 ### Number of worker processes
 
 * Description: How many worker processes to create _for the topology_ across machines in the cluster.
-* Configuration option: [TOPOLOGY_WORKERS](/apidocs/backtype/storm/Config.html#TOPOLOGY_WORKERS)
+* Configuration option: [TOPOLOGY_WORKERS](/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_WORKERS)
 * How to set in your code (examples):
-    * [Config#setNumWorkers](/apidocs/backtype/storm/Config.html)
+    * [Config#setNumWorkers](/javadoc/apidocs/backtype/storm/Config.html)
 
 ### Number of executors (threads)
 
 * Description: How many executors to spawn _per component_.
 * Configuration option: ?
 * How to set in your code (examples):
-    * [TopologyBuilder#setSpout()](/apidocs/backtype/storm/topology/TopologyBuilder.html)
-    * [TopologyBuilder#setBolt()](/apidocs/backtype/storm/topology/TopologyBuilder.html)
+    * [TopologyBuilder#setSpout()](/javadoc/apidocs/backtype/storm/topology/TopologyBuilder.html)
+    * [TopologyBuilder#setBolt()](/javadoc/apidocs/backtype/storm/topology/TopologyBuilder.html)
     * Note that as of Storm 0.8 the ``parallelism_hint`` parameter now specifies the initial number of executors (not tasks!) for that bolt.
 
 ### Number of tasks
 
 * Description: How many tasks to create _per component_.
-* Configuration option: [TOPOLOGY_TASKS](/apidocs/backtype/storm/Config.html#TOPOLOGY_TASKS)
+* Configuration option: [TOPOLOGY_TASKS](/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_TASKS)
 * How to set in your code (examples):
-    * [ComponentConfigurationDeclarer#setNumTasks()](/apidocs/backtype/storm/topology/ComponentConfigurationDeclarer.html)
+    * [ComponentConfigurationDeclarer#setNumTasks()](/javadoc/apidocs/backtype/storm/topology/ComponentConfigurationDeclarer.html)
 
 
 Here is an example code snippet to show these settings in practice:
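
The snippet itself is elided by the diff context; for reference, it runs along these lines, where `BlueSpout`, `GreenBolt`, and `YellowBolt` are the document's illustrative components:

```java
Config conf = new Config();
conf.setNumWorkers(2); // use two worker processes

topologyBuilder.setSpout("blue-spout", new BlueSpout(), 2); // parallelism hint of 2

topologyBuilder.setBolt("green-bolt", new GreenBolt(), 2)
               .setNumTasks(4)            // 4 tasks spread over 2 executors
               .shuffleGrouping("blue-spout");

topologyBuilder.setBolt("yellow-bolt", new YellowBolt(), 6)
               .shuffleGrouping("green-bolt");

StormSubmitter.submitTopology(
    "mytopology",
    conf,
    topologyBuilder.createTopology()
);
```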
@@ -91,7 +91,7 @@ StormSubmitter.submitTopology(
 
 And of course Storm comes with additional configuration settings to control the parallelism of a topology, including:
 
-* [TOPOLOGY_MAX_TASK_PARALLELISM](/apidocs/backtype/storm/Config.html#TOPOLOGY_MAX_TASK_PARALLELISM): This setting puts a ceiling on the number of executors that can be spawned for a single component. It is typically used during testing to limit the number of threads spawned when running a topology in local mode. You can set this option via e.g. [Config#setMaxTaskParallelism()](/apidocs/backtype/storm/Config.html).
+* [TOPOLOGY_MAX_TASK_PARALLELISM](/javadoc/apidocs/backtype/storm/Config.html#TOPOLOGY_MAX_TASK_PARALLELISM): This setting puts a ceiling on the number of executors that can be spawned for a single component. It is typically used during testing to limit the number of threads spawned when running a topology in local mode. You can set this option via e.g. [Config#setMaxTaskParallelism()](/javadoc/apidocs/backtype/storm/Config.html#setMaxTaskParallelism(int)).
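
For example, capping parallelism during local-mode tests:

```java
import backtype.storm.Config;

Config conf = new Config();
conf.setMaxTaskParallelism(1); // every component runs at most one executor
```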
 
 ## How to change the parallelism of a running topology
 
@@ -119,5 +119,5 @@ $ storm rebalance mytopology -n 5 -e blue-spout=3 -e yellow-bolt=10
 * [Running topologies on a production cluster](Running-topologies-on-a-production-cluster.html)
 * [Local mode](Local-mode.html)
 * [Tutorial](Tutorial.html)
-* [Storm API documentation](/apidocs/), most notably the class ``Config``
+* [Storm API documentation](/javadoc/apidocs/), most notably the class ``Config``
 


[3/3] storm git commit: Added STORM-669 to Changelog

Posted by bo...@apache.org.
Added STORM-669 to Changelog


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/43fe5135
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/43fe5135
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/43fe5135

Branch: refs/heads/master
Commit: 43fe5135a1023813bc8816be07b8dd0c42706fc4
Parents: 9e571e0
Author: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Authored: Wed Mar 11 15:53:34 2015 -0500
Committer: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Committed: Wed Mar 11 15:53:34 2015 -0500

----------------------------------------------------------------------
 CHANGELOG.md | 1 +
 1 file changed, 1 insertion(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/storm/blob/43fe5135/CHANGELOG.md
----------------------------------------------------------------------
diff --git a/CHANGELOG.md b/CHANGELOG.md
index a1f46fe..c2e38ab 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -73,6 +73,7 @@
  * STORM-657: make the shutdown-worker sleep time before kill -9 configurable
  * STORM-663: Create javadocs for BoltDeclarer
  * STORM-690: Return Jedis into JedisPool with marking 'broken' if connection is broken
+ * STORM-669: Replace links with ones to latest api document
 
 ## 0.9.3-rc2
  * STORM-558: change "swap!" to "reset!" to fix assignment-versions in supervisor


[2/3] storm git commit: Merge branch 'link-to-latest-javadoc' of https://github.com/Lewuathe/storm into STORM-669

Posted by bo...@apache.org.
Merge branch 'link-to-latest-javadoc' of https://github.com/Lewuathe/storm into STORM-669

STORM-669: Replace links with ones to latest api document


Project: http://git-wip-us.apache.org/repos/asf/storm/repo
Commit: http://git-wip-us.apache.org/repos/asf/storm/commit/9e571e04
Tree: http://git-wip-us.apache.org/repos/asf/storm/tree/9e571e04
Diff: http://git-wip-us.apache.org/repos/asf/storm/diff/9e571e04

Branch: refs/heads/master
Commit: 9e571e0481f41ddbfb8d1dfaba7f3cafe649853c
Parents: 8c39e4c 2be8acd
Author: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Authored: Wed Mar 11 15:47:25 2015 -0500
Committer: Robert (Bobby) Evans <ev...@yahoo-inc.com>
Committed: Wed Mar 11 15:47:25 2015 -0500

----------------------------------------------------------------------
 docs/documentation/Clojure-DSL.md               |  4 +-
 docs/documentation/Command-line-client.md       |  2 +-
 docs/documentation/Common-patterns.md           |  6 +--
 docs/documentation/Concepts.md                  | 48 ++++++++++----------
 docs/documentation/Configuration.md             |  4 +-
 docs/documentation/Distributed-RPC.md           |  2 +-
 .../Guaranteeing-message-processing.md          |  6 +--
 docs/documentation/Hooks.md                     |  6 +--
 docs/documentation/Local-mode.md                |  4 +-
 ...unning-topologies-on-a-production-cluster.md |  6 +--
 .../Serialization-(prior-to-0.6.0).md           |  4 +-
 docs/documentation/Serialization.md             |  2 +-
 docs/documentation/Structure-of-the-codebase.md |  8 ++--
 docs/documentation/Transactional-topologies.md  | 18 ++++----
 docs/documentation/Tutorial.md                  |  8 ++--
 ...nding-the-parallelism-of-a-Storm-topology.md | 16 +++----
 16 files changed, 72 insertions(+), 72 deletions(-)
----------------------------------------------------------------------