Posted to commits@heron.apache.org by jo...@apache.org on 2020/10/31 14:29:49 UTC

[incubator-heron] branch master updated: clean up site docs (#3626)

This is an automated email from the ASF dual-hosted git repository.

joshfischer pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/incubator-heron.git


The following commit(s) were added to refs/heads/master by this push:
     new 3be9f06  clean up site docs (#3626)
3be9f06 is described below

commit 3be9f065165fc90ce3ba1571c90eea0c7f4c2d53
Author: Josh Fischer <jo...@joshfischer.io>
AuthorDate: Sat Oct 31 09:29:39 2020 -0500

    clean up site docs (#3626)
    
    * clean up
    
    * Update website2/docs/topology-development-streamlet-api.md
    
    Co-authored-by: Oliver Bristow <ev...@gmail.com>
    
    * clean up duplicates
    
    * clean up
    
    * last version clean up
    
    * commit before reset
    
    * adding version to link
    
    Co-authored-by: Oliver Bristow <ev...@gmail.com>
---
 website2/docs/compiling-code-organization.md       |  52 +-
 .../guides-effectively-once-java-topologies.md     |   2 +-
 website2/docs/guides-python-topologies.md          | 363 ---------
 .../docs/topology-development-streamlet-api.md     |  28 +-
 .../topology-development-topology-api-python.md    | 337 +--------
 website2/website/sidebars.json                     |   1 -
 .../compiling-code-organization.md                 |  57 +-
 .../topology-development-topology-api-python.md    | 337 +--------
 .../compiling-code-organization.md                 |  55 +-
 .../version-0.20.2-incubating/compiling-docker.md  | 253 +++++++
 .../version-0.20.2-incubating/compiling-linux.md   | 212 ++++++
 .../version-0.20.2-incubating/compiling-osx.md     |  90 +++
 .../compiling-overview.md                          | 145 ++++
 .../guides-effectively-once-java-topologies.md     |   5 +-
 .../heron-streamlet-concepts.md                    | 811 +++++++++++++++++++++
 .../schedulers-k8s-with-helm.md                    | 304 ++++++++
 .../version-0.20.2-incubating/schedulers-nomad.md  | 439 +++++++++++
 .../topology-development-streamlet-api.md          |  31 +-
 .../topology-development-topology-api-python.md}   | 233 +++++-
 .../version-0.20.0-incubating-sidebars.json        |   1 -
 20 files changed, 2572 insertions(+), 1184 deletions(-)

diff --git a/website2/docs/compiling-code-organization.md b/website2/docs/compiling-code-organization.md
index 053bc48..02398a9 100644
--- a/website2/docs/compiling-code-organization.md
+++ b/website2/docs/compiling-code-organization.md
@@ -22,7 +22,7 @@ sidebar_label: Code Organization
 
 This document contains information about the Heron codebase intended primarily
 for developers who want to contribute to Heron. The Heron codebase lives on
-[github]({{% githubMaster %}}).
+[github](https://github.com/apache/incubator-heron/tree/master).
 
 If you're looking for documentation about developing topologies for a Heron
 cluster, see [Building Topologies](topology-development-topology-api-java) instead.
@@ -51,7 +51,7 @@ Information on setting up and using Bazel for Heron can be found in [Compiling H
 * **Inter-component communication** --- Heron uses [Protocol
 Buffers](https://developers.google.com/protocol-buffers/?hl=en) for
 communication between components. Most `.proto` definition files can be found in
-[`heron/proto`]({{% githubMaster %}}/heron/proto).
+[`heron/proto`](https://github.com/apache/incubator-heron/tree/master/heron/proto).
 
 * **Cluster coordination** --- Heron relies heavily on ZooKeeper for cluster
 coordination for distributed deployment, be it for [Aurora](schedulers-aurora-cluster) or for a [custom
@@ -61,7 +61,7 @@ Management](#state-management) section below.
 
 ## Common Utilities
 
-The [`heron/common`]({{% githubMaster %}}/heron/common) contains a variety of
+The [`heron/common`](https://github.com/apache/incubator-heron/tree/master/heron/common) contains a variety of
 utilities for each of Heron's languages, including useful constants, file
 utilities, networking interfaces, and more.
 
@@ -70,7 +70,7 @@ utilities, networking interfaces, and more.
 Heron supports two cluster schedulers out of the box:
 [Aurora](schedulers-aurora-cluster) and a [local
 scheduler](schedulers-local). The Java code for each of those
-schedulers can be found in [`heron/schedulers`]({{% githubMaster %}}/heron/schedulers)
+schedulers can be found in [`heron/schedulers`](https://github.com/apache/incubator-heron/tree/master/heron/schedulers)
 , while the underlying scheduler API can be found [here](/api/org/apache/heron/spi/scheduler/package-summary.html)
 
 Info on custom schedulers can be found in [Implementing a Custom
@@ -83,10 +83,10 @@ Deployment](schedulers-local).
 
 The parts of Heron's codebase related to
 [ZooKeeper](http://zookeeper.apache.org/) are mostly contained in
-[`heron/state`]({{% githubMaster %}}/heron/state). There are ZooKeeper-facing
-interfaces for [C++]({{% githubMaster %}}/heron/state/src/cpp),
-[Java]({{% githubMaster %}}/heron/state/src/java), and
-[Python]({{% githubMaster %}}/heron/state/src/python) that are used in a variety of
+[`heron/state`](https://github.com/apache/incubator-heron/tree/master/heron/state). There are ZooKeeper-facing
+interfaces for [C++](https://github.com/apache/incubator-heron/tree/master/heron/state/src/cpp),
+[Java](https://github.com/apache/incubator-heron/tree/master/heron/state/src/java), and
+[Python](https://github.com/apache/incubator-heron/tree/master/heron/state/src/python) that are used in a variety of
 Heron components.
 
 ## Topology Components
@@ -95,25 +95,25 @@ Heron components.
 
 The C++ code for Heron's [Topology
 Master](heron-architecture#topology-master) can be
-found in [`heron/tmaster`]({{% githubMaster %}}/heron/tmaster).
+found in [`heron/tmaster`](https://github.com/apache/incubator-heron/tree/master/heron/tmaster).
 
 ### Stream Manager
 
 The C++ code for Heron's [Stream
 Manager](heron-architecture#stream-manager) can be found in
-[`heron/stmgr`]({{% githubMaster %}}/heron/stmgr).
+[`heron/stmgr`](https://github.com/apache/incubator-heron/tree/master/heron/stmgr).
 
 ### Heron Instance
 
 The Java code for [Heron
 instances](heron-architecture#heron-instance) can be found in
-[`heron/instance`]({{% githubMaster %}}/heron/instance).
+[`heron/instance`](https://github.com/apache/incubator-heron/tree/master/heron/instance).
 
 ### Metrics Manager
 
 The Java code for Heron's [Metrics
 Manager](heron-architecture#metrics-manager) can be found in
-[`heron/metricsmgr`]({{% githubMaster %}}/heron/metricsmgr).
+[`heron/metricsmgr`](https://github.com/apache/incubator-heron/tree/master/heron/metricsmgr).
 
 If you'd like to implement your own custom metrics handler (known as a **metrics
 sink**), see [Implementing a Custom Metrics Sink](extending-heron-metric-sink).
@@ -123,7 +123,7 @@ sink**), see [Implementing a Custom Metrics Sink](extending-heron-metric-sink).
 ### Topology API
 
 Heron's API for writing topologies is written in Java. The code for this API can
-be found in [`heron/api`]({{% githubMaster %}}/heron/api).
+be found in [`heron/api`](https://github.com/apache/incubator-heron/tree/master/heron/api).
 
 Documentation for writing topologies can be found in [Building
 Topologies](topology-development-topology-api-java), while API documentation can be found
@@ -142,7 +142,7 @@ The Java API for simulator can be found in
 Heron's codebase includes a wide variety of example
 [topologies](heron-topology-concepts) built using Heron's topology API for
 Java. Those examples can be found in
-[`heron/examples`]({{% githubMaster %}}/heron/examples).
+[`heron/examples`](https://github.com/apache/incubator-heron/tree/master/heron/examples).
 
 ## User Interface Components
 
@@ -152,45 +152,45 @@ Heron has a tool called `heron` that is used to both provide a CLI interface
 for [managing topologies](user-manuals-heron-cli) and to perform much of
 the heavy lifting behind assembling physical topologies in your cluster.
 The Python code for `heron` can be found in
-[`heron/tools/cli`]({{% githubMaster %}}/heron/tools/cli).
+[`heron/tools/cli`](https://github.com/apache/incubator-heron/tree/master/heron/tools/cli).
 
 Sample configurations for different Heron schedulers
 
-* [Local scheduler](schedulers-local) config can be found in [`heron/config/src/yaml/conf/local`]({{% githubMaster %}}/heron/config/src/yaml/conf/local),
-* [Aurora scheduler](schedulers-aurora-cluster) config can be found [`heron/config/src/yaml/conf/aurora`]({{% githubMaster %}}/heron/config/src/yaml/conf/aurora).
+* [Local scheduler](schedulers-local) config can be found in [`heron/config/src/yaml/conf/local`](https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/local),
+* [Aurora scheduler](schedulers-aurora-cluster) config can be found in [`heron/config/src/yaml/conf/aurora`](https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/aurora).
 
 ### Heron Tracker
 
 The Python code for the [Heron Tracker](user-manuals-heron-tracker-runbook) can be
-found in [`heron/tools/tracker`]({{% githubMaster %}}/heron/tools/tracker).
+found in [`heron/tools/tracker`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker).
 
 The Tracker is a web server written in Python. It relies on the
 [Tornado](http://www.tornadoweb.org/en/stable/) framework. You can add new HTTP
 routes to the Tracker in
-[`main.py`]({{% githubMaster %}}/heron/tools/tracker/src/python/main.py) and
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker/src/python/main.py) and
 corresponding handlers in the
-[`handlers`]({{% githubMaster %}}/heron/tools/tracker/src/python/handlers) directory.
+[`handlers`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker/src/python/handlers) directory.
 
 ### Heron UI
 
 The Python code for the [Heron UI](user-manuals-heron-ui) can be found in
-[`heron/tools/ui`]({{% githubMaster %}}/heron/tools/ui).
+[`heron/tools/ui`](https://github.com/apache/incubator-heron/tree/master/heron/tools/ui).
 
 Like Heron Tracker, Heron UI is a web server written in Python that relies on
 the [Tornado](http://www.tornadoweb.org/en/stable/) framework. You can add new
 HTTP routes to Heron UI in
-[`main.py`]({{% githubMaster %}}/heron/web/source/python/main.py) and corresponding
-handlers in the [`handlers`]({{% githubMaster %}}/heron/web/source/python/handlers)
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/web/source/python/main.py) and corresponding
+handlers in the [`handlers`](https://github.com/apache/incubator-heron/tree/master/heron/web/source/python/handlers)
 directory.
 
 ### Heron Shell
 
 The Python code for the [Heron Shell](user-manuals-heron-shell) can be
-found in [`heron/shell`]({{% githubMaster %}}/heron/shell). The HTTP handlers and
+found in [`heron/shell`](https://github.com/apache/incubator-heron/tree/master/heron/shell). The HTTP handlers and
 web server are defined in
-[`main.py`]({{% githubMaster %}}/heron/shell/src/python/main.py) while the HTML,
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/shell/src/python/main.py) while the HTML,
 JavaScript, CSS, and images for the web UI can be found in the
-[`assets`]({{% githubMaster %}}/heron/shell/assets) directory.
+[`assets`](https://github.com/apache/incubator-heron/tree/master/heron/shell/assets) directory.
 
 ## Tests
 
diff --git a/website2/docs/guides-effectively-once-java-topologies.md b/website2/docs/guides-effectively-once-java-topologies.md
index 77c46b8..5d30440 100644
--- a/website2/docs/guides-effectively-once-java-topologies.md
+++ b/website2/docs/guides-effectively-once-java-topologies.md
@@ -240,7 +240,7 @@ public class EffectivelyOnceTopology {
 
 ### Submitting the topology
 
-The code for this topology can be found in [this GitHub repository](https://github.com/streamlio/heron-java-effectively-once-example). You can clone the repo locally like this:
+The code for this topology can be found in the repository below. You can clone the repo locally like this:
 
 ```bash
 $ git clone https://github.com/streamlio/heron-java-effectively-once-example
diff --git a/website2/docs/guides-python-topologies.md b/website2/docs/guides-python-topologies.md
deleted file mode 100644
index e2aa96a..0000000
--- a/website2/docs/guides-python-topologies.md
+++ /dev/null
@@ -1,363 +0,0 @@
----
-id: guides-python-topologies
-title: Python Topologies
-sidebar_label: Python Topologies
----
-<!--
-    Licensed to the Apache Software Foundation (ASF) under one
-    or more contributor license agreements.  See the NOTICE file
-    distributed with this work for additional information
-    regarding copyright ownership.  The ASF licenses this file
-    to you under the Apache License, Version 2.0 (the
-    "License"); you may not use this file except in compliance
-    with the License.  You may obtain a copy of the License at
-      http://www.apache.org/licenses/LICENSE-2.0
-    Unless required by applicable law or agreed to in writing,
-    software distributed under the License is distributed on an
-    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-    KIND, either express or implied.  See the License for the
-    specific language governing permissions and limitations
-    under the License.
--->
-
-> The current version of `heronpy` is [{{% heronpyVersion %}}](https://pypi.python.org/pypi/heronpy/{{% heronpyVersion %}}).
-
-Support for developing Heron topologies in Python is provided by a Python library called [`heronpy`](https://pypi.python.org/pypi/heronpy).
-
-> #### Python API docs
-> You can find API docs for the `heronpy` library [here](/api/python).
-
-## Setup
-
-First, you need to install the `heronpy` library using [pip](https://pip.pypa.io/en/stable/), [EasyInstall](https://wiki.python.org/moin/EasyInstall), or an analogous tool:
-
-```shell
-$ pip install heronpy
-$ easy_install heronpy
-```
-
-Then you can include `heronpy` in your project files. Here's an example:
-
-```python
-from heronpy.api.bolt.bolt import Bolt
-from heronpy.api.spout.spout import Spout
-from heronpy.api.topology import Topology
-```
-
-## Writing topologies in Python
-
-Heron [topologies](heron-topology-concepts) are networks of [spouts](topology-development-topology-api-python#spouts) that pull data into a topology and [bolts](topology-development-topology-api-python#bolts) that process that ingested data.
-
-> You can see how to create Python spouts in the [Implementing Python Spouts](topology-development-topology-api-python#spouts) guide and how to create Python bolts in the [Implementing Python Bolts](topology-development-topology-api-python#bolts) guide.
-
-Once you've defined spouts and bolts for a topology, you can then compose the topology in one of two ways:
-
-* You can use the `TopologyBuilder` class inside of a main function.
-
-    Here's an example:
-
-    ```python
-    #!/usr/bin/env python
-    from heronpy.api.topology import TopologyBuilder
-
-
-    if __name__ == "__main__":
-        builder = TopologyBuilder("MyTopology")
-        # Add spouts and bolts
-        builder.build_and_submit()
-    ```
-
-* You can subclass the `Topology` class.
-
-    Here's an example:
-
-    ```python
-    from heronpy.api.stream import Grouping
-    from heronpy.api.topology import Topology
-
-
-    class MyTopology(Topology):
-        my_spout = WordSpout.spec(par=2)
-    my_bolt = CountBolt.spec(par=3, inputs={my_spout: Grouping.fields("word")})
-    ```
-
-## Defining topologies using the `TopologyBuilder` class
-
-If you create a Python topology using a `TopologyBuilder`, you need to instantiate a `TopologyBuilder` inside of a standard Python main function, like this:
-
-```python
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    builder = TopologyBuilder("MyTopology")
-```
-
-Once you've created a `TopologyBuilder` object, you can add bolts using the `add_bolt` method and spouts using the `add_spout` method. Here's an example:
-
-```python
-builder = TopologyBuilder("MyTopology")
-builder.add_bolt("my_bolt", CountBolt, par=3)
-builder.add_spout("my_spout", WordSpout, par=2)
-```
-
-Both the `add_bolt` and `add_spout` methods return the corresponding `HeronComponentSpec` object.
-
-The `add_bolt` method takes four arguments and an optional `config` parameter:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this bolt | |
-`bolt_cls` | class | The subclass of `Bolt` that defines this bolt | |
-`par` | `int` | The number of instances of this bolt in the topology | |
-`inputs` | `dict` or `list` | Either a `dict` mapping from `HeronComponentSpec` to `Grouping` *or* a list of `HeronComponentSpec`s, in which case the `shuffle` grouping is used | |
-`config` | `dict` | Specifies the configuration for this bolt | `None`
-
-The `add_spout` method takes three arguments and an optional `config` parameter:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this spout | |
-`spout_cls` | class | The subclass of `Spout` that defines this spout | |
-`par` | `int` | The number of instances of this spout in the topology | |
-`config` | `dict` | Specifies the configuration for this spout | `None`
-
-### Example
-
-The following is an example implementation of a word count topology in Python that uses the `TopologyBuilder` class.
-
-```python
-from your_spout import WordSpout
-from your_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    builder = TopologyBuilder("WordCountTopology")
-    # piece together the topology
-    word_spout = builder.add_spout("word_spout", WordSpout, par=2)
-    count_bolt = builder.add_bolt("count_bolt", CountBolt, par=2, inputs={word_spout: Grouping.fields("word")})
-    # submit the topology
-    builder.build_and_submit()
-```
-
-Note that arguments to the main method can be passed by providing them in the
-`heron submit` command.
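One way this can look in practice is sketched below; the helper function and its use are hypothetical illustrations, not part of `heronpy`:

```python
import sys


def parallelism_from_args(argv, default=1):
    """Return the first submit-time argument as an int, or a default."""
    return int(argv[1]) if len(argv) > 1 else default

# Hypothetical usage inside the topology's main function, assuming a
# submission like `heron submit local my_topology.pex - 4`:
#
# if __name__ == "__main__":
#     par = parallelism_from_args(sys.argv)
#     builder = TopologyBuilder("MyTopology")
#     builder.add_spout("word_spout", WordSpout, par=par)
#     builder.build_and_submit()
```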
-
-### Topology-wide configuration
-
-If you're building a Python topology using a `TopologyBuilder`, you can specify configuration for the topology using the `set_config` method. A topology's config is a `dict` in which the keys are constants from the `api_constants` module and the values are the configuration values for those parameters.
-
-Here's an example:
-
-```python
-from heronpy.api import api_constants
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    topology_config = {
-        api_constants.TOPOLOGY_ENABLE_MESSAGE_TIMEOUTS: True
-    }
-    builder = TopologyBuilder("MyTopology")
-    builder.set_config(topology_config)
-    # Add bolts and spouts, etc.
-```
-
-### Launching the topology
-
-If you want to [submit](../../../operators/heron-cli#submitting-a-topology) Python topologies to a Heron cluster, they need to be packaged as a [PEX](https://pex.readthedocs.io/en/stable/whatispex.html) file. In order to produce PEX files, we recommend using a build tool like [Pants](http://www.pantsbuild.org/python_readme.html) or [Bazel](https://github.com/benley/bazel_rules_pex).
-
-If you defined your topology using the `TopologyBuilder` class and built a `word_count.pex` file for that topology in the `~/topology` folder, you can submit the topology to a cluster called `local` like this:
-
-```bash
-$ heron submit local \
-  ~/topology/word_count.pex \
-  - # No class specified
-```
-
-Note the `-` in this submission command. If you define a topology using `TopologyBuilder`, you do not need to instruct Heron which class contains your topology definition.
-
-> #### Example topologies buildable as PEXs
-> * See [this repo](https://github.com/streamlio/pants-dev-environment) for an example of a Heron topology written in Python and deployable as a Pants-packaged PEX.
-> * See [this repo](https://github.com/streamlio/bazel-dev-environment) for an example of a Heron topology written in Python and deployable as a Bazel-packaged PEX.
-
-## Defining a topology by subclassing the `Topology` class
-
-If you create a Python topology by subclassing the `Topology` class, you need to create a new topology class, like this:
-
-```python
-from my_spout import WordSpout
-from my_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import Topology
-
-
-class MyTopology(Topology):
-    my_spout = WordSpout.spec(par=2)
-    my_bolt_inputs = {my_spout: Grouping.fields("word")}
-    my_bolt = CountBolt.spec(par=3, inputs=my_bolt_inputs)
-```
-
-All you need to do is place the `HeronComponentSpec`s returned by the `spec()`
-method of your spout or bolt classes as class attributes of your topology
-class. You do *not* need to run a `build` method or anything like that; the `Topology` class will automatically detect which spouts and bolts are included in the topology.
-
-> If you use this method to define a new Python topology, you do *not* need to have a main function.
-
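The automatic detection described above relies on the `Topology` class collecting component specs from class attributes via a metaclass. A stdlib-only sketch of the general idea follows; the class names and structure are toy assumptions, not heronpy's actual implementation:

```python
# Toy illustration of metaclass-based component collection.

class ComponentSpec:
    def __init__(self, par):
        self.par = par
        self.name = None  # filled in from the attribute name


class TopologyType(type):
    def __new__(mcs, name, bases, attrs):
        # Scan class attributes for ComponentSpec instances and
        # record them under the attribute name they were bound to.
        specs = {}
        for attr_name, attr_value in attrs.items():
            if isinstance(attr_value, ComponentSpec):
                attr_value.name = attr_name
                specs[attr_name] = attr_value
        attrs["specs"] = specs
        return super().__new__(mcs, name, bases, attrs)


class Topology(metaclass=TopologyType):
    pass


class WordCount(Topology):
    word_spout = ComponentSpec(par=2)
    count_bolt = ComponentSpec(par=2)
```

Here `WordCount.specs` is populated without any explicit `build` call, which mirrors why no main function is required with this style.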
-For bolts, the `spec` method takes four optional arguments:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this bolt or `None` if you want to use the variable name of the returned `HeronComponentSpec` as the unique identifier for this bolt | `None` |
-`inputs` | `dict` or `list` | Either a `dict` mapping from `HeronComponentSpec` to `Grouping` *or* a list of `HeronComponentSpec`s, in which case the `shuffle` grouping is used | |
-`par` | `int` | The number of instances of this bolt in the topology | `1` |
-`config` | `dict` | Specifies the configuration for this bolt | `None`
-
-
-For spouts, the `spec` method takes three optional arguments:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this spout or `None` if you want to use the variable name of the returned `HeronComponentSpec` as the unique identifier for this spout | `None` |
-`par` | `int` | The number of instances of this spout in the topology | `1` |
-`config` | `dict` | Specifies the configuration for this spout | `None`
-
-### Example
-
-Here's an example topology definition with one spout and one bolt:
-
-```python
-from my_spout import WordSpout
-from my_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import Topology
-
-
-class WordCount(Topology):
-    word_spout = WordSpout.spec(par=2)
-    count_bolt = CountBolt.spec(par=2, inputs={word_spout: Grouping.fields("word")})
-```
-
-### Launching
-
-If you defined your topology by subclassing the `Topology` class,
-your main Python file should *not* contain a main method. You will, however, need to instruct Heron which class contains your topology definition.
-
-Let's say that you've defined a topology by subclassing `Topology` and built a PEX stored in `~/topology/dist/word_count.pex`. The class containing your topology definition is `topology.word_count.WordCount`. You can submit the topology to a cluster called `local` like this:
-
-```bash
-$ heron submit local \
-  ~/topology/dist/word_count.pex \
-  topology.word_count.WordCount \
-  WordCountTopology
-```
-
-### Topology-wide configuration
-
-If you're building a Python topology by subclassing `Topology`, you can specify configuration for the topology by setting a `config` class attribute. A topology's config is a `dict` in which the keys are constants from the `api_constants` module and the values are the configuration values for those parameters.
-
-Here's an example:
-
-```python
-from heronpy.api.topology import Topology
-from heronpy.api import api_constants
-
-
-class MyTopology(Topology):
-    config = {
-        api_constants.TOPOLOGY_ENABLE_MESSAGE_TIMEOUTS: True
-    }
-    # Add bolts and spouts, etc.
-```
-
-## Multiple streams
-
-To specify that a component has multiple output streams, instead of using a list of
-strings for `outputs`, you can specify a list of `Stream` objects, in the following manner.
-
-```python
-class MultiStreamSpout(Spout):
-    outputs = [
-        Stream(fields=["normal", "fields"], name="default"),
-        Stream(fields=["error_message"], name="error_stream"),
-    ]
-```
-
-To select one of these streams as the input for your bolt, you can simply
-use `[]` to specify the stream you want. Without any stream specified, the `default`
-stream will be used.
-
-```python
-class MultiStreamTopology(Topology):
-    spout = MultiStreamSpout.spec()
-    error_bolt = ErrorBolt.spec(inputs={spout["error_stream"]: Grouping.LOWEST})
-    consume_bolt = ConsumeBolt.spec(inputs={spout: Grouping.SHUFFLE})
-```
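Conceptually, the `[]` selection above just picks one of a component's named output streams, falling back to `default` when no stream is specified. A stdlib-only toy model of that idea (hypothetical class names, not heronpy's actual classes):

```python
# Toy model of output-stream selection on a component spec.

class StreamRef:
    """A reference to one named output stream of a component."""
    def __init__(self, component, stream):
        self.component = component
        self.stream = stream


class ComponentSpec:
    def __init__(self, name, streams=("default",)):
        self.name = name
        self.streams = set(streams)

    def __getitem__(self, stream):
        # Selecting an undeclared stream is an error.
        if stream not in self.streams:
            raise KeyError(stream)
        return StreamRef(self.name, stream)


spout = ComponentSpec("spout", streams=("default", "error_stream"))
errors = spout["error_stream"]
```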
-
-## Declaring output fields using the `spec()` method
-
-In Python topologies, the output fields of your spouts and bolts
-need to be declared by placing `outputs` class attributes, as there is
-no `declareOutputFields()` method. `heronpy` enables you to dynamically declare output fields as a list using the
-`optional_outputs` argument in the `spec()` method.
-
-This is useful in situations like the one below:
-
-```python
-class IdentityBolt(Bolt):
-    # Output fields are not statically declared here; they are supplied
-    # via `optional_outputs` in the `spec()` call below
-    def process(self, tup):
-        self.emit([tup.values])
-
-
-class DynamicOutputField(Topology):
-    spout = WordSpout.spec()
-    bolt = IdentityBolt.spec(inputs={spout: Grouping.ALL}, optional_outputs=["word"])
-```
-
-You can also declare outputs in the `add_spout()` and the `add_bolt()`
-method for the `TopologyBuilder` in the same way.
-
-## Example topologies
-
-There are a number of example topologies that you can peruse in the [`examples/src/python`]({{% githubMaster %}}/examples/src/python) directory of the [Heron repo]({{% githubMaster %}}):
-
-Topology | File | Description
-:--------|:-----|:-----------
-Word count | [`word_count_topology.py`]({{% githubMaster %}}/examples/src/python/word_count_topology.py) | The [`WordSpout`]({{% githubMaster %}}/examples/src/python/spout/word_spout.py) spout emits random words from a list, while the [`CountBolt`]({{% githubMaster %}}/examples/src/python/bolt/count_bolt.py) bolt counts the number of words that have been emitted.
-Multiple streams | [`multi_stream_topology.py`]({{% githubMaster %}}/examples/src/python/multi_stream_topology.py) | The [`MultiStreamSpout`]({{% githubMaster %}}/examples/src/python/spout/multi_stream_spout.py) emits multiple streams to downstream bolts.
-Half acking | [`half_acking_topology.py`]({{% githubMaster %}}/examples/src/python/half_acking_topology.py) | The [`HalfAckBolt`]({{% githubMaster %}}/examples/src/python/bolt/half_ack_bolt.py) acks only half of all received tuples.
-Custom grouping | [`custom_grouping_topology.py`]({{% githubMaster %}}/examples/src/python/custom_grouping_topology.py) | The [`SampleCustomGrouping`]({{% githubMaster %}}/examples/src/python/custom_grouping_topology.py#L26) class provides a custom field grouping.
-
-You can build the respective PEXs for these topologies using the following commands:
-
-```shell
-$ bazel build examples/src/python:word_count
-$ bazel build examples/src/python:multi_stream
-$ bazel build examples/src/python:half_acking
-$ bazel build examples/src/python:custom_grouping
-```
-
-All built PEXs will be stored in `bazel-bin/examples/src/python`. You can submit them to Heron like so:
-
-```shell
-$ heron submit local \
-  bazel-bin/examples/src/python/word_count.pex - \
-  WordCount
-$ heron submit local \
-  bazel-bin/examples/src/python/multi_stream.pex \
-  heron.examples.src.python.multi_stream_topology.MultiStream
-$ heron submit local \
-  bazel-bin/examples/src/python/half_acking.pex - \
-  HalfAcking
-$ heron submit local \
-  bazel-bin/examples/src/python/custom_grouping.pex \
-  heron.examples.src.python.custom_grouping_topology.CustomGrouping
-```
-
-By default, the `submit` command also activates topologies. To disable this behavior, set the `--deploy-deactivated` flag.
diff --git a/website2/docs/topology-development-streamlet-api.md b/website2/docs/topology-development-streamlet-api.md
index 87396af..12748e0 100644
--- a/website2/docs/topology-development-streamlet-api.md
+++ b/website2/docs/topology-development-streamlet-api.md
@@ -113,34 +113,26 @@ $ heron submit local \
 
 ### Java Streamlet API starter project
 
-If you'd like to get up and running quickly with the Heron Streamlet API for Java, you can clone [this repository](https://github.com/streamlio/heron-java-streamlet-api-example), which includes an example topology built using the Streamlet API as well as the necessary Maven configuration. To build a JAR with dependencies of this example topology:
-
-```bash
-$ git clone https://github.com/streamlio/heron-java-streamlet-api-example
-$ cd heron-java-streamlet-api-example
-$ mvn assembly:assembly
-$ ls target/*.jar
-target/heron-java-streamlet-api-example-latest-jar-with-dependencies.jar
-target/heron-java-streamlet-api-example-latest.jar
-```
+If you'd like to get up and running quickly with the Heron Streamlet API for Java, you can view the example topologies [here](https://github.com/apache/incubator-heron/tree/{{ heron:version }}/examples/src/java/org/apache/heron/examples/streamlet).
 
 If you're running a [local Heron cluster](getting-started-local-single-node), you can submit the built example topology like this:
 
 ```bash
-$ heron submit local target/heron-java-streamlet-api-example-latest-jar-with-dependencies.jar \
-  io.streaml.heron.streamlet.WordCountStreamletTopology \
-  WordCountStreamletTopology
+$ heron submit local \
+  ~/.heron/examples/heron-streamlet-examples.jar \
+  org.apache.heron.examples.streamlet.WindowedWordCountTopology \
+  streamletWindowedWordCount
 ```
 
 #### Selecting delivery semantics
 
-Heron enables you to apply one of three [delivery semantics](heron-delivery-semantics) to any Heron topology. For the [example topology](#java-streamlet-api-starter-project) above, you can select the delivery semantics when you submit the topology with the topology's second argument. This command, for example, would apply [effectively-once](heron-delivery-semantics) to the example topology:
+Heron enables you to apply one of three [delivery semantics](heron-delivery-semantics) to any Heron topology. For the example topology above, you can select the delivery semantics when you submit the topology with the topology's second argument. This command, for example, would apply [effectively-once](heron-delivery-semantics) to the example topology:
 
 ```bash
-$ heron submit local target/heron-java-streamlet-api-example-latest-jar-with-dependencies.jar \
-  io.streaml.heron.streamlet.WordCountStreamletTopology \
-  WordCountStreamletTopology \
-  effectively-once
+$ heron submit local \
+  ~/.heron/examples/heron-streamlet-examples.jar \
+  org.apache.heron.examples.streamlet.WireRequestsTopology \
+  wireRequestsTopology \
+  effectively-once
 ```
 
 The other options are `at-most-once` and `at-least-once`. If you don't explicitly select the delivery semantics, at-least-once semantics will be applied.
diff --git a/website2/docs/topology-development-topology-api-python.md b/website2/docs/topology-development-topology-api-python.md
index d898a66..77da3b6 100644
--- a/website2/docs/topology-development-topology-api-python.md
+++ b/website2/docs/topology-development-topology-api-python.md
@@ -181,8 +181,7 @@ $ heron submit local \
 Note the `-` in this submission command. If you define a topology by subclassing `TopologyBuilder` you do not need to instruct Heron where your main method is located.
 
 > #### Example topologies buildable as PEXs
-> * See [this repo](https://github.com/streamlio/pants-dev-environment) for an example of a Heron topology written in Python and deployable as a Pants-packaged PEX.
-> * See [this repo](https://github.com/streamlio/bazel-dev-environment) for an example of a Heron topology written in Python and deployable as a Bazel-packaged PEX.
+> * TODO
 
 ## Defining a topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class
 
@@ -323,45 +322,6 @@ class DynamicOutputField(Topology):
 You can also declare outputs in the `add_spout()` and the `add_bolt()`
 method for the `TopologyBuilder` in the same way.
 
-## Example topologies
-
-There are a number of example topologies that you can peruse in the [`examples/src/python`]({{% githubMaster %}}/examples/src/python) directory of the [Heron repo]({{% githubMaster %}}):
-
-Topology | File | Description
-:--------|:-----|:-----------
-Word count | [`word_count_topology.py`]({{% githubMaster %}}/examples/src/python/word_count_topology.py) | The [`WordSpout`]({{% githubMaster %}}/examples/src/python/spout/word_spout.py) spout emits random words from a list, while the [`CountBolt`]({{% githubMaster %}}/examples/src/python/bolt/count_bolt.py) bolt counts the number of words that have been emitted.
-Multiple streams | [`multi_stream_topology.py`]({{% githubMaster %}}/examples/src/python/multi_stream_topology.py) | The [`MultiStreamSpout`]({{% githubMaster %}}/examples/src/python/spout/multi_stream_spout.py) emits multiple streams to downstream bolts.
-Half acking | [`half_acking_topology.py`]({{% githubMaster %}}/examples/src/python/half_acking_topology.py) | The [`HalfAckBolt`]({{% githubMaster %}}/examples/src/python/bolt/half_ack_bolt.py) acks only half of all received tuples.
-Custom grouping | [`custom_grouping_topology.py`]({{% githubMaster %}}/examples/src/python/custom_grouping_topology.py) | The [`SampleCustomGrouping`]({{% githubMaster %}}/examples/src/python/custom_grouping_topology.py#L26) class provides a custom field grouping.
-
-You can build the respective PEXs for these topologies using the following commands:
-
-```shell
-$ bazel build examples/src/python:word_count
-$ bazel build examples/src/python:multi_stream
-$ bazel build examples/src/python:half_acking
-$ bazel build examples/src/python:custom_grouping
-```
-
-All built PEXs will be stored in `bazel-bin/examples/src/python`. You can submit them to Heron like so:
-
-```shell
-$ heron submit local \
-  bazel-bin/examples/src/python/word_count.pex - \
-  WordCount
-$ heron submit local \
-  bazel-bin/examples/src/python/multi_stream.pex \
-  heron.examples.src.python.multi_stream_topology.MultiStream
-$ heron submit local \
-  bazel-bin/examples/src/python/half_acking.pex - \
-  HalfAcking
-$ heron submit local \
-  bazel-bin/examples/src/python/custom_grouping.pex \
-  heron.examples.src.python.custom_grouping_topology.CustomGrouping
-```
-
-By default, the `submit` command also activates topologies. To disable this behavior, set the `--deploy-deactivated` flag.
-
 ## Bolts 
 
  Bolts must implement the `Bolt` interface, which has the following methods.
@@ -535,299 +495,7 @@ class WordSpout(Spout):
         self.emit([word])
 ```
 
-## Installing heronpy
-
-You can install the `heronpy` library with either `pip` or `easy_install`:
-
-```shell
-$ pip install heronpy
-$ easy_install heronpy
-```
-
-Then you can include `heronpy` in your project files. Here's an example:
-
-```python
-from heronpy.api.bolt.bolt import Bolt
-from heronpy.api.spout.spout import Spout
-from heronpy.api.topology import Topology
-```
-
-## Writing topologies in Python
-
-Heron [topologies](heron-topologies-concepts) are networks of [spouts](../spouts) that pull data into a topology and [bolts](../bolts) that process that ingested data.
-
-> You can see how to create Python spouts in the [Implementing Python Spouts](../spouts) guide and how to create Python bolts in the [Implementing Python Bolts](../bolts) guide.
-
-Once you've defined spouts and bolts for a topology, you can then compose the topology in one of two ways:
-
-* You can use the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class inside of a main function.
-
-    Here's an example:
-
-    ```python
-    #!/usr/bin/env python
-    from heronpy.api.topology import TopologyBuilder
-
-
-    if __name__ == "__main__":
-        builder = TopologyBuilder("MyTopology")
-        # Add spouts and bolts
-        builder.build_and_submit()
-    ```
-
-* You can subclass the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class.
-
-    Here's an example:
-
-    ```python
-    from heronpy.api.stream import Grouping
-    from heronpy.api.topology import Topology
-
-
-    class MyTopology(Topology):
-        my_spout = WordSpout.spec(par=2)
-        my_bolt = CountBolt.spec(par=3, inputs={my_spout: Grouping.fields("word")})
-    ```
-
-## Defining topologies using the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class
-
-If you create a Python topology using a [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder), you need to instantiate a `TopologyBuilder` inside of a standard Python main function, like this:
-
-```python
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    builder = TopologyBuilder("MyTopology")
-```
-
-Once you've created a `TopologyBuilder` object, you can add [bolts](../bolts) using the [`add_bolt`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.add_bolt) method and [spouts](../spouts) using the [`add_spout`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.add_spout) method. Here's an example:
-
-```python
-builder = TopologyBuilder("MyTopology")
-builder.add_bolt("my_bolt", CountBolt, par=3)
-builder.add_spout("my_spout", WordSpout, par=2)
-```
-
-Both the `add_bolt` and `add_spout` methods return the corresponding [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) object.
-
-The `add_bolt` method takes four arguments and an optional `config` parameter:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this bolt | |
-`bolt_cls` | class | The subclass of [`Bolt`](/api/python/bolt/bolt.m.html#heronpy.bolt.bolt.Bolt) that defines this bolt | |
-`par` | `int` | The number of instances of this bolt in the topology | |
-`inputs` | `dict` or `list` | Either a `dict` mapping from [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) to [`Grouping`](/api/python/stream.m.html#heronpy.stream.Grouping) *or* a list of [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s, in which case the [`shuffle`](/api/python/stream.m.html#heronpy.stream.Grouping.SHUFFLE) grouping is used | |
-`config` | `dict` | Specifies the configuration for this bolt | `None`
-
-The `add_spout` method takes three arguments and an optional `config` parameter:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this spout | |
-`spout_cls` | class | The subclass of [`Spout`](/api/python/spout/spout.m.html#heronpy.spout.spout.Spout) that defines this spout | |
-`par` | `int` | The number of instances of this spout in the topology | |
-`config` | `dict` | Specifies the configuration for this spout | `None`
-
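For context on the groupings accepted by `inputs`: a fields grouping routes all tuples with the same value of the named field to the same bolt instance, while a shuffle grouping spreads tuples arbitrarily. A framework-free sketch of fields routing (illustrative only; Heron's actual partitioning scheme may differ):

```python
import hashlib

def fields_route(value, num_instances):
    """Pick a bolt instance by hashing the grouping field's value.
    Illustrative sketch, not heronpy internals."""
    digest = hashlib.md5(str(value).encode()).hexdigest()
    return int(digest, 16) % num_instances

# The same word always lands on the same instance, so a per-word
# counter held by each instance never splits a word's count.
assert fields_route("heron", 3) == fields_route("heron", 3)
```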
-### Example
-
-The following is an example implementation of a word count topology in Python that subclasses [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder).
-
-```python
-from your_spout import WordSpout
-from your_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    builder = TopologyBuilder("WordCountTopology")
-    # piece together the topology
-    word_spout = builder.add_spout("word_spout", WordSpout, par=2)
-    count_bolt = builder.add_bolt("count_bolt", CountBolt, par=2, inputs={word_spout: Grouping.fields("word")})
-    # submit the topology
-    builder.build_and_submit()
-```
-
-Note that arguments to the main method can be passed by providing them in the
-`heron submit` command.
-
-### Topology-wide configuration
-
-If you're building a Python topology using a `TopologyBuilder`, you can specify configuration for the topology using the [`set_config`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.set_config) method. A topology's config is a `dict` in which the keys are a series of constants from the [`api_constants`](/api/python/api_constants.m.html) module and the values are configuration values for those parameters.
-
-Here's an example:
-
-```python
-from heronpy.api import api_constants
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    topology_config = {
-        api_constants.TOPOLOGY_ENABLE_MESSAGE_TIMEOUTS: True
-    }
-    builder = TopologyBuilder("MyTopology")
-    builder.set_config(topology_config)
-    # Add bolts and spouts, etc.
-```
-
-### Launching the topology
-
-If you want to [submit](../../../operators/heron-cli#submitting-a-topology) Python topologies to a Heron cluster, they need to be packaged as a [PEX](https://pex.readthedocs.io/en/stable/whatispex.html) file. In order to produce PEX files, we recommend using a build tool like [Pants](http://www.pantsbuild.org/python_readme.html) or [Bazel](https://github.com/benley/bazel_rules_pex).
-
-If you defined your topology by subclassing the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class and built a `word_count.pex` file for that topology in the `~/topology` folder, you can submit the topology to a cluster called `local` like this:
-
-```bash
-$ heron submit local \
-  ~/topology/word_count.pex \
-  - # No class specified
-```
-
-Note the `-` in this submission command. If you define a topology by subclassing `TopologyBuilder` you do not need to instruct Heron where your main method is located.
-
-> #### Example topologies buildable as PEXs
-> * See [this repo](https://github.com/streamlio/pants-dev-environment) for an example of a Heron topology written in Python and deployable as a Pants-packaged PEX.
-> * See [this repo](https://github.com/streamlio/bazel-dev-environment) for an example of a Heron topology written in Python and deployable as a Bazel-packaged PEX.
-
-## Defining a topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class
-
-If you create a Python topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class, you need to create a new topology class, like this:
-
-```python
-from my_spout import WordSpout
-from my_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import Topology
-
-
-class MyTopology(Topology):
-    my_spout = WordSpout.spec(par=2)
-    my_bolt_inputs = {my_spout: Grouping.fields("word")}
-    my_bolt = CountBolt.spec(par=3, inputs=my_bolt_inputs)
-```
-
-All you need to do is place [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s as the class attributes
-of your topology class, which are returned by the `spec()` method of
-your spout or bolt class. You do *not* need to run a `build` method or anything like that; the `Topology` class will automatically detect which spouts and bolts are included in the topology.
-
-> If you use this method to define a new Python topology, you do *not* need to have a main function.
-
-For bolts, the [`spec`](/api/python/bolt/bolt.m.html#heronpy.bolt.bolt.Bolt.spec) method takes four optional arguments:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this bolt or `None` if you want to use the variable name of the returned `HeronComponentSpec` as the unique identifier for this bolt | `None` |
-`inputs` | `dict` or `list` | Either a `dict` mapping from [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) to [`Grouping`](/api/python/stream.m.html#heronpy.stream.Grouping) *or* a list of [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s, in which case the [`shuffle`](/api/python/stream.m.html#heronpy.stream.Grouping.SHUFFLE) grouping is used | `None` |
-`par` | `int` | The number of instances of this bolt in the topology | `1` |
-`config` | `dict` | Specifies the configuration for this bolt | `None`
-
-
-For spouts, the [`spec`](/api/python/spout/spout.m.html#heronpy.spout.spout.Spout.spec) method takes three optional arguments:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this spout or `None` if you want to use the variable name of the returned `HeronComponentSpec` as the unique identifier for this spout | `None` |
-`par` | `int` | The number of instances of this spout in the topology | `1` |
-`config` | `dict` | Specifies the configuration for this spout | `None`
-
-### Example
-
-Here's an example topology definition with one spout and one bolt:
-
-```python
-from my_spout import WordSpout
-from my_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import Topology
-
-
-class WordCount(Topology):
-    word_spout = WordSpout.spec(par=2)
-    count_bolt = CountBolt.spec(par=2, inputs={word_spout: Grouping.fields("word")})
-```
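Stripped of the Heron plumbing, the counting side of this topology is just a per-instance counter keyed by word. A sketch of the core logic (not the actual `CountBolt` source):

```python
from collections import Counter

class CountCore:
    """What a count bolt does with each incoming tuple, minus Heron's API."""
    def __init__(self):
        self.counts = Counter()

    def process(self, word):
        self.counts[word] += 1
        return (word, self.counts[word])  # what the bolt would emit downstream

core = CountCore()
for w in ["heron", "storm", "heron"]:
    core.process(w)
assert core.counts["heron"] == 2
```

Because the spout's output is fields-grouped on `"word"`, every occurrence of a given word reaches the same bolt instance, so these per-instance counters stay consistent.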
-
-### Launching
-
-If you defined your topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class,
-your main Python file should *not* contain a main method. You will, however, need to instruct Heron which class contains your topology definition.
-
-Let's say that you've defined a topology by subclassing `Topology` and built a PEX stored in `~/topology/dist/word_count.pex`. The class containing your topology definition is `topology.word_count.WordCount`. You can submit the topology to a cluster called `local` like this:
-
-```bash
-$ heron submit local \
-  ~/topology/dist/word_count.pex \
-  topology.word_count.WordCount \
-  WordCountTopology
-```
-
-### Topology-wide configuration
-
-If you're building a Python topology by subclassing `Topology`, you can specify configuration for the topology by setting a `config` class attribute. A topology's config is a `dict` in which the keys are a series of constants from the [`api_constants`](/api/python/api_constants.m.html) module and the values are configuration values for those parameters.
-
-Here's an example:
-
-```python
-from heronpy.api.topology import Topology
-from heronpy.api import api_constants
-
-
-class MyTopology(Topology):
-    config = {
-        api_constants.TOPOLOGY_ENABLE_MESSAGE_TIMEOUTS: True
-    }
-    # Add bolts and spouts, etc.
-```
-
-## Multiple streams
-
-To specify that a component has multiple output streams, instead of using a list of
-strings for `outputs`, you can specify a list of `Stream` objects, in the following manner.
-
-```python
-class MultiStreamSpout(Spout):
-    outputs = [
-        Stream(fields=["normal", "fields"], name="default"),
-        Stream(fields=["error_message"], name="error_stream"),
-    ]
-```
-
-To select one of these streams as the input for your bolt, you can simply
-use `[]` to specify the stream you want. Without any stream specified, the `default`
-stream will be used.
-
-```python
-class MultiStreamTopology(Topology):
-    spout = MultiStreamSpout.spec()
-    error_bolt = ErrorBolt.spec(inputs={spout["error_stream"]: Grouping.LOWEST})
-    consume_bolt = ConsumeBolt.spec(inputs={spout: Grouping.SHUFFLE})
-```
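Subscribing to a named stream simply means tuples are dispatched by stream id. Conceptually (a toy dispatcher, not Heron internals):

```python
# Toy dispatcher: route an emitted tuple to the bolts subscribed to its stream.
subscriptions = {
    "default": ["consume_bolt"],
    "error_stream": ["error_bolt"],
}

def dispatch(stream_name, tup):
    # Streams with no subscribers drop their tuples.
    return [(bolt, tup) for bolt in subscriptions.get(stream_name, [])]

assert dispatch("error_stream", ("boom",)) == [("error_bolt", ("boom",))]
assert dispatch("default", ("ok", 1)) == [("consume_bolt", ("ok", 1))]
```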
-
-## Declaring output fields using the `spec()` method
-
-In Python topologies, the output fields of your spouts and bolts
-need to be declared by placing `outputs` class attributes, as there is
-no `declareOutputFields()` method. `heronpy` enables you to dynamically declare output fields as a list using the
-`optional_outputs` argument in the `spec()` method.
-
-This is useful in a situation like the one below.
-
-```python
-class IdentityBolt(Bolt):
-    # Output fields cannot be declared statically; they depend on the input tuple
-    def process(self, tup):
-        self.emit([tup.values])
-
-
-class DynamicOutputField(Topology):
-    spout = WordSpout.spec()
-    bolt = IdentityBolt.spec(inputs={spout: Grouping.ALL}, optional_outputs=["word"])
-```
-
-You can also declare outputs in the `add_spout()` and the `add_bolt()`
-method for the `TopologyBuilder` in the same way.
+By default, the `submit` command also activates topologies. To disable this behavior, set the `--deploy-deactivated` flag.
 
 ## Example topologies
 
@@ -867,3 +535,4 @@ $ heron submit local \
 ```
 
 By default, the `submit` command also activates topologies. To disable this behavior, set the `--deploy-deactivated` flag.
+
diff --git a/website2/website/sidebars.json b/website2/website/sidebars.json
index b467216..d46ef12 100755
--- a/website2/website/sidebars.json
+++ b/website2/website/sidebars.json
@@ -23,7 +23,6 @@
     ],
     "Guides": [
       "guides-effectively-once-java-topologies",
-      "guides-python-topologies",
       "guides-data-model",
       "guides-tuple-serialization",
       "guides-ui-guide",
diff --git a/website2/website/versioned_docs/version-0.20.0-incubating/compiling-code-organization.md b/website2/website/versioned_docs/version-0.20.0-incubating/compiling-code-organization.md
index aa423f9..05a12bd 100644
--- a/website2/website/versioned_docs/version-0.20.0-incubating/compiling-code-organization.md
+++ b/website2/website/versioned_docs/version-0.20.0-incubating/compiling-code-organization.md
@@ -23,7 +23,7 @@ original_id: compiling-code-organization
 
 This document contains information about the Heron codebase intended primarily
 for developers who want to contribute to Heron. The Heron codebase lives on
-[github]({{% githubMaster %}}).
+[github](https://github.com/apache/incubator-heron/tree/master).
 
 If you're looking for documentation about developing topologies for a Heron
 cluster, see [Building Topologies](topology-development-topology-api-java) instead.
@@ -36,12 +36,11 @@ The primary programming languages for Heron are C++, Java, and Python.
 [Topology Master](heron-architecture#topology-master), and
 [Stream Manager](heron-architecture#stream-manager).
 
-* **Java 8** is used primarily for Heron's [topology
+* **Java 11** is used primarily for Heron's [topology
 API](heron-topology-concepts), and [Heron Instance](heron-architecture#heron-instance).
 It is currently the only language in which topologies can be written. Instructions can be found
 in [Building Topologies](../../developers/java/topologies), while documentation for the Java
-API can be found [here](/api/org/apache/heron/api/topology/package-summary.html). Please note that Heron topologies do not
-require Java 8 and can be written in Java 7 or later.
+API can be found [here](/api/org/apache/heron/api/topology/package-summary.html). Please note that Heron topologies do not require Java 11 and can be written in Java 7 or later.
 
 * **Python 2** (specifically 2.7) is used primarily for Heron's [CLI interface](user-manuals-heron-cli) and UI components such as [Heron UI](user-manuals-heron-ui) and the [Heron Tracker](user-manuals-heron-tracker-runbook).
 
@@ -53,7 +52,7 @@ Information on setting up and using Bazel for Heron can be found in [Compiling H
 * **Inter-component communication** --- Heron uses [Protocol
 Buffers](https://developers.google.com/protocol-buffers/?hl=en) for
 communication between components. Most `.proto` definition files can be found in
-[`heron/proto`]({{% githubMaster %}}/heron/proto).
+[`heron/proto`](https://github.com/apache/incubator-heron/tree/master/heron/proto).
 
 * **Cluster coordination** --- Heron relies heavily on ZooKeeper for cluster
 coordination for distributed deployment, be it for [Aurora](schedulers-aurora-cluster) or for a [custom
@@ -63,7 +62,7 @@ Management](#state-management) section below.
 
 ## Common Utilities
 
-The [`heron/common`]({{% githubMaster %}}/heron/common) contains a variety of
+The [`heron/common`](https://github.com/apache/incubator-heron/tree/master/heron/common) contains a variety of
 utilities for each of Heron's languages, including useful constants, file
 utilities, networking interfaces, and more.
 
@@ -72,7 +71,7 @@ utilities, networking interfaces, and more.
 Heron supports two cluster schedulers out of the box:
 [Aurora](schedulers-aurora-cluster) and a [local
 scheduler](schedulers-local). The Java code for each of those
-schedulers can be found in [`heron/schedulers`]({{% githubMaster %}}/heron/schedulers)
+schedulers can be found in [`heron/schedulers`](https://github.com/apache/incubator-heron/tree/master/heron/schedulers)
 , while the underlying scheduler API can be found [here](/api/org/apache/heron/spi/scheduler/package-summary.html)
 
 Info on custom schedulers can be found in [Implementing a Custom
@@ -85,10 +84,10 @@ Deployment](schedulers-local).
 
 The parts of Heron's codebase related to
 [ZooKeeper](http://zookeeper.apache.org/) are mostly contained in
-[`heron/state`]({{% githubMaster %}}/heron/state). There are ZooKeeper-facing
-interfaces for [C++]({{% githubMaster %}}/heron/state/src/cpp),
-[Java]({{% githubMaster %}}/heron/state/src/java), and
-[Python]({{% githubMaster %}}/heron/state/src/python) that are used in a variety of
+[`heron/state`](https://github.com/apache/incubator-heron/tree/master/heron/state). There are ZooKeeper-facing
+interfaces for [C++](https://github.com/apache/incubator-heron/tree/master/heron/state/src/cpp),
+[Java](https://github.com/apache/incubator-heron/tree/master/heron/state/src/java), and
+[Python](https://github.com/apache/incubator-heron/tree/master/heron/state/src/python) that are used in a variety of
 Heron components.
 
 ## Topology Components
@@ -97,25 +96,25 @@ Heron components.
 
 The C++ code for Heron's [Topology
 Master](heron-architecture#topology-master) is written in C++ can be
-found in [`heron/tmaster`]({{% githubMaster %}}/heron/tmaster).
+found in [`heron/tmaster`](https://github.com/apache/incubator-heron/tree/master/heron/tmaster).
 
 ### Stream Manager
 
 The C++ code for Heron's [Stream
 Manager](heron-architecture#stream-manager) can be found in
-[`heron/stmgr`]({{% githubMaster %}}/heron/stmgr).
+[`heron/stmgr`](https://github.com/apache/incubator-heron/tree/master/heron/stmgr).
 
 ### Heron Instance
 
 The Java code for [Heron
 instances](heron-architecture#heron-instance) can be found in
-[`heron/instance`]({{% githubMaster %}}/heron/instance).
+[`heron/instance`](https://github.com/apache/incubator-heron/tree/master/heron/instance).
 
 ### Metrics Manager
 
 The Java code for Heron's [Metrics
 Manager](heron-architecture#metrics-manager) can be found in
-[`heron/metricsmgr`]({{% githubMaster %}}/heron/metricsmgr).
+[`heron/metricsmgr`](https://github.com/apache/incubator-heron/tree/master/heron/metricsmgr).
 
 If you'd like to implement your own custom metrics handler (known as a **metrics
 sink**), see [Implementing a Custom Metrics Sink](extending-heron-metric-sink).
@@ -125,7 +124,7 @@ sink**), see [Implementing a Custom Metrics Sink](extending-heron-metric-sink).
 ### Topology API
 
 Heron's API for writing topologies is written in Java. The code for this API can
-be found in [`heron/api`]({{% githubMaster %}}/heron/api).
+be found in [`heron/api`](https://github.com/apache/incubator-heron/tree/master/heron/api).
 
 Documentation for writing topologies can be found in [Building
 Topologies](topology-development-topology-api-java), while API documentation can be found
@@ -144,7 +143,7 @@ The Java API for simulator can be found in
 Heron's codebase includes a wide variety of example
 [topologies](heron-topology-concepts) built using Heron's topology API for
 Java. Those examples can be found in
-[`heron/examples`]({{% githubMaster %}}/heron/examples).
+[`heron/examples`](https://github.com/apache/incubator-heron/tree/master/heron/examples).
 
 ## User Interface Components
 
@@ -154,45 +153,45 @@ Heron has a tool called `heron` that is used to both provide a CLI interface
 for [managing topologies](user-manuals-heron-cli) and to perform much of
 the heavy lifting behind assembling physical topologies in your cluster.
 The Python code for `heron` can be found in
-[`heron/tools/cli`]({{% githubMaster %}}/heron/tools/cli).
+[`heron/tools/cli`](https://github.com/apache/incubator-heron/tree/master/heron/tools/cli).
 
 Sample configurations for different Heron schedulers
 
-* [Local scheduler](schedulers-local) config can be found in [`heron/config/src/yaml/conf/local`]({{% githubMaster %}}/heron/config/src/yaml/conf/local),
-* [Aurora scheduler](schedulers-aurora-cluster) config can be found [`heron/config/src/yaml/conf/aurora`]({{% githubMaster %}}/heron/config/src/yaml/conf/aurora).
+* [Local scheduler](schedulers-local) config can be found in [`heron/config/src/yaml/conf/local`](https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/local),
+* [Aurora scheduler](schedulers-aurora-cluster) config can be found in [`heron/config/src/yaml/conf/aurora`](https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/aurora).
 
 ### Heron Tracker
 
 The Python code for the [Heron Tracker](user-manuals-heron-tracker-runbook) can be
-found in [`heron/tools/tracker`]({{% githubMaster %}}/heron/tools/tracker).
+found in [`heron/tools/tracker`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker).
 
 The Tracker is a web server written in Python. It relies on the
 [Tornado](http://www.tornadoweb.org/en/stable/) framework. You can add new HTTP
 routes to the Tracker in
-[`main.py`]({{% githubMaster %}}/heron/tools/tracker/src/python/main.py) and
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker/src/python/main.py) and
 corresponding handlers in the
-[`handlers`]({{% githubMaster %}}/heron/tools/tracker/src/python/handlers) directory.
+[`handlers`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker/src/python/handlers) directory.
 
 ### Heron UI
 
 The Python code for the [Heron UI](user-manuals-heron-ui) can be found in
-[`heron/tools/ui`]({{% githubMaster %}}/heron/tools/ui).
+[`heron/tools/ui`](https://github.com/apache/incubator-heron/tree/master/heron/tools/ui).
 
 Like Heron Tracker, Heron UI is a web server written in Python that relies on
 the [Tornado](http://www.tornadoweb.org/en/stable/) framework. You can add new
 HTTP routes to Heron UI in
-[`main.py`]({{% githubMaster %}}/heron/web/source/python/main.py) and corresponding
-handlers in the [`handlers`]({{% githubMaster %}}/heron/web/source/python/handlers)
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/web/source/python/main.py) and corresponding
+handlers in the [`handlers`](https://github.com/apache/incubator-heron/tree/master/heron/web/source/python/handlers)
 directory.
 
 ### Heron Shell
 
 The Python code for the [Heron Shell](user-manuals-heron-shell) can be
-found in [`heron/shell`]({{% githubMaster %}}/heron/shell). The HTTP handlers and
+found in [`heron/shell`](https://github.com/apache/incubator-heron/tree/master/heron/shell). The HTTP handlers and
 web server are defined in
-[`main.py`]({{% githubMaster %}}/heron/shell/src/python/main.py) while the HTML,
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/shell/src/python/main.py) while the HTML,
 JavaScript, CSS, and images for the web UI can be found in the
-[`assets`]({{% githubMaster %}}/heron/shell/assets) directory.
+[`assets`](https://github.com/apache/incubator-heron/tree/master/heron/shell/assets) directory.
 
 ## Tests
 
diff --git a/website2/website/versioned_docs/version-0.20.0-incubating/topology-development-topology-api-python.md b/website2/website/versioned_docs/version-0.20.0-incubating/topology-development-topology-api-python.md
index 7a3e77f..ab29756 100644
--- a/website2/website/versioned_docs/version-0.20.0-incubating/topology-development-topology-api-python.md
+++ b/website2/website/versioned_docs/version-0.20.0-incubating/topology-development-topology-api-python.md
@@ -182,8 +182,7 @@ $ heron submit local \
 Note the `-` in this submission command. If you define a topology by subclassing `TopologyBuilder` you do not need to instruct Heron where your main method is located.
 
 > #### Example topologies buildable as PEXs
-> * See [this repo](https://github.com/streamlio/pants-dev-environment) for an example of a Heron topology written in Python and deployable as a Pants-packaged PEX.
-> * See [this repo](https://github.com/streamlio/bazel-dev-environment) for an example of a Heron topology written in Python and deployable as a Bazel-packaged PEX.
+> * TODO
 
 ## Defining a topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class
 
@@ -324,45 +323,6 @@ class DynamicOutputField(Topology):
 You can also declare outputs in the `add_spout()` and the `add_bolt()`
 method for the `TopologyBuilder` in the same way.
 
-## Example topologies
-
-There are a number of example topologies that you can peruse in the [`examples/src/python`]({{% githubMaster %}}/examples/src/python) directory of the [Heron repo]({{% githubMaster %}}):
-
-Topology | File | Description
-:--------|:-----|:-----------
-Word count | [`word_count_topology.py`]({{% githubMaster %}}/examples/src/python/word_count_topology.py) | The [`WordSpout`]({{% githubMaster %}}/examples/src/python/spout/word_spout.py) spout emits random words from a list, while the [`CountBolt`]({{% githubMaster %}}/examples/src/python/bolt/count_bolt.py) bolt counts the number of words that have been emitted.
-Multiple streams | [`multi_stream_topology.py`]({{% githubMaster %}}/examples/src/python/multi_stream_topology.py) | The [`MultiStreamSpout`]({{% githubMaster %}}/examples/src/python/spout/multi_stream_spout.py) emits multiple streams to downstream bolts.
-Half acking | [`half_acking_topology.py`]({{% githubMaster %}}/examples/src/python/half_acking_topology.py) | The [`HalfAckBolt`]({{% githubMaster %}}/examples/src/python/bolt/half_ack_bolt.py) acks only half of all received tuples.
-Custom grouping | [`custom_grouping_topology.py`]({{% githubMaster %}}/examples/src/python/custom_grouping_topology.py) | The [`SampleCustomGrouping`]({{% githubMaster %}}/examples/src/python/custom_grouping_topology.py#L26) class provides a custom field grouping.
-
-You can build the respective PEXs for these topologies using the following commands:
-
-```shell
-$ bazel build examples/src/python:word_count
-$ bazel build examples/src/python:multi_stream
-$ bazel build examples/src/python:half_acking
-$ bazel build examples/src/python:custom_grouping
-```
-
-All built PEXs will be stored in `bazel-bin/examples/src/python`. You can submit them to Heron like so:
-
-```shell
-$ heron submit local \
-  bazel-bin/examples/src/python/word_count.pex - \
-  WordCount
-$ heron submit local \
-  bazel-bin/examples/src/python/multi_stream.pex \
-  heron.examples.src.python.multi_stream_topology.MultiStream
-$ heron submit local \
-  bazel-bin/examples/src/python/half_acking.pex - \
-  HalfAcking
-$ heron submit local \
-  bazel-bin/examples/src/python/custom_grouping.pex \
-  heron.examples.src.python.custom_grouping_topology.CustomGrouping
-```
-
-By default, the `submit` command also activates topologies. To disable this behavior, set the `--deploy-deactivated` flag.
-
 ## Bolts 
 
  Bolts must implement the `Bolt` interface, which has the following methods.
@@ -536,299 +496,7 @@ class WordSpout(Spout):
         self.emit([word])
 ```
 
-## Installing `heronpy`
-
-To write Heron topologies in Python, first install the `heronpy` package using either `pip` or `easy_install`:
-
-```shell
-$ pip install heronpy
-$ easy_install heronpy
-```
-
-Then you can include `heronpy` in your project files. Here's an example:
-
-```python
-from heronpy.api.bolt.bolt import Bolt
-from heronpy.api.spout.spout import Spout
-from heronpy.api.topology import Topology
-```
-
-## Writing topologies in Python
-
-Heron [topologies](heron-topology-concepts) are networks of [spouts](../spouts) that pull data into a topology and [bolts](../bolts) that process that ingested data.
-
-> You can see how to create Python spouts in the [Implementing Python Spouts](../spouts) guide and how to create Python bolts in the [Implementing Python Bolts](../bolts) guide.
-
-Once you've defined spouts and bolts for a topology, you can then compose the topology in one of two ways:
-
-* You can use the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class inside of a main function.
-
-    Here's an example:
-
-    ```python
-    #!/usr/bin/env python
-    from heronpy.api.topology import TopologyBuilder
-
-
-    if __name__ == "__main__":
-        builder = TopologyBuilder("MyTopology")
-        # Add spouts and bolts
-        builder.build_and_submit()
-    ```
-
-* You can subclass the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class.
-
-    Here's an example:
-
-    ```python
-    from heronpy.api.stream import Grouping
-    from heronpy.api.topology import Topology
-
-
-    class MyTopology(Topology):
-        my_spout = WordSpout.spec(par=2)
-        my_bolt = CountBolt.spec(par=3, inputs={spout: Grouping.fields("word")})
-    ```
-
-## Defining topologies using the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class
-
-If you create a Python topology using a [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder), you need to instantiate a `TopologyBuilder` inside of a standard Python main function, like this:
-
-```python
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    builder = TopologyBuilder("MyTopology")
-```
-
-Once you've created a `TopologyBuilder` object, you can add [bolts](../bolts) using the [`add_bolt`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.add_bolt) method and [spouts](../spouts) using the [`add_spout`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.add_spout) method. Here's an example:
-
-```python
-builder = TopologyBuilder("MyTopology")
-builder.add_bolt("my_bolt", CountBolt, par=3)
-builder.add_spout("my_spout", WordSpout, par=2)
-```
-
-Both the `add_bolt` and `add_spout` methods return the corresponding [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) object.
-
-The `add_bolt` method takes four arguments and an optional `config` parameter:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this bolt | |
-`bolt_cls` | class | The subclass of [`Bolt`](/api/python/bolt/bolt.m.html#heronpy.bolt.bolt.Bolt) that defines this bolt | |
-`par` | `int` | The number of instances of this bolt in the topology | |
-`inputs` | `dict` or `list` | Either a `dict` mapping from [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) to [`Grouping`](/api/python/stream.m.html#heronpy.stream.Grouping) *or* a list of [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s, in which case the [`shuffle`](/api/python/stream.m.html#heronpy.stream.Grouping.SHUFFLE) grouping is used | |
-`config` | `dict` | Specifies the configuration for this bolt | `None`
-
-The `add_spout` method takes three arguments and an optional `config` parameter:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this spout | |
-`spout_cls` | class | The subclass of [`Spout`](/api/python/spout/spout.m.html#heronpy.spout.spout.Spout) that defines this spout | |
-`par` | `int` | The number of instances of this spout in the topology | |
-`config` | `dict` | Specifies the configuration for this spout | `None`
-
-### Example
-
-The following is an example implementation of a word count topology in Python that uses [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder).
-
-```python
-from your_spout import WordSpout
-from your_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    builder = TopologyBuilder("WordCountTopology")
-    # piece together the topology
-    word_spout = builder.add_spout("word_spout", WordSpout, par=2)
-    count_bolt = builder.add_bolt("count_bolt", CountBolt, par=2, inputs={word_spout: Grouping.fields("word")})
-    # submit the topology
-    builder.build_and_submit()
-```
-
-Note that arguments to the main method can be passed by providing them in the
-`heron submit` command.
-
-### Topology-wide configuration
-
-If you're building a Python topology using a `TopologyBuilder`, you can specify configuration for the topology using the [`set_config`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.set_config) method. A topology's config is a `dict` in which the keys are a series of constants from the [`api_constants`](/api/python/api_constants.m.html) module and the values are configuration values for those parameters.
-
-Here's an example:
-
-```python
-from heronpy.api import api_constants
-from heronpy.api.topology import TopologyBuilder
-
-
-if __name__ == "__main__":
-    topology_config = {
-        api_constants.TOPOLOGY_ENABLE_MESSAGE_TIMEOUTS: True
-    }
-    builder = TopologyBuilder("MyTopology")
-    builder.set_config(topology_config)
-    # Add bolts and spouts, etc.
-```
-
-### Launching the topology
-
-If you want to [submit](../../../operators/heron-cli#submitting-a-topology) Python topologies to a Heron cluster, they need to be packaged as a [PEX](https://pex.readthedocs.io/en/stable/whatispex.html) file. In order to produce PEX files, we recommend using a build tool like [Pants](http://www.pantsbuild.org/python_readme.html) or [Bazel](https://github.com/benley/bazel_rules_pex).
-
-Say you've defined your topology using the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class and built a `word_count.pex` file for that topology in the `~/topology` folder. You can submit the topology to a cluster called `local` like this:
-
-```bash
-$ heron submit local \
-  ~/topology/word_count.pex \
-  - # No class specified
-```
-
-Note the `-` in this submission command. If you define a topology using `TopologyBuilder`, the main method in your PEX builds and submits the topology, so you do not need to point Heron at a topology class.
-
-> #### Example topologies buildable as PEXs
-> * See [this repo](https://github.com/streamlio/pants-dev-environment) for an example of a Heron topology written in Python and deployable as a Pants-packaged PEX.
-> * See [this repo](https://github.com/streamlio/bazel-dev-environment) for an example of a Heron topology written in Python and deployable as a Bazel-packaged PEX.
-
-## Defining a topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class
-
-If you create a Python topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class, you need to create a new topology class, like this:
-
-```python
-from my_spout import WordSpout
-from my_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import Topology
-
-
-class MyTopology(Topology):
-    my_spout = WordSpout.spec(par=2)
-    my_bolt_inputs = {my_spout: Grouping.fields("word")}
-    my_bolt = CountBolt.spec(par=3, inputs=my_bolt_inputs)
-```
-
-All you need to do is set class attributes on your topology class to the
-[`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s returned by the `spec()` method of
-your spout or bolt classes. You do *not* need to run a `build` method or anything like that; the `Topology` class will automatically detect which spouts and bolts are included in the topology.
-
-> If you use this method to define a new Python topology, you do *not* need to have a main function.
-
-For bolts, the [`spec`](/api/python/bolt/bolt.m.html#heronpy.bolt.bolt.Bolt.spec) method takes four optional arguments:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this bolt, or `None` if you want to use the variable name of the returned `HeronComponentSpec` as the unique identifier for this bolt | `None` |
-`inputs` | `dict` or `list` | Either a `dict` mapping from [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) to [`Grouping`](/api/python/stream.m.html#heronpy.stream.Grouping) *or* a list of [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s, in which case the [`shuffle`](/api/python/stream.m.html#heronpy.stream.Grouping.SHUFFLE) grouping is used | |
-`par` | `int` | The number of instances of this bolt in the topology | `1` |
-`config` | `dict` | Specifies the configuration for this bolt | `None`
-
-For spouts, the [`spec`](/api/python/spout/spout.m.html#heronpy.spout.spout.Spout.spec) method takes three optional arguments:
-
-Argument | Data type | Description | Default
-:--------|:----------|:------------|:-------
-`name` | `str` | The unique identifier assigned to this spout, or `None` if you want to use the variable name of the returned `HeronComponentSpec` as the unique identifier for this spout | `None` |
-`par` | `int` | The number of instances of this spout in the topology | `1` |
-`config` | `dict` | Specifies the configuration for this spout | `None`
-
-### Example
-
-Here's an example topology definition with one spout and one bolt:
-
-```python
-from my_spout import WordSpout
-from my_bolt import CountBolt
-
-from heronpy.api.stream import Grouping
-from heronpy.api.topology import Topology
-
-
-class WordCount(Topology):
-    word_spout = WordSpout.spec(par=2)
-    count_bolt = CountBolt.spec(par=2, inputs={word_spout: Grouping.fields("word")})
-```
-
-### Launching
-
-If you defined your topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class,
-your main Python file should *not* contain a main method. You will, however, need to instruct Heron which class contains your topology definition.
-
-Let's say that you've defined a topology by subclassing `Topology` and built a PEX stored in `~/topology/dist/word_count.pex`. The class containing your topology definition is `topology.word_count.WordCount`. You can submit the topology to a cluster called `local` like this:
-
-```bash
-$ heron submit local \
-  ~/topology/dist/word_count.pex \
-  topology.word_count.WordCount \ # Specifies the topology class definition
-  WordCountTopology
-```
-
-### Topology-wide configuration
-
-If you're building a Python topology by subclassing `Topology`, you can specify configuration for the topology by setting a `config` class attribute. A topology's config is a `dict` in which the keys are a series of constants from the [`api_constants`](/api/python/api_constants.m.html) module and the values are configuration values for those parameters.
-
-Here's an example:
-
-```python
-from heronpy.api.topology import Topology
-from heronpy.api import api_constants
-
-
-class MyTopology(Topology):
-    config = {
-        api_constants.TOPOLOGY_ENABLE_MESSAGE_TIMEOUTS: True
-    }
-    # Add bolts and spouts, etc.
-```
-
-## Multiple streams
-
-To specify that a component has multiple output streams, instead of using a list of
-strings for `outputs`, you can specify a list of `Stream` objects, in the following manner.
-
-```python
-class MultiStreamSpout(Spout):
-    outputs = [
-        Stream(fields=["normal", "fields"], name="default"),
-        Stream(fields=["error_message"], name="error_stream"),
-    ]
-```
-
-To select one of these streams as the input for your bolt, you can simply
-use `[]` to specify the stream you want. Without any stream specified, the `default`
-stream will be used.
-
-```python
-class MultiStreamTopology(Topology):
-    spout = MultiStreamSpout.spec()
-    error_bolt = ErrorBolt.spec(inputs={spout["error_stream"]: Grouping.LOWEST})
-    consume_bolt = ConsumeBolt.spec(inputs={spout: Grouping.SHUFFLE})
-```
-
-## Declaring output fields using the `spec()` method
-
-In Python topologies, the output fields of your spouts and bolts
-need to be declared via the `outputs` class attribute, as there is
-no `declareOutputFields()` method. `heronpy` also enables you to declare output fields dynamically, as a list passed to the
-`optional_outputs` argument of the `spec()` method.
-
-This is useful in situations like the following:
-
-```python
-class IdentityBolt(Bolt):
-    # This bolt's output fields cannot be declared statically
-    def process(self, tup):
-        self.emit(tup.values)
-
-
-class DynamicOutputField(Topology):
-    spout = WordSpout.spec()
-    bolt = IdentityBolt.spec(inputs={spout: Grouping.ALL}, optional_outputs=["word"])
-```
-
-You can also declare outputs in the `add_spout()` and the `add_bolt()`
-method for the `TopologyBuilder` in the same way.
+By default, the `submit` command also activates topologies. To disable this behavior, set the `--deploy-deactivated` flag.
 
 ## Example topologies
 
@@ -868,3 +536,4 @@ $ heron submit local \
 ```
 
 By default, the `submit` command also activates topologies. To disable this behavior, set the `--deploy-deactivated` flag.
+
diff --git a/website2/docs/compiling-code-organization.md b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-code-organization.md
similarity index 70%
copy from website2/docs/compiling-code-organization.md
copy to website2/website/versioned_docs/version-0.20.2-incubating/compiling-code-organization.md
index 053bc48..c262152 100644
--- a/website2/docs/compiling-code-organization.md
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-code-organization.md
@@ -1,7 +1,8 @@
 ---
-id: compiling-code-organization
+id: version-0.20.2-incubating-compiling-code-organization
 title: Code Organization
 sidebar_label: Code Organization
+original_id: compiling-code-organization
 ---
 <!--
     Licensed to the Apache Software Foundation (ASF) under one
@@ -22,7 +23,7 @@ sidebar_label: Code Organization
 
 This document contains information about the Heron codebase intended primarily
 for developers who want to contribute to Heron. The Heron codebase lives on
-[github]({{% githubMaster %}}).
+[github](https://github.com/apache/incubator-heron/tree/master).
 
 If you're looking for documentation about developing topologies for a Heron
 cluster, see [Building Topologies](topology-development-topology-api-java) instead.
@@ -51,7 +52,7 @@ Information on setting up and using Bazel for Heron can be found in [Compiling H
 * **Inter-component communication** --- Heron uses [Protocol
 Buffers](https://developers.google.com/protocol-buffers/?hl=en) for
 communication between components. Most `.proto` definition files can be found in
-[`heron/proto`]({{% githubMaster %}}/heron/proto).
+[`heron/proto`](https://github.com/apache/incubator-heron/tree/master/heron/proto).
 
 * **Cluster coordination** --- Heron relies heavily on ZooKeeper for cluster
 coordination for distributed deployment, be it for [Aurora](schedulers-aurora-cluster) or for a [custom
@@ -61,7 +62,7 @@ Management](#state-management) section below.
 
 ## Common Utilities
 
-The [`heron/common`]({{% githubMaster %}}/heron/common) contains a variety of
+The [`heron/common`](https://github.com/apache/incubator-heron/tree/master/heron/common) directory contains a variety of
 utilities for each of Heron's languages, including useful constants, file
 utilities, networking interfaces, and more.
 
@@ -70,7 +71,7 @@ utilities, networking interfaces, and more.
 Heron supports two cluster schedulers out of the box:
 [Aurora](schedulers-aurora-cluster) and a [local
 scheduler](schedulers-local). The Java code for each of those
-schedulers can be found in [`heron/schedulers`]({{% githubMaster %}}/heron/schedulers)
+schedulers can be found in [`heron/schedulers`](https://github.com/apache/incubator-heron/tree/master/heron/schedulers)
 , while the underlying scheduler API can be found [here](/api/org/apache/heron/spi/scheduler/package-summary.html)
 
 Info on custom schedulers can be found in [Implementing a Custom
@@ -83,10 +84,10 @@ Deployment](schedulers-local).
 
 The parts of Heron's codebase related to
 [ZooKeeper](http://zookeeper.apache.org/) are mostly contained in
-[`heron/state`]({{% githubMaster %}}/heron/state). There are ZooKeeper-facing
-interfaces for [C++]({{% githubMaster %}}/heron/state/src/cpp),
-[Java]({{% githubMaster %}}/heron/state/src/java), and
-[Python]({{% githubMaster %}}/heron/state/src/python) that are used in a variety of
+[`heron/state`](https://github.com/apache/incubator-heron/tree/master/heron/state). There are ZooKeeper-facing
+interfaces for [C++](https://github.com/apache/incubator-heron/tree/master/heron/state/src/cpp),
+[Java](https://github.com/apache/incubator-heron/tree/master/heron/state/src/java), and
+[Python](https://github.com/apache/incubator-heron/tree/master/heron/state/src/python) that are used in a variety of
 Heron components.
 
 ## Topology Components
@@ -95,25 +96,25 @@ Heron components.
 
 The C++ code for Heron's [Topology
 Master](heron-architecture#topology-master) can be
-found in [`heron/tmaster`]({{% githubMaster %}}/heron/tmaster).
+found in [`heron/tmaster`](https://github.com/apache/incubator-heron/tree/master/heron/tmaster).
 
 ### Stream Manager
 
 The C++ code for Heron's [Stream
 Manager](heron-architecture#stream-manager) can be found in
-[`heron/stmgr`]({{% githubMaster %}}/heron/stmgr).
+[`heron/stmgr`](https://github.com/apache/incubator-heron/tree/master/heron/stmgr).
 
 ### Heron Instance
 
 The Java code for [Heron
 instances](heron-architecture#heron-instance) can be found in
-[`heron/instance`]({{% githubMaster %}}/heron/instance).
+[`heron/instance`](https://github.com/apache/incubator-heron/tree/master/heron/instance).
 
 ### Metrics Manager
 
 The Java code for Heron's [Metrics
 Manager](heron-architecture#metrics-manager) can be found in
-[`heron/metricsmgr`]({{% githubMaster %}}/heron/metricsmgr).
+[`heron/metricsmgr`](https://github.com/apache/incubator-heron/tree/master/heron/metricsmgr).
 
 If you'd like to implement your own custom metrics handler (known as a **metrics
 sink**), see [Implementing a Custom Metrics Sink](extending-heron-metric-sink).
@@ -123,7 +124,7 @@ sink**), see [Implementing a Custom Metrics Sink](extending-heron-metric-sink).
 ### Topology API
 
 Heron's API for writing topologies is written in Java. The code for this API can
-be found in [`heron/api`]({{% githubMaster %}}/heron/api).
+be found in [`heron/api`](https://github.com/apache/incubator-heron/tree/master/heron/api).
 
 Documentation for writing topologies can be found in [Building
 Topologies](topology-development-topology-api-java), while API documentation can be found
@@ -142,7 +143,7 @@ The Java API for simulator can be found in
 Heron's codebase includes a wide variety of example
 [topologies](heron-topology-concepts) built using Heron's topology API for
 Java. Those examples can be found in
-[`heron/examples`]({{% githubMaster %}}/heron/examples).
+[`heron/examples`](https://github.com/apache/incubator-heron/tree/master/heron/examples).
 
 ## User Interface Components
 
@@ -152,45 +153,45 @@ Heron has a tool called `heron` that is used to both provide a CLI interface
 for [managing topologies](user-manuals-heron-cli) and to perform much of
 the heavy lifting behind assembling physical topologies in your cluster.
 The Python code for `heron` can be found in
-[`heron/tools/cli`]({{% githubMaster %}}/heron/tools/cli).
+[`heron/tools/cli`](https://github.com/apache/incubator-heron/tree/master/heron/tools/cli).
 
 Sample configurations for different Heron schedulers
 
-* [Local scheduler](schedulers-local) config can be found in [`heron/config/src/yaml/conf/local`]({{% githubMaster %}}/heron/config/src/yaml/conf/local),
-* [Aurora scheduler](schedulers-aurora-cluster) config can be found [`heron/config/src/yaml/conf/aurora`]({{% githubMaster %}}/heron/config/src/yaml/conf/aurora).
+* [Local scheduler](schedulers-local) config can be found in [`heron/config/src/yaml/conf/local`](https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/local),
+* [Aurora scheduler](schedulers-aurora-cluster) config can be found in [`heron/config/src/yaml/conf/aurora`](https://github.com/apache/incubator-heron/tree/master/heron/config/src/yaml/conf/aurora).
 
 ### Heron Tracker
 
 The Python code for the [Heron Tracker](user-manuals-heron-tracker-runbook) can be
-found in [`heron/tools/tracker`]({{% githubMaster %}}/heron/tools/tracker).
+found in [`heron/tools/tracker`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker).
 
 The Tracker is a web server written in Python. It relies on the
 [Tornado](http://www.tornadoweb.org/en/stable/) framework. You can add new HTTP
 routes to the Tracker in
-[`main.py`]({{% githubMaster %}}/heron/tools/tracker/src/python/main.py) and
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker/src/python/main.py) and
 corresponding handlers in the
-[`handlers`]({{% githubMaster %}}/heron/tools/tracker/src/python/handlers) directory.
+[`handlers`](https://github.com/apache/incubator-heron/tree/master/heron/tools/tracker/src/python/handlers) directory.
 
 ### Heron UI
 
 The Python code for the [Heron UI](user-manuals-heron-ui) can be found in
-[`heron/tools/ui`]({{% githubMaster %}}/heron/tools/ui).
+[`heron/tools/ui`](https://github.com/apache/incubator-heron/tree/master/heron/tools/ui).
 
 Like Heron Tracker, Heron UI is a web server written in Python that relies on
 the [Tornado](http://www.tornadoweb.org/en/stable/) framework. You can add new
 HTTP routes to Heron UI in
-[`main.py`]({{% githubMaster %}}/heron/web/source/python/main.py) and corresponding
-handlers in the [`handlers`]({{% githubMaster %}}/heron/web/source/python/handlers)
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/web/source/python/main.py) and corresponding
+handlers in the [`handlers`](https://github.com/apache/incubator-heron/tree/master/heron/web/source/python/handlers)
 directory.
 
 ### Heron Shell
 
 The Python code for the [Heron Shell](user-manuals-heron-shell) can be
-found in [`heron/shell`]({{% githubMaster %}}/heron/shell). The HTTP handlers and
+found in [`heron/shell`](https://github.com/apache/incubator-heron/tree/master/heron/shell). The HTTP handlers and
 web server are defined in
-[`main.py`]({{% githubMaster %}}/heron/shell/src/python/main.py) while the HTML,
+[`main.py`](https://github.com/apache/incubator-heron/tree/master/heron/shell/src/python/main.py) while the HTML,
 JavaScript, CSS, and images for the web UI can be found in the
-[`assets`]({{% githubMaster %}}/heron/shell/assets) directory.
+[`assets`](https://github.com/apache/incubator-heron/tree/master/heron/shell/assets) directory.
 
 ## Tests
 
diff --git a/website2/website/versioned_docs/version-0.20.2-incubating/compiling-docker.md b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-docker.md
new file mode 100644
index 0000000..0d0b66a
--- /dev/null
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-docker.md
@@ -0,0 +1,253 @@
+---
+id: version-0.20.2-incubating-compiling-docker
+title: Compiling With Docker
+sidebar_label: Compiling With Docker
+original_id: compiling-docker
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+For developing Heron, you will need to compile it for the environment that you
+want to use it in. If you'd like to use Docker to create that build environment,
+Heron provides a convenient script to make that process easier.
+
+Currently, Debian 10 and Ubuntu 18.04 are actively supported. There is also limited
+support for Ubuntu 14.04, Debian 9, and CentOS 7. If you need another platform, there
+are instructions for adding new ones [below](#contributing-new-environments).
+
+### Requirements
+
+* [Docker](https://docs.docker.com)
+
+### Running Docker in a Virtual Machine
+
+If you are running Docker in a virtual machine (VM), it is recommended that you
+adjust your settings to help speed up the build. To do this, open
+[VirtualBox](https://www.virtualbox.org/wiki/Downloads) and go to the container
+in which Docker is running (usually "default" or whatever name you used to
+create the VM), click on the VM, and then click on **Settings**.
+
+**Note**: You will need to stop the VM before modifying these settings.
+
+![VirtualBox Processors](assets/virtual-box-processors.png)
+![VirtualBox Memory](assets/virtual-box-memory.png)
+
+## Building Heron
+
+Heron provides a `build-artifacts.sh` script for Docker located in the
+`docker/scripts` folder. To run that script:
+
+```bash
+$ cd /path/to/heron/repo
+$ docker/scripts/build-artifacts.sh
+```
+
+Running the script by itself will display usage information:
+
+```
+Script to build heron docker image for different platforms
+  Input - directory containing the artifacts from the directory <artifact-directory>
+  Output - docker image tar file saved in the directory <artifact-directory> 
+  
+Usage: ./docker/scripts/build-docker.sh <platform> <version_string> <artifact-directory> [-s|--squash]
+  
+Argument options:
+  <platform>: darwin, debian9, debian10, ubuntu14.04, ubuntu18.04, centos7
+  <version_string>: Version of Heron build, e.g. v0.17.5.1-rc
+  <artifact-directory>: Location of compiled Heron artifact
+  [-s|--squash]: Enables using Docker experimental feature --squash
+  
+Example:
+  ./build-docker.sh ubuntu18.04 0.12.0 ~/ubuntu
+
+NOTE: If running on OSX, the output directory will need to
+      be under /Users so virtualbox has access to.
+```
+
+The following arguments are required:
+
+* `platform` --- Currently we are focused on supporting the `debian10` and `ubuntu18.04` platforms.  
+We also support building Heron locally on OSX; you can select it by specifying `darwin` as the platform.
+ All options are:
+   - `centos7`
+   - `darwin`
+   - `debian9`
+   - `debian10`
+   - `ubuntu14.04`
+   - `ubuntu18.04`
+    
+   
+  You can add other platforms using the [instructions
+  below](#contributing-new-environments).
+* `version-string` --- The Heron release for which you'd like to build
+  artifacts.
+* `output-directory` --- The directory in which you'd like the release to be
+  built.
+
+Here's an example usage:
+
+```bash
+$ docker/scripts/build-artifacts.sh debian10 0.22.1-incubating ~/heron-release
+```
+
+This will build a Docker container specific to Debian10, create a source
+tarball of the Heron repository, run a full release build of Heron, and then
+copy the artifacts into the `~/heron-release` directory.
+
+Optionally, you can also include a tarball of the Heron source if you have one.
+By default, the script will create a tarball of the current source in the Heron
+repo and use that to build the artifacts.
+
+**Note**: If you are running on Mac OS X, Docker must be run inside a VM.
+Therefore, you must make sure that both the source tarball and destination
+directory are somewhere under your home directory. For example, you cannot
+output the Heron artifacts to `/tmp` because `/tmp` refers to the directory
+inside the VM, not on the host machine. Your home directory, however, is
+automatically linked in to the VM and can be accessed normally.
+
+After the build has completed, you can go to your output directory and see all
+of the generated artifacts:
+
+```bash
+$ ls ~/heron-release
+heron-0.22.1-incubating-debian10.tar
+heron-0.22.1-incubating-debian10.tar.gz
+heron-core-0.22.1-incubating-debian10.tar.gz
+heron-install-0.22.1-incubating-debian10.sh
+heron-layer-0.22.1-incubating-debian10.tar
+heron-tools-0.22.1-incubating-debian10.tar.gz
+```
+
+## Set Up A Docker Based Development Environment
+
+If you want a development environment instead of making a full build, Heron provides
+two helper scripts. This can be convenient if you don't want to set up all the
+libraries and tools directly on your machine.
+
+The following commands create a new Docker image with a development environment
+and start a container based on it:
+```bash
+$ cd /path/to/heron/repo
+$ docker/scripts/dev-env-create.sh heron-dev
+```
+
+After running these commands, a new Docker container is started with all the libraries
+and tools installed. The operating system is Ubuntu 18.04 by default. Now you can build
+Heron like:
+```bash
+# bazel build --config=debian scripts/packages:binpkgs
+# bazel build --config=debian scripts/packages:tarpkgs
+```
+
+The current folder is mapped to the `/heron` directory in the container, and any changes
+you make on the host machine will be reflected in the container. Note that when you exit
+the container and re-run the script, a new container will be started with a fresh
+environment.
+
+When a development environment container is running, you can use the following script
+to start a new terminal in the container.
+```bash
+$ cd /path/to/heron/repo
+$ docker/scripts/dev-env-run.sh heron-dev
+```
+
+## Contributing New Environments
+
+You'll notice that there are multiple
+[Dockerfiles](https://docs.docker.com/engine/reference/builder/) in the `docker`
+directory of Heron's source code, one for each of the currently supported
+platforms.
+
+To add support for a new platform, add a new `Dockerfile` to that directory and
+append the name of the platform to the name of the file. If you'd like to add
+support for Debian 8, for example, add a file named `Dockerfile.debian8`. Once
+you've done that, follow the instructions in the [Docker
+documentation](https://docs.docker.com/engine/articles/dockerfile_best-practices/).
+
+You should make sure that your `Dockerfile` specifies *at least* all of the
+following:
+
+### Step 1 --- The OS being used in a [`FROM`](https://docs.docker.com/engine/reference/builder/#from) statement.
+
+Here's an example:
+
+```dockerfile
+FROM centos:centos7
+```
+
+### Step 2 --- A `TARGET_PLATFORM` environment variable using the [`ENV`](https://docs.docker.com/engine/reference/builder/#env) instruction.
+
+Here's an example:
+
+```dockerfile
+ENV TARGET_PLATFORM centos
+```
+
+### Step 3 --- A general dependency installation script using a [`RUN`](https://docs.docker.com/engine/reference/builder/#run) instruction.
+
+Here's an example:
+
+```dockerfile
+RUN apt-get update && apt-get -y install \
+         automake \
+         build-essential \
+         cmake \
+         curl \
+         libssl-dev \
+         git \
+         libtool \
+         libunwind8 \
+         libunwind-setjmp0-dev \
+         python \
+         python2.7-dev \
+         python-software-properties \
+         software-properties-common \
+         python-setuptools \
+         unzip \
+         wget
+```
+
+### Step 4 --- An installation script for Java 11 and a `JAVA_HOME` environment variable
+
+Here's an example:
+
+```dockerfile
+RUN apt-get update && \
+     apt-get install -y openjdk-11-jdk-headless && \
+     rm -rf /var/lib/apt/lists/*
+
+ENV JAVA_HOME /usr/lib/jvm/java-11-openjdk-amd64
+```
+
+### Step 5 --- An installation script for [Bazel](http://bazel.io/) version {{% bazelVersion %}} or above
+
+Here's an example:
+
+```dockerfile
+RUN wget -O /tmp/bazel.sh https://github.com/bazelbuild/bazel/releases/download/0.26.0/bazel-0.26.0-installer-linux-x86_64.sh \
+         && chmod +x /tmp/bazel.sh \
+         && /tmp/bazel.sh
+```
+
+### Step 6 --- Add the `bazelrc` configuration file for Bazel and the `compile.sh` script (from the `docker` folder) that compiles Heron
+
+```dockerfile
+ADD bazelrc /root/.bazelrc
+ADD compile.sh /compile.sh
+```
diff --git a/website2/website/versioned_docs/version-0.20.2-incubating/compiling-linux.md b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-linux.md
new file mode 100644
index 0000000..373f4ce
--- /dev/null
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-linux.md
@@ -0,0 +1,212 @@
+---
+id: version-0.20.2-incubating-compiling-linux
+title: Compiling on Linux
+sidebar_label: Compiling on Linux
+original_id: compiling-linux
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+Heron can currently be built on the following Linux platforms:
+
+* [Ubuntu 18.04](#building-on-ubuntu-1804)
+* [CentOS 7](#building-on-centos-7)
+
+## Building on Ubuntu 18.04
+
+To build Heron on a fresh Ubuntu 18.04 installation:
+
+### Step 1 --- Update Ubuntu
+
+```bash
+$ sudo apt-get update -y
+$ sudo apt-get upgrade -y
+```
+
+### Step 2 --- Install required libraries
+
+```bash
+$ sudo apt-get install git build-essential automake cmake libtool-bin zip ant \
+  libunwind-setjmp0-dev zlib1g-dev unzip pkg-config python3-setuptools libcppunit-dev -y
+```
+
+### Step 3 --- Set the following environment variables
+
+```bash
+export CC=/usr/bin/gcc
+export CXX=/usr/bin/g++
+```
+
+### Step 4 --- Install JDK 11 and set JAVA_HOME
+
+```bash
+$ sudo apt-get update -y
+$ sudo apt-get install openjdk-11-jdk-headless -y
+$ export JAVA_HOME="/usr/lib/jvm/java-11-openjdk-amd64"
+```
+
+### Step 5 --- Install Bazel {{% bazelVersion %}}
+
+```bash
+wget -O /tmp/bazel.sh https://github.com/bazelbuild/bazel/releases/download/0.26.0/bazel-0.26.0-installer-linux-x86_64.sh
+chmod +x /tmp/bazel.sh
+/tmp/bazel.sh --user
+```
+
+Make sure to download the appropriate version of Bazel (currently {{%
+bazelVersion %}}).
+
+### Step 6 --- Install Python development tools
+
+```bash
+$ sudo apt-get install python3-dev python3-pip
+```
+
+### Step 7 --- Make sure the Bazel executable is in your `PATH`
+
+```bash
+$ export PATH="$PATH:$HOME/bin"
+```
+
+### Step 8 --- Fetch the latest version of Heron's source code
+
+```bash
+$ git clone https://github.com/apache/incubator-heron.git && cd incubator-heron
+```
+
+### Step 9 --- Configure Heron for building with Bazel
+
+```bash
+$ ./bazel_configure.py
+```
+
+### Step 10 --- Build the project
+
+```bash
+$ bazel build --config=ubuntu heron/...
+```
+
+### Step 11 --- Build the packages
+
+```bash
+$ bazel build --config=ubuntu scripts/packages:binpkgs
+$ bazel build --config=ubuntu scripts/packages:tarpkgs
+```
+
+This will install Heron packages in the `bazel-bin/scripts/packages/` directory.
+
+## Manually Installing Libraries
+
+If you encounter errors with [libunwind](http://www.nongnu.org/libunwind), [libtool](https://www.gnu.org/software/libtool), or
+[gperftools](https://github.com/gperftools/gperftools/releases), we recommend
+installing them manually.
+
+### Compiling and installing libtool
+
+```bash
+$ wget http://ftpmirror.gnu.org/libtool/libtool-2.4.6.tar.gz
+$ tar -xvf libtool-2.4.6.tar.gz
+$ cd libtool-2.4.6
+$ ./configure
+$ make
+$ sudo make install
+```
+
+### Compiling and installing libunwind
+
+```bash
+$ wget http://download.savannah.gnu.org/releases/libunwind/libunwind-1.1.tar.gz
+$ tar -xvf libunwind-1.1.tar.gz
+$ cd libunwind-1.1
+$ ./configure
+$ make
+$ sudo make install
+```
+
+### Compiling and installing gperftools
+
+```bash
+$ wget https://github.com/gperftools/gperftools/releases/download/gperftools-2.5/gperftools-2.5.tar.gz
+$ tar -xvf gperftools-2.5.tar.gz
+$ cd gperftools-2.5
+$ ./configure
+$ make
+$ sudo make install
+```
+
+## Building on CentOS 7
+
+To build Heron on a fresh CentOS 7 installation:
+
+### Step 1 --- Install the required dependencies
+
+```bash
+$ sudo yum install gcc gcc-c++ kernel-devel wget unzip zlib-devel zip git automake cmake patch libtool cppunit-devel ant pkg-config -y
+```
+
+### Step 2 --- Install libunwind from source
+
+```bash
+$ wget http://download.savannah.gnu.org/releases/libunwind/libunwind-1.1.tar.gz
+$ tar xvf libunwind-1.1.tar.gz
+$ cd libunwind-1.1
+$ ./configure
+$ make
+$ sudo make install
+```
+
+### Step 3 --- Set the following environment variables
+
+```bash
+$ export CC=/usr/bin/gcc
+$ export CXX=/usr/bin/g++
+```
+
+### Step 4 --- Install JDK 11
+
+```bash
+$ sudo yum install java-11-openjdk java-11-openjdk-devel
+$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
+```
+
+### Step 5 --- Install Bazel {{% bazelVersion %}}
+
+```bash
+wget -O /tmp/bazel.sh https://github.com/bazelbuild/bazel/releases/download/0.26.0/bazel-0.26.0-installer-linux-x86_64.sh
+chmod +x /tmp/bazel.sh
+/tmp/bazel.sh --user
+```
+
+Make sure to download the appropriate version of Bazel (currently {{%
+bazelVersion %}}).
+
+### Step 6 --- Download Heron and compile it
+
+```bash
+$ git clone https://github.com/apache/incubator-heron.git && cd incubator-heron
+$ ./bazel_configure.py
+$ bazel build --config=centos heron/...
+```
+
+### Step 7 --- Build the binary packages
+
+```bash
+$ bazel build --config=centos scripts/packages:binpkgs
+$ bazel build --config=centos scripts/packages:tarpkgs
+```
+
+This will install Heron packages in the `bazel-bin/scripts/packages/` directory.
diff --git a/website2/website/versioned_docs/version-0.20.2-incubating/compiling-osx.md b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-osx.md
new file mode 100644
index 0000000..f955268
--- /dev/null
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-osx.md
@@ -0,0 +1,90 @@
+---
+id: version-0.20.2-incubating-compiling-osx
+title: Compiling on OS X
+sidebar_label: Compiling on OS X
+original_id: compiling-osx
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+This is a step-by-step guide to building Heron on Mac OS X (versions 10.10 and 10.11).
+
+### Step 1 --- Install Homebrew
+
+If [Homebrew](http://brew.sh/) isn't yet installed on your system, you can
+install it using this one-liner:
+
+```bash
+$ /usr/bin/ruby -e "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/master/install)"
+```
+
+### Step 2 --- Install Bazel
+
+```bash
+wget -O /tmp/bazel.sh https://github.com/bazelbuild/bazel/releases/download/3.4.1/bazel-3.4.1-installer-darwin-x86_64.sh
+chmod +x /tmp/bazel.sh
+/tmp/bazel.sh --user
+```
+
+### Step 3 --- Install other required libraries
+
+```bash
+brew install automake
+brew install cmake
+brew install libtool
+brew install ant
+brew install cppunit
+brew install pkg-config
+```
+
+### Step 4 --- Set the following environment variables
+
+```bash
+$ export CC=/usr/bin/clang
+$ export CXX=/usr/bin/clang++
+$ echo $CC $CXX
+```
+
+### Step 5 --- Fetch the latest version of Heron's source code
+
+```bash
+$ git clone https://github.com/apache/incubator-heron.git && cd incubator-heron
+```
+
+### Step 6 --- Configure Heron for building with Bazel
+
+```bash
+$ ./bazel_configure.py
+```
+
+If this configure script fails with missing dependencies, Homebrew can be used
+to install those dependencies.
+
+### Step 7 --- Build the project
+
+```bash
+$ bazel build --config=darwin heron/...
+```
+
+### Step 8 --- Build the packages
+
+```bash
+$ bazel build --config=darwin scripts/packages:binpkgs
+$ bazel build --config=darwin scripts/packages:tarpkgs
+```
+
+This will install Heron packages in the `bazel-bin/scripts/packages/` directory.
diff --git a/website2/website/versioned_docs/version-0.20.2-incubating/compiling-overview.md b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-overview.md
new file mode 100644
index 0000000..5c8b981
--- /dev/null
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/compiling-overview.md
@@ -0,0 +1,145 @@
+---
+id: version-0.20.2-incubating-compiling-overview
+title: Compiling Heron
+sidebar_label: Compiling Overview
+original_id: compiling-overview
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+Heron is currently available for [Mac OS X 10.14](compiling-osx),
+[Ubuntu 18.04](compiling-linux), and [Debian 10](compiling-docker#building-heron).
+This guide describes the basics of the
+Heron build system. For step-by-step build instructions for other platforms,
+the following guides are available:
+
+* [Building on Linux Platforms](compiling-linux)
+* [Building on Mac OS X](compiling-osx)
+
+Heron can be built either [in its entirety](#building-all-components) or as [individual components](#building-specific-components).
+
+Instructions on running unit tests for Heron can also be found in [Testing Heron](compiling-running-tests).
+
+## Requirements
+
+You must have the following installed to compile Heron:
+
+* [Bazel](http://bazel.io/docs/install.html) = {{% bazelVersion %}}. Later
+  versions might work but have not been tested. See [Installing Bazel](#installing-bazel) below.
+* [Java 11](https://www.oracle.com/java/technologies/javase-jdk11-downloads.html)
+  is required by Bazel and Heron. Topologies can be written in Java 7 or above,
+  but Heron jars are required to run with a Java 11 JRE.
+* [Autoconf](http://www.gnu.org/software/autoconf/autoconf.html) >=
+  2.6.3
+* [Automake](https://www.gnu.org/software/automake/) >= 1.11.1
+* [GNU Make](https://www.gnu.org/software/make/) >= 3.81
+* [GNU Libtool](http://www.gnu.org/software/libtool/) >= 2.4.6
+* [gcc/g++](https://gcc.gnu.org/) >= 4.8.1 (Linux platforms)
+* [CMake](https://cmake.org/) >= 2.6.4
+* [Python](https://www.python.org/) >= 3.4
+* [Perl](https://www.perl.org/) >= 5.8.8
+* [Ant](https://ant.apache.org/) >= 1.10.0
+* [CppUnit](https://freedesktop.org/wiki/Software/cppunit/) >= 1.10.1
+* [pkg-config](https://www.freedesktop.org/wiki/Software/pkg-config/) >= 0.29.2
+
+Export the `CC` and `CXX` environment variables with a path specific to your
+machine:
+
+```bash
+$ export CC=/your-path-to/bin/c_compiler
+$ export CXX=/your-path-to/bin/c++_compiler
+$ echo $CC $CXX
+```
+
+## Installing Bazel
+
+Heron uses the [Bazel](http://bazel.io) build tool. Bazel releases can be found here:
+https://github.com/bazelbuild/bazel/releases/tag/{{% bazelVersion %}}
+and installation instructions can be found [here](http://bazel.io/docs/install.html).
+
+To ensure that Bazel has been installed, run `bazel version` and check the
+version (listed next to `Build label` in the script's output) to ensure that you
+have Bazel {{% bazelVersion %}}.
+
+## Configuring Bazel
+
+There is a Python script that you can run to configure Bazel on supported
+platforms:
+
+```bash
+$ cd /path/to/heron
+$ ./bazel_configure.py
+```
+
+## Building
+
+### Bazel OS Environments
+
+Bazel builds are specific to a given OS. When building you must specify an
+OS-specific configuration using the `--config` flag. The following OS values
+are supported:
+
+* `darwin` (Mac OS X)
+* `ubuntu` (Ubuntu 18.04)
+* `debian` (Debian 10)
+* `centos` (CentOS 7)
+
+For example, on Mac OS X (`darwin`), the following command will build all
+packages:
+
+```bash
+$ bazel build --config=darwin heron/...
+```
+
+Production release packages include additional performance optimizations
+not enabled by default. Enabling these optimizations increases build time.
+To enable production optimizations, include the `opt` flag:
+```bash
+$ bazel build -c opt --config=PLATFORM heron/...
+```
+
+### Building All Components
+
+The Bazel build process can produce either executable install scripts or
+bundled tars. To build executables or tars for all Heron components at once,
+use the following `bazel build` commands, respectively:
+
+```bash
+$ bazel build --config=PLATFORM scripts/packages:binpkgs
+$ bazel build --config=PLATFORM scripts/packages:tarpkgs
+```
+
+Resulting artifacts can be found in subdirectories below the `bazel-bin`
+directory. The `heron-tracker` executable, for example, can be found at
+`bazel-bin/heron/tools/tracker/src/python/heron-tracker`.
+
+### Building Specific Components
+
+As an alternative to building a full release, you can build Heron executables
+for a single Heron component (such as the [Heron
+Tracker](user-manuals-heron-tracker-runbook)) by passing a target to the `bazel
+build` command. For example, the following command would build the Heron Tracker:
+
+```bash
+$ bazel build --config=darwin heron/tools/tracker/src/python:heron-tracker
+```
+
+## Testing Heron
+
+Instructions for running Heron unit tests can be found at [Testing
+Heron](compiling-running-tests).
diff --git a/website2/docs/guides-effectively-once-java-topologies.md b/website2/website/versioned_docs/version-0.20.2-incubating/guides-effectively-once-java-topologies.md
similarity index 98%
copy from website2/docs/guides-effectively-once-java-topologies.md
copy to website2/website/versioned_docs/version-0.20.2-incubating/guides-effectively-once-java-topologies.md
index 77c46b8..4fa1e13 100644
--- a/website2/docs/guides-effectively-once-java-topologies.md
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/guides-effectively-once-java-topologies.md
@@ -1,7 +1,8 @@
 ---
-id: guides-effectively-once-java-topologies
+id: version-0.20.2-incubating-guides-effectively-once-java-topologies
 title: Effectively Once Java Topologies
 sidebar_label: Effectively Once Java Topologies
+original_id: guides-effectively-once-java-topologies
 ---
 <!--
     Licensed to the Apache Software Foundation (ASF) under one
@@ -240,7 +241,7 @@ public class EffectivelyOnceTopology {
 
 ### Submitting the topology
 
-The code for this topology can be found in [this GitHub repository](https://github.com/streamlio/heron-java-effectively-once-example). You can clone the repo locally like this:
+The code for this topology is available on GitHub. You can clone the repo locally like this:
 
 ```bash
 $ git clone https://github.com/streamlio/heron-java-effectively-once-example
diff --git a/website2/website/versioned_docs/version-0.20.2-incubating/heron-streamlet-concepts.md b/website2/website/versioned_docs/version-0.20.2-incubating/heron-streamlet-concepts.md
new file mode 100644
index 0000000..9daee1c
--- /dev/null
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/heron-streamlet-concepts.md
@@ -0,0 +1,811 @@
+---
+id: version-0.20.2-incubating-heron-streamlet-concepts
+title: Heron Streamlets
+sidebar_label: Heron Streamlets
+original_id: heron-streamlet-concepts
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+When it was first released, Heron offered a **Topology API**---heavily indebted to the [Storm API](http://storm.apache.org/about/simple-api.html)---for developing topology logic. In the original Topology API, developers creating topologies were required to explicitly:
+
+* define the behavior of every [spout](topology-development-topology-api-java#spouts) and [bolt](topology-development-topology-api-java#bolts) in the topology 
+* specify how those spouts and bolts are meant to be interconnected
+
+### Problems with the Topology API
+
+Although the Storm-inspired API provided a powerful low-level interface for creating topologies, the spouts-and-bolts model also presented a variety of drawbacks for Heron developers:
+
+Drawback | Description
+:--------|:-----------
+Verbosity | In the original Topology API for both Java and Python, creating spouts and bolts required substantial boilerplate and forced developers to both provide implementations for spout and bolt classes and also to specify the connections between those spouts and bolts.
+Difficult debugging | When spouts, bolts, and the connections between them need to be created "by hand," it can be challenging to trace the origin of problems in the topology's processing chain.
+Tuple-based data model | In the older topology API, spouts and bolts passed [tuples](https://en.wikipedia.org/wiki/Tuple) and nothing but tuples within topologies. Although tuples are a powerful and flexible data type, the topology API forced *all* spouts and bolts to implement their own serialization/deserialization logic.
+
+### Advantages of the Streamlet API
+
+In contrast with the Topology API, the Heron Streamlet API offers:
+
+Advantage | Description
+:---------|:-----------
+Boilerplate-free code | Instead of needing to implement spout and bolt classes over and over again, the Heron Streamlet API enables you to create stream processing logic out of functions, such as map, flatMap, join, and filter functions, instead.
+Easy debugging | With the Heron Streamlet API, you don't have to worry about spouts and bolts, which means that you can more easily surface problems with your processing logic.
+Completely flexible, type-safe data model | Instead of requiring that all processing components pass tuples to one another (which implicitly requires serialization to and deserializaton from your application-specific types), the Heron Streamlet API enables you to write your processing logic in accordance with whatever types you'd like---including tuples, if you wish.<br /><br />In the Streamlet API for [Java](topology-development-streamlet-api), all streamlets are typed (e.g. `Streamlet< [...]
+
+## Streamlet API topology model
+
+Instead of spouts and bolts, as with the Topology API, the Streamlet API enables you to create **processing graphs** that are then automatically converted to spouts and bolts under the hood. Processing graphs consist of the following components:
+
+* **Sources** supply the processing graph with data from random generators, databases, web service APIs, filesystems, pub-sub messaging systems, or anything that implements the [source](#source-operations) interface.
+* **Operators** supply the graph's processing logic, operating on data passed into the graph by sources.
+* **Sinks** are the terminal endpoints of the processing graph, determining what the graph *does* with the processed data. Sinks can involve storing data in a database, logging results to stdout, publishing messages to a topic in a pub-sub messaging system, and much more.
+
+The diagram below illustrates both the general model (with a single source, three operators, and one sink), and a more concrete example that includes two sources (an [Apache Pulsar](https://pulsar.incubator.apache.org) topic and the [Twitter API](https://developer.twitter.com/en/docs)), three operators (a [join](#join-operations), [flatMap](#flatmap-operations), and [reduce](#reduce-operations) operation), and two [sinks](#sink-operations) (an [Apache Cassandra](http://cassandra.apache.o [...]
+
+![Topology Operators](https://www.lucidchart.com/publicSegments/view/d84026a1-d12e-4878-b8d5-5aa274ec0415/image.png)
+
+### Streamlets
+
+The core construct underlying the Heron Streamlet API is that of the **streamlet**. A streamlet is an unbounded, ordered collection of **elements** of some data type (streamlets can consist of simple types like integers and strings or more complex, application-specific data types).
+
+**Source streamlets** supply a Heron processing graph with data inputs. These inputs can come from a wide variety of sources, such as pub-sub messaging systems like [Apache
+Kafka](http://kafka.apache.org/) and [Apache Pulsar](https://pulsar.incubator.apache.org) (incubating), random generators, or static files like CSV or [Apache Parquet](https://parquet.apache.org/) files.
+
+Source streamlets can then be manipulated in a wide variety of ways. You can, for example:
+
+* apply [map](#map-operations), [filter](#filter-operations), [flatMap](#flatmap-operations), and many other operations to them
+* apply operations, such as [join](#join-operations) and [union](#union-operations) operations, that combine streamlets together
+* [reduce](#reduce-by-key-and-window-operations) all elements in a streamlet to some single value, based on key
+* send data to [sinks](#sink-operations) (store elements)
+
+The diagram below shows an example streamlet:
+
+![Streamlet](https://www.lucidchart.com/publicSegments/view/5c451e53-46f8-4e36-86f4-9a11ca015c21/image.png)
+
+
+In this diagram, the **source streamlet** is produced by a random generator that continuously emits random integers between 1 and 100. From there:
+
+* A filter operation is applied to the source streamlet that filters out all values less than or equal to 30
+* A *new streamlet* is produced by the filter operation (with the Heron Streamlet API, you're always transforming streamlets into other streamlets)
+* A map operation adds 15 to each item in the streamlet, which produces the final streamlet in our graph. We *could* hypothetically go much further and add as many transformation steps to the graph as we'd like.
+* Once the final desired streamlet is created, each item in the streamlet is sent to a sink. Sinks are where items leave the processing graph. 
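+
+To build intuition for this graph, here is a minimal Python sketch of the same pipeline using lazy generators. This is only a conceptual model of streamlet semantics (an unbounded stream transformed stage by stage), not Heron's actual API:
+
+```python
+import itertools
+import random
+
+def source(supplier):
+    # Source streamlet: an unbounded stream of values from the supplier
+    while True:
+        yield supplier()
+
+def streamlet_filter(stream, predicate):
+    # Filter operation: keep only elements satisfying the predicate
+    return (x for x in stream if predicate(x))
+
+def streamlet_map(stream, fn):
+    # Map operation: apply fn to each element
+    return (fn(x) for x in stream)
+
+# source -> filter(> 30) -> map(+ 15) -> sink (here: take five elements)
+stream = source(lambda: random.randint(1, 100))
+stream = streamlet_filter(stream, lambda i: i > 30)
+stream = streamlet_map(stream, lambda i: i + 15)
+sink = list(itertools.islice(stream, 5))
+```
+
+Because each stage returns a new generator, every operation produces a new "streamlet" from its input, just as in the diagram.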
+
+### Supported languages
+
+The Heron Streamlet API is currently available for:
+
+* [Java](topology-development-streamlet-api)
+* [Scala](topology-development-streamlet-scala)
+
+### The Heron Streamlet API and topologies
+
+With the Heron Streamlet API *you still create topologies*, but only implicitly. Heron automatically performs the heavy lifting of converting the streamlet-based processing logic that you create into spouts and bolts and, from there, into containers that are then deployed using whichever [scheduler](schedulers-local.md) your Heron cluster relies upon.
+
+From the standpoint of both operators and developers [managing topologies' lifecycles](#topology-lifecycle), the resulting topologies are equivalent. From a development workflow standpoint, however, the difference is profound. You can think of the Streamlet API as a highly convenient tool for creating spouts, bolts, and the logic that connects them.
+
+The basic workflow looks like this:
+
+![Streamlet](https://www.lucidchart.com/publicSegments/view/6b2e9b49-ef1f-45c9-8094-1e2cefbaed7b/image.png)
+
+When creating topologies using the Heron Streamlet API, you simply write code (example [below](#java-processing-graph-example)) in a highly functional style. From there:
+
+* that code is automatically converted into spouts, bolts, and the necessary connective logic between spouts and bolts
+* the spouts and bolts are automatically converted into a [logical plan](topology-development-topology-api-java#logical-plan) that specifies how the spouts and bolts are connected to each other
+* the logical plan is automatically converted into a [physical plan](topology-development-topology-api-java#physical-plan) that determines how the spout and bolt instances (the colored boxes above) are distributed across the specified number of containers (in this case two)
+
+With a physical plan in place, the Streamlet API topology can be submitted to a Heron cluster.
+
+#### Java processing graph example
+
+The code below shows how you could implement the processing graph shown [above](#streamlets) in Java:
+
+```java
+import java.util.concurrent.ThreadLocalRandom;
+
+import org.apache.heron.streamlet.Builder;
+import org.apache.heron.streamlet.Config;
+import org.apache.heron.streamlet.Runner;
+
+Builder builder = Builder.newBuilder();
+
+// Function for generating random integers
+int randomInt(int lower, int upper) {
+    return ThreadLocalRandom.current().nextInt(lower, upper + 1);
+}
+
+// Source streamlet
+builder.newSource(() -> randomInt(1, 100))
+    // Filter operation
+    .filter(i -> i > 30)
+    // Map operation
+    .map(i -> i + 15)
+    // Log sink
+    .log();
+
+Config config = new Config();
+// This topology will be spread across two containers
+config.setNumContainers(2);
+
+// Submit the processing graph to Heron as a topology
+new Runner("IntegerProcessingGraph", config, builder).run();
+```
+
+As you can see, the Java code for the example streamlet processing graph requires very little boilerplate and is heavily indebted to Java [lambda](https://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html) patterns.
+
+## Streamlet operations
+
+In the Heron Streamlet API, processing data means *transforming streamlets into other streamlets*. This can be done using a wide variety of available operations, including many that you may be familiar with from functional programming:
+
+Operation | Description
+:---------|:-----------
+[map](#map-operations) | Returns a new streamlet by applying the supplied mapping function to each element in the original streamlet
+[flatMap](#flatmap-operations) | Like a map operation but with the important difference that each element of the streamlet is flattened into a collection type
+[filter](#filter-operations) | Returns a new streamlet containing only the elements that satisfy the supplied filtering function
+[union](#union-operations) | Unifies two streamlets into one, without [windowing](#windowing) or modifying the elements of the two streamlets
+[clone](#clone-operations) | Creates any number of identical copies of a streamlet
+[transform](#transform-operations) | Transforms a streamlet using whichever logic you'd like, with access to the topology's state (useful for transformations that don't neatly map onto the other available operations)
+[keyBy](#key-by-operations) | Returns a new key-value streamlet by applying the supplied extractors to each element in the original streamlet
+[reduceByKey](#reduce-by-key-operations) | Produces a key-value streamlet by applying a reduce function to all of the values accumulated for each key
+[reduceByKeyAndWindow](#reduce-by-key-and-window-operations) | Produces a key-value streamlet by applying a reduce function to all of the values accumulated for each key within a [time window](#windowing)
+[countByKey](#count-by-key-operations) | A special reduce operation that counts the number of tuples for each key
+[countByKeyAndWindow](#count-by-key-and-window-operations) | A special reduce operation that counts the number of tuples for each key within a [time window](#windowing)
+[split](#split-operations) | Splits a streamlet into multiple streamlets, each with a different id
+[withStream](#with-stream-operations) | Selects a stream by id from a streamlet that contains multiple streams
+[applyOperator](#apply-operator-operations) | Returns a new streamlet by applying a user-defined operator to the original streamlet
+[join](#join-operations) | Joins two separate key-value streamlets into a single streamlet on a key, within a [time window](#windowing), and in accordance with a join function
+[log](#log-operations) | Logs the final streamlet output of the processing graph to stdout
+[toSink](#sink-operations) | Sink operations terminate the processing graph by storing elements in a database, logging elements to stdout, etc.
+[consume](#consume-operations) | Consume operations are like sink operations except they don't require implementing a full sink interface (consume operations are thus suited for simple operations like logging)
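+
+The keyed operations above (`keyBy`, `reduceByKey`, `countByKey`, and their windowed variants) share a common shape: extract a key from each element, then combine the accumulated values for each key with a reduce function, optionally within a window. The following Python sketch models a tumbling count window version of that idea; the function and window here are illustrative only, not Heron's API:
+
+```python
+def reduce_by_key_and_window(elements, key_fn, value_fn, window_size, reduce_fn):
+    # Tumbling count window: slice the stream into fixed-size, non-overlapping
+    # windows, then reduce the values for each key within each window.
+    results = []
+    for start in range(0, len(elements), window_size):
+        window = elements[start:start + window_size]
+        acc = {}
+        for element in window:
+            key, value = key_fn(element), value_fn(element)
+            acc[key] = reduce_fn(acc[key], value) if key in acc else value
+        results.append(acc)
+    return results
+
+# Word count over windows of three words at a time
+words = ["apple", "bee", "apple", "cat", "apple", "bee"]
+counts = reduce_by_key_and_window(words, lambda w: w, lambda w: 1, 3, lambda x, y: x + y)
+```
+
+With a count of 1 as the extracted value and addition as the reduce function, this reduces to a per-window word count, which is the classic use case for these operations.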
+
+### Map operations
+
+Map operations create a new streamlet by applying the supplied mapping function to each element in the original streamlet.
+
+#### Java example
+
+```java
+import org.apache.heron.streamlet.Builder;
+
+Builder processingGraphBuilder = Builder.newBuilder();
+
+Streamlet<Integer> ones = processingGraphBuilder.newSource(() -> 1);
+Streamlet<Integer> thirteens = ones.map(i -> i + 12);
+```
+
+In this example, a supplier streamlet emits an indefinite series of 1s. The `map` operation then adds 12 to each incoming element, producing a streamlet of 13s. The effect of this operation is to transform the `Streamlet<Integer>` into a `Streamlet<Integer>` with different values (map operations can also convert streamlets into streamlets of a different type).
+
+### FlatMap operations
+
+FlatMap operations are like [map operations](#map-operations) but with the important difference that each element of the streamlet is mapped to a collection, whose elements are then "flattened" into the resulting streamlet. In the Java example below, a supplier streamlet emits the same sentence over and over again; the `flatMap` operation maps each sentence to a Java `List` of individual words, which are then flattened into a stream of words.
+
+#### Java example
+
+```java
+Streamlet<String> sentences = builder.newSource(() -> "I have nothing to declare but my genius");
+Streamlet<String> words = sentences
+        .flatMap((sentence) -> Arrays.asList(sentence.split("\\s+")));
+```
+
+The effect of this operation is to transform the `Streamlet<String>` of sentences into a `Streamlet<String>` containing each individual word emitted by the source streamlet.
+
+### Filter operations
+
+Filter operations retain some elements in a streamlet and exclude other elements on the basis of a provided filtering function.
+
+#### Java example
+
+```java
+Streamlet<Integer> randomInts =
+    builder.newSource(() -> ThreadLocalRandom.current().nextInt(1, 11));
+Streamlet<Integer> sevenOrLess = randomInts
+        .filter(i -> i <= 7);
+```
+
+In this example, a source streamlet consisting of random integers between 1 and 10 is modified by a filter operation that removes all streamlet elements that are greater than 7.
+
+### Union operations
+
+Union operations combine two streamlets of the same type into a single streamlet without modifying the elements.
+
+#### Java example
+
+```java
+Streamlet<String> oohs = builder.newSource(() -> "ooh");
+Streamlet<String> aahs = builder.newSource(() -> "aah");
+
+Streamlet<String> combined = oohs
+        .union(aahs);
+```
+
+Here, one streamlet is an endless series of "ooh"s while the other is an endless series of "aah"s. The `union` operation combines them into a single streamlet of alternating "ooh"s and "aah"s.
+
+### Clone operations
+
+Clone operations enable you to create any number of "copies" of a streamlet. Each of the "copy" streamlets contains all the elements of the original and can be manipulated just like the original streamlet.
+
+#### Java example
+
+```java
+import java.util.List;
+import java.util.concurrent.ThreadLocalRandom;
+
+Streamlet<Integer> integers = builder.newSource(() -> ThreadLocalRandom.current().nextInt(100));
+
+List<Streamlet<Integer>> copies = integers.clone(5);
+Streamlet<Integer> ints1 = copies.get(0);
+Streamlet<Integer> ints2 = copies.get(1);
+Streamlet<Integer> ints3 = copies.get(2);
+// and so on...
+```
+
+In this example, a streamlet of random integers between 0 and 99 is cloned into 5 identical streamlets.
+
+### Transform operations
+
+Transform operations are highly flexible operations that are most useful for:
+
+* operations involving state in [stateful topologies](heron-delivery-semantics#stateful-topologies)
+* operations that don't neatly fit into the other categories or into a lambda-based logic
+
+Transform operations require you to implement three different methods:
+
+* A `setup` method that enables you to pass a context object to the operation and to specify what happens prior to the `transform` step
+* A `transform` operation that performs the desired transformation
+* A `cleanup` method that allows you to specify what happens after the `transform` step
+
+The context object available to a transform operation provides access to:
+
+* the current state of the topology
+* the topology's configuration
+* the name of the stream
+* the stream partition
+* the current task ID
+
+Here's a Java example of a transform operation in a topology where a stateful record is kept of the number of items processed:
+
+```java
+import org.apache.heron.streamlet.Context;
+import org.apache.heron.streamlet.SerializableTransformer;
+
+import java.util.function.Consumer;
+
+public class CountNumberOfItems implements SerializableTransformer<String, String> {
+    private int numberOfItems;
+
+    public void setup(Context context) {
+        numberOfItems = (int) context.getState().get("number-of-items");
+        context.getState().put("number-of-items", numberOfItems + 1);
+    }
+
+    public void transform(String in, Consumer<String> consumer) {
+        String transformedString = in.toUpperCase(); // Apply some operation to the incoming value
+        consumer.accept(transformedString);
+    }
+
+    public void cleanup() {
+        System.out.println(
+                String.format("Successfully processed new state: %d", numberOfItems));
+    }
+}
+```
+
+This operation does a few things:
+
+* In the `setup` method, the [`Context`](/api/java/org/apache/heron/streamlet/Context.html) object is used to access the current state (which has the semantics of a Java `Map`). The current number of items processed is incremented by one and then saved as the new state.
+* In the `transform` method, the incoming string is transformed in some way and then "accepted" as the new value.
+* In the `cleanup` step, the current count of items processed is logged.
+
+Here's that operation within the context of a streamlet processing graph:
+
+```java
+builder.newSource(() -> "Some string over and over")
+        .transform(new CountNumberOfItems())
+        .log();
+```
+
+### Key by operations
+
+Key by operations convert each item in the original streamlet into a key-value pair and return a new streamlet.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .keyBy(
+        // Key extractor (in this case, each word acts as the key)
+        word -> word,
+        // Value extractor (get the length of each word)
+        word -> word.length()
+    )
+    // The result is logged
+    .log();
+```
+
+### Reduce by key operations
+
+You can apply [reduce](https://docs.oracle.com/javase/tutorial/collections/streams/reduction.html) operations to streamlets by specifying:
+
+* a key extractor that determines what counts as the key for the streamlet
+* a value extractor that determines which final value is chosen for each element of the streamlet
+* a reduce function that produces a single value for each key in the streamlet
+
+Reduce by key operations produce a new streamlet of key-value pairs, where each pair contains the extracted key and the value calculated by the reduce function.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .reduceByKey(
+        // Key extractor (in this case, each word acts as the key)
+        word -> word,
+        // Value extractor (each word appears only once, hence the value is always 1)
+        word -> 1,
+        // Reduce operation (a running sum)
+        (x, y) -> x + y
+    )
+    // The result is logged
+    .log();
+```
+
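+To make these semantics concrete, here is a plain-Java sketch (standard library only, not the Heron API; the class name and sentence are illustrative) of what a reduce by key operation computes for a stream of words, using the word itself as the key, `1` as the extracted value, and a running sum as the reduce function:
+
+```java
+import java.util.Arrays;
+import java.util.List;
+import java.util.Map;
+import java.util.stream.Collectors;
+
+public class ReduceByKeySketch {
+    // Key extractor: the word itself; value extractor: 1; reduce: a running sum
+    static Map<String, Integer> wordCounts(List<String> words) {
+        return words.stream()
+                .collect(Collectors.toMap(
+                        word -> word,    // key extractor
+                        word -> 1,       // value extractor
+                        (x, y) -> x + y  // reduce function
+                ));
+    }
+
+    public static void main(String[] args) {
+        List<String> words =
+                Arrays.asList("mary had a little lamb mary had".split("\\s+"));
+        // Each key maps to its total count: mary=2, had=2, a=1, little=1, lamb=1
+        System.out.println(wordCounts(words));
+    }
+}
+```
+
+The resulting map plays the role of the key-value streamlet that a reduce by key operation emits.
+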
+### Reduce by key and window operations
+
+You can apply [reduce](https://docs.oracle.com/javase/tutorial/collections/streams/reduction.html) operations to streamlets by specifying:
+
+* a key extractor that determines what counts as the key for the streamlet
+* a value extractor that determines which final value is chosen for each element of the streamlet
+* a [time window](heron-topology-concepts#window-operations) across which the operation will take place
+* a reduce function that produces a single value for each key in the streamlet
+
+Reduce by key and window operations produce a new streamlet of key-value window objects (which include a key-value pair including the extracted key and calculated value, as well as information about the window in which the operation took place).
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+import org.apache.heron.streamlet.WindowConfig;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .reduceByKeyAndWindow(
+        // Key extractor (in this case, each word acts as the key)
+        word -> word,
+        // Value extractor (each word appears only once, hence the value is always 1)
+        word -> 1,
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50),
+        // Reduce operation (a running sum)
+        (x, y) -> x + y
+    )
+    // The result is logged
+    .log();
+```
+
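+The `TumblingCountWindow(50)` above means that elements are grouped into consecutive, non-overlapping batches of 50, with the reduce function applied separately inside each batch. Here is a plain-Java sketch of that behavior (standard library only, not the Heron API; a window size of 3 is used for brevity):
+
+```java
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+
+public class TumblingCountWindowSketch {
+    // Groups elements into consecutive, non-overlapping windows of `size`,
+    // then counts occurrences per key inside each window
+    static List<Map<String, Integer>> countPerWindow(List<String> words, int size) {
+        List<Map<String, Integer>> result = new ArrayList<>();
+        for (int start = 0; start < words.size(); start += size) {
+            Map<String, Integer> counts = new HashMap<>();
+            for (String w : words.subList(start, Math.min(start + size, words.size()))) {
+                counts.merge(w, 1, Integer::sum);
+            }
+            result.add(counts);
+        }
+        return result;
+    }
+
+    public static void main(String[] args) {
+        List<String> words = Arrays.asList("a", "b", "a", "b", "b", "c");
+        // Two tumbling windows of 3 elements: first counts a=2, b=1; second counts b=2, c=1
+        System.out.println(countPerWindow(words, 3));
+    }
+}
+```
+
+Each map in the result corresponds to one key-value window object emitted by the windowed operation.
+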
+### Count by key operations
+
+Count by key operations extract keys from data in the original streamlet and count the number of times a key has been encountered.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .countByKey(word -> word)
+    // The result is logged
+    .log();
+```
+
+### Count by key and window operations
+
+Count by key and window operations extract keys from data in the original streamlet and count the number of times a key has been encountered within each [time window](#windowing).
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+import org.apache.heron.streamlet.WindowConfig;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    .countByKeyAndWindow(
+        // Key extractor (in this case, each word acts as the key)
+        word -> word,
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50)
+    )
+    // The result is logged
+    .log();
+```
+
+### Split operations
+
+Split operations split a streamlet into multiple streamlets with different ids by computing the corresponding stream id for each item in the original streamlet.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+Map<String, SerializablePredicate<String>> splitter = new HashMap<>();
+splitter.put("long_word", s -> s.length() >= 4);
+splitter.put("short_word", s -> s.length() < 4);
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    // Splits the stream into streams of long and short words
+    .split(splitter)
+    // Choose the stream of the short words
+    .withStream("short_word")
+    // The result is logged
+    .log();
+```
+
+### With stream operations
+
+With stream operations select a stream by its id from a streamlet that contains multiple streams. They are often used together with [split](#split-operations) operations.
+
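+To illustrate the split-then-select pattern, here is a plain-Java sketch (standard library only, not the Heron API; names are illustrative) that routes words into named streams via predicates and then selects one stream by id, analogous to `split` followed by `withStream`:
+
+```java
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.HashMap;
+import java.util.List;
+import java.util.Map;
+import java.util.function.Predicate;
+
+public class SplitAndSelectSketch {
+    // Routes each word into every stream whose predicate it satisfies
+    static Map<String, List<String>> split(List<String> words,
+                                           Map<String, Predicate<String>> splitter) {
+        Map<String, List<String>> streams = new HashMap<>();
+        for (String id : splitter.keySet()) {
+            streams.put(id, new ArrayList<>());
+        }
+        for (String word : words) {
+            for (Map.Entry<String, Predicate<String>> entry : splitter.entrySet()) {
+                if (entry.getValue().test(word)) {
+                    streams.get(entry.getKey()).add(word);
+                }
+            }
+        }
+        return streams;
+    }
+
+    public static void main(String[] args) {
+        Map<String, Predicate<String>> splitter = new HashMap<>();
+        splitter.put("long_word", s -> s.length() >= 4);
+        splitter.put("short_word", s -> s.length() < 4);
+
+        List<String> words = Arrays.asList("mary had a little lamb".split("\\s+"));
+        // Selecting "short_word" here mirrors withStream("short_word")
+        System.out.println(split(words, splitter).get("short_word"));
+    }
+}
+```
+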
+### Apply operator operations
+
+Apply operator operations apply a user-defined operator (such as a bolt) to each element of the original streamlet and return a new streamlet.
+
+#### Java example
+
+```java
+import java.util.Arrays;
+
+class MyBoltOperator extends MyBolt implements IStreamletRichOperator<Double, Double> {
+}
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Mary had a little lamb")
+    // Convert each sentence into individual words
+    .flatMap(sentence -> Arrays.asList(sentence.toLowerCase().split("\\s+")))
+    // Apply user defined operation
+    .applyOperator(new MyBoltOperator())
+    // The result is logged
+    .log();
+```
+
+### Join operations
+
+Join operations in the Streamlet API take two streamlets (a "left" and a "right" streamlet) and join them together:
+
+* based on a key extractor for each streamlet
+* over key-value elements accumulated during a specified [time window](#windowing)
+* based on a [join type](#join-types) ([inner](#inner-joins), [outer left](#outer-left-joins), [outer right](#outer-right-joins), or [outer](#outer-joins))
+* using a join function that specifies *how* values will be processed
+
+You may already be familiar with `JOIN` operations in SQL databases, like this:
+
+```sql
+SELECT username, email
+FROM all_users
+INNER JOIN banned_users ON all_users.username = banned_users.username;
+```
+
+> If you'd like to unite two streamlets into one *without* applying a window or a join function, you can use a [union](#union-operations) operation, which is available for key-value streamlets as well as normal streamlets.
+
+All join operations are performed:
+
+1. Over elements accumulated during a specified [time window](#windowing)
+1. In accordance with a key and value extracted from each streamlet element (you must provide extractor functions for both)
+1. In accordance with a join function that produces a "joined" value for each pair of streamlet elements
+
+#### Join types
+
+The Heron Streamlet API supports four types of joins:
+
+Type | What the join operation yields | Default?
+:----|:-------------------------------|:--------
+[Inner](#inner-joins) | All key-values with matched keys across the left and right stream | Yes
+[Outer left](#outer-left-joins) | All key-values with matched keys across both streams plus unmatched keys in the left stream |
+[Outer right](#outer-right-joins) | All key-values with matched keys across both streams plus unmatched keys in the right stream |
+[Outer](#outer-joins) | All key-values across both the left and right stream, regardless of whether or not any given element has a matching key in the other stream |
+
+#### Inner joins
+
+Inner joins operate over the [Cartesian product](https://en.wikipedia.org/wiki/Cartesian_product) of the left stream and the right stream, i.e. over the whole set of ordered pairs between the two streams. Imagine this set of key-value pairs accumulated within a time window:
+
+Left streamlet | Right streamlet
+:--------------|:---------------
+("player1", 4) | ("player1", 10)
+("player1", 5) | ("player1", 12)
+("player1", 17) | ("player2", 27)
+
+An inner join operation would thus apply the join function to every pair of key-values with matching keys, **3 &times; 2 = 6** pairs in total, drawing on this set of key-values:
+
+Included key-values |
+:-------------------|
+("player1", 4) |
+("player1", 5) |
+("player1", 10) |
+("player1", 12) |
+("player1", 17) |
+
+> Note that the `("player2", 27)` key-value pair was *not* included in the stream because there's no matching key-value in the left streamlet.
+
+If the supplied join function, say, added the values together, then the resulting joined stream would look like this:
+
+Operation | Joined Streamlet
+:---------|:----------------
+4 + 10 | ("player1", 14)
+4 + 12 | ("player1", 16)
+5 + 10 | ("player1", 15)
+5 + 12 | ("player1", 17)
+17 + 10 | ("player1", 27)
+17 + 12 | ("player1", 29)
+
+> Inner joins are the "default" join type in the Heron Streamlet API. If you call the `join` method without specifying a join type, an inner join will be applied.
+
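+The arithmetic above can be reproduced with a plain-Java sketch (standard library only, not the Heron API; the class name is illustrative) that applies a summing join function to every Cartesian pair of values sharing the same key:
+
+```java
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.List;
+
+public class InnerJoinSketch {
+    // Applies the join function (a sum) to every ordered pair of
+    // left and right values that share the same key
+    static List<Integer> joinSums(List<Integer> leftValues, List<Integer> rightValues) {
+        List<Integer> joined = new ArrayList<>();
+        for (int l : leftValues) {
+            for (int r : rightValues) {
+                joined.add(l + r);
+            }
+        }
+        return joined;
+    }
+
+    public static void main(String[] args) {
+        // "player1" values accumulated in the window on each side;
+        // ("player2", 27) has no matching key and so never reaches the join function
+        List<Integer> left = Arrays.asList(4, 5, 17);
+        List<Integer> right = Arrays.asList(10, 12);
+        // 3 x 2 = 6 joined values: [14, 16, 15, 17, 27, 29]
+        System.out.println(joinSums(left, right));
+    }
+}
+```
+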
+##### Java example
+
+```java
+class Score {
+    String playerUsername;
+    int playerScore;
+
+    // Setters and getters
+}
+
+Streamlet<Score> scores1 = /* A stream of player scores */;
+Streamlet<Score> scores2 = /* A second stream of player scores */;
+
+scores1
+    .join(
+        scores2,
+        // Key extractor for the left stream (scores1)
+        score -> score.getPlayerUsername(),
+        // Key extractor for the right stream (scores2)
+        score -> score.getPlayerUsername(),
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50),
+        // Join function (selects the larger score as the value
+        // using a ternary operator)
+        (x, y) ->
+            (x.getPlayerScore() >= y.getPlayerScore()) ?
+                x.getPlayerScore() :
+                y.getPlayerScore()
+    )
+    .log();
+```
+
+In this example, two streamlets consisting of `Score` objects are joined. In the `join` call, key extractors for both streams are supplied along with a window configuration and a join function. The resulting joined streamlet will consist of key-value pairs in which each player's username is the key and the joined (in this case, higher) score is the value.
+
+By default, an [inner join](#inner-joins) is applied in join operations but you can also specify a different join type. Here's a Java example for an [outer right](#outer-right-joins) join:
+
+```java
+import org.apache.heron.streamlet.JoinType;
+
+scores1
+    .join(
+        scores2,
+        // Key extractor for the left stream (scores1)
+        score -> score.getPlayerUsername(),
+        // Key extractor for the right stream (scores2)
+        score -> score.getPlayerUsername(),
+        // Window configuration
+        WindowConfig.TumblingCountWindow(50),
+        // Join type
+        JoinType.OUTER_RIGHT,
+        // Join function (selects the larger score as the value
+        // using a ternary operator)
+        (x, y) ->
+            (x.getPlayerScore() >= y.getPlayerScore()) ?
+                x.getPlayerScore() :
+                y.getPlayerScore()
+    )
+    .log();
+```
+
+#### Outer left joins
+
+An outer left join includes the results of an [inner join](#inner-joins) *plus* all of the unmatched keys in the left stream. Take this example left and right streamlet:
+
+Left streamlet | Right streamlet
+:--------------|:---------------
+("player1", 4) | ("player1", 10)
+("player2", 5) | ("player4", 12)
+("player3", 17) |
+
+The resulting set of key-values within the time window:
+
+Included key-values |
+:-------------------|
+("player1", 4) |
+("player1", 10) |
+("player2", 5) |
+("player3", 17) |
+
+In this case, key-values with a key of `player4` are excluded because they are in the right stream but have no matching key with any element in the left stream.
+
+#### Outer right joins
+
+An outer right join includes the results of an [inner join](#inner-joins) *plus* all of the unmatched keys in the right stream. Take this example left and right streamlet (from [above](#outer-left-joins)):
+
+Left streamlet | Right streamlet
+:--------------|:---------------
+("player1", 4) | ("player1", 10)
+("player2", 5) | ("player4", 12)
+("player3", 17) |
+
+The resulting set of key-values within the time window:
+
+Included key-values |
+:-------------------|
+("player1", 4) |
+("player1", 10) |
+("player2", 5) |
+("player4", 12) |
+
+In this case, key-values with a key of `player3` are excluded because they are in the left stream but have no matching key with any element in the right stream.
+
+#### Outer joins
+
+Outer joins include *all* key-values across both the left and right stream, regardless of whether or not any given element has a matching key in the other stream. If you want to ensure that no element is left out of a resulting joined streamlet, use an outer join. Take this example left and right streamlet (from [above](#outer-left-joins)):
+
+Left streamlet | Right streamlet
+:--------------|:---------------
+("player1", 4) | ("player1", 10)
+("player2", 5) | ("player4", 12)
+("player3", 17) |
+
+The resulting set of key-values within the time window:
+
+Included key-values |
+:-------------------|
+("player1", 4) |
+("player1", 10) |
+("player2", 5) |
+("player4", 12) |
+("player3", 17) |
+
+> Note that *all* key-values were indiscriminately included in the joined set.
+
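+The four membership rules can be summarized in a plain-Java sketch (standard library only, not the Heron API; the type strings are illustrative) that decides which key-values from the example streamlets participate in each join type:
+
+```java
+import java.util.ArrayList;
+import java.util.Arrays;
+import java.util.LinkedHashMap;
+import java.util.List;
+import java.util.Map;
+
+public class JoinMembershipSketch {
+    // Returns the key-values that participate in a join of the given type:
+    // "inner", "outer_left", "outer_right", or "outer"
+    static List<String> included(Map<String, Integer> left,
+                                 Map<String, Integer> right,
+                                 String type) {
+        List<String> out = new ArrayList<>();
+        for (Map.Entry<String, Integer> e : left.entrySet()) {
+            boolean matched = right.containsKey(e.getKey());
+            if (matched || type.equals("outer_left") || type.equals("outer")) {
+                out.add(e.getKey() + ":" + e.getValue());
+            }
+        }
+        for (Map.Entry<String, Integer> e : right.entrySet()) {
+            boolean matched = left.containsKey(e.getKey());
+            if (matched || type.equals("outer_right") || type.equals("outer")) {
+                out.add(e.getKey() + ":" + e.getValue());
+            }
+        }
+        return out;
+    }
+
+    public static void main(String[] args) {
+        Map<String, Integer> left = new LinkedHashMap<>();
+        left.put("player1", 4);
+        left.put("player2", 5);
+        left.put("player3", 17);
+        Map<String, Integer> right = new LinkedHashMap<>();
+        right.put("player1", 10);
+        right.put("player4", 12);
+
+        // Outer left keeps unmatched left keys (player2, player3) but drops player4
+        System.out.println(included(left, right, "outer_left"));
+        // Outer keeps every key-value from both sides
+        System.out.println(included(left, right, "outer"));
+    }
+}
+```
+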
+### Sink operations
+
+In processing graphs like the ones you build using the Heron Streamlet API, **sinks** are essentially the terminal points in your graph, where your processing logic comes to an end. A processing graph can end with writing to a database, publishing to a topic in a pub-sub messaging system, and so on. With the Streamlet API, you can implement your own custom sinks.
+
+#### Java example
+
+```java
+import org.apache.heron.streamlet.Context;
+import org.apache.heron.streamlet.Sink;
+
+public class FormattedLogSink<T> implements Sink<T> {
+    private String streamletName;
+
+    public void setup(Context context) {
+        streamletName = context.getStreamletName();
+    }
+
+    public void put(T element) {
+        String message = String.format("Streamlet %s has produced an element with a value of: '%s'",
+                streamletName,
+                element.toString());
+        System.out.println(message);
+    }
+
+    public void cleanup() {}
+}
+```
+
+In this example, the sink fetches the name of the enclosing streamlet from the context passed in the `setup` method. The `put` method specifies how the sink handles each element that is received (in this case, a formatted message is logged to stdout). The `cleanup` method enables you to specify what happens after the element has been processed by the sink.
+
+Here is the `FormattedLogSink` at work in an example processing graph:
+
+```java
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> "Here is a string to be passed to the sink")
+        .toSink(new FormattedLogSink());
+```
+
+> [Log operations](#log-operations) rely on a log sink that is provided out of the box. You'll need to implement other sinks yourself.
+
+### Consume operations
+
+Consume operations are like [sink operations](#sink-operations) except they don't require implementing a full sink interface. Consume operations are thus suited for simple operations like formatted logging.
+
+#### Java example
+
+```java
+Builder builder = Builder.newBuilder()
+        .newSource(() -> generateRandomInteger())
+        .filter(i -> i % 2 == 0)
+        .consume(i -> {
+            String message = String.format("Even number found: %d", i);
+            System.out.println(message);
+        });
+```
+
+## Partitioning
+
+In the topology API, processing parallelism can be managed by adjusting the number of spouts and bolts performing different operations, enabling you to, for example, increase the relative parallelism of a bolt by running three instances of it instead of two.
+
+The Heron Streamlet API provides a different mechanism for controlling parallelism: **partitioning**. To understand partitioning, keep in mind that rather than physical spouts and bolts, the core processing construct in the Heron Streamlet API is the processing step. With the Heron Streamlet API, you can explicitly assign a number of partitions to each processing step in your graph (the default is one partition).
+
+The example topology [above](#streamlets), for example, has five steps:
+
+* the random integer source
+* the "add one" map operation
+* the union operation
+* the filtering operation
+* the logging operation.
+
+You could apply varying numbers of partitions to each step in that topology like this:
+
+```java
+Builder builder = Builder.newBuilder();
+
+Streamlet<Integer> zeroes = builder.newSource(() -> 0)
+        .setName("zeroes");
+
+builder.newSource(() -> ThreadLocalRandom.current().nextInt(1, 11))
+        .setName("random-ints")
+        .setNumPartitions(3)
+        .map(i -> i + 1)
+        .setName("add-one")
+        .repartition(3)
+        .union(zeroes)
+        .setName("unify-streams")
+        .repartition(2)
+        .filter(i -> i != 2)
+        .setName("remove-all-twos")
+        .repartition(1)
+        .log();
+```
+
+### Repartition operations
+
+As explained [above](#partitioning), when you set a number of partitions for a specific operation (including for source streamlets), the same number of partitions is applied to all downstream operations *until* a different number is explicitly set.
+
+```java
+import java.util.Arrays;
+
+Builder builder = Builder.newBuilder();
+
+builder.newSource(() -> ThreadLocalRandom.current().nextInt(1, 11))
+    .repartition(4, (element, numPartitions) -> {
+        if (element > 5) {
+            return Arrays.asList(0, 1);
+        } else {
+            return Arrays.asList(2, 3);
+        }
+    });
+```
+
diff --git a/website2/website/versioned_docs/version-0.20.2-incubating/schedulers-k8s-with-helm.md b/website2/website/versioned_docs/version-0.20.2-incubating/schedulers-k8s-with-helm.md
new file mode 100644
index 0000000..6b9b48a
--- /dev/null
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/schedulers-k8s-with-helm.md
@@ -0,0 +1,304 @@
+---
+id: version-0.20.2-incubating-schedulers-k8s-with-helm
+title: Kubernetes with Helm
+sidebar_label: Kubernetes with Helm
+original_id: schedulers-k8s-with-helm
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+> If you'd prefer to install Heron on Kubernetes *without* using the [Helm](https://helm.sh) package manager, see the [Heron on Kubernetes by hand](schedulers-k8s-by-hand) document.
+
+[Helm](https://helm.sh) is an open source package manager for [Kubernetes](https://kubernetes.io) that enables you to quickly and easily install even the most complex software systems on Kubernetes. Heron has a Helm [chart](https://docs.helm.sh/developing_charts/#charts) that you can use to install Heron on Kubernetes using just a few commands. The chart can be used to install Heron on the following platforms:
+
+* [Minikube](#minikube) (the default)
+* [Google Kubernetes Engine](#google-kubernetes-engine)
+* [Amazon Web Services](#amazon-web-services)
+* [Bare metal](#bare-metal)
+
+## Requirements
+
+In order to install Heron on Kubernetes using Helm, you'll need to have an existing Kubernetes cluster on one of the supported [platforms](#specifying-a-platform) (which includes [bare metal](#bare-metal) installations).
+
+## Installing the Helm client
+
+In order to get started, you need to install Helm on your machine. Installation instructions for [macOS](#helm-for-macos) and [Linux](#helm-for-linux) are below.
+
+### Helm for macOS
+
+You can install Helm on macOS using [Homebrew](https://brew.sh):
+
+```bash
+$ brew install kubernetes-helm
+```
+
+### Helm for Linux
+
+You can install Helm on Linux using a simple installation script:
+
+```bash
+$ curl -fsSL -o get_helm.sh https://raw.githubusercontent.com/helm/helm/master/scripts/get-helm-3
+$ chmod 700 get_helm.sh
+$ ./get_helm.sh
+```
+
+## Installing Heron on Kubernetes
+
+To use Helm with Kubernetes, you need to first make sure that [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl) is using the right configuration context for your cluster. To check which context is being used:
+
+```bash
+$ kubectl config current-context
+```
+
+Once you've installed the Helm client on your machine and gotten Helm pointing to your Kubernetes cluster, you need to make your client aware of the `heron-charts` Helm repository, which houses the chart for Heron:
+
+```bash
+$ helm repo add heron-charts https://storage.googleapis.com/heron-charts
+"heron-charts" has been added to your repositories
+```
+
+Create a namespace to install into:
+
+```bash
+$ kubectl create namespace heron
+```
+
+Now you can install the Heron package:
+
+```bash
+$ helm install heron-charts/heron -g -n heron
+```
+
+This will install Heron into the `heron` namespace (`-n`) under a randomly generated release name (`-g`), such as `jazzy-anaconda`. To give the installation a specific name, such as `heron-kube`:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron
+```
+
+### Specifying a platform
+
+The default platform for running Heron on Kubernetes is [Minikube](#minikube). To specify a different platform, you can use the `--set platform=PLATFORM` flag. Here's an example:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=gke
+```
+
+The available platforms are:
+
+Platform | Tag
+:--------|:---
+[Minikube](#minikube) | `minikube`
+[Google Kubernetes Engine](#google-kubernetes-engine) | `gke`
+[Amazon Web Services](#amazon-web-services) | `aws`
+[Bare metal](#bare-metal) | `baremetal`
+
+#### Minikube
+
+To run Heron on Minikube, you need to first [install Minikube](https://kubernetes.io/docs/tasks/tools/install-minikube/). Once Minikube is installed, you can start it by running `minikube start`. Please note, however, that Heron currently requires the following resources:
+
+* 7 GB of memory
+* 5 CPUs
+* 20 GB of disk space
+
+To start up Minikube with the minimum necessary resources:
+
+```bash
+$ minikube start \
+  --memory=7168 \
+  --cpus=5 \
+  --disk-size=20g
+```
+
+Once Minikube is running, create a namespace to install into:
+
+```bash
+$ kubectl create namespace heron
+```
+
+Then you can install Heron in one of two ways:
+
+```bash
+# Use the Minikube default
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron
+
+# Explicitly select Minikube
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=minikube
+```
+
+#### Google Kubernetes Engine
+
+The resources required to run Heron on [Google Kubernetes Engine](https://cloud.google.com/kubernetes-engine/) vary based on your use case. To run a basic Heron cluster intended for development and experimentation, you'll need at least:
+
+* 3 nodes
+* [n1-standard-4](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machines
+
+To create a cluster with those resources using the [gcloud](https://cloud.google.com/sdk/gcloud/) tool:
+
+```bash
+$ gcloud container clusters create heron-gke-dev-cluster \
+  --num-nodes=3 \
+  --machine-type=n1-standard-4
+```
+
+For a production-ready cluster you'll want a larger cluster with:
+
+* *at least* 8 nodes
+* [n1-standard-4 or n1-standard-8](https://cloud.google.com/compute/docs/machine-types#standard_machine_types) machines (preferably the latter)
+
+To create such a cluster:
+
+```bash
+$ gcloud container clusters create heron-gke-prod-cluster \
+  --num-nodes=8 \
+  --machine-type=n1-standard-8
+```
+
+Once the cluster has been successfully created, you'll need to install that cluster's credentials locally so that they can be used by [kubectl](https://kubernetes.io/docs/tasks/tools/install-kubectl/). You can do this in just one command:
+
+```bash
+$ gcloud container clusters get-credentials heron-gke-dev-cluster # or heron-gke-prod-cluster
+```
+
+Create a namespace to install into:
+
+```bash
+$ kubectl create namespace heron
+```
+
+Now you can install Heron:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=gke
+```
+
+##### Resource configurations
+
+Helm enables you to supply sets of variables via YAML files. There are currently a handful of different resource configurations that can be applied to your Heron on GKE cluster upon installation:
+
+Configuration | Description
+:-------------|:-----------
+[`small.yaml`](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/gke/small.yaml) | Smaller Heron cluster intended for basic testing, development, and experimentation
+[`medium.yaml`](https://github.com/apache/incubator-heron/blob/master/deploy/kubernetes/gke/medium.yaml) | Larger Heron cluster geared more toward production usage
+
+To apply the `small` configuration, for example:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=gke \
+  --values https://raw.githubusercontent.com/apache/incubator-heron/master/deploy/kubernetes/gke/small.yaml
+```
+
+#### Amazon Web Services
+
+To run Heron on Kubernetes on Amazon Web Services (AWS), set the `platform` value to `aws` when installing the chart:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=aws
+```
+
+##### Using S3 uploader
+
+You can configure Heron to use S3 to distribute user topologies. First, set up an S3 bucket and configure an IAM user with sufficient permissions over it, then obtain access keys for that user. You can then deploy Heron like this:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=aws \
+  --set uploader.class=s3 \
+  --set uploader.s3Bucket=heron \
+  --set uploader.s3PathPrefix=topologies \
+  --set uploader.s3AccessKey=XXXXXXXXXXXXXXXXXXXX \
+  --set uploader.s3SecretKey=XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX \
+  --set uploader.s3Region=us-west-1
+```
+
+#### Bare metal
+
+To run Heron on a bare metal Kubernetes cluster:
+
+```bash
+$ helm install heron-kube heron-charts/heron \
+  --namespace heron \
+  --set platform=baremetal
+```
+
+### Managing topologies
+
+> When setting the `heron` CLI configuration, make sure that the cluster name matches the name of the Helm release. This is the release name you supplied upon installation (the install examples above used `heron-kube`, while the commands below assume `heron-kubernetes`). Adjust the name accordingly if necessary.
+
+Once all of the components have been successfully started up, you need to open up a proxy port to your Kubernetes cluster using the [`kubectl proxy`](https://kubernetes.io/docs/tasks/access-kubernetes-api/http-proxy-access-api/) command:
+
+```bash
+$ kubectl proxy -p 8001
+```
+> Note: All of the following Kubernetes-specific URLs are valid as of the Kubernetes 1.10.0 release.
+ 
+Now, verify that the Heron API server running in your Kubernetes cluster is available using `curl`:
+
+```bash
+$ curl http://localhost:8001/api/v1/namespaces/default/services/heron-kubernetes-apiserver:9000/proxy/api/v1/version
+```
+
+
+You should get a JSON response like this:
+
+```json
+{
+  "heron.build.git.revision" : "ddbb98bbf173fb082c6fd575caaa35205abe34df",
+  "heron.build.git.status" : "Clean",
+  "heron.build.host" : "ci-server-01",
+  "heron.build.time" : "Sat Mar 31 09:27:19 UTC 2018",
+  "heron.build.timestamp" : "1522488439000",
+  "heron.build.user" : "release-agent",
+  "heron.build.version" : "0.17.8"
+}
+```
+
+## Running topologies on Heron on Kubernetes
+
+Once you have a Heron cluster up and running on Kubernetes via Helm, you can use the [`heron` CLI tool](user-manuals-heron-cli) as usual once you set the proper URL for the [Heron API server](deployment-api-server). When running Heron on Kubernetes, that URL is:
+
+```bash
+http://localhost:8001/api/v1/namespaces/default/services/heron-kubernetes-apiserver:9000/proxy
+```
+
+To set that URL:
+
+```bash
+$ heron config heron-kubernetes set service_url \
+  http://localhost:8001/api/v1/namespaces/default/services/heron-kubernetes-apiserver:9000/proxy
+```
+
+To test your cluster, you can submit an example topology:
+
+```bash
+$ heron submit heron-kubernetes \
+  ~/.heron/examples/heron-streamlet-examples.jar \
+  org.apache.heron.examples.streamlet.WindowedWordCountTopology \
+  WindowedWordCount
+```
diff --git a/website2/website/versioned_docs/version-0.20.2-incubating/schedulers-nomad.md b/website2/website/versioned_docs/version-0.20.2-incubating/schedulers-nomad.md
new file mode 100644
index 0000000..8dec423
--- /dev/null
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/schedulers-nomad.md
@@ -0,0 +1,439 @@
+---
+id: version-0.20.2-incubating-schedulers-nomad
+title: Nomad
+sidebar_label: Nomad
+original_id: schedulers-nomad
+---
+<!--
+    Licensed to the Apache Software Foundation (ASF) under one
+    or more contributor license agreements.  See the NOTICE file
+    distributed with this work for additional information
+    regarding copyright ownership.  The ASF licenses this file
+    to you under the Apache License, Version 2.0 (the
+    "License"); you may not use this file except in compliance
+    with the License.  You may obtain a copy of the License at
+      http://www.apache.org/licenses/LICENSE-2.0
+    Unless required by applicable law or agreed to in writing,
+    software distributed under the License is distributed on an
+    "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+    KIND, either express or implied.  See the License for the
+    specific language governing permissions and limitations
+    under the License.
+-->
+
+Heron supports [Hashicorp](https://hashicorp.com)'s [Nomad](https://nomadproject.io) as a scheduler. You can use Nomad for either small- or large-scale Heron deployments or to run Heron locally in [standalone mode](schedulers-standalone).
+
+> Update: Heron now supports running on Nomad via the [raw exec driver](https://www.nomadproject.io/docs/drivers/raw_exec.html) or the [Docker driver](https://www.nomadproject.io/docs/drivers/docker.html).
+
+## Nomad setup
+
+Setting up a Nomad cluster is not covered here. See the [official Nomad docs](https://www.nomadproject.io/intro/getting-started/install.html) for instructions.
+
+**Below are instructions on how to run Heron on Nomad via raw exec.** In this mode, Heron executors run as raw processes on the host machines.
+
+The advantage of this mode is that it is lightweight and typically does not require sudo privileges to set up and run. However, the setup procedure may be a bit more complex than running via Docker, since there are more things to consider. Also, resource allocation is taken into account but not enforced.
+
+## Requirements
+
+When setting up your Nomad cluster, the following are required:
+
+* The [Heron CLI tool](user-manuals-heron-cli) must be installed on each machine used to deploy Heron topologies
+* Python 3, Java 7 or 8, and [curl](https://curl.haxx.se/) must be installed on every machine in the cluster
+* A [ZooKeeper cluster](https://zookeeper.apache.org)
+
+## Configuring Heron settings
+
+Before running Heron via Nomad, you'll need to configure some settings. Once you've [installed Heron](getting-started-local-single-node), all of the configurations you'll need to modify will be in the `~/.heron/conf/nomad` directory.
+
+First, make sure that `heron.nomad.driver` is set to `raw_exec` in `~/.heron/conf/nomad/scheduler.yaml`:
+
+```yaml
+heron.nomad.driver: "raw_exec"
+```
+
+You'll need to use a topology uploader to deploy topology packages to nodes in your cluster. You can use one of the following uploaders:
+
+* The HTTP uploader in conjunction with Heron's [API server](deployment-api-server). The Heron API server acts like a file server to which users can upload topology packages. The API server distributes the packages, along with the Heron core package, to the relevant machines. You can also use the API server to submit your Heron topology to Nomad (described [below](#deploying-with-the-api-server)) <!-- TODO: link to upcoming HTTP uploader documentation -->
+* [Amazon S3](uploaders-amazon-s3). Please note that the S3 uploader requires an AWS account.
+* [SCP](uploaders-scp). Please note that the SCP uploader requires SSH access to nodes in the cluster.
+
+You can modify the `heron.class.uploader` parameter in `~/.heron/conf/nomad/uploader.yaml` to choose an uploader.
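+
+For example, to select the S3 uploader (a sketch; verify that the class name matches your Heron release):
+
+```yaml
+# ~/.heron/conf/nomad/uploader.yaml
+heron.class.uploader: org.apache.heron.uploader.s3.S3Uploader
+```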
+
+In addition, you must update the `heron.statemgr.connection.string` parameter in the `statemgr.yaml` file in `~/.heron/conf/nomad` to your ZooKeeper connection string. Here's an example:
+
+```yaml
+heron.statemgr.connection.string: 127.0.0.1:2181
+```
+
+Then, update the `heron.nomad.scheduler.uri` parameter in `scheduler.yaml` to the URL of the Nomad server to which you'll be submitting jobs. Here's an example:
+
+```yaml
+heron.nomad.scheduler.uri: http://127.0.0.1:4646
+```
+
+You may also want to configure where Heron will store files on your machine if you're running Nomad locally (in `scheduler.yaml`). Here's an example:
+
+```yaml
+heron.scheduler.local.working.directory: ${HOME}/.herondata/topologies/${CLUSTER}/${ROLE}/${TOPOLOGY_ID}
+```
+
+> Heron uses string interpolation to fill in the missing values for `CLUSTER`, `ROLE`, etc.
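+
+For example, with `CLUSTER=nomad`, `ROLE=alice`, and `TOPOLOGY_ID=wordcount-1a2b` (all hypothetical values), the setting above would resolve to:
+
+```bash
+# resolved working directory (illustrative only)
+${HOME}/.herondata/topologies/nomad/alice/wordcount-1a2b
+```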
+
+## Distributing Heron core
+
+The Heron core package needs to be made available for every machine in the cluster to download. You'll need to provide a URI for the Heron core package. Here are the currently supported protocols:
+
+* `file://` (local FS)
+* `http://` (HTTP)
+
+You can do this in one of several ways:
+
+* Use the Heron API server to distribute `heron-core.tar.gz` (see [here](deployment-api-server) for more info)
+* Copy `heron-core.tar.gz` onto every node in the cluster
+* Mount a network drive containing `heron-core.tar.gz` on every machine in the cluster
+* Upload `heron-core.tar.gz` to an S3 bucket and expose an HTTP endpoint
+* Upload `heron-core.tar.gz` to be hosted on a file server and expose an HTTP endpoint
+
+> A copy of `heron-core.tar.gz` is located at `~/.heron/dist/heron-core.tar.gz` on the machine on which you installed the Heron CLI.
+
+You'll need to set the URL for `heron-core.tar.gz` in the `client.yaml` configuration file in `~/.heron/conf/nomad`. Here are some examples:
+
+```yaml
+# local filesystem
+heron.package.core.uri: file:///path/to/heron/heron-core.tar.gz
+
+# from a web server
+heron.package.core.uri: http://some.webserver.io/heron-core.tar.gz
+```
+
+## Submitting Heron topologies to the Nomad cluster
+
+You can submit Heron topologies to a Nomad cluster via the [Heron CLI tool](user-manuals-heron-cli):
+
+```bash
+$ heron submit nomad \
+  <topology package path> \
+  <topology classpath> \
+  <topology CLI args>
+```
+
+Here's an example:
+
+```bash
+$ heron submit nomad \
+  ~/.heron/examples/heron-streamlet-examples.jar \                # Package path
+  org.apache.heron.examples.streamlet.WindowedWordCountTopology \ # Topology classpath
+  windowed-word-count                                             # Args passed to topology
+```
+
+## Deploying with the API server
+
+The advantage of running the [Heron API Server](deployment-api-server) is that it can act as a file server to help you distribute topology package files and submit jobs to Nomad, so that you don't need to modify the configuration files mentioned above.  By using Heron’s API Server, you can set configurations such as the URI of ZooKeeper and the Nomad server once and not need to configure each machine from which you want to submit Heron topologies.
+
+## Running the API server
+
+You can run the Heron API server on any machine that can be reached by machines in your Nomad cluster via HTTP. Here's a command you can use to run the API server:
+
+```bash
+$ ~/.heron/bin/heron-apiserver \
+  --cluster nomad \
+  --base-template nomad \
+  -D heron.statemgr.connection.string=<ZooKeeper URI> \
+  -D heron.nomad.scheduler.uri=<Nomad URI> \
+  -D heron.class.uploader=org.apache.heron.uploader.http.HttpUploader \
+  --verbose
+```
+
+You can also run the API server in Nomad itself, but you will need to have a local copy of the Heron API server executable on every machine in the cluster. Here's an example Nomad job for the API server:
+
+```hcl
+job "apiserver" {
+  datacenters = ["dc1"]
+  type = "service"
+  group "apiserver" {
+    count = 1
+    task "apiserver" {
+      driver = "raw_exec"
+      config {
+        command = "<heron_apiserver_executable>"
+        args = [
+        "--cluster", "nomad",
+        "--base-template", "nomad",
+        "-D", "heron.statemgr.connection.string=<zookeeper_uri>",
+        "-D", "heron.nomad.scheduler.uri=<scheduler_uri>",
+        "-D", "heron.class.uploader=org.apache.heron.uploader.http.HttpUploader",
+        "--verbose"]
+      }
+      resources {
+        cpu    = 500 # 500 MHz
+        memory = 256 # 256MB
+      }
+    }
+  }
+}
+```
+
+Make sure to replace the following:
+
+* `<heron_apiserver_executable>` --- The local path to where the [Heron API server](deployment-api-server) executable is located (usually `~/.heron/bin/heron-apiserver`)
+* `<zookeeper_uri>` --- The URI for your ZooKeeper cluster
+* `<scheduler_uri>` --- The URI for your Nomad server
+
+## Using the Heron API server to distribute Heron topology packages
+
+Heron users can upload their Heron topology packages to the Heron API server using the HTTP uploader by modifying the `uploader.yaml` file to include the following:
+
+```yaml
+# uploader class for transferring the topology jar/tar files to storage
+heron.class.uploader:    org.apache.heron.uploader.http.HttpUploader
+heron.uploader.http.uri: http://localhost:9000/api/v1/file/upload
+```
+
+The [Heron CLI](user-manuals-heron-cli) will take care of the upload. When the topology is starting up, the topology package will be automatically downloaded from the API server.
+
+## Using the API server to distribute the Heron core package
+
+Heron users can use the Heron API server to distribute the Heron core package. When running the API server, just add this argument:
+
+```bash
+--heron-core-package-path <path to Heron core>
+```
+
+Here's an example:
+
+```bash
+$ ~/.heron/bin/heron-apiserver \
+  --cluster nomad \
+  --base-template nomad \
+  --download-hostname 127.0.0.1 \
+  --heron-core-package-path ~/.heron/dist/heron-core.tar.gz \
+  -D heron.statemgr.connection.string=127.0.0.1:2181 \
+  -D heron.nomad.scheduler.uri=127.0.0.1:4647 \
+  -D heron.class.uploader=org.apache.heron.uploader.http.HttpUploader \
+  --verbose
+```
+
+Then change the `client.yaml` file in `~/.heron/conf/nomad` to the following:
+
+```yaml
+heron.package.use_core_uri: true
+heron.package.core.uri:     http://localhost:9000/api/v1/file/download/core
+```
+
+## Using the API server to submit Heron topologies
+
+Users can submit topologies using the [Heron CLI](user-manuals-heron-cli) by specifying a service URL to the API server. Here's the format of that command:
+
+```bash
+$ heron submit nomad \
+  --service-url=<Heron API server URL> \
+  <topology package path> \
+  <topology classpath> \
+  <topology args>
+```
+
+Here's an example:
+
+```bash
+$ heron submit nomad \
+  --service-url=http://localhost:9000 \
+  ~/.heron/examples/heron-streamlet-examples.jar \
+  org.apache.heron.examples.api.WindowedWordCountTopology \
+  windowed-word-count
+```
+
+## Integration with Consul for metrics
+Each Heron executor in a Heron topology serves metrics on a port randomly assigned by Nomad. Consul is therefore needed for service discovery, so that users can determine which port a given executor serves its metrics on.
+Every Heron executor will automatically register itself as a service with Consul, provided that a Consul cluster is running. The port on which Heron serves metrics will be registered with Consul.
+
+The service will be registered under a name with the following format:
+
+```yaml
+metrics-heron-<TOPOLOGY_NAME>-<CONTAINER_INDEX>
+```
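+
+For example, container `0` of a topology named `WindowedWordCount` (a hypothetical name) would be registered as:
+
+```yaml
+metrics-heron-WindowedWordCount-0
+```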
+
+Each Heron executor registered with Consul will be tagged with:
+
+```yaml
+<TOPOLOGY_NAME>-<CONTAINER_INDEX>
+```
+
+To add additional tags, specify them in a comma-delimited list via
+
+```yaml
+heron.nomad.metrics.service.additional.tags
+```
+
+in `scheduler.yaml`. For example:
+
+```yaml
+heron.nomad.metrics.service.additional.tags: "prometheus,metrics,heron"
+```
+
+Users can then configure Prometheus to scrape metrics for each Heron executor based on these tags.
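+
+As a sketch, a minimal Prometheus scrape configuration using Consul service discovery might look like the following (the Consul address and the `prometheus` tag are assumptions that should match your setup):
+
+```yaml
+# prometheus.yml (illustrative)
+scrape_configs:
+  - job_name: 'heron-executors'
+    consul_sd_configs:
+      - server: 'localhost:8500'
+    relabel_configs:
+      # keep only Consul services tagged with "prometheus" (see the tag setting above)
+      - source_labels: ['__meta_consul_tags']
+        regex: '.*,prometheus,.*'
+        action: keep
+```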
+
+
+**Below are instructions on how to run Heron on Nomad via Docker containers.** In this mode, Heron executors run as Docker containers on the host machines.
+
+## Requirements
+
+When setting up your Nomad cluster, the following are required:
+
+* The [Heron CLI tool](user-manuals-heron-cli) must be installed on each machine used to deploy Heron topologies
+* Python 3, Java 7 or 8, and [curl](https://curl.haxx.se/) must be installed on every machine in the cluster
+* A [ZooKeeper cluster](https://zookeeper.apache.org)
+* Docker installed and enabled on every machine
+* Each machine must also be able to pull the official Heron Docker image from Docker Hub, or have the image preloaded
+
+## Configuring Heron settings
+
+Before running Heron via Nomad, you'll need to configure some settings. Once you've [installed Heron](getting-started-local-single-node), all of the configurations you'll need to modify will be in the `~/.heron/conf/nomad` directory.
+
+First, make sure that `heron.nomad.driver` is set to `docker` in `~/.heron/conf/nomad/scheduler.yaml`:
+
+```yaml
+heron.nomad.driver: "docker"
+```
+
+You can also adjust which Docker image is used to run Heron via the `heron.executor.docker.image` parameter in `~/.heron/conf/nomad/scheduler.yaml`. For example:
+
+```yaml
+heron.executor.docker.image: 'heron/heron:latest'
+```
+
+You'll need to use a topology uploader to deploy topology packages to nodes in your cluster. You can use one of the following uploaders:
+
+* The HTTP uploader in conjunction with Heron's [API server](deployment-api-server). The Heron API server acts like a file server to which users can upload topology packages. The API server distributes the packages, along with the Heron core package, to the relevant machines. You can also use the API server to submit your Heron topology to Nomad (described [below](#deploying-with-the-api-server)) <!-- TODO: link to upcoming HTTP uploader documentation -->
+* [Amazon S3](uploaders-amazon-s3). Please note that the S3 uploader requires an AWS account.
+* [SCP](uploaders-scp). Please note that the SCP uploader requires SSH access to nodes in the cluster.
+
+You can modify the `heron.class.uploader` parameter in `~/.heron/conf/nomad/uploader.yaml` to choose an uploader.
+
+In addition, you must update the `heron.statemgr.connection.string` parameter in the `statemgr.yaml` file in `~/.heron/conf/nomad` to your ZooKeeper connection string. Here's an example:
+
+```yaml
+heron.statemgr.connection.string: 127.0.0.1:2181
+```
+
+Then, update the `heron.nomad.scheduler.uri` parameter in `scheduler.yaml` to the URL of the Nomad server to which you'll be submitting jobs. Here's an example:
+
+```yaml
+heron.nomad.scheduler.uri: http://127.0.0.1:4646
+```
+
+## Submitting Heron topologies to the Nomad cluster
+
+You can submit Heron topologies to a Nomad cluster via the [Heron CLI tool](user-manuals-heron-cli):
+
+```bash
+$ heron submit nomad \
+  <topology package path> \
+  <topology classpath> \
+  <topology CLI args>
+```
+
+Here's an example:
+
+```bash
+$ heron submit nomad \
+  ~/.heron/examples/heron-streamlet-examples.jar \                # Package path
+  org.apache.heron.examples.streamlet.WindowedWordCountTopology \ # Topology classpath
+  windowed-word-count                                             # Args passed to topology
+```
+
+## Deploying with the API server
+
+The advantage of running the [Heron API Server](deployment-api-server) is that it can act as a file server to help you distribute topology package files and submit jobs to Nomad, so that you don't need to modify the configuration files mentioned above.  By using Heron’s API Server, you can set configurations such as the URI of ZooKeeper and the Nomad server once and not need to configure each machine from which you want to submit Heron topologies.
+
+## Running the API server
+
+You can run the Heron API server on any machine that can be reached by machines in your Nomad cluster via HTTP. Here's a command you can use to run the API server:
+
+```bash
+$ ~/.heron/bin/heron-apiserver \
+  --cluster nomad \
+  --base-template nomad \
+  -D heron.statemgr.connection.string=<ZooKeeper URI> \
+  -D heron.nomad.scheduler.uri=<Nomad URI> \
+  -D heron.class.uploader=org.apache.heron.uploader.http.HttpUploader \
+  --verbose
+```
+
+You can also run the API server in Nomad itself, but you will need to have a local copy of the Heron API server executable on every machine in the cluster. Here's an example Nomad job for the API server:
+
+```hcl
+job "apiserver" {
+  datacenters = ["dc1"]
+  type = "service"
+  group "apiserver" {
+    count = 1
+    task "apiserver" {
+      driver = "raw_exec"
+      config {
+        command = "<heron_apiserver_executable>"
+        args = [
+        "--cluster", "nomad",
+        "--base-template", "nomad",
+        "-D", "heron.statemgr.connection.string=<zookeeper_uri>",
+        "-D", "heron.nomad.scheduler.uri=<scheduler_uri>",
+        "-D", "heron.class.uploader=org.apache.heron.uploader.http.HttpUploader",
+        "--verbose"]
+      }
+      resources {
+        cpu    = 500 # 500 MHz
+        memory = 256 # 256MB
+      }
+    }
+  }
+}
+```
+
+Make sure to replace the following:
+
+* `<heron_apiserver_executable>` --- The local path to where the [Heron API server](deployment-api-server) executable is located (usually `~/.heron/bin/heron-apiserver`)
+* `<zookeeper_uri>` --- The URI for your ZooKeeper cluster
+* `<scheduler_uri>` --- The URI for your Nomad server
+
+## Using the Heron API server to distribute Heron topology packages
+
+Heron users can upload their Heron topology packages to the Heron API server using the HTTP uploader by modifying the `uploader.yaml` file to include the following:
+
+```yaml
+# uploader class for transferring the topology jar/tar files to storage
+heron.class.uploader:    org.apache.heron.uploader.http.HttpUploader
+heron.uploader.http.uri: http://localhost:9000/api/v1/file/upload
+```
+
+## Integration with Consul for metrics
+Each container in a Heron topology serves metrics on a port randomly assigned by Nomad. Consul is therefore needed for service discovery, so that users can determine which port a given container serves its metrics on.
+Every Heron executor running in a Docker container will automatically register itself as a service with Consul, provided that a Consul cluster is running. The port on which Heron serves metrics will be registered with Consul.
+  
+The service will be registered under a name with the following format:
+
+```yaml
+metrics-heron-<TOPOLOGY_NAME>-<CONTAINER_INDEX>
+```
+
+Each Heron executor registered with Consul will be tagged with:
+
+```yaml
+<TOPOLOGY_NAME>-<CONTAINER_INDEX>
+```
+
+To add additional tags, specify them in a comma-delimited list via
+
+```yaml
+heron.nomad.metrics.service.additional.tags
+```
+
+in `scheduler.yaml`. For example:
+
+```yaml
+heron.nomad.metrics.service.additional.tags: "prometheus,metrics,heron"
+```
+
+Users can then configure Prometheus to scrape metrics for each container based on these tags.
diff --git a/website2/docs/topology-development-streamlet-api.md b/website2/website/versioned_docs/version-0.20.2-incubating/topology-development-streamlet-api.md
similarity index 95%
copy from website2/docs/topology-development-streamlet-api.md
copy to website2/website/versioned_docs/version-0.20.2-incubating/topology-development-streamlet-api.md
index 87396af..a121424 100644
--- a/website2/docs/topology-development-streamlet-api.md
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/topology-development-streamlet-api.md
@@ -1,7 +1,8 @@
 ---
-id: topology-development-streamlet-api
+id: version-0.20.2-incubating-topology-development-streamlet-api
 title: The Heron Streamlet API for Java
 sidebar_label: The Heron Streamlet API for Java
+original_id: topology-development-streamlet-api
 ---
 <!--
     Licensed to the Apache Software Foundation (ASF) under one
@@ -113,34 +114,26 @@ $ heron submit local \
 
 ### Java Streamlet API starter project
 
-If you'd like to up and running quickly with the Heron Streamlet API for Java, you can clone [this repository](https://github.com/streamlio/heron-java-streamlet-api-example), which includes an example topology built using the Streamlet API as well as the necessary Maven configuration. To build a JAR with dependencies of this example topology:
-
-```bash
-$ git clone https://github.com/streamlio/heron-java-streamlet-api-example
-$ cd heron-java-streamlet-api-example
-$ mvn assembly:assembly
-$ ls target/*.jar
-target/heron-java-streamlet-api-example-latest-jar-with-dependencies.jar
-target/heron-java-streamlet-api-example-latest.jar
-```
+If you'd like to get up and running quickly with the Heron Streamlet API for Java, you can view the example topologies [here](https://github.com/apache/incubator-heron/tree/master/examples/src/java/org/apache/heron/examples/streamlet).
 
 If you're running a [local Heron cluster](getting-started-local-single-node), you can submit the built example topology like this:
 
 ```bash
-$ heron submit local target/heron-java-streamlet-api-example-latest-jar-with-dependencies.jar \
-  io.streaml.heron.streamlet.WordCountStreamletTopology \
-  WordCountStreamletTopology
+$ heron submit local \
+  ~/.heron/examples/heron-streamlet-examples.jar \
+  org.apache.heron.examples.streamlet.WindowedWordCountTopology \
+  streamletWindowedWordCount
 ```
 
 #### Selecting delivery semantics
 
-Heron enables you to apply one of three [delivery semantics](heron-delivery-semantics) to any Heron topology. For the [example topology](#java-streamlet-api-starter-project) above, you can select the delivery semantics when you submit the topology with the topology's second argument. This command, for example, would apply [effectively-once](heron-delivery-semantics) to the example topology:
+Heron enables you to apply one of three [delivery semantics](heron-delivery-semantics) to any Heron topology. For the example topology above, you can select the delivery semantics when you submit the topology with the topology's second argument. This command, for example, would apply [effectively-once](heron-delivery-semantics) to the example topology:
 
 ```bash
-$ heron submit local target/heron-java-streamlet-api-example-latest-jar-with-dependencies.jar \
-  io.streaml.heron.streamlet.WordCountStreamletTopology \
-  WordCountStreamletTopology \
-  effectively-once
+$ heron submit local \
+  ~/.heron/examples/heron-streamlet-examples.jar \
+  org.apache.heron.examples.streamlet.WindowedWordCountTopology \
+  streamletWindowedWordCount \
+  effectively-once
 ```
 
 The other options are `at-most-once` and `at-least-once`. If you don't explicitly select the delivery semantics, at-least-once semantics will be applied.
diff --git a/website2/website/versioned_docs/version-0.20.0-incubating/guides-python-topologies.md b/website2/website/versioned_docs/version-0.20.2-incubating/topology-development-topology-api-python.md
similarity index 51%
rename from website2/website/versioned_docs/version-0.20.0-incubating/guides-python-topologies.md
rename to website2/website/versioned_docs/version-0.20.2-incubating/topology-development-topology-api-python.md
index bc55105..d6c7bdb 100644
--- a/website2/website/versioned_docs/version-0.20.0-incubating/guides-python-topologies.md
+++ b/website2/website/versioned_docs/version-0.20.2-incubating/topology-development-topology-api-python.md
@@ -1,8 +1,8 @@
 ---
-id: version-0.20.0-incubating-guides-python-topologies
-title: Python Topologies
-sidebar_label: Python Topologies
-original_id: guides-python-topologies
+id: version-0.20.2-incubating-topology-development-topology-api-python
+title: The Heron Topology API for Python
+sidebar_label: The Heron Topology API for Python
+original_id: topology-development-topology-api-python
 ---
 <!--
     Licensed to the Apache Software Foundation (ASF) under one
@@ -21,7 +21,7 @@ original_id: guides-python-topologies
     under the License.
 -->
 
-> The current version of `heronpy` is [{{% heronpyVersion %}}](https://pypi.python.org/pypi/heronpy/{{% heronpyVersion %}}).
+> The current version of `heronpy` is [{{heron:version}}](https://pypi.python.org/pypi/heronpy/{{heron:version}}).
 
 Support for developing Heron topologies in Python is provided by a Python library called [`heronpy`](https://pypi.python.org/pypi/heronpy).
 
@@ -47,13 +47,13 @@ from heronpy.api.topology import Topology
 
 ## Writing topologies in Python
 
-Heron [topologies](heron-topology-concepts) are networks of [spouts](topology-development-topology-api-python#spouts) that pull data into a topology and [bolts](topology-development-topology-api-python#bolts) that process that ingested data.
+Heron [topologies](heron-topology-concepts) are networks of [spouts](heron-topology-concepts#spouts) that pull data into a topology and [bolts](heron-topology-concepts#bolts) that process that ingested data.
 
-> You can see how to create Python spouts in the [Implementing Python Spouts](topology-development-topology-api-python#spouts) guide and how to create Python bolts in the [Implementing Python Bolts](topology-development-topology-api-python#bolts) guide.
+> You can see how to create Python spouts in the [Implementing Python Spouts](#spouts) guide and how to create Python bolts in the [Implementing Python Bolts](#bolts) guide.
 
 Once you've defined spouts and bolts for a topology, you can then compose the topology in one of two ways:
 
-* You can use the `TopologyBuilder` class inside of a main function.
+* You can use the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class inside of a main function.
 
     Here's an example:
 
@@ -68,7 +68,7 @@ Once you've defined spouts and bolts for a topology, you can then compose the to
         builder.build_and_submit()
     ```
 
-* You can subclass the `Topology` class.
+* You can subclass the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class.
 
     Here's an example:
 
@@ -82,9 +82,9 @@ Once you've defined spouts and bolts for a topology, you can then compose the to
         my_bolt = CountBolt.spec(par=3, inputs={spout: Grouping.fields("word")})
     ```
 
-## Defining topologies using the `TopologyBuilder` class
+## Defining topologies using the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class
 
-If you create a Python topology using a `TopologyBuilder`, you need to instantiate a `TopologyBuilder` inside of a standard Python main function, like this:
+If you create a Python topology using a [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder), you need to instantiate a `TopologyBuilder` inside of a standard Python main function, like this:
 
 ```python
 from heronpy.api.topology import TopologyBuilder
@@ -94,7 +94,7 @@ if __name__ == "__main__":
     builder = TopologyBuilder("MyTopology")
 ```
 
-Once you've created a `TopologyBuilder` object, you can add bolts using the `add_bolt` method and spouts using the `add_spout` method. Here's an example:
+Once you've created a `TopologyBuilder` object, you can add [bolts](#bolts) using the [`add_bolt`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.add_bolt) method and [spouts](#spouts) using the [`add_spout`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.add_spout) method. Here's an example:
 
 ```python
 builder = TopologyBuilder("MyTopology")
@@ -102,14 +102,14 @@ builder.add_bolt("my_bolt", CountBolt, par=3)
 builder.add_spout("my_spout", WordSpout, par=2)
 ```
 
-Both the `add_bolt` and `add_spout` methods return the corresponding `HeronComponentSpec1 object.
+Both the `add_bolt` and `add_spout` methods return the corresponding [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) object.
 
 The `add_bolt` method takes four arguments and an optional `config` parameter:
 
 Argument | Data type | Description | Default
 :--------|:----------|:------------|:-------
 `name` | `str` | The unique identifier assigned to this bolt | |
-`bolt_cls` | class | The subclass of `Bolt` that defines this bolt | |
+`bolt_cls` | class | The subclass of [`Bolt`](/api/python/bolt/bolt.m.html#heronpy.bolt.bolt.Bolt) that defines this bolt | |
 `par` | `int` | The number of instances of this bolt in the topology | |
+`inputs` | `dict` or `list` | Either a `dict` mapping from [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) to [`Grouping`](/api/python/stream.m.html#heronpy.stream.Grouping) *or* a list of [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s, in which case the [`shuffle`](/api/python/stream.m.html#heronpy.stream.Grouping.SHUFFLE) grouping is used | |
 `config` | `dict` | Specifies the configuration for this bolt | `None`
 
@@ -118,14 +118,14 @@ The `add_spout` method takes three arguments and an optional `config` parameter:
 
 Argument | Data type | Description | Default
 :--------|:----------|:------------|:-------
 `name` | `str` | The unique identifier assigned to this spout | |
-`spout_cls` | class | The subclass of `Spout` that defines this spout | |
+`spout_cls` | class | The subclass of [`Spout`](/api/python/spout/spout.m.html#heronpy.spout.spout.Spout) that defines this spout | |
 `par` | `int` | The number of instances of this spout in the topology | |
-`inputs` | `dict` or `list` | Either a `dict` mapping from `HeronComponentSpec` to `Grouping` *or* a list of `HeronComponentSpec`, in which case the `shuffle` grouping is used
 `config` | `dict` | Specifies the configuration for this spout | `None`
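
Groupings decide how emitted tuples are routed among the parallel instances of a downstream bolt. As a rough sketch of the idea behind the `fields` grouping used in the earlier `Grouping.fields("word")` example (this is an illustration, not Heron's implementation): tuples that share a value in the grouped field always reach the same instance, typically via hashing.

```python
# Illustrative only -- NOT Heron's implementation. A fields grouping
# routes tuples so that equal values of the grouped field always go
# to the same downstream bolt instance.
def fields_grouping_target(values, field_index, num_instances):
    """Pick a downstream instance index based on one field's hash."""
    return hash(values[field_index]) % num_instances

# Every tuple carrying the word "heron" targets the same instance,
# so a word-count bolt can keep purely local counters.
targets = {fields_grouping_target(["heron", i], 0, 3) for i in range(5)}
```

Because all tuples for a given word arrive at one instance, a counting bolt needs no cross-instance coordination.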
 
 ### Example
 
-The following is an example implementation of a word count topology in Python that subclasses `TopologyBuilder`.
+The following is an example implementation of a word count topology in Python that uses the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class.
 
 ```python
 from your_spout import WordSpout
@@ -149,7 +149,7 @@ Note that arguments to the main method can be passed by providing them in the
 
 ### Topology-wide configuration
 
-If you're building a Python topology using a `TopologyBuilder`, you can specify configuration for the topology using the `set_config` method. A topology's config is a `dict` in which the keys are a series constants from the `api_constants` module and values are configuration values for those parameters.
+If you're building a Python topology using a `TopologyBuilder`, you can specify configuration for the topology using the [`set_config`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.set_config) method. A topology's config is a `dict` in which the keys are a series of constants from the [`api_constants`](/api/python/api_constants.m.html) module and the values are configuration values for those parameters.
 
 Here's an example:
 
@@ -171,7 +171,7 @@ if __name__ == "__main__":
 
 If you want to [submit](../../../operators/heron-cli#submitting-a-topology) Python topologies to a Heron cluster, they need to be packaged as a [PEX](https://pex.readthedocs.io/en/stable/whatispex.html) file. In order to produce PEX files, we recommend using a build tool like [Pants](http://www.pantsbuild.org/python_readme.html) or [Bazel](https://github.com/benley/bazel_rules_pex).
 
-If you defined your topology by subclassing the `TopologyBuilder` class and built a `word_count.pex` file for that topology in the `~/topology` folder. You can submit the topology to a cluster called `local` like this:
+If you defined your topology by using the [`TopologyBuilder`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder) class and built a `word_count.pex` file for that topology in the `~/topology` folder, you can submit the topology to a cluster called `local` like this:
 
 ```bash
 $ heron submit local \
@@ -182,12 +182,11 @@ $ heron submit local \
 Note the `-` in this submission command. If you define a topology using a `TopologyBuilder`, you do not need to instruct Heron where your main method is located.
 
 > #### Example topologies buildable as PEXs
-> * See [this repo](https://github.com/streamlio/pants-dev-environment) for an example of a Heron topology written in Python and deployable as a Pants-packaged PEX.
-> * See [this repo](https://github.com/streamlio/bazel-dev-environment) for an example of a Heron topology written in Python and deployable as a Bazel-packaged PEX.
+> * TODO
 
-## Defining a topology by subclassing the `Topology` class
+## Defining a topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class
 
-If you create a Python topology by subclassing the `Topology` class, you need to create a new topology class, like this:
+If you create a Python topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class, you need to create a new topology class, like this:
 
 ```python
 from my_spout import WordSpout
@@ -203,13 +202,13 @@ class MyTopology(Topology):
     my_bolt = CountBolt.spec(par=3, inputs=my_bolt_inputs)
 ```
 
-All you need to do is place `HeronComponentSpec`s as the class attributes
+All you need to do is place [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s as the class attributes
 of your topology class, which are returned by the `spec()` method of
 your spout or bolt class. You do *not* need to run a `build` method or anything like that; the `Topology` class will automatically detect which spouts and bolts are included in the topology.
 
 > If you use this method to define a new Python topology, you do *not* need to have a main function.
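
To see why no `build` step is needed, here is a simplified, illustrative sketch of how a class can discover specs placed as class attributes (Heron does this internally when a `Topology` subclass is defined; `FakeSpec` and `collect_specs` are our stand-in names, not Heron's API):

```python
# Simplified sketch of how a Topology subclass can discover the
# spec objects placed as class attributes. Illustrative only.
class FakeSpec:
    """Stand-in for HeronComponentSpec."""
    def __init__(self, kind):
        self.kind = kind

def collect_specs(topology_cls):
    """Return all FakeSpec class attributes, keyed by attribute name."""
    return {name: value for name, value in vars(topology_cls).items()
            if isinstance(value, FakeSpec)}

class WordCount:
    word_spout = FakeSpec("spout")
    count_bolt = FakeSpec("bolt")

specs = collect_specs(WordCount)
```

The attribute names double as component identifiers, which is why a spec with `name=None` can fall back to its variable name.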
 
-For bolts, the `spec` method for spouts takes three optional arguments::
+For spouts, the [`spec`](/api/python/spout/spout.m.html#heronpy.spout.spout.Spout.spec) method takes three optional arguments:
 
 Argument | Data type | Description | Default
 :--------|:----------|:------------|:-------
@@ -218,12 +217,12 @@ Argument | Data type | Description | Default
 `config` | `dict` | Specifies the configuration for this spout | `None`
 
 
-For spouts, the `spec` method takes four optional arguments:
+For bolts, the [`spec`](/api/python/bolt/bolt.m.html#heronpy.bolt.bolt.Bolt.spec) method takes four optional arguments:
 
 Argument | Data type | Description | Default
 :--------|:----------|:------------|:-------
 `name` | `str` | The unique identifier assigned to this bolt, or `None` if you want to use the variable name of the returned `HeronComponentSpec` as the unique identifier for this bolt | `None` |
-`inputs` | `dict` or `list` | Either a `dict` mapping from `HeronComponentSpec`to `Grouping` *or* a list of `HeronComponentSpec`s, in which case the `shuffle` grouping is used
+`inputs` | `dict` or `list` | Either a `dict` mapping from [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec) to [`Grouping`](/api/python/stream.m.html#heronpy.stream.Grouping) *or* a list of [`HeronComponentSpec`](/api/python/component/component_spec.m.html#heronpy.component.component_spec.HeronComponentSpec)s, in which case the [`shuffle`](/api/python/stream.m.html#heronpy.stream.Grouping.SHUFFLE) grouping is used | |
 `par` | `int` | The number of instances of this bolt in the topology | `1` |
 `config` | `dict` | Specifies the configuration for this bolt | `None`
 
@@ -246,7 +245,7 @@ class WordCount(Topology):
 
 ### Launching
 
-If you defined your topology by subclassing the `Topology` class,
+If you defined your topology by subclassing the [`Topology`](/api/python/topology.m.html#heronpy.topology.Topology) class,
 your main Python file should *not* contain a main method. You will, however, need to instruct Heron which class contains your topology definition.
 
 Let's say that you've defined a topology by subclassing `Topology` and built a PEX stored in `~/topology/dist/word_count.pex`. The class containing your topology definition is `topology.word_count.WordCount`. You can submit the topology to a cluster called `local` like this:
@@ -260,7 +259,7 @@ $ heron submit local \
 
 ### Topology-wide configuration
 
-If you're building a Python topology by subclassing `Topology`, you can specify configuration for the topology using the `set_config` method. A topology's config is a `dict` in which the keys are a series constants from the `api_constants` module and values are configuration values for those parameters.
+If you're building a Python topology by subclassing `Topology`, you can specify configuration for the topology using the [`set_config`](/api/python/topology.m.html#heronpy.topology.TopologyBuilder.set_config) method. A topology's config is a `dict` in which the keys are a series of constants from the [`api_constants`](/api/python/api_constants.m.html) module and the values are configuration values for those parameters.
 
 Here's an example:
 
@@ -324,6 +323,181 @@ class DynamicOutputField(Topology):
 You can also declare outputs in the `add_spout()` and the `add_bolt()`
 method for the `TopologyBuilder` in the same way.
 
+## Bolts
+
+To create a bolt for a Heron topology, you need to subclass the [`Bolt`](/api/python/bolt/bolt.m.html#heronpy.bolt.bolt.Bolt) class, which has the following methods.
+
+```python
+class MyBolt(Bolt):
+    def initialize(self, config, context): pass
+    def process(self, tup): pass
+```
+
+* The `initialize()` method is called when the bolt is first initialized and
+provides the bolt with its executing environment. It is equivalent to the `prepare()`
+method of the [`IBolt`](/api/org/apache/heron/api/bolt/IBolt.html) interface in Java.
+Note that you should not override the `__init__()` constructor of the `Bolt` class
+to initialize custom variables, since it is used internally by the Heron instance; instead,
+`initialize()` should be used to initialize any custom variables or connections to databases.
+
+* The `process()` method is called to process a single input `tup` of `HeronTuple` type. This method
+is equivalent to the `execute()` method of the `IBolt` interface in Java. You can use
+the `self.emit()` method to emit results, as described below.
+
+In addition, the `BaseBolt` class provides the following methods.
+
+```python
+class BaseBolt(BaseComponent):
+    def emit(self, tup, stream="default", anchors=None, direct_task=None, need_task_ids=False): ...
+    def ack(self, tup): ...
+    def fail(self, tup): ...
+    def log(self, message, level=None): ...
+    @staticmethod
+    def is_tick(tup): ...
+    @classmethod
+    def spec(cls, name=None, inputs=None, par=1, config=None): ...
+```
+
+* The `emit()` method is used to emit a given `tup`, which can be a `list` or `tuple` of
+any Python objects. Unlike in the Java implementation, there is no `OutputCollector`
+in the Python implementation.
+
+* The `ack()` method is used to indicate that processing of a tuple has succeeded.
+
+* The `fail()` method is used to indicate that processing of a tuple has failed.
+
+* The `is_tick()` method returns whether a given `tup` of `HeronTuple` type is a tick tuple.
+
+* The `log()` method is used to log an arbitrary message, and its outputs are redirected
+  to the log file of the component. It accepts an optional argument
+  which specifies the logging level. By default, its logging level is `info`.
+
+    **Warning:** due to an internal issue, you should **NOT** output anything to
+    `sys.stdout` or `sys.stderr`; instead, you should use this method to log anything you want.
+
+* In order to declare the output fields of this bolt, you need to set
+a class attribute `outputs` to a list of `str` or `Stream`. Note that, unlike in Java,
+`declareOutputFields` does not exist in the Python implementation. You can also
+optionally specify the output fields via the `optional_outputs` argument of the `spec()` method.
+
+
+* You will use the `spec()` method to define a topology and specify the location
+of this bolt within the topology, as well as to give component-specific configurations.
+
+The following is an example implementation of a bolt in Python.
+
+```python
+from collections import Counter
+from heronpy.api.bolt.bolt import Bolt
+
+
+class CountBolt(Bolt):
+    outputs = ["word", "count"]
+
+    def initialize(self, config, context):
+        self.counter = Counter()
+
+    def process(self, tup):
+        word = tup.values[0]
+        self.counter[word] += 1
+        self.emit([word, self.counter[word]])
+```
+
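
To make the tick-tuple handling described above concrete, here is the counting-and-flushing pattern written as plain Python, outside Heron (illustrative only; in a real bolt, `is_tick()` and `emit()` come from `BaseBolt`):

```python
from collections import Counter

# Plain-Python sketch of a tick-aware process() loop: count words,
# and emit (flush) the accumulated counts whenever a tick arrives.
def process_stream(tuples, is_tick):
    counter, emitted = Counter(), []
    for tup in tuples:
        if is_tick(tup):
            emitted.extend([word, count] for word, count in sorted(counter.items()))
            counter.clear()
        else:
            counter[tup] += 1
    return emitted

out = process_stream(["heron", "storm", "heron", "TICK"], lambda t: t == "TICK")
# out is [["heron", 2], ["storm", 1]]
```

The same flush-on-tick structure is how a bolt can batch work between ticks instead of emitting on every input tuple.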
+## Spouts
+
+To create a spout for a Heron topology, you need to subclass the [`Spout`](/api/python/spout/spout.m.html#heronpy.spout.spout.Spout) class, which has the following methods.
+
+```python
+class MySpout(Spout):
+    def initialize(self, config, context): pass
+    def next_tuple(self): pass
+    def ack(self, tup_id): pass
+    def fail(self, tup_id): pass
+    def activate(self): pass
+    def deactivate(self): pass
+    def close(self): pass
+```
+
+## `Spout` class methods
+
+The [`Spout`](/api/python/spout/spout.m.html#heronpy.spout.spout.Spout) class provides a number of methods that you should implement when subclassing.
+
+* The `initialize()` method is called when the spout is first initialized
+and provides the spout with its executing environment. It is equivalent to the
+`open()` method of [`ISpout`](/api/org/apache/heron/api/spout/ISpout.html).
+Note that you should not override the `__init__()` constructor of the `Spout` class
+to initialize custom variables, since it is used internally by the Heron instance; instead,
+`initialize()` should be used to initialize any custom variables or connections to databases.
+
+* The `next_tuple()` method is used to fetch tuples from the input source. You can
+emit fetched tuples by calling `self.emit()`, as described below.
+
+* The `ack()` method is called when the `HeronTuple` with the `tup_id` emitted
+by this spout is successfully processed.
+
+* The `fail()` method is called when the `HeronTuple` with the `tup_id` emitted
+by this spout is not processed successfully.
+
+* The `activate()` method is called when the spout is asked to move back into an
+active state.
+
+* The `deactivate()` method is called when the spout is asked to enter a
+deactivated state.
+
+* The `close()` method is called when the spout is shut down. There is no
+guarantee that this method will be called, due to how the instance is killed.
+
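
The `ack()` and `fail()` callbacks described above are the hooks for at-least-once delivery: a spout can remember what it emitted, keyed by `tup_id`, and queue failed tuples for re-emission. A minimal sketch of that bookkeeping (`PendingTracker` is our own illustrative helper, not part of the Heron API):

```python
# Illustrative helper for at-least-once bookkeeping in a spout.
class PendingTracker:
    def __init__(self):
        self.pending = {}   # tup_id -> emitted values awaiting ack
        self.replay = []    # values of failed tuples, to re-emit later

    def emitted(self, tup_id, values):
        """Record a tuple after calling self.emit(values, tup_id=tup_id)."""
        self.pending[tup_id] = values

    def ack(self, tup_id):
        """Processing succeeded: forget the tuple."""
        self.pending.pop(tup_id, None)

    def fail(self, tup_id):
        """Processing failed: queue the tuple's values for re-emission."""
        values = self.pending.pop(tup_id, None)
        if values is not None:
            self.replay.append(values)

tracker = PendingTracker()
tracker.emitted(1, ["hello"])
tracker.emitted(2, ["world"])
tracker.ack(1)
tracker.fail(2)
```

A spout's `next_tuple()` would then drain `replay` before fetching fresh data from the input source.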
+## `BaseSpout` class methods
+
+The `Spout` class inherits from the [`BaseSpout`](/api/python/spout/base_spout.m.html#heronpy.spout.base_spout.BaseSpout) class, which also provides you methods you can use in your spouts.
+
+```python
+class BaseSpout(BaseComponent):
+    def log(self, message, level=None): ...
+    def emit(self, tup, tup_id=None, stream="default", direct_task=None, need_task_ids=False): ...
+    @classmethod
+    def spec(cls, name=None, par=1, config=None): ...
+```
+
+* The `emit()` method is used to emit a given tuple, which can be a `list` or `tuple` of any Python objects. Unlike in the Java implementation, there is no `OutputCollector` in the Python implementation.
+
+* The `log()` method is used to log an arbitrary message, and its outputs are redirected to the log file of the component. It accepts an optional argument which specifies the logging level. By default, its logging level is `info`.
+
+    **Warning:** due to an internal issue, you should **NOT** output anything to
+    `sys.stdout` or `sys.stderr`; instead, you should use this method to log anything you want.
+
+* In order to declare the output fields of this spout, you need to set
+a class attribute `outputs` to a list of `str` or `Stream`. Note that, unlike in Java,
+`declareOutputFields` does not exist in the Python implementation. You can also
+optionally specify the output fields via the `optional_outputs` argument of the `spec()` method.
+
+* You will use the `spec()` method to define a topology and specify the location
+of this spout within the topology, as well as to give component-specific configurations.
+
+## Example spout
+
+The following is an example implementation of a spout in Python.
+
+```python
+from itertools import cycle
+from heronpy.api.spout.spout import Spout
+
+
+class WordSpout(Spout):
+    outputs = ['word']
+
+    def initialize(self, config, context):
+        self.words = cycle(["hello", "world", "heron", "storm"])
+        self.log("Initializing WordSpout...")
+
+    def next_tuple(self):
+        word = next(self.words)
+        self.emit([word])
+```
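
Because `itertools.cycle` never exhausts, each `next_tuple()` call above emits the next word in an endless rotation. A plain-Python sketch of what a few iterations of the runtime loop would produce (illustrative; Heron drives the real loop):

```python
from itertools import cycle

# Simulate six iterations of the runtime calling next_tuple()
# on the WordSpout above.
words = cycle(["hello", "world", "heron", "storm"])
emitted = [next(words) for _ in range(6)]
# emitted is ["hello", "world", "heron", "storm", "hello", "world"]
```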
+
 ## Example topologies
 
 There are a number of example topologies that you can peruse in the [`examples/src/python`]({{% githubMaster %}}/examples/src/python) directory of the [Heron repo]({{% githubMaster %}}):
@@ -362,3 +536,4 @@ $ heron submit local \
 ```
 
 By default, the `submit` command also activates topologies. To disable this behavior, set the `--deploy-deactivated` flag.
+
diff --git a/website2/website/versioned_sidebars/version-0.20.0-incubating-sidebars.json b/website2/website/versioned_sidebars/version-0.20.0-incubating-sidebars.json
index 81d7bef..49cfe7f 100644
--- a/website2/website/versioned_sidebars/version-0.20.0-incubating-sidebars.json
+++ b/website2/website/versioned_sidebars/version-0.20.0-incubating-sidebars.json
@@ -22,7 +22,6 @@
     ],
     "Guides": [
       "version-0.20.0-incubating-guides-effectively-once-java-topologies",
-      "version-0.20.0-incubating-guides-python-topologies",
       "version-0.20.0-incubating-guides-data-model",
       "version-0.20.0-incubating-guides-tuple-serialization",
       "version-0.20.0-incubating-guides-ui-guide",