Posted to commits@flink.apache.org by ch...@apache.org on 2019/05/10 06:58:54 UTC

[flink] branch master updated: [FLINK-12444][docs] Fix broken links

This is an automated email from the ASF dual-hosted git repository.

chesnay pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new cc19aef  [FLINK-12444][docs] Fix broken links
cc19aef is described below

commit cc19aef90ca94a397905497086914b13f9048a19
Author: Jark Wu <im...@gmail.com>
AuthorDate: Fri May 10 14:58:33 2019 +0800

    [FLINK-12444][docs] Fix broken links
---
 docs/concepts/programming-model.md              |  16 +-
 docs/concepts/programming-model.zh.md           |  16 +-
 docs/concepts/runtime.md                        |  10 +-
 docs/concepts/runtime.zh.md                     |  10 +-
 docs/dev/batch/index.md                         |   2 +-
 docs/dev/batch/index.zh.md                      |   2 +-
 docs/dev/connectors/cassandra.md                |   2 +-
 docs/dev/connectors/cassandra.zh.md             |   2 +-
 docs/dev/connectors/elasticsearch.md            |   4 +-
 docs/dev/connectors/elasticsearch.zh.md         |   4 +-
 docs/dev/connectors/filesystem_sink.md          |   2 +-
 docs/dev/connectors/filesystem_sink.zh.md       |   2 +-
 docs/dev/connectors/kafka.md                    |   6 +-
 docs/dev/connectors/kafka.zh.md                 |   6 +-
 docs/dev/connectors/kinesis.md                  |   8 +-
 docs/dev/connectors/kinesis.zh.md               |   8 +-
 docs/dev/connectors/nifi.md                     |   2 +-
 docs/dev/connectors/nifi.zh.md                  |   2 +-
 docs/dev/connectors/rabbitmq.md                 |   2 +-
 docs/dev/connectors/rabbitmq.zh.md              |   2 +-
 docs/dev/connectors/twitter.md                  |   2 +-
 docs/dev/connectors/twitter.zh.md               |   2 +-
 docs/dev/libs/cep.md                            |   4 +-
 docs/dev/libs/cep.zh.md                         |   4 +-
 docs/dev/libs/gelly/index.md                    |   2 +-
 docs/dev/libs/gelly/index.zh.md                 |   2 +-
 docs/dev/libs/ml/index.md                       |   4 +-
 docs/dev/libs/ml/index.zh.md                    |   4 +-
 docs/dev/libs/ml/quickstart.md                  |   2 +-
 docs/dev/libs/ml/quickstart.zh.md               |   2 +-
 docs/dev/stream/side_output.zh.md               | 148 +++++++++++++++++
 docs/dev/stream/state/queryable_state.md        |   2 +-
 docs/dev/stream/state/queryable_state.zh.md     |   2 +-
 docs/dev/table/index.md                         |   4 +-
 docs/dev/table/index.zh.md                      |   4 +-
 docs/internals/components.md                    |   4 +-
 docs/internals/components.zh.md                 |   4 +-
 docs/ops/state/large_state_tuning.md            |   4 +-
 docs/ops/state/large_state_tuning.zh.md         |   4 +-
 docs/redirects/linking_with_optional_modules.md |   2 +-
 docs/release-notes/flink-1.5.zh.md              |  92 +++++++++++
 docs/release-notes/flink-1.6.zh.md              |  43 +++++
 docs/release-notes/flink-1.7.zh.md              | 153 ++++++++++++++++++
 docs/release-notes/flink-1.8.zh.md              | 206 ++++++++++++++++++++++++
 44 files changed, 725 insertions(+), 83 deletions(-)

diff --git a/docs/concepts/programming-model.md b/docs/concepts/programming-model.md
index cb127c4..0ddafaa 100644
--- a/docs/concepts/programming-model.md
+++ b/docs/concepts/programming-model.md
@@ -31,7 +31,7 @@ under the License.
 
 Flink offers different levels of abstraction to develop streaming/batch applications.
 
-<img src="../fig/levels_of_abstraction.svg" alt="Programming levels of abstraction" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/levels_of_abstraction.svg" alt="Programming levels of abstraction" class="offset" width="80%" />
 
   - The lowest level abstraction simply offers **stateful streaming**. It is embedded into the [DataStream API](../dev/datastream_api.html)
    via the [Process Function](../dev/stream/operators/process_function.html). It allows users to freely process events from one or more streams,
@@ -48,7 +48,7 @@ Flink offers different levels of abstraction to develop streaming/batch applicat
     for certain operations only. The *DataSet API* offers additional primitives on bounded data sets, like loops/iterations.
 
   - The **Table API** is a declarative DSL centered around *tables*, which may be dynamically changing tables (when representing streams).
-    The [Table API](../dev/table_api.html) follows the (extended) relational model: Tables have a schema attached (similar to tables in relational databases)
+    The [Table API](../dev/table/index.html) follows the (extended) relational model: Tables have a schema attached (similar to tables in relational databases)
     and the API offers comparable operations, such as select, project, join, group-by, aggregate, etc.
     Table API programs declaratively define *what logical operation should be done* rather than specifying exactly
    *how the code for the operation looks*. Though the Table API is extensible by various types of user-defined
@@ -60,7 +60,7 @@ Flink offers different levels of abstraction to develop streaming/batch applicat
 
   - The highest level abstraction offered by Flink is **SQL**. This abstraction is similar to the *Table API* both in semantics and
     expressiveness, but represents programs as SQL query expressions.
-    The [SQL](../dev/table_api.html#sql) abstraction closely interacts with the Table API, and SQL queries can be executed over tables defined in the *Table API*.
+    The [SQL](../dev/table/index.html#sql) abstraction closely interacts with the Table API, and SQL queries can be executed over tables defined in the *Table API*.
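
For illustration, here is a minimal, hypothetical sketch of the two layers described above, expressing the same aggregation once with the Table API and once as SQL. It is not part of the patch; it assumes a 1.9-style `StreamTableEnvironment.create` (on 1.8 the factory is `TableEnvironment.getTableEnvironment`) and made-up column names.

{% highlight java %}
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.types.Row;

public class TableAndSqlSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        DataStream<Tuple2<String, Long>> words = env.fromElements(
                Tuple2.of("flink", 1L), Tuple2.of("table", 1L), Tuple2.of("flink", 1L));

        // Table API: declarative relational operations on a (dynamic) table
        Table wordTable = tableEnv.fromDataStream(words, "word, cnt");
        Table counts = wordTable.groupBy("word").select("word, cnt.sum as total");

        // SQL: the same logic as a query over the registered table
        tableEnv.registerTable("Words", wordTable);
        Table sqlCounts = tableEnv.sqlQuery(
                "SELECT word, SUM(cnt) AS total FROM Words GROUP BY word");

        // a continuously updating aggregate is emitted as a retract stream
        tableEnv.toRetractStream(counts, Row.class).print();
        tableEnv.toRetractStream(sqlCounts, Row.class).print();
        env.execute("table-api-sql-sketch");
    }
}
{% endhighlight %}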
 
 
 ## Programs and Dataflows
@@ -76,7 +76,7 @@ Each dataflow starts with one or more **sources** and ends in one or more **sink
 arbitrary **directed acyclic graphs** *(DAGs)*. Although special forms of cycles are permitted via
 *iteration* constructs, for the most part we will gloss over this for simplicity.
 
-<img src="../fig/program_dataflow.svg" alt="A DataStream program, and its dataflow." class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/program_dataflow.svg" alt="A DataStream program, and its dataflow." class="offset" width="80%" />
 
 Often there is a one-to-one correspondence between the transformations in the programs and the operators
 in the dataflow. Sometimes, however, one transformation may consist of multiple transformation operators.
@@ -96,7 +96,7 @@ The number of operator subtasks is the **parallelism** of that particular operat
 is always that of its producing operator. Different operators of the same program may have different
 levels of parallelism.
 
-<img src="../fig/parallel_dataflow.svg" alt="A parallel dataflow" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/parallel_dataflow.svg" alt="A parallel dataflow" class="offset" width="80%" />
 
 Streams can transport data between two operators in a *one-to-one* (or *forwarding*) pattern, or in a *redistributing* pattern:
 
@@ -130,7 +130,7 @@ Windows can be *time driven* (example: every 30 seconds) or *data driven* (examp
 One typically distinguishes different types of windows, such as *tumbling windows* (no overlap),
 *sliding windows* (with overlap), and *session windows* (punctuated by a gap of inactivity).
 
-<img src="../fig/windows.svg" alt="Time- and Count Windows" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/windows.svg" alt="Time- and Count Windows" class="offset" width="80%" />
 
 More window examples can be found in this [blog post](https://flink.apache.org/news/2015/12/04/Introducing-windows.html).
 More details are in the [window docs](../dev/stream/operators/windows.html).
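
Since the paragraph above distinguishes time-driven from data-driven windows, a short hypothetical sketch (not part of the patch) may help; the keys and input data are made up.

{% highlight java %}
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.assigners.SlidingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WindowSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<Tuple2<String, Integer>> counts =
                env.fromElements(Tuple2.of("a", 1), Tuple2.of("b", 1), Tuple2.of("a", 1));

        // time-driven, tumbling (non-overlapping) windows of 30 seconds
        counts.keyBy(0)
              .window(TumblingProcessingTimeWindows.of(Time.seconds(30)))
              .sum(1)
              .print();

        // time-driven, sliding (overlapping) windows: 1 minute long, evaluated every 30 seconds
        counts.keyBy(0)
              .window(SlidingProcessingTimeWindows.of(Time.minutes(1), Time.seconds(30)))
              .sum(1)
              .print();

        // data-driven windows: evaluated once 100 elements have accumulated for a key
        counts.keyBy(0)
              .countWindow(100)
              .sum(1)
              .print();

        env.execute("window-sketch");
    }
}
{% endhighlight %}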
@@ -150,7 +150,7 @@ of time:
 
   - **Processing Time** is the local time at each operator that performs a time-based operation.
 
-<img src="../fig/event_ingestion_processing_time.svg" alt="Event Time, Ingestion Time, and Processing Time" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/event_ingestion_processing_time.svg" alt="Event Time, Ingestion Time, and Processing Time" class="offset" width="80%" />
 
 More details on how to handle time are in the [event time docs]({{ site.baseurl }}/dev/event_time.html).
 
@@ -169,7 +169,7 @@ and is restricted to the values associated with the current event's key. Alignin
 makes sure that all state updates are local operations, guaranteeing consistency without transaction overhead.
 This alignment also allows Flink to redistribute the state and adjust the stream partitioning transparently.
 
-<img src="../fig/state_partitioning.svg" alt="State and Partitioning" class="offset" width="50%" />
+<img src="{{ site.baseurl }}/fig/state_partitioning.svg" alt="State and Partitioning" class="offset" width="50%" />
 
 For more information, see the documentation on [state](../dev/stream/state/index.html).
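
To make the key/value state description above concrete, here is a hedged sketch (not from the patch) of a per-key running count kept in a `ValueState`; each update touches only the state of the current element's key, so it stays local to the partition that owns that key. The class and state names are made up.

{% highlight java %}
import org.apache.flink.api.common.functions.RichFlatMapFunction;
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.util.Collector;

public class RunningCount extends RichFlatMapFunction<Tuple2<String, Long>, Tuple2<String, Long>> {

    private transient ValueState<Long> count;

    @Override
    public void open(Configuration parameters) {
        count = getRuntimeContext().getState(
                new ValueStateDescriptor<>("count", Types.LONG));
    }

    @Override
    public void flatMap(Tuple2<String, Long> in, Collector<Tuple2<String, Long>> out) throws Exception {
        Long current = count.value();          // null for the first element of a key
        long updated = (current == null ? 0L : current) + in.f1;
        count.update(updated);                 // local update, scoped to the current key
        out.collect(Tuple2.of(in.f0, updated));
    }
}

// usage on a keyed stream: events.keyBy(0).flatMap(new RunningCount())
{% endhighlight %}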
 
diff --git a/docs/concepts/programming-model.zh.md b/docs/concepts/programming-model.zh.md
index ee6748e..99253da 100644
--- a/docs/concepts/programming-model.zh.md
+++ b/docs/concepts/programming-model.zh.md
@@ -31,7 +31,7 @@ under the License.
 
 Flink offers different levels of abstraction to develop streaming/batch applications.
 
-<img src="../fig/levels_of_abstraction.svg" alt="Programming levels of abstraction" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/levels_of_abstraction.svg" alt="Programming levels of abstraction" class="offset" width="80%" />
 
   - The lowest level abstraction simply offers **stateful streaming**. It is embedded into the [DataStream API](../dev/datastream_api.html)
    via the [Process Function](../dev/stream/operators/process_function.html). It allows users to freely process events from one or more streams,
@@ -48,7 +48,7 @@ Flink offers different levels of abstraction to develop streaming/batch applicat
     for certain operations only. The *DataSet API* offers additional primitives on bounded data sets, like loops/iterations.
 
   - The **Table API** is a declarative DSL centered around *tables*, which may be dynamically changing tables (when representing streams).
-    The [Table API](../dev/table_api.html) follows the (extended) relational model: Tables have a schema attached (similar to tables in relational databases)
+    The [Table API](../dev/table/index.html) follows the (extended) relational model: Tables have a schema attached (similar to tables in relational databases)
     and the API offers comparable operations, such as select, project, join, group-by, aggregate, etc.
     Table API programs declaratively define *what logical operation should be done* rather than specifying exactly
    *how the code for the operation looks*. Though the Table API is extensible by various types of user-defined
@@ -60,7 +60,7 @@ Flink offers different levels of abstraction to develop streaming/batch applicat
 
   - The highest level abstraction offered by Flink is **SQL**. This abstraction is similar to the *Table API* both in semantics and
     expressiveness, but represents programs as SQL query expressions.
-    The [SQL](../dev/table_api.html#sql) abstraction closely interacts with the Table API, and SQL queries can be executed over tables defined in the *Table API*.
+    The [SQL](../dev/table/index.html#sql) abstraction closely interacts with the Table API, and SQL queries can be executed over tables defined in the *Table API*.
 
 
 ## Programs and Dataflows
@@ -76,7 +76,7 @@ Each dataflow starts with one or more **sources** and ends in one or more **sink
 arbitrary **directed acyclic graphs** *(DAGs)*. Although special forms of cycles are permitted via
 *iteration* constructs, for the most part we will gloss over this for simplicity.
 
-<img src="../fig/program_dataflow.svg" alt="A DataStream program, and its dataflow." class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/program_dataflow.svg" alt="A DataStream program, and its dataflow." class="offset" width="80%" />
 
 Often there is a one-to-one correspondence between the transformations in the programs and the operators
 in the dataflow. Sometimes, however, one transformation may consist of multiple transformation operators.
@@ -96,7 +96,7 @@ The number of operator subtasks is the **parallelism** of that particular operat
 is always that of its producing operator. Different operators of the same program may have different
 levels of parallelism.
 
-<img src="../fig/parallel_dataflow.svg" alt="A parallel dataflow" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/parallel_dataflow.svg" alt="A parallel dataflow" class="offset" width="80%" />
 
 Streams can transport data between two operators in a *one-to-one* (or *forwarding*) pattern, or in a *redistributing* pattern:
 
@@ -130,7 +130,7 @@ Windows can be *time driven* (example: every 30 seconds) or *data driven* (examp
 One typically distinguishes different types of windows, such as *tumbling windows* (no overlap),
 *sliding windows* (with overlap), and *session windows* (punctuated by a gap of inactivity).
 
-<img src="../fig/windows.svg" alt="Time- and Count Windows" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/windows.svg" alt="Time- and Count Windows" class="offset" width="80%" />
 
 More window examples can be found in this [blog post](https://flink.apache.org/news/2015/12/04/Introducing-windows.html).
 More details are in the [window docs](../dev/stream/operators/windows.html).
@@ -150,7 +150,7 @@ of time:
 
   - **Processing Time** is the local time at each operator that performs a time-based operation.
 
-<img src="../fig/event_ingestion_processing_time.svg" alt="Event Time, Ingestion Time, and Processing Time" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/event_ingestion_processing_time.svg" alt="Event Time, Ingestion Time, and Processing Time" class="offset" width="80%" />
 
 More details on how to handle time are in the [event time docs]({{ site.baseurl }}/dev/event_time.html).
 
@@ -169,7 +169,7 @@ and is restricted to the values associated with the current event's key. Alignin
 makes sure that all state updates are local operations, guaranteeing consistency without transaction overhead.
 This alignment also allows Flink to redistribute the state and adjust the stream partitioning transparently.
 
-<img src="../fig/state_partitioning.svg" alt="State and Partitioning" class="offset" width="50%" />
+<img src="{{ site.baseurl }}/fig/state_partitioning.svg" alt="State and Partitioning" class="offset" width="50%" />
 
 For more information, see the documentation on [state](../dev/stream/state/index.html).
 
diff --git a/docs/concepts/runtime.md b/docs/concepts/runtime.md
index 1c7c281..e6a2c51 100644
--- a/docs/concepts/runtime.md
+++ b/docs/concepts/runtime.md
@@ -35,7 +35,7 @@ The chaining behavior can be configured; see the [chaining docs](../dev/stream/o
 
 The sample dataflow in the figure below is executed with five subtasks, and hence with five parallel threads.
 
-<img src="../fig/tasks_chains.svg" alt="Operator chaining into Tasks" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/tasks_chains.svg" alt="Operator chaining into Tasks" class="offset" width="80%" />
 
 {% top %}
 
@@ -62,7 +62,7 @@ The **client** is not part of the runtime and program execution, but is used to
 After that, the client can disconnect, or stay connected to receive progress reports. The client runs either as part of the
 Java/Scala program that triggers the execution, or in the command line process `./bin/flink run ...`.
 
-<img src="../fig/processes.svg" alt="The processes involved in executing a Flink dataflow" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/processes.svg" alt="The processes involved in executing a Flink dataflow" class="offset" width="80%" />
 
 {% top %}
 
@@ -82,7 +82,7 @@ separate container, for example). Having multiple slots
 means more subtasks share the same JVM. Tasks in the same JVM share TCP connections (via multiplexing) and
 heartbeat messages. They may also share data sets and data structures, thus reducing the per-task overhead.
 
-<img src="../fig/tasks_slots.svg" alt="A TaskManager with Task Slots and Tasks" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/tasks_slots.svg" alt="A TaskManager with Task Slots and Tasks" class="offset" width="80%" />
 
 By default, Flink allows subtasks to share slots even if they are subtasks of different tasks, so long as
 they are from the same job. The result is that one slot may hold an entire pipeline of the
@@ -96,7 +96,7 @@ job. Allowing this *slot sharing* has two main benefits:
     With slot sharing, increasing the base parallelism in our example from two to six yields full utilization of the
     slotted resources, while making sure that the heavy subtasks are fairly distributed among the TaskManagers.
 
-<img src="../fig/slot_sharing.svg" alt="TaskManagers with shared Task Slots" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/slot_sharing.svg" alt="TaskManagers with shared Task Slots" class="offset" width="80%" />
 
 The APIs also include a *[resource group](../dev/stream/operators/#task-chaining-and-resource-groups)* mechanism which can be used to prevent undesirable slot sharing. 
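
As a rough, hypothetical sketch (not part of the patch) of the resource-group mechanism mentioned above: placing an operator into a named slot sharing group keeps it, and everything downstream of it, out of the default group. The group name, host and port are made up.

{% highlight java %}
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<String> lines = env.socketTextStream("localhost", 9999);

        lines
            // source and filter stay in the "default" slot sharing group
            .filter(line -> !line.isEmpty())
            // from here on, subtasks go into their own group of slots,
            // so the heavier map work does not share slots with the source
            .map(String::toUpperCase).slotSharingGroup("heavy")
            .print();

        env.execute("slot-sharing-sketch");
    }
}
{% endhighlight %}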
 
@@ -112,7 +112,7 @@ stores data in an in-memory hash map, another state backend uses [RocksDB](http:
 In addition to defining the data structure that holds the state, the state backends also implement the logic to
 take a point-in-time snapshot of the key/value state and store that snapshot as part of a checkpoint.
 
-<img src="../fig/checkpoints.svg" alt="checkpoints and snapshots" class="offset" width="60%" />
+<img src="{{ site.baseurl }}/fig/checkpoints.svg" alt="checkpoints and snapshots" class="offset" width="60%" />
 
 {% top %}
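
A brief sketch (not taken from the patch) of how a state backend and periodic snapshots are wired into a job; the checkpoint path and interval are placeholders.

{% highlight java %}
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class StateBackendSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // take a consistent point-in-time snapshot of all key/value state every 10 seconds
        env.enableCheckpointing(10_000);

        // keep working state on the JVM heap and write snapshots to a file system ...
        env.setStateBackend(new FsStateBackend("file:///tmp/flink-checkpoints"));

        // ... or keep working state in RocksDB, which can grow beyond main memory
        // (requires the flink-statebackend-rocksdb dependency):
        // env.setStateBackend(new RocksDBStateBackend("file:///tmp/flink-checkpoints"));

        env.fromElements("a", "b", "c").print();   // placeholder pipeline
        env.execute("state-backend-sketch");
    }
}
{% endhighlight %}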
 
diff --git a/docs/concepts/runtime.zh.md b/docs/concepts/runtime.zh.md
index 859e345..329de24 100644
--- a/docs/concepts/runtime.zh.md
+++ b/docs/concepts/runtime.zh.md
@@ -35,7 +35,7 @@ The chaining behavior can be configured; see the [chaining docs](../dev/stream/o
 
 The sample dataflow in the figure below is executed with five subtasks, and hence with five parallel threads.
 
-<img src="../fig/tasks_chains.svg" alt="Operator chaining into Tasks" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/tasks_chains.svg" alt="Operator chaining into Tasks" class="offset" width="80%" />
 
 {% top %}
 
@@ -62,7 +62,7 @@ The **client** is not part of the runtime and program execution, but is used to
 After that, the client can disconnect, or stay connected to receive progress reports. The client runs either as part of the
 Java/Scala program that triggers the execution, or in the command line process `./bin/flink run ...`.
 
-<img src="../fig/processes.svg" alt="The processes involved in executing a Flink dataflow" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/processes.svg" alt="The processes involved in executing a Flink dataflow" class="offset" width="80%" />
 
 {% top %}
 
@@ -82,7 +82,7 @@ separate container, for example). Having multiple slots
 means more subtasks share the same JVM. Tasks in the same JVM share TCP connections (via multiplexing) and
 heartbeat messages. They may also share data sets and data structures, thus reducing the per-task overhead.
 
-<img src="../fig/tasks_slots.svg" alt="A TaskManager with Task Slots and Tasks" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/tasks_slots.svg" alt="A TaskManager with Task Slots and Tasks" class="offset" width="80%" />
 
 By default, Flink allows subtasks to share slots even if they are subtasks of different tasks, so long as
 they are from the same job. The result is that one slot may hold an entire pipeline of the
@@ -96,7 +96,7 @@ job. Allowing this *slot sharing* has two main benefits:
     With slot sharing, increasing the base parallelism in our example from two to six yields full utilization of the
     slotted resources, while making sure that the heavy subtasks are fairly distributed among the TaskManagers.
 
-<img src="../fig/slot_sharing.svg" alt="TaskManagers with shared Task Slots" class="offset" width="80%" />
+<img src="{{ site.baseurl }}/fig/slot_sharing.svg" alt="TaskManagers with shared Task Slots" class="offset" width="80%" />
 
 The APIs also include a *[resource group](../dev/stream/operators/#task-chaining-and-resource-groups)* mechanism which can be used to prevent undesirable slot sharing.
 
@@ -112,7 +112,7 @@ stores data in an in-memory hash map, another state backend uses [RocksDB](http:
 In addition to defining the data structure that holds the state, the state backends also implement the logic to
 take a point-in-time snapshot of the key/value state and store that snapshot as part of a checkpoint.
 
-<img src="../fig/checkpoints.svg" alt="checkpoints and snapshots" class="offset" width="60%" />
+<img src="{{ site.baseurl }}/fig/checkpoints.svg" alt="checkpoints and snapshots" class="offset" width="60%" />
 
 {% top %}
 
diff --git a/docs/dev/batch/index.md b/docs/dev/batch/index.md
index 934cb2b..7d18bac 100644
--- a/docs/dev/batch/index.md
+++ b/docs/dev/batch/index.md
@@ -49,7 +49,7 @@ Example Program
 
 The following program is a complete, working example of WordCount. You can copy &amp; paste the code
 to run it locally. You only have to include the correct Flink library in your project
-(see Section [Linking with Flink]({{ site.baseurl }}/dev/linking_with_flink.html)) and specify the imports. Then you are ready
+(see Section [Linking with Flink]({{ site.baseurl }}/dev/projectsetup/dependencies.html)) and specify the imports. Then you are ready
 to go!
 
 <div class="codetabs" markdown="1">
diff --git a/docs/dev/batch/index.zh.md b/docs/dev/batch/index.zh.md
index f2dbe21..f097af9 100644
--- a/docs/dev/batch/index.zh.md
+++ b/docs/dev/batch/index.zh.md
@@ -49,7 +49,7 @@ Example Program
 
 The following program is a complete, working example of WordCount. You can copy &amp; paste the code
 to run it locally. You only have to include the correct Flink library in your project
-(see Section [Linking with Flink]({{ site.baseurl }}/dev/linking_with_flink.html)) and specify the imports. Then you are ready
+(see Section [Linking with Flink]({{ site.baseurl }}/dev/projectsetup/dependencies.html)) and specify the imports. Then you are ready
 to go!
 
 <div class="codetabs" markdown="1">
diff --git a/docs/dev/connectors/cassandra.md b/docs/dev/connectors/cassandra.md
index 292314d..9a51387 100644
--- a/docs/dev/connectors/cassandra.md
+++ b/docs/dev/connectors/cassandra.md
@@ -43,7 +43,7 @@ To use this connector, add the following dependency to your project:
 </dependency>
 {% endhighlight %}
 
-Note that the streaming connectors are currently __NOT__ part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/linking.html).
+Note that the streaming connectors are currently __NOT__ part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/projectsetup/dependencies.html).
 
 ## Installing Apache Cassandra
 There are multiple ways to bring up a Cassandra instance on local machine:
diff --git a/docs/dev/connectors/cassandra.zh.md b/docs/dev/connectors/cassandra.zh.md
index 292314d..9a51387 100644
--- a/docs/dev/connectors/cassandra.zh.md
+++ b/docs/dev/connectors/cassandra.zh.md
@@ -43,7 +43,7 @@ To use this connector, add the following dependency to your project:
 </dependency>
 {% endhighlight %}
 
-Note that the streaming connectors are currently __NOT__ part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/linking.html).
+Note that the streaming connectors are currently __NOT__ part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/projectsetup/dependencies.html).
 
 ## Installing Apache Cassandra
 There are multiple ways to bring up a Cassandra instance on local machine:
diff --git a/docs/dev/connectors/elasticsearch.md b/docs/dev/connectors/elasticsearch.md
index 2b3965a..c306bdb 100644
--- a/docs/dev/connectors/elasticsearch.md
+++ b/docs/dev/connectors/elasticsearch.md
@@ -59,7 +59,7 @@ of the Elasticsearch installation:
 </table>
 
 Note that the streaming connectors are currently not part of the binary
-distribution. See [here]({{site.baseurl}}/dev/linking.html) for information
+distribution. See [here]({{site.baseurl}}/dev/projectsetup/dependencies.html) for information
 about how to package the program with the libraries for cluster execution.
 
 ## Installing Elasticsearch
@@ -462,7 +462,7 @@ More information about Elasticsearch can be found [here](https://elastic.co).
 
 For the execution of your Flink program, it is recommended to build a
 so-called uber-jar (executable jar) containing all your dependencies
-(see [here]({{site.baseurl}}/dev/linking.html) for further information).
+(see [here]({{site.baseurl}}/dev/projectsetup/dependencies.html) for further information).
 
 Alternatively, you can put the connector's jar file into Flink's `lib/` folder to make it available
 system-wide, i.e. for all jobs being run.
diff --git a/docs/dev/connectors/elasticsearch.zh.md b/docs/dev/connectors/elasticsearch.zh.md
index 979fb67..aed915f 100644
--- a/docs/dev/connectors/elasticsearch.zh.md
+++ b/docs/dev/connectors/elasticsearch.zh.md
@@ -59,7 +59,7 @@ of the Elasticsearch installation:
 </table>
 
 Note that the streaming connectors are currently not part of the binary
-distribution. See [here]({{site.baseurl}}/dev/linking.html) for information
+distribution. See [here]({{site.baseurl}}/dev/projectsetup/dependencies.html) for information
 about how to package the program with the libraries for cluster execution.
 
 ## Installing Elasticsearch
@@ -462,7 +462,7 @@ More information about Elasticsearch can be found [here](https://elastic.co).
 
 For the execution of your Flink program, it is recommended to build a
 so-called uber-jar (executable jar) containing all your dependencies
-(see [here]({{site.baseurl}}/dev/linking.html) for further information).
+(see [here]({{site.baseurl}}/dev/projectsetup/dependencies.html) for further information).
 
 Alternatively, you can put the connector's jar file into Flink's `lib/` folder to make it available
 system-wide, i.e. for all jobs being run.
diff --git a/docs/dev/connectors/filesystem_sink.md b/docs/dev/connectors/filesystem_sink.md
index 9253968..f9a828d 100644
--- a/docs/dev/connectors/filesystem_sink.md
+++ b/docs/dev/connectors/filesystem_sink.md
@@ -37,7 +37,7 @@ following dependency to your project:
 
 Note that the streaming connectors are currently not part of the binary
 distribution. See
-[here]({{site.baseurl}}/dev/linking.html)
+[here]({{site.baseurl}}/dev/projectsetup/dependencies.html)
 for information about how to package the program with the libraries for
 cluster execution.
 
diff --git a/docs/dev/connectors/filesystem_sink.zh.md b/docs/dev/connectors/filesystem_sink.zh.md
index 9253968..f9a828d 100644
--- a/docs/dev/connectors/filesystem_sink.zh.md
+++ b/docs/dev/connectors/filesystem_sink.zh.md
@@ -37,7 +37,7 @@ following dependency to your project:
 
 Note that the streaming connectors are currently not part of the binary
 distribution. See
-[here]({{site.baseurl}}/dev/linking.html)
+[here]({{site.baseurl}}/dev/projectsetup/dependencies.html)
 for information about how to package the program with the libraries for
 cluster execution.
 
diff --git a/docs/dev/connectors/kafka.md b/docs/dev/connectors/kafka.md
index bfa6d5c..e4e4d5d 100644
--- a/docs/dev/connectors/kafka.md
+++ b/docs/dev/connectors/kafka.md
@@ -114,7 +114,7 @@ Then, import the connector in your maven project:
 {% endhighlight %}
 
 Note that the streaming connectors are currently not part of the binary distribution.
-See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/linking.html).
+See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/projectsetup/dependencies.html).
 
 ## Installing Apache Kafka
 
@@ -490,8 +490,8 @@ special records in the Kafka stream that contain the current event-time watermar
 Consumer allows the specification of an `AssignerWithPeriodicWatermarks` or an `AssignerWithPunctuatedWatermarks`.
 
 You can specify your custom timestamp extractor/watermark emitter as described
-[here]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html), or use one from the
-[predefined ones]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html). After doing so, you
+[here]({{ site.baseurl }}/dev/event_timestamps_watermarks.html), or use one from the
+[predefined ones]({{ site.baseurl }}/dev/event_timestamp_extractors.html). After doing so, you
 can pass it to your consumer in the following way:
 
 <div class="codetabs" markdown="1">
diff --git a/docs/dev/connectors/kafka.zh.md b/docs/dev/connectors/kafka.zh.md
index e9b7c82..2412277 100644
--- a/docs/dev/connectors/kafka.zh.md
+++ b/docs/dev/connectors/kafka.zh.md
@@ -114,7 +114,7 @@ Then, import the connector in your maven project:
 {% endhighlight %}
 
 Note that the streaming connectors are currently not part of the binary distribution.
-See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/linking.html).
+See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/projectsetup/dependencies.html).
 
 ## Installing Apache Kafka
 
@@ -490,8 +490,8 @@ special records in the Kafka stream that contain the current event-time watermar
 Consumer allows the specification of an `AssignerWithPeriodicWatermarks` or an `AssignerWithPunctuatedWatermarks`.
 
 You can specify your custom timestamp extractor/watermark emitter as described
-[here]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html), or use one from the
-[predefined ones]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html). After doing so, you
+[here]({{ site.baseurl }}/dev/event_timestamps_watermarks.html), or use one from the
+[predefined ones]({{ site.baseurl }}/dev/event_timestamp_extractors.html). After doing so, you
 can pass it to your consumer in the following way:
 
 <div class="codetabs" markdown="1">
diff --git a/docs/dev/connectors/kinesis.md b/docs/dev/connectors/kinesis.md
index f39da3d..3383706 100644
--- a/docs/dev/connectors/kinesis.md
+++ b/docs/dev/connectors/kinesis.md
@@ -63,7 +63,7 @@ mvn clean install -Pinclude-kinesis -Daws.kinesis-kpl.version=0.12.6 -DskipTests
 {% endhighlight %}
 
 The streaming connectors are not part of the binary distribution. See how to link with them for cluster
-execution [here]({{site.baseurl}}/dev/linking.html).
+execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 ## Using the Amazon Kinesis Streams Service
 Follow the instructions from the [Amazon Kinesis Streams Developer Guide](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html)
@@ -195,14 +195,14 @@ env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
 </div>
 </div>
 
-If streaming topologies choose to use the [event time notion]({{site.baseurl}}/apis/streaming/event_time.html) for record
+If streaming topologies choose to use the [event time notion]({{site.baseurl}}/dev/event_time.html) for record
 timestamps, an *approximate arrival timestamp* will be used by default. This timestamp is attached to records by Kinesis once they
 were successfully received and stored by streams. Note that this timestamp is typically referred to as a Kinesis server-side
 timestamp, and there are no guarantees about the accuracy or order correctness (i.e., the timestamps may not always be
 ascending).
 
-Users can choose to override this default with a custom timestamp, as described [here]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html),
-or use one from the [predefined ones]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html). After doing so,
+Users can choose to override this default with a custom timestamp, as described [here]({{ site.baseurl }}/dev/event_timestamps_watermarks.html),
+or use one from the [predefined ones]({{ site.baseurl }}/dev/event_timestamp_extractors.html). After doing so,
 it can be passed to the consumer in the following way:
 
 <div class="codetabs" markdown="1">
diff --git a/docs/dev/connectors/kinesis.zh.md b/docs/dev/connectors/kinesis.zh.md
index e603e20..59e313f 100644
--- a/docs/dev/connectors/kinesis.zh.md
+++ b/docs/dev/connectors/kinesis.zh.md
@@ -63,7 +63,7 @@ mvn clean install -Pinclude-kinesis -Daws.kinesis-kpl.version=0.12.6 -DskipTests
 {% endhighlight %}
 
 The streaming connectors are not part of the binary distribution. See how to link with them for cluster
-execution [here]({{site.baseurl}}/dev/linking.html).
+execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 ## Using the Amazon Kinesis Streams Service
 Follow the instructions from the [Amazon Kinesis Streams Developer Guide](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html)
@@ -195,14 +195,14 @@ env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime)
 </div>
 </div>
 
-If streaming topologies choose to use the [event time notion]({{site.baseurl}}/apis/streaming/event_time.html) for record
+If streaming topologies choose to use the [event time notion]({{site.baseurl}}/dev/event_time.html) for record
 timestamps, an *approximate arrival timestamp* will be used by default. This timestamp is attached to records by Kinesis once they
 were successfully received and stored by streams. Note that this timestamp is typically referred to as a Kinesis server-side
 timestamp, and there are no guarantees about the accuracy or order correctness (i.e., the timestamps may not always be
 ascending).
 
-Users can choose to override this default with a custom timestamp, as described [here]({{ site.baseurl }}/apis/streaming/event_timestamps_watermarks.html),
-or use one from the [predefined ones]({{ site.baseurl }}/apis/streaming/event_timestamp_extractors.html). After doing so,
+Users can choose to override this default with a custom timestamp, as described [here]({{ site.baseurl }}/dev/event_timestamps_watermarks.html),
+or use one from the [predefined ones]({{ site.baseurl }}/dev/event_timestamp_extractors.html). After doing so,
 it can be passed to the consumer in the following way:
 
 <div class="codetabs" markdown="1">
diff --git a/docs/dev/connectors/nifi.md b/docs/dev/connectors/nifi.md
index 392b173..97fd831 100644
--- a/docs/dev/connectors/nifi.md
+++ b/docs/dev/connectors/nifi.md
@@ -37,7 +37,7 @@ following dependency to your project:
 
 Note that the streaming connectors are currently not part of the binary
 distribution. See
-[here]({{site.baseurl}}/dev/linking.html)
+[here]({{site.baseurl}}/dev/projectsetup/dependencies.html)
 for information about how to package the program with the libraries for
 cluster execution.
 
diff --git a/docs/dev/connectors/nifi.zh.md b/docs/dev/connectors/nifi.zh.md
index 392b173..97fd831 100644
--- a/docs/dev/connectors/nifi.zh.md
+++ b/docs/dev/connectors/nifi.zh.md
@@ -37,7 +37,7 @@ following dependency to your project:
 
 Note that the streaming connectors are currently not part of the binary
 distribution. See
-[here]({{site.baseurl}}/dev/linking.html)
+[here]({{site.baseurl}}/dev/projectsetup/dependencies.html)
 for information about how to package the program with the libraries for
 cluster execution.
 
diff --git a/docs/dev/connectors/rabbitmq.md b/docs/dev/connectors/rabbitmq.md
index 2a698c1..838db2a 100644
--- a/docs/dev/connectors/rabbitmq.md
+++ b/docs/dev/connectors/rabbitmq.md
@@ -47,7 +47,7 @@ This connector provides access to data streams from [RabbitMQ](http://www.rabbit
 </dependency>
 {% endhighlight %}
 
-Note that the streaming connectors are currently not part of the binary distribution. See linking with them for cluster execution [here]({{site.baseurl}}/dev/linking.html).
+Note that the streaming connectors are currently not part of the binary distribution. See linking with them for cluster execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 #### Installing RabbitMQ
 Follow the instructions from the [RabbitMQ download page](http://www.rabbitmq.com/download.html). After the installation the server automatically starts, and the application connecting to RabbitMQ can be launched.
diff --git a/docs/dev/connectors/rabbitmq.zh.md b/docs/dev/connectors/rabbitmq.zh.md
index 2a698c1..838db2a 100644
--- a/docs/dev/connectors/rabbitmq.zh.md
+++ b/docs/dev/connectors/rabbitmq.zh.md
@@ -47,7 +47,7 @@ This connector provides access to data streams from [RabbitMQ](http://www.rabbit
 </dependency>
 {% endhighlight %}
 
-Note that the streaming connectors are currently not part of the binary distribution. See linking with them for cluster execution [here]({{site.baseurl}}/dev/linking.html).
+Note that the streaming connectors are currently not part of the binary distribution. See linking with them for cluster execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 #### Installing RabbitMQ
 Follow the instructions from the [RabbitMQ download page](http://www.rabbitmq.com/download.html). After the installation the server automatically starts, and the application connecting to RabbitMQ can be launched.
diff --git a/docs/dev/connectors/twitter.md b/docs/dev/connectors/twitter.md
index e6fe32a..9ca394b 100644
--- a/docs/dev/connectors/twitter.md
+++ b/docs/dev/connectors/twitter.md
@@ -36,7 +36,7 @@ To use this connector, add the following dependency to your project:
 {% endhighlight %}
 
 Note that the streaming connectors are currently not part of the binary distribution.
-See linking with them for cluster execution [here]({{site.baseurl}}/dev/linking.html).
+See linking with them for cluster execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 #### Authentication
 In order to connect to the Twitter stream the user has to register their program and acquire the necessary information for the authentication. The process is described below.
diff --git a/docs/dev/connectors/twitter.zh.md b/docs/dev/connectors/twitter.zh.md
index e6fe32a..9ca394b 100644
--- a/docs/dev/connectors/twitter.zh.md
+++ b/docs/dev/connectors/twitter.zh.md
@@ -36,7 +36,7 @@ To use this connector, add the following dependency to your project:
 {% endhighlight %}
 
 Note that the streaming connectors are currently not part of the binary distribution.
-See linking with them for cluster execution [here]({{site.baseurl}}/dev/linking.html).
+See linking with them for cluster execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 #### Authentication
 In order to connect to the Twitter stream the user has to register their program and acquire the necessary information for the authentication. The process is described below.
diff --git a/docs/dev/libs/cep.md b/docs/dev/libs/cep.md
index 2de3d1e..fa11502 100644
--- a/docs/dev/libs/cep.md
+++ b/docs/dev/libs/cep.md
@@ -38,7 +38,7 @@ library makes when [dealing with lateness](#handling-lateness-in-event-time) in
 
 ## Getting Started
 
-If you want to jump right in, [set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink.html) and
+If you want to jump right in, [set up a Flink program]({{ site.baseurl }}/dev/projectsetup/dependencies.html) and
 add the FlinkCEP dependency to the `pom.xml` of your project.
 
 <div class="codetabs" markdown="1">
@@ -63,7 +63,7 @@ add the FlinkCEP dependency to the `pom.xml` of your project.
 </div>
 </div>
 
-{% info %} FlinkCEP is not part of the binary distribution. See how to link with it for cluster execution [here]({{site.baseurl}}/dev/linking.html).
+{% info %} FlinkCEP is not part of the binary distribution. See how to link with it for cluster execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 Now you can start writing your first CEP program using the Pattern API.
 
diff --git a/docs/dev/libs/cep.zh.md b/docs/dev/libs/cep.zh.md
index 5cc7cde..cd3c5ec 100644
--- a/docs/dev/libs/cep.zh.md
+++ b/docs/dev/libs/cep.zh.md
@@ -38,7 +38,7 @@ library makes when [dealing with lateness](#handling-lateness-in-event-time) in
 
 ## Getting Started
 
-If you want to jump right in, [set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink.html) and
+If you want to jump right in, [set up a Flink program]({{ site.baseurl }}/dev/projectsetup/dependencies.html) and
 add the FlinkCEP dependency to the `pom.xml` of your project.
 
 <div class="codetabs" markdown="1">
@@ -63,7 +63,7 @@ add the FlinkCEP dependency to the `pom.xml` of your project.
 </div>
 </div>
 
-{% info %} FlinkCEP is not part of the binary distribution. See how to link with it for cluster execution [here]({{site.baseurl}}/dev/linking.html).
+{% info %} FlinkCEP is not part of the binary distribution. See how to link with it for cluster execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 Now you can start writing your first CEP program using the Pattern API.
 
diff --git a/docs/dev/libs/gelly/index.md b/docs/dev/libs/gelly/index.md
index 2a08ac2..97b91b8 100644
--- a/docs/dev/libs/gelly/index.md
+++ b/docs/dev/libs/gelly/index.md
@@ -63,7 +63,7 @@ Add the following dependency to your `pom.xml` to use Gelly.
 </div>
 </div>
 
-Note that Gelly is not part of the binary distribution. See [linking]({{ site.baseurl }}/dev/linking.html) for
+Note that Gelly is not part of the binary distribution. See [linking]({{ site.baseurl }}/dev/projectsetup/dependencies.html) for
 instructions on packaging Gelly libraries into Flink user programs.
 
 The remaining sections provide a description of available methods and present several examples of how to use Gelly and how to mix it with the Flink DataSet API.
diff --git a/docs/dev/libs/gelly/index.zh.md b/docs/dev/libs/gelly/index.zh.md
index a72a557..4d69411 100644
--- a/docs/dev/libs/gelly/index.zh.md
+++ b/docs/dev/libs/gelly/index.zh.md
@@ -63,7 +63,7 @@ Add the following dependency to your `pom.xml` to use Gelly.
 </div>
 </div>
 
-Note that Gelly is not part of the binary distribution. See [linking]({{ site.baseurl }}/dev/linking.html) for
+Note that Gelly is not part of the binary distribution. See [linking]({{ site.baseurl }}/dev/projectsetup/dependencies.html) for
 instructions on packaging Gelly libraries into Flink user programs.
 
 The remaining sections provide a description of available methods and present several examples of how to use Gelly and how to mix it with the Flink DataSet API.
diff --git a/docs/dev/libs/ml/index.md b/docs/dev/libs/ml/index.md
index f8a75a4..a623a83 100644
--- a/docs/dev/libs/ml/index.md
+++ b/docs/dev/libs/ml/index.md
@@ -72,7 +72,7 @@ FlinkML currently supports the following algorithms:
 You can check out our [quickstart guide](quickstart.html) for a comprehensive getting started
 example.
 
-If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink.html).
+If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/projectsetup/dependencies.html).
 Next, you have to add the FlinkML dependency to the `pom.xml` of your project.
 
 {% highlight xml %}
@@ -84,7 +84,7 @@ Next, you have to add the FlinkML dependency to the `pom.xml` of your project.
 {% endhighlight %}
 
 Note that FlinkML is currently not part of the binary distribution.
-See linking with it for cluster execution [here]({{site.baseurl}}/dev/linking.html).
+See linking with it for cluster execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 Now you can start solving your analysis task.
 The following code snippet shows how easy it is to train a multiple linear regression model.
diff --git a/docs/dev/libs/ml/index.zh.md b/docs/dev/libs/ml/index.zh.md
index fd11767..9ee7ff5 100644
--- a/docs/dev/libs/ml/index.zh.md
+++ b/docs/dev/libs/ml/index.zh.md
@@ -72,7 +72,7 @@ FlinkML currently supports the following algorithms:
 You can check out our [quickstart guide](quickstart.html) for a comprehensive getting started
 example.
 
-If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink.html).
+If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/projectsetup/dependencies.html).
 Next, you have to add the FlinkML dependency to the `pom.xml` of your project.
 
 {% highlight xml %}
@@ -84,7 +84,7 @@ Next, you have to add the FlinkML dependency to the `pom.xml` of your project.
 {% endhighlight %}
 
 Note that FlinkML is currently not part of the binary distribution.
-See linking with it for cluster execution [here]({{site.baseurl}}/dev/linking.html).
+See linking with it for cluster execution [here]({{site.baseurl}}/dev/projectsetup/dependencies.html).
 
 Now you can start solving your analysis task.
 The following code snippet shows how easy it is to train a multiple linear regression model.
diff --git a/docs/dev/libs/ml/quickstart.md b/docs/dev/libs/ml/quickstart.md
index 2e9a7b9..3f4d980 100644
--- a/docs/dev/libs/ml/quickstart.md
+++ b/docs/dev/libs/ml/quickstart.md
@@ -55,7 +55,7 @@ through [principal components analysis](https://en.wikipedia.org/wiki/Principal_
 ## Linking with FlinkML
 
 In order to use FlinkML in your project, first you have to
-[set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink.html).
+[set up a Flink program]({{ site.baseurl }}/dev/projectsetup/dependencies.html).
 Next, you have to add the FlinkML dependency to the `pom.xml` of your project:
 
 {% highlight xml %}
diff --git a/docs/dev/libs/ml/quickstart.zh.md b/docs/dev/libs/ml/quickstart.zh.md
index 2e9a7b9..3f4d980 100644
--- a/docs/dev/libs/ml/quickstart.zh.md
+++ b/docs/dev/libs/ml/quickstart.zh.md
@@ -55,7 +55,7 @@ through [principal components analysis](https://en.wikipedia.org/wiki/Principal_
 ## Linking with FlinkML
 
 In order to use FlinkML in your project, first you have to
-[set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink.html).
+[set up a Flink program]({{ site.baseurl }}/dev/projectsetup/dependencies.html).
 Next, you have to add the FlinkML dependency to the `pom.xml` of your project:
 
 {% highlight xml %}
diff --git a/docs/dev/stream/side_output.zh.md b/docs/dev/stream/side_output.zh.md
new file mode 100644
index 0000000..c7c4f3c
--- /dev/null
+++ b/docs/dev/stream/side_output.zh.md
@@ -0,0 +1,148 @@
+---
+title: "旁路输出"
+nav-title: "Side Outputs"
+nav-parent_id: streaming
+nav-pos: 36
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+* This will be replaced by the TOC
+{:toc}
+
+In addition to the main stream that results from `DataStream` operations, you can also produce any
+number of additional side output result streams. The type of data in the result streams does not
+have to match the type of data in the main stream and the types of the different side outputs can
+also differ. This operation can be useful when you want to split a stream of data where you would
+normally have to replicate the stream and then filter out from each stream the data that you don't
+want to have.
+
+When using side outputs, you first need to define an `OutputTag` that will be used to identify a
+side output stream:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+{% highlight java %}
+// this needs to be an anonymous inner class, so that we can analyze the type
+OutputTag<String> outputTag = new OutputTag<String>("side-output") {};
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val outputTag = OutputTag[String]("side-output")
+{% endhighlight %}
+</div>
+</div>
+
+Notice how the `OutputTag` is typed according to the type of elements that the side output stream
+contains.
+
+Emitting data to a side output is possible from the following functions:
+
+- [ProcessFunction]({{ site.baseurl }}/dev/stream/operators/process_function.html)
+- [KeyedProcessFunction]({{ site.baseurl }}/dev/stream/operators/process_function.html#the-keyedprocessfunction)
+- CoProcessFunction
+- [ProcessWindowFunction]({{ site.baseurl }}/dev/stream/operators/windows.html#processwindowfunction)
+- ProcessAllWindowFunction
+
+You can use the `Context` parameter, which is exposed to users in the above functions, to emit
+data to a side output identified by an `OutputTag`. Here is an example of emitting side output
+data from a `ProcessFunction`:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+{% highlight java %}
+DataStream<Integer> input = ...;
+
+final OutputTag<String> outputTag = new OutputTag<String>("side-output"){};
+
+SingleOutputStreamOperator<Integer> mainDataStream = input
+  .process(new ProcessFunction<Integer, Integer>() {
+
+      @Override
+      public void processElement(
+          Integer value,
+          Context ctx,
+          Collector<Integer> out) throws Exception {
+        // emit data to regular output
+        out.collect(value);
+
+        // emit data to side output
+        ctx.output(outputTag, "sideout-" + String.valueOf(value));
+      }
+    });
+{% endhighlight %}
+
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+
+val input: DataStream[Int] = ...
+val outputTag = OutputTag[String]("side-output")
+
+val mainDataStream = input
+  .process(new ProcessFunction[Int, Int] {
+    override def processElement(
+        value: Int,
+        ctx: ProcessFunction[Int, Int]#Context,
+        out: Collector[Int]): Unit = {
+      // emit data to regular output
+      out.collect(value)
+
+      // emit data to side output
+      ctx.output(outputTag, "sideout-" + String.valueOf(value))
+    }
+  })
+{% endhighlight %}
+</div>
+</div>
+
+For retrieving the side output stream you use `getSideOutput(OutputTag)`
+on the result of the `DataStream` operation. This will give you a `DataStream` that is typed
+to the result of the side output stream:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+
+{% highlight java %}
+final OutputTag<String> outputTag = new OutputTag<String>("side-output"){};
+
+SingleOutputStreamOperator<Integer> mainDataStream = ...;
+
+DataStream<String> sideOutputStream = mainDataStream.getSideOutput(outputTag);
+{% endhighlight %}
+
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val outputTag = OutputTag[String]("side-output")
+
+val mainDataStream = ...
+
+val sideOutputStream: DataStream[String] = mainDataStream.getSideOutput(outputTag)
+{% endhighlight %}
+</div>
+</div>
+
+{% top %}
diff --git a/docs/dev/stream/state/queryable_state.md b/docs/dev/stream/state/queryable_state.md
index 127e909..fb14cb4 100644
--- a/docs/dev/stream/state/queryable_state.md
+++ b/docs/dev/stream/state/queryable_state.md
@@ -180,7 +180,7 @@ jar which must be explicitly included as a dependency in the `pom.xml` of your p
 {% endhighlight %}
 </div>
 
-For more on this, you can check how to [set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink.html).
+For more on this, you can check how to [set up a Flink program]({{ site.baseurl }}/dev/projectsetup/dependencies.html).
 
 The `QueryableStateClient` will submit your query to the internal proxy, which will then process your query and return 
 the final result. The only requirement to initialize the client is to provide a valid `TaskManager` hostname (remember 
diff --git a/docs/dev/stream/state/queryable_state.zh.md b/docs/dev/stream/state/queryable_state.zh.md
index 0c779c6..5115f11 100644
--- a/docs/dev/stream/state/queryable_state.zh.md
+++ b/docs/dev/stream/state/queryable_state.zh.md
@@ -178,7 +178,7 @@ jar which must be explicitly included as a dependency in the `pom.xml` of your p
 {% endhighlight %}
 </div>
 
-For more on this, you can check how to [set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink.html).
+For more on this, you can check how to [set up a Flink program]({{ site.baseurl }}/dev/projectsetup/dependencies.html).
 
 The `QueryableStateClient` will submit your query to the internal proxy, which will then process your query and return 
 the final result. The only requirement to initialize the client is to provide a valid `TaskManager` hostname (remember 
diff --git a/docs/dev/table/index.md b/docs/dev/table/index.md
index b23619a..43ea5e2 100644
--- a/docs/dev/table/index.md
+++ b/docs/dev/table/index.md
@@ -87,7 +87,7 @@ Internally, parts of the table ecosystem are implemented in Scala. Therefore, pl
 
 ### Extension Dependencies
 
-If you want to implement a [custom format](({{ site.baseurl }}/dev/table/sourceSinks.html#define-a-tablefactory)) for interacting with Kafka or a set of [user-defined functions]({{ site.baseurl }}/dev/table/functions.html), the following dependency is sufficient and can be used for JAR files for the SQL Client:
+If you want to implement a [custom format]({{ site.baseurl }}/dev/table/sourceSinks.html#define-a-tablefactory) for interacting with Kafka or a set of [user-defined functions]({{ site.baseurl }}/dev/table/functions.html), the following dependency is sufficient and can be used for JAR files for the SQL Client:
 
 {% highlight xml %}
 <dependency>
@@ -117,4 +117,4 @@ Where to go next?
 * [Built-in Functions]({{ site.baseurl }}/dev/table/functions.html): Supported functions in Table API and SQL.
 * [SQL Client]({{ site.baseurl }}/dev/table/sqlClient.html): Play around with Flink SQL and submit a table program to a cluster without programming knowledge.
 
-{% top %}
\ No newline at end of file
+{% top %}
diff --git a/docs/dev/table/index.zh.md b/docs/dev/table/index.zh.md
index b23619a..43ea5e2 100644
--- a/docs/dev/table/index.zh.md
+++ b/docs/dev/table/index.zh.md
@@ -87,7 +87,7 @@ Internally, parts of the table ecosystem are implemented in Scala. Therefore, pl
 
 ### Extension Dependencies
 
-If you want to implement a [custom format](({{ site.baseurl }}/dev/table/sourceSinks.html#define-a-tablefactory)) for interacting with Kafka or a set of [user-defined functions]({{ site.baseurl }}/dev/table/functions.html), the following dependency is sufficient and can be used for JAR files for the SQL Client:
+If you want to implement a [custom format]({{ site.baseurl }}/dev/table/sourceSinks.html#define-a-tablefactory) for interacting with Kafka or a set of [user-defined functions]({{ site.baseurl }}/dev/table/functions.html), the following dependency is sufficient and can be used for JAR files for the SQL Client:
 
 {% highlight xml %}
 <dependency>
@@ -117,4 +117,4 @@ Where to go next?
 * [Built-in Functions]({{ site.baseurl }}/dev/table/functions.html): Supported functions in Table API and SQL.
 * [SQL Client]({{ site.baseurl }}/dev/table/sqlClient.html): Play around with Flink SQL and submit a table program to a cluster without programming knowledge.
 
-{% top %}
\ No newline at end of file
+{% top %}
diff --git a/docs/internals/components.md b/docs/internals/components.md
index bb949f2..d94fcf0 100644
--- a/docs/internals/components.md
+++ b/docs/internals/components.md
@@ -46,10 +46,10 @@ You can click on the components in the figure to learn more.
 
 <map name="overview-stack">
 <area id="lib-datastream-cep" title="CEP: Complex Event Processing" href="{{ site.baseurl }}/dev/libs/cep.html" shape="rect" coords="63,0,143,177" />
-<area id="lib-datastream-table" title="Table: Relational DataStreams" href="{{ site.baseurl }}/dev/table_api.html" shape="rect" coords="143,0,223,177" />
+<area id="lib-datastream-table" title="Table: Relational DataStreams" href="{{ site.baseurl }}/dev/table/index.html" shape="rect" coords="143,0,223,177" />
 <area id="lib-dataset-ml" title="FlinkML: Machine Learning" href="{{ site.baseurl }}/dev/libs/ml/index.html" shape="rect" coords="382,2,462,176" />
 <area id="lib-dataset-gelly" title="Gelly: Graph Processing" href="{{ site.baseurl }}/dev/libs/gelly/index.html" shape="rect" coords="461,0,541,177" />
-<area id="lib-dataset-table" title="Table API and SQL" href="{{ site.baseurl }}/dev/table_api.html" shape="rect" coords="544,0,624,177" />
+<area id="lib-dataset-table" title="Table API and SQL" href="{{ site.baseurl }}/dev/table/index.html" shape="rect" coords="544,0,624,177" />
 <area id="datastream" title="DataStream API" href="{{ site.baseurl }}/dev/datastream_api.html" shape="rect" coords="64,177,379,255" />
 <area id="dataset" title="DataSet API" href="{{ site.baseurl }}/dev/batch/index.html" shape="rect" coords="382,177,697,255" />
 <area id="runtime" title="Runtime" href="{{ site.baseurl }}/concepts/runtime.html" shape="rect" coords="63,257,700,335" />
diff --git a/docs/internals/components.zh.md b/docs/internals/components.zh.md
index 8b6d16f..a3b608e 100644
--- a/docs/internals/components.zh.md
+++ b/docs/internals/components.zh.md
@@ -46,10 +46,10 @@ You can click on the components in the figure to learn more.
 
 <map name="overview-stack">
 <area id="lib-datastream-cep" title="CEP: Complex Event Processing" href="{{ site.baseurl }}/dev/libs/cep.html" shape="rect" coords="63,0,143,177" />
-<area id="lib-datastream-table" title="Table: Relational DataStreams" href="{{ site.baseurl }}/dev/table_api.html" shape="rect" coords="143,0,223,177" />
+<area id="lib-datastream-table" title="Table: Relational DataStreams" href="{{ site.baseurl }}/dev/table/index.html" shape="rect" coords="143,0,223,177" />
 <area id="lib-dataset-ml" title="FlinkML: Machine Learning" href="{{ site.baseurl }}/dev/libs/ml/index.html" shape="rect" coords="382,2,462,176" />
 <area id="lib-dataset-gelly" title="Gelly: Graph Processing" href="{{ site.baseurl }}/dev/libs/gelly/index.html" shape="rect" coords="461,0,541,177" />
-<area id="lib-dataset-table" title="Table API and SQL" href="{{ site.baseurl }}/dev/table_api.html" shape="rect" coords="544,0,624,177" />
+<area id="lib-dataset-table" title="Table API and SQL" href="{{ site.baseurl }}/dev/table/index.html" shape="rect" coords="544,0,624,177" />
 <area id="datastream" title="DataStream API" href="{{ site.baseurl }}/dev/datastream_api.html" shape="rect" coords="64,177,379,255" />
 <area id="dataset" title="DataSet API" href="{{ site.baseurl }}/dev/batch/index.html" shape="rect" coords="382,177,697,255" />
 <area id="runtime" title="Runtime" href="{{ site.baseurl }}/concepts/runtime.html" shape="rect" coords="63,257,700,335" />
diff --git a/docs/ops/state/large_state_tuning.md b/docs/ops/state/large_state_tuning.md
index 9dae3b6..a98bb4a 100644
--- a/docs/ops/state/large_state_tuning.md
+++ b/docs/ops/state/large_state_tuning.md
@@ -85,7 +85,7 @@ To prevent such a situation, applications can define a *minimum duration between
 This duration is the minimum time interval that must pass between the end of the latest checkpoint and the beginning
 of the next. The figure below illustrates how this impacts checkpointing.
 
-<img src="../../fig/checkpoint_tuning.svg" class="center" width="80%" alt="Illustration how the minimum-time-between-checkpoints parameter affects checkpointing behavior."/>
+<img src="{{ site.baseurl }}/fig/checkpoint_tuning.svg" class="center" width="80%" alt="Illustration how the minimum-time-between-checkpoints parameter affects checkpointing behavior."/>
 
 *Note:* Applications can be configured (via the `CheckpointConfig`) to allow multiple checkpoints to be in progress at
 the same time. For applications with large state in Flink, this often ties up too many resources into the checkpointing.
@@ -295,7 +295,7 @@ Please note that this can come at some additional costs per checkpoint for creat
 chosen state backend and checkpointing strategy. For example, in most cases the implementation will simply duplicate the writes to the distributed
 store to a local file.
 
-<img src="../../fig/local_recovery.png" class="center" width="80%" alt="Illustration of checkpointing with task-local recovery."/>
+<img src="{{ site.baseurl }}/fig/local_recovery.png" class="center" width="80%" alt="Illustration of checkpointing with task-local recovery."/>
 
 ### Relationship of primary (distributed store) and secondary (task-local) state snapshots
 
diff --git a/docs/ops/state/large_state_tuning.zh.md b/docs/ops/state/large_state_tuning.zh.md
index 8fa4acb..e6b0498 100644
--- a/docs/ops/state/large_state_tuning.zh.md
+++ b/docs/ops/state/large_state_tuning.zh.md
@@ -85,7 +85,7 @@ To prevent such a situation, applications can define a *minimum duration between
 This duration is the minimum time interval that must pass between the end of the latest checkpoint and the beginning
 of the next. The figure below illustrates how this impacts checkpointing.
 
-<img src="../../fig/checkpoint_tuning.svg" class="center" width="80%" alt="Illustration how the minimum-time-between-checkpoints parameter affects checkpointing behavior."/>
+<img src="{{ site.baseurl }}/fig/checkpoint_tuning.svg" class="center" width="80%" alt="Illustration how the minimum-time-between-checkpoints parameter affects checkpointing behavior."/>
 
 *Note:* Applications can be configured (via the `CheckpointConfig`) to allow multiple checkpoints to be in progress at
 the same time. For applications with large state in Flink, this often ties up too many resources into the checkpointing.
@@ -295,7 +295,7 @@ Please note that this can come at some additional costs per checkpoint for creat
 chosen state backend and checkpointing strategy. For example, in most cases the implementation will simply duplicate the writes to the distributed
 store to a local file.
 
-<img src="../../fig/local_recovery.png" class="center" width="80%" alt="Illustration of checkpointing with task-local recovery."/>
+<img src="{{ site.baseurl }}/fig/local_recovery.png" class="center" width="80%" alt="Illustration of checkpointing with task-local recovery."/>
 
 ### Relationship of primary (distributed store) and secondary (task-local) state snapshots
 
diff --git a/docs/redirects/linking_with_optional_modules.md b/docs/redirects/linking_with_optional_modules.md
index 2eba074..6ee68d3 100644
--- a/docs/redirects/linking_with_optional_modules.md
+++ b/docs/redirects/linking_with_optional_modules.md
@@ -2,7 +2,7 @@
 title: "Linking with Optional Modules"
 layout: redirect
 redirect: /dev/projectsetup/dependencies.html
-permalink: /dev/linking.html
+permalink: /dev/projectsetup/dependencies.html
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
diff --git a/docs/release-notes/flink-1.5.zh.md b/docs/release-notes/flink-1.5.zh.md
new file mode 100644
index 0000000..ed5f2c2
--- /dev/null
+++ b/docs/release-notes/flink-1.5.zh.md
@@ -0,0 +1,92 @@
+---
+title: "Release Notes - Flink 1.5"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.4 and Flink 1.5. Please read these notes carefully if you are planning to upgrade your Flink version to 1.5.
+
+### Update Configuration for Reworked Job Deployment
+
+Flink’s reworked cluster and job deployment component improves the integration with resource managers and enables dynamic resource allocation. One result of these changes is that you no longer have to specify the number of containers when submitting applications to YARN and Mesos. Flink will automatically determine the number of containers from the parallelism of the application.
+
+Although the deployment logic was completely reworked, we aimed to not unnecessarily change the previous behavior to enable a smooth transition. Nonetheless, there are a few options that you should update in your `conf/flink-conf.yaml` or know about. 
+
+* The allocation of TaskManagers with multiple slots is not fully supported yet. Therefore, we recommend configuring TaskManagers with a single slot, i.e., set `taskmanager.numberOfTaskSlots: 1`.
+* If you observe any problems with the new deployment mode, you can always switch back to the pre-1.5 behavior by configuring `mode: legacy`.
+
+Please report any problems or possible improvements that you notice to the Flink community, either by posting to a mailing list or by opening a JIRA issue.
+
+*Note*: We plan to remove the legacy mode in the next release. 
+
+### Update Configuration for Reworked Network Stack
+
+The changes to the network stack for credit-based flow control and improved latency affect the configuration of network buffers. In a nutshell, the network stack can require more memory to run applications. Hence, you might need to adjust the network configuration of your Flink setup.
+
+There are two ways to address problems of job submissions that fail due to lack of network buffers.
+
+* Reduce the number of buffers per channel, i.e., `taskmanager.network.memory.buffers-per-channel` or
+* Increase the amount of TaskManager memory that is used by the network stack, i.e., increase `taskmanager.network.memory.fraction` and/or `taskmanager.network.memory.max`.
+
+Please consult the section about [network buffer configuration]({{ site.baseurl }}/ops/config.html#configuring-the-network-buffers) in the Flink documentation for details. In case you experience issues with the new credit-based flow control mode, you can disable flow control by setting `taskmanager.network.credit-model: false`. 
+
+*Note*: We plan to remove the old model and this configuration in the next release.
+
+### Hadoop Classpath Discovery
+
+We removed the automatic Hadoop classpath discovery via the Hadoop binary. If you want Flink to pick up the Hadoop classpath, you have to export `HADOOP_CLASSPATH`. On cloud environments and most Hadoop distributions, you would run:
+
+```
+export HADOOP_CLASSPATH=`hadoop classpath`
+```
+
+### Breaking Changes of the REST API
+
+In an effort to harmonize, extend, and improve the REST API, a few handlers and return values were changed.
+
+* The jobs overview handler is now registered under `/jobs/overview` (before `/joboverview`) and returns a list of job details instead of the pre-grouped view of running, finished, cancelled and failed jobs. 
+* The REST API to cancel a job was changed.
+* The REST API to cancel a job with savepoint was changed. 
+
+Please check the [REST API documentation]({{ site.baseurl }}/monitoring/rest_api.html#available-requests) for details.
+
+### Kafka Producer Flushes on Checkpoint by Default
+
+The Flink Kafka Producer now flushes on checkpoints by default. Prior to version 1.5, the behaviour was disabled by default and users had to explicitly call `setFlushOnCheckpoints(true)` on the producer to enable it.
+
+### Updated Kinesis Dependency
+
+The Kinesis dependencies of Flink’s Kinesis connector have been updated to the following versions.
+
+```
+<aws.sdk.version>1.11.319</aws.sdk.version>
+<aws.kinesis-kcl.version>1.9.0</aws.kinesis-kcl.version>
+<aws.kinesis-kpl.version>0.12.9</aws.kinesis-kpl.version>
+```
+
+<!-- Remove once FLINK-10712 has been fixed -->
+### Limitations of failover strategies
+Flink's non-default failover strategies are still a very experimental feature that comes with a set of limitations.
+You should only use this feature if you are executing a stateless streaming job.
+In all other cases, it is highly recommended to remove the config option `jobmanager.execution.failover-strategy` from your `flink-conf.yaml` or set it to `"full"`.
+
+In order to avoid future problems, this feature has been removed from the documentation until it is fixed.
+See [FLINK-10880](https://issues.apache.org/jira/browse/FLINK-10880) for more details.
+
+{% top %}
diff --git a/docs/release-notes/flink-1.6.zh.md b/docs/release-notes/flink-1.6.zh.md
new file mode 100644
index 0000000..7c22b3f
--- /dev/null
+++ b/docs/release-notes/flink-1.6.zh.md
@@ -0,0 +1,43 @@
+---
+title: "Release Notes - Flink 1.6"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.5 and Flink 1.6. Please read these notes carefully if you are planning to upgrade your Flink version to 1.6.
+
+### Changed Configuration Default Values
+
+The default value of the slot idle timeout `slot.idle.timeout` is set to the default value of the heartbeat timeout (`50 s`). 
+
+### Changed ElasticSearch 5.x Sink API
+
+Previous APIs in the Flink ElasticSearch 5.x Sink's `RequestIndexer` interface have been deprecated in favor of new signatures. 
+When adding requests to the `RequestIndexer`, the requests must now be of type `IndexRequest`, `DeleteRequest`, or `UpdateRequest`, instead of the base `ActionRequest`.
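+
+For illustration, a minimal sketch of an `ElasticsearchSinkFunction` using the new signatures might look as follows (the index and type names are placeholders):
+
+```
+import java.util.HashMap;
+import java.util.Map;
+
+import org.apache.flink.api.common.functions.RuntimeContext;
+import org.apache.flink.streaming.connectors.elasticsearch.ElasticsearchSinkFunction;
+import org.apache.flink.streaming.connectors.elasticsearch.RequestIndexer;
+import org.elasticsearch.action.index.IndexRequest;
+import org.elasticsearch.client.Requests;
+
+public class ExampleEsSink implements ElasticsearchSinkFunction<String> {
+
+    @Override
+    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
+        Map<String, String> json = new HashMap<>();
+        json.put("data", element);
+
+        // Build a concrete IndexRequest; the new RequestIndexer#add overloads
+        // accept IndexRequest, DeleteRequest, and UpdateRequest instead of ActionRequest.
+        IndexRequest request = Requests.indexRequest()
+                .index("my-index")   // placeholder index name
+                .type("my-type")     // placeholder type name
+                .source(json);
+
+        indexer.add(request);
+    }
+}
+```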
+
+<!-- Remove once FLINK-10712 has been fixed -->
+### Limitations of failover strategies
+Flink's non-default failover strategies are still a very experimental feature that comes with a set of limitations.
+You should only use this feature if you are executing a stateless streaming job.
+In all other cases, it is highly recommended to remove the config option `jobmanager.execution.failover-strategy` from your `flink-conf.yaml` or set it to `"full"`.
+
+In order to avoid future problems, this feature has been removed from the documentation until it is fixed.
+See [FLINK-10880](https://issues.apache.org/jira/browse/FLINK-10880) for more details. 
+
+{% top %}
diff --git a/docs/release-notes/flink-1.7.zh.md b/docs/release-notes/flink-1.7.zh.md
new file mode 100644
index 0000000..507436b
--- /dev/null
+++ b/docs/release-notes/flink-1.7.zh.md
@@ -0,0 +1,153 @@
+---
+title: "Release Notes - Flink 1.7"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+These release notes discuss important aspects, such as configuration, behavior, or dependencies, that changed between Flink 1.6 and Flink 1.7. Please read these notes carefully if you are planning to upgrade your Flink version to 1.7.
+
+### Scala 2.12 support
+
+When using Scala `2.12` you might have to add explicit type annotations in places where they were not required when using Scala `2.11`.
+This is an excerpt from the `TransitiveClosureNaive.scala` example in the Flink code base that shows the changes that could be required.
+
+Previous code:
+```
+val terminate = prevPaths
+ .coGroup(nextPaths)
+ .where(0).equalTo(0) {
+   (prev, next, out: Collector[(Long, Long)]) => {
+     val prevPaths = prev.toSet
+     for (n <- next)
+       if (!prevPaths.contains(n)) out.collect(n)
+   }
+}
+```
+
+With Scala `2.12` you have to change it to:
+```
+val terminate = prevPaths
+ .coGroup(nextPaths)
+ .where(0).equalTo(0) {
+   (prev: Iterator[(Long, Long)], next: Iterator[(Long, Long)], out: Collector[(Long, Long)]) => {
+       val prevPaths = prev.toSet
+       for (n <- next)
+         if (!prevPaths.contains(n)) out.collect(n)
+     }
+}
+```
+
+The reason for this is that Scala `2.12` changes how lambdas are implemented.
+They now use the Java 8 lambda support based on SAM interfaces.
+This makes some method calls ambiguous because both Scala-style lambdas and SAMs are now candidates for methods where it was previously clear which method would be invoked.
+
+### State evolution
+
+Before Flink 1.7, serializer snapshots were implemented as a `TypeSerializerConfigSnapshot` (which is now deprecated and will eventually be removed, to be fully replaced by the new `TypeSerializerSnapshot` interface introduced in 1.7).
+Moreover, the responsibility of serializer schema compatibility checks lived within the `TypeSerializer`, implemented in the `TypeSerializer#ensureCompatibility(TypeSerializerConfigSnapshot)` method. 
+
+To be future-proof and to have flexibility to migrate your state serializers and schema, it is highly recommended to migrate from the old abstractions. 
+Details and migration guides can be found [here](https://ci.apache.org/projects/flink/flink-docs-master/dev/stream/state/custom_serialization.html).
+
+### Removal of the legacy mode
+
+Flink no longer supports the legacy mode.
+If you depend on this, then please use Flink `1.6.x`.
+
+### Savepoints being used for recovery
+
+Savepoints are now used for recovery.
+Previously, when using an exactly-once sink, one could run into problems with duplicate output data when a failure occurred after a savepoint was taken but before the next checkpoint.
+As a result, savepoints are no longer exclusively under the control of the user.
+A savepoint should not be moved or deleted if no newer checkpoint or savepoint has been taken.
+
+### MetricQueryService runs in separate thread pool
+
+The metric query service now runs in its own `ActorSystem`.
+Consequently, it needs to open a new port for the query services to communicate with each other.
+The [query service port]({{site.baseurl}}/ops/config.html#metrics-internal-query-service-port) can be configured in `flink-conf.yaml`.
+
+### Granularity of latency metrics
+
+The default granularity for latency metrics has been modified.
+To restore the previous behavior, users have to explicitly set the [granularity]({{site.baseurl}}/ops/config.html#metrics-latency-granularity) to `subtask`.
+
+### Latency marker activation
+
+Latency metrics are now disabled by default, which will affect all jobs that do not explicitly set the `latencyTrackingInterval` via `ExecutionConfig#setLatencyTrackingInterval`.
+To restore the previous default behavior, users have to configure the [latency interval]({{site.baseurl}}/ops/config.html#metrics-latency-interval) in `flink-conf.yaml`.
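+
+For example, latency tracking could be re-enabled for a single job as sketched below (the interval value is an arbitrary example):
+
+```
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// Emit latency markers every 2000 ms for this job.
+env.getConfig().setLatencyTrackingInterval(2000L);
+```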
+
+### Relocation of Hadoop's Netty dependency
+
+We now also relocate Hadoop's Netty dependency from `io.netty` to `org.apache.flink.hadoop.shaded.io.netty`.
+You can now bundle your own version of Netty into your job but may no longer assume that `io.netty` is present in the `flink-shaded-hadoop2-uber-*.jar` file.
+
+### Local recovery fixed
+
+With the improvements to Flink's scheduling, recoveries can no longer require more slots than before when local recovery is enabled.
+Consequently, we encourage our users to enable [local recovery]({{site.baseurl}}/ops/config.html#state-backend-local-recovery) in `flink-conf.yaml`.
+
+### Support for multi slot TaskManagers
+
+Flink now properly supports `TaskManagers` with multiple slots.
+Consequently, `TaskManagers` can now be started with an arbitrary number of slots, and the earlier recommendation to configure them with a single slot no longer applies.
+
+### StandaloneJobClusterEntrypoint generates JobGraph with fixed JobID
+
+The `StandaloneJobClusterEntrypoint`, which is launched by the script `standalone-job.sh` and used for the job-mode container images, now starts all jobs with a fixed `JobID`.
+Thus, in order to run a cluster in HA mode, one needs to set a different [cluster id]({{site.baseurl}}/ops/config.html#high-availability-cluster-id) for each job/cluster. 
+
+<!-- Should be removed once FLINK-10911 is fixed -->
+### Scala shell does not work with Scala 2.12
+
+Flink's Scala shell does not work with Scala 2.12.
+Therefore, the module `flink-scala-shell` is not being released for Scala 2.12.
+
+See [FLINK-10911](https://issues.apache.org/jira/browse/FLINK-10911) for more details.  
+
+<!-- Remove once FLINK-10712 has been fixed -->
+### Limitations of failover strategies
+Flink's non-default failover strategies are still a very experimental feature that comes with a set of limitations.
+You should only use this feature if you are executing a stateless streaming job.
+In all other cases, it is highly recommended to remove the config option `jobmanager.execution.failover-strategy` from your `flink-conf.yaml` or set it to `"full"`.
+
+In order to avoid future problems, this feature has been removed from the documentation until it is fixed.
+See [FLINK-10880](https://issues.apache.org/jira/browse/FLINK-10880) for more details.
+
+### SQL over window preceding clause
+
+The over window `preceding` clause is now optional.
+It defaults to `UNBOUNDED` if not specified.
+
+### OperatorSnapshotUtil writes v2 snapshots
+
+Snapshots created with `OperatorSnapshotUtil` are now written in the savepoint format `v2`.
+
+### SBT projects and the MiniClusterResource
+
+If you have an `sbt` project which uses the `MiniClusterResource`, you now have to add the `flink-runtime` test-jar dependency explicitly via:
+
+`libraryDependencies += "org.apache.flink" %% "flink-runtime" % flinkVersion % Test classifier "tests"`
+
+The reason for this is that the `MiniClusterResource` has been moved from `flink-test-utils` to `flink-runtime`.
+The `flink-test-utils` module correctly has a `test-jar` dependency on `flink-runtime`.
+However, `sbt` does not properly pull in transitive `test-jar` dependencies as described in this [sbt issue](https://github.com/sbt/sbt/issues/2964).
+Consequently, it is necessary to specify the `test-jar` dependency explicitly.
+
+{% top %}
diff --git a/docs/release-notes/flink-1.8.zh.md b/docs/release-notes/flink-1.8.zh.md
new file mode 100644
index 0000000..c9fbdad
--- /dev/null
+++ b/docs/release-notes/flink-1.8.zh.md
@@ -0,0 +1,206 @@
+---
+title: "Release Notes - Flink 1.8"
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+These release notes discuss important aspects, such as configuration, behavior,
+or dependencies, that changed between Flink 1.7 and Flink 1.8. Please read
+these notes carefully if you are planning to upgrade your Flink version to 1.8.
+
+* This will be replaced by the TOC
+{:toc}
+
+### State
+
+#### Continuous incremental cleanup of old Keyed State with TTL
+
+We introduced TTL (time-to-live) for keyed state in Flink 1.6
+([FLINK-9510](https://issues.apache.org/jira/browse/FLINK-9510)). This feature
+allowed expired keyed state entries to be cleaned up and made inaccessible
+when they were accessed. In addition, state is now also cleaned up when
+writing a savepoint/checkpoint.
+
+Flink 1.8 introduces continuous cleanup of old entries for both the RocksDB
+state backend
+([FLINK-10471](https://issues.apache.org/jira/browse/FLINK-10471)) and the heap
+state backend
+([FLINK-10473](https://issues.apache.org/jira/browse/FLINK-10473)). This means
+that old entries (according to the TTL setting) are continuously cleaned up.
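+
+As a reminder, TTL is attached to keyed state via a `StateTtlConfig` on the
+state descriptor; the continuous cleanup applies to state configured along
+these lines (a minimal sketch, with an arbitrary state name and TTL duration):
+
+```
+import org.apache.flink.api.common.state.StateTtlConfig;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.api.common.time.Time;
+
+// Expire entries 10 minutes after they were last written.
+StateTtlConfig ttlConfig = StateTtlConfig
+        .newBuilder(Time.minutes(10))
+        .build();
+
+ValueStateDescriptor<String> descriptor =
+        new ValueStateDescriptor<>("last-event", String.class);
+descriptor.enableTimeToLive(ttlConfig);
+```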
+
+#### New Support for Schema Migration when restoring Savepoints
+
+With Flink 1.7.0 we added support for changing the schema of state when using
+the `AvroSerializer`
+([FLINK-10605](https://issues.apache.org/jira/browse/FLINK-10605)). With Flink
+1.8.0 we made great progress migrating all built-in `TypeSerializers` to a new
+serializer snapshot abstraction that theoretically allows schema migration. Of
+the serializers that come with Flink, we now support schema migration for the
+`PojoSerializer`
+([FLINK-11485](https://issues.apache.org/jira/browse/FLINK-11485)) and the Java
+`EnumSerializer`
+([FLINK-11334](https://issues.apache.org/jira/browse/FLINK-11334)), as well as
+for Kryo in limited cases
+([FLINK-11323](https://issues.apache.org/jira/browse/FLINK-11323)).
+
+#### Savepoint compatibility
+
+Savepoints from Flink 1.2 that contain a Scala `TraversableSerializer`
+are not compatible with Flink 1.8 anymore because of an update in this
+serializer
+([FLINK-11539](https://issues.apache.org/jira/browse/FLINK-11539)). You
+can get around this restriction by first upgrading to a version
+between Flink 1.3 and Flink 1.7 and then updating to Flink 1.8.
+
+#### RocksDB version bump and switch to FRocksDB ([FLINK-10471](https://issues.apache.org/jira/browse/FLINK-10471))
+
+We needed to switch to a custom build of RocksDB called FRocksDB because we
+needed certain changes in RocksDB to support continuous state cleanup with
+TTL. The FRocksDB build is based on the upgraded version 5.17.2 of
+RocksDB. For Mac OS X, RocksDB version 5.17.2 is supported only for OS X
+version >= 10.13. See also: https://github.com/facebook/rocksdb/issues/4862.
+
+### Maven Dependencies
+
+#### Changes to bundling of Hadoop libraries with Flink ([FLINK-11266](https://issues.apache.org/jira/browse/FLINK-11266))
+
+Convenience binaries that include Hadoop are no longer released.
+
+If a deployment relies on `flink-shaded-hadoop2` being included in
+`flink-dist`, then you must manually download a pre-packaged Hadoop
+jar from the optional components section of the [download
+page](https://flink.apache.org/downloads.html) and copy it into the
+`/lib` directory. Alternatively, a Flink distribution that includes
+Hadoop can be built by packaging `flink-dist` and activating the
+`include-hadoop` Maven profile.
+
+As Hadoop is no longer included in `flink-dist` by default, specifying
+`-DwithoutHadoop` when packaging `flink-dist` no longer impacts the build.
+
+### Configuration
+
+#### TaskManager configuration ([FLINK-11716](https://issues.apache.org/jira/browse/FLINK-11716))
+
+`TaskManagers` now bind to the host IP address instead of the hostname
+by default. This behaviour can be controlled by the configuration
+option `taskmanager.network.bind-policy`. If your Flink cluster should
+experience inexplicable connection problems after upgrading, try to
+set `taskmanager.network.bind-policy: name` in your `flink-conf.yaml`
+to return to the pre-1.8 behaviour.
+
+### Table API
+
+#### Deprecation of direct `Table` constructor usage ([FLINK-11447](https://issues.apache.org/jira/browse/FLINK-11447))
+
+Flink 1.8 deprecates direct usage of the constructor of the `Table` class in
+the Table API. This constructor was previously used to perform a join with
+a _lateral table_. You should now use `table.joinLateral()` or
+`table.leftOuterJoinLateral()` instead.
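+
+For example, a lateral join could be expressed as sketched below, assuming a
+`TableFunction` has been registered under the name `split` and a table
+`Orders` with the referenced columns exists (all names are placeholders):
+
+```
+import org.apache.flink.table.api.Table;
+
+// tableEnv is a previously created (Stream)TableEnvironment.
+Table orders = tableEnv.scan("Orders");
+
+Table result = orders
+        .joinLateral("split(comment) as (word)")
+        .select("id, word");
+```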
+
+This change is necessary for converting the `Table` class into an interface,
+which will make the API more maintainable and cleaner in the future.
+
+#### Introduction of new CSV format descriptor ([FLINK-9964](https://issues.apache.org/jira/browse/FLINK-9964))
+
+This release introduces a new format descriptor for CSV files that is compliant
+with RFC 4180. The new descriptor is available as
+`org.apache.flink.table.descriptors.Csv`. For now, this can only be used
+together with the Kafka connector. The old descriptor is available as
+`org.apache.flink.table.descriptors.OldCsv` for use with file system
+connectors.
+
+#### Deprecation of static builder methods on TableEnvironment ([FLINK-11445](https://issues.apache.org/jira/browse/FLINK-11445))
+
+In order to separate the API from the actual implementation, the static methods
+`TableEnvironment.getTableEnvironment()` are deprecated. You should now use
+`Batch/StreamTableEnvironment.create()` instead.
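+
+For the streaming Java API, for instance, the switch could look as sketched
+below:
+
+```
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.table.api.java.StreamTableEnvironment;
+
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// Instead of the deprecated TableEnvironment.getTableEnvironment(env):
+StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);
+```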
+
+#### Change in the Maven modules of Table API ([FLINK-11064](https://issues.apache.org/jira/browse/FLINK-11064))
+
+Users that previously had a `flink-table` dependency need to update their
+dependencies to `flink-table-planner` and the appropriate
+`flink-table-api-*` module, depending on whether Java or Scala is used: one of
+`flink-table-api-java-bridge` or `flink-table-api-scala-bridge`.
+
+#### Change to External Catalog Table Builders ([FLINK-11522](https://issues.apache.org/jira/browse/FLINK-11522))
+
+`ExternalCatalogTable.builder()` is deprecated in favour of
+`ExternalCatalogTableBuilder()`.
+
+#### Change to naming of Table API connector jars ([FLINK-11026](https://issues.apache.org/jira/browse/FLINK-11026))
+
+The naming scheme for kafka/elasticsearch6 sql-jars has been changed.
+
+In Maven terms, they no longer have the `sql-jar` qualifier and the artifactId
+is now prefixed with `flink-sql` instead of `flink`, e.g.,
+`flink-sql-connector-kafka...`.
+
+#### Change to how Null Literals are specified ([FLINK-11785](https://issues.apache.org/jira/browse/FLINK-11785))
+
+Null literals in the Table API need to be defined with `nullOf(type)` instead
+of `Null(type)` from now on. The old approach is deprecated.
+
+### Connectors
+
+#### Introduction of a new KafkaDeserializationSchema that gives direct access to ConsumerRecord ([FLINK-8354](https://issues.apache.org/jira/browse/FLINK-8354))
+
+For the Flink `KafkaConsumers`, we introduced a new `KafkaDeserializationSchema`
+that gives direct access to the Kafka `ConsumerRecord`. This subsumes the
+`KeyedDeserializationSchema` functionality, which is deprecated but still available
+for now.
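+
+A minimal sketch of such a schema, assuming the record value is UTF-8 text
+and the key is ignored, could look like this:
+
+```
+import java.nio.charset.StandardCharsets;
+
+import org.apache.flink.api.common.typeinfo.TypeInformation;
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.streaming.connectors.kafka.KafkaDeserializationSchema;
+import org.apache.kafka.clients.consumer.ConsumerRecord;
+
+public class ValueOnlySchema implements KafkaDeserializationSchema<String> {
+
+    @Override
+    public boolean isEndOfStream(String nextElement) {
+        return false;
+    }
+
+    @Override
+    public String deserialize(ConsumerRecord<byte[], byte[]> record) {
+        // Direct access to the ConsumerRecord: key, value, topic, partition, offset, ...
+        return new String(record.value(), StandardCharsets.UTF_8);
+    }
+
+    @Override
+    public TypeInformation<String> getProducedType() {
+        return Types.STRING;
+    }
+}
+```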
+
+#### FlinkKafkaConsumer will now filter restored partitions based on topic specification ([FLINK-10342](https://issues.apache.org/jira/browse/FLINK-10342))
+
+Starting from Flink 1.8.0, the `FlinkKafkaConsumer` now always filters out
+restored partitions that are no longer associated with the topics specified
+for subscription in the restored execution. This behaviour did not exist in
+previous versions of the `FlinkKafkaConsumer`. If you wish to retain the
+previous behaviour, please use the
+`disableFilterRestoredPartitionsWithSubscribedTopics()` configuration method on
+the `FlinkKafkaConsumer`.
+
+Consider this example: if you had a Kafka Consumer that was consuming
+from topic `A`, you did a savepoint, then changed your Kafka consumer
+to instead consume from topic `B`, and then restarted your job from
+the savepoint. Before this change, your consumer would now consume
+from both topic `A` and `B` because it was stored in state that the
+consumer was consuming from topic `A`. With the change, your consumer
+would only consume from topic `B` after restore because we filter the
+topics that are stored in state using the configured topics.
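+
+To retain the previous behaviour, the method mentioned above can be called on
+the consumer before it is used, as sketched here (topic name, schema, and
+properties are placeholders):
+
+```
+import java.util.Properties;
+
+import org.apache.flink.api.common.serialization.SimpleStringSchema;
+import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
+
+Properties props = new Properties();
+props.setProperty("bootstrap.servers", "localhost:9092");  // placeholder
+props.setProperty("group.id", "example-group");            // placeholder
+
+FlinkKafkaConsumer<String> consumer =
+        new FlinkKafkaConsumer<>("B", new SimpleStringSchema(), props);
+
+// Keep consuming from partitions present in the restored state, even if
+// their topic is no longer in the list of subscribed topics.
+consumer.disableFilterRestoredPartitionsWithSubscribedTopics();
+```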
+
+### Miscellaneous Interface changes
+
+#### The canEqual() method was dropped from the TypeSerializer interface ([FLINK-9803](https://issues.apache.org/jira/browse/FLINK-9803))
+
+The `canEqual()` methods are usually used to make proper equality checks across
+hierarchies of types. The `TypeSerializer` actually doesn't require this
+property, so the method is now removed.
+
+#### Removal of the CompositeSerializerSnapshot utility class ([FLINK-11073](https://issues.apache.org/jira/browse/FLINK-11073))
+
+The `CompositeSerializerSnapshot` utility class has been removed. You should
+now use `CompositeTypeSerializerSnapshot` instead, for snapshots of composite
+serializers that delegate serialization to multiple nested serializers. Please
+see
+[here](/dev/stream/state/custom_serialization.html#implementing-a-compositetypeserializersnapshot)
+for instructions on using `CompositeTypeSerializerSnapshot`.
+
+{% top %}