Posted to commits@flink.apache.org by se...@apache.org on 2016/12/14 14:10:37 UTC

[3/4] flink git commit: [FLINK-5258] [docs] Reorganize the docs to improve navigation and reduce duplication

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/connectors/guarantees.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/guarantees.md b/docs/dev/connectors/guarantees.md
new file mode 100644
index 0000000..a75f0e0
--- /dev/null
+++ b/docs/dev/connectors/guarantees.md
@@ -0,0 +1,143 @@
+---
+title: "Fault Tolerance Guarantees of Data Sources and Sinks"
+nav-title: Fault Tolerance Guarantees
+nav-parent_id: connectors
+nav-pos: 0
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Flink's fault tolerance mechanism recovers programs in the presence of failures and
+continues to execute them. Such failures include machine hardware failures, network failures,
+transient program failures, etc.
+
+Flink can guarantee exactly-once state updates to user-defined state only when the source participates in the
+snapshotting mechanism. The following table lists the state update guarantees of Flink coupled with the bundled connectors.
+
+Please read the documentation of each connector to understand the details of the fault tolerance guarantees.
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 25%">Source</th>
+      <th class="text-left" style="width: 25%">Guarantees</th>
+      <th class="text-left">Notes</th>
+    </tr>
+   </thead>
+   <tbody>
+        <tr>
+            <td>Apache Kafka</td>
+            <td>exactly once</td>
+            <td>Use the appropriate Kafka connector for your version</td>
+        </tr>
+        <tr>
+            <td>AWS Kinesis Streams</td>
+            <td>exactly once</td>
+            <td></td>
+        </tr>
+        <tr>
+            <td>RabbitMQ</td>
+            <td>at most once (v 0.10) / exactly once (v 1.0) </td>
+            <td></td>
+        </tr>
+        <tr>
+            <td>Twitter Streaming API</td>
+            <td>at most once</td>
+            <td></td>
+        </tr>
+        <tr>
+            <td>Collections</td>
+            <td>exactly once</td>
+            <td></td>
+        </tr>
+        <tr>
+            <td>Files</td>
+            <td>exactly once</td>
+            <td></td>
+        </tr>
+        <tr>
+            <td>Sockets</td>
+            <td>at most once</td>
+            <td></td>
+        </tr>
+  </tbody>
+</table>
+
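+These guarantees assume that checkpointing is enabled in the job. As a minimal sketch
+(the 5 second checkpoint interval is illustrative, not a recommendation):
+
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// draw a consistent snapshot of the operator state every 5 seconds
+env.enableCheckpointing(5000);
+{% endhighlight %}
+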
+To guarantee end-to-end exactly-once record delivery (in addition to exactly-once state semantics), the data sink needs
+to take part in the checkpointing mechanism. The following table lists the delivery guarantees (assuming exactly-once
+state updates) of Flink coupled with bundled sinks:
+
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 25%">Sink</th>
+      <th class="text-left" style="width: 25%">Guarantees</th>
+      <th class="text-left">Notes</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+        <td>HDFS rolling sink</td>
+        <td>exactly once</td>
+        <td>Implementation depends on Hadoop version</td>
+    </tr>
+    <tr>
+        <td>Elasticsearch</td>
+        <td>at least once</td>
+        <td></td>
+    </tr>
+    <tr>
+        <td>Kafka producer</td>
+        <td>at least once</td>
+        <td></td>
+    </tr>
+    <tr>
+        <td>Cassandra sink</td>
+        <td>at least once / exactly once</td>
+        <td>exactly once only for idempotent updates</td>
+    </tr>
+    <tr>
+        <td>AWS Kinesis Streams</td>
+        <td>at least once</td>
+        <td></td>
+    </tr>
+    <tr>
+        <td>File sinks</td>
+        <td>at least once</td>
+        <td></td>
+    </tr>
+    <tr>
+        <td>Socket sinks</td>
+        <td>at least once</td>
+        <td></td>
+    </tr>
+    <tr>
+        <td>Standard output</td>
+        <td>at least once</td>
+        <td></td>
+    </tr>
+    <tr>
+        <td>Redis sink</td>
+        <td>at least once</td>
+        <td></td>
+    </tr>
+  </tbody>
+</table>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/connectors/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/index.md b/docs/dev/connectors/index.md
index 59b5e7b..5de5300 100644
--- a/docs/dev/connectors/index.md
+++ b/docs/dev/connectors/index.md
@@ -2,8 +2,8 @@
 title: "Streaming Connectors"
 nav-id: connectors
 nav-title: Connectors
-nav-parent_id: dev
-nav-pos: 7
+nav-parent_id: streaming
+nav-pos: 30
 nav-show_overview: true
 ---
 <!--

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/connectors/kafka.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kafka.md b/docs/dev/connectors/kafka.md
index e3dc821..0798f0b 100644
--- a/docs/dev/connectors/kafka.md
+++ b/docs/dev/connectors/kafka.md
@@ -82,7 +82,7 @@ Then, import the connector in your maven project:
 </dependency>
 {% endhighlight %}
 
-Note that the streaming connectors are currently not part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+Note that the streaming connectors are currently not part of the binary distribution. See how to link with them for cluster execution [here]({{ site.baseurl}}/dev/linking).
 
 ### Installing Apache Kafka
 

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/connectors/kinesis.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kinesis.md b/docs/dev/connectors/kinesis.md
index c54239d..480a97d 100644
--- a/docs/dev/connectors/kinesis.md
+++ b/docs/dev/connectors/kinesis.md
@@ -51,7 +51,7 @@ mvn clean install -Pinclude-kinesis -DskipTests
 
 
 The streaming connectors are not part of the binary distribution. See how to link with them for cluster
-execution [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+execution [here]({{site.baseurl}}/dev/linking).
 
 ### Using the Amazon Kinesis Streams Service
 Follow the instructions from the [Amazon Kinesis Streams Developer Guide](https://docs.aws.amazon.com/streams/latest/dev/learning-kinesis-module-one-create-stream.html)

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/connectors/nifi.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/nifi.md b/docs/dev/connectors/nifi.md
index 924a80b..bdbd808 100644
--- a/docs/dev/connectors/nifi.md
+++ b/docs/dev/connectors/nifi.md
@@ -37,7 +37,7 @@ following dependency to your project:
 
 Note that the streaming connectors are currently not part of the binary
 distribution. See
-[here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution)
+[here]({{site.baseurl}}/dev/linking)
 for information about how to package the program with the libraries for
 cluster execution.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/connectors/rabbitmq.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/rabbitmq.md b/docs/dev/connectors/rabbitmq.md
index 02def40..1b621c0 100644
--- a/docs/dev/connectors/rabbitmq.md
+++ b/docs/dev/connectors/rabbitmq.md
@@ -33,7 +33,7 @@ This connector provides access to data streams from [RabbitMQ](http://www.rabbit
 </dependency>
 {% endhighlight %}
 
-Note that the streaming connectors are currently not part of the binary distribution. See linking with them for cluster execution [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+Note that the streaming connectors are currently not part of the binary distribution. See linking with them for cluster execution [here]({{site.baseurl}}/dev/linking).
 
 #### Installing RabbitMQ
 Follow the instructions from the [RabbitMQ download page](http://www.rabbitmq.com/download.html). After the installation the server automatically starts, and the application connecting to RabbitMQ can be launched.

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/connectors/redis.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/redis.md b/docs/dev/connectors/redis.md
index a987b90..0e3287d 100644
--- a/docs/dev/connectors/redis.md
+++ b/docs/dev/connectors/redis.md
@@ -35,7 +35,7 @@ following dependency to your project:
 {% endhighlight %}
 Version Compatibility: This module is compatible with Redis 2.8.5.
 
-Note that the streaming connectors are currently not part of the binary distribution. You need to link them for cluster execution [explicitly]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+Note that the streaming connectors are currently not part of the binary distribution. You need to link them for cluster execution [explicitly]({{site.baseurl}}/dev/linking).
 
 #### Installing Redis
 Follow the instructions from the [Redis download page](http://redis.io/download).

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/connectors/twitter.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/twitter.md b/docs/dev/connectors/twitter.md
index e92e51d..0ccbbff 100644
--- a/docs/dev/connectors/twitter.md
+++ b/docs/dev/connectors/twitter.md
@@ -36,7 +36,7 @@ To use this connector, add the following dependency to your project:
 {% endhighlight %}
 
 Note that the streaming connectors are currently not part of the binary distribution.
-See linking with them for cluster execution [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+See linking with them for cluster execution [here]({{site.baseurl}}/dev/linking).
 
 #### Authentication
 In order to connect to the Twitter stream the user has to register their program and acquire the necessary information for the authentication. The process is described below.

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/custom_serializers.md
----------------------------------------------------------------------
diff --git a/docs/dev/custom_serializers.md b/docs/dev/custom_serializers.md
new file mode 100644
index 0000000..2b72ca0
--- /dev/null
+++ b/docs/dev/custom_serializers.md
@@ -0,0 +1,112 @@
+---
+title: Register a custom serializer for your Flink program
+nav-title: Custom Serializers
+nav-parent_id: types
+nav-pos: 10
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+If you use a custom type in your Flink program which cannot be serialized by the
+Flink type serializer, Flink falls back to using the generic Kryo
+serializer. You may register your own serializer or a serialization system like
+Google Protobuf or Apache Thrift with Kryo. To do that, simply register the type
+class and the serializer in the `ExecutionConfig` of your Flink program.
+
+
+{% highlight java %}
+final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// register the class of the serializer as serializer for a type
+env.getConfig().registerTypeWithKryoSerializer(MyCustomType.class, MyCustomSerializer.class);
+
+// register an instance as serializer for a type
+MySerializer mySerializer = new MySerializer();
+env.getConfig().registerTypeWithKryoSerializer(MyCustomType.class, mySerializer);
+{% endhighlight %}
+
+Note that your custom serializer has to extend Kryo's Serializer class. In the
+case of Google Protobuf or Apache Thrift, this has already been done for
+you:
+
+{% highlight java %}
+
+final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// register the Google Protobuf serializer with Kryo
+env.getConfig().registerTypeWithKryoSerializer(MyCustomType.class, ProtobufSerializer.class);
+
+// register the serializer included with Apache Thrift as the standard serializer
+// TBaseSerializer states it should be initialized as a default Kryo serializer
+env.getConfig().addDefaultKryoSerializer(MyCustomType.class, TBaseSerializer.class);
+
+{% endhighlight %}
+
+For the above example to work, you need to include the necessary dependencies in
+your Maven project file (pom.xml). In the dependency section, add the following
+for Apache Thrift:
+
+{% highlight xml %}
+
+<dependency>
+	<groupId>com.twitter</groupId>
+	<artifactId>chill-thrift</artifactId>
+	<version>0.5.2</version>
+</dependency>
+<!-- libthrift is required by chill-thrift -->
+<dependency>
+	<groupId>org.apache.thrift</groupId>
+	<artifactId>libthrift</artifactId>
+	<version>0.6.1</version>
+	<exclusions>
+		<exclusion>
+			<groupId>javax.servlet</groupId>
+			<artifactId>servlet-api</artifactId>
+		</exclusion>
+		<exclusion>
+			<groupId>org.apache.httpcomponents</groupId>
+			<artifactId>httpclient</artifactId>
+		</exclusion>
+	</exclusions>
+</dependency>
+
+{% endhighlight %}
+
+For Google Protobuf you need the following Maven dependency:
+
+{% highlight xml %}
+
+<dependency>
+	<groupId>com.twitter</groupId>
+	<artifactId>chill-protobuf</artifactId>
+	<version>0.5.2</version>
+</dependency>
+<!-- We need protobuf for chill-protobuf -->
+<dependency>
+	<groupId>com.google.protobuf</groupId>
+	<artifactId>protobuf-java</artifactId>
+	<version>2.5.0</version>
+</dependency>
+
+{% endhighlight %}
+
+
+Please adjust the versions of both libraries as needed.
+
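+For a serializer you write yourself (rather than the Protobuf/Thrift serializers above),
+the following is a minimal sketch, assuming a hypothetical `MyCustomType` with a single
+`String` field exposed via `getName()`:
+
+{% highlight java %}
+import com.esotericsoftware.kryo.Kryo;
+import com.esotericsoftware.kryo.Serializer;
+import com.esotericsoftware.kryo.io.Input;
+import com.esotericsoftware.kryo.io.Output;
+
+public class MyCustomSerializer extends Serializer<MyCustomType> {
+
+    @Override
+    public void write(Kryo kryo, Output output, MyCustomType object) {
+        // write the fields of the object in a fixed order
+        output.writeString(object.getName());
+    }
+
+    @Override
+    public MyCustomType read(Kryo kryo, Input input, Class<MyCustomType> type) {
+        // read the fields back in the same order
+        return new MyCustomType(input.readString());
+    }
+}
+{% endhighlight %}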
+

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/datastream_api.md
----------------------------------------------------------------------
diff --git a/docs/dev/datastream_api.md b/docs/dev/datastream_api.md
index 1b167ac..85866f7 100644
--- a/docs/dev/datastream_api.md
+++ b/docs/dev/datastream_api.md
@@ -1,8 +1,10 @@
 ---
 title: "Flink DataStream API Programming Guide"
 nav-title: Streaming (DataStream API)
-nav-parent_id: apis
-nav-pos: 2
+nav-id: streaming
+nav-parent_id: dev
+nav-show_overview: true
+nav-pos: 10
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -208,7 +210,7 @@ dataStream.filter(new FilterFunction<Integer>() {
           <td><strong>KeyBy</strong><br>DataStream &rarr; KeyedStream</td>
           <td>
             <p>Logically partitions a stream into disjoint partitions, each partition containing elements of the same key.
-            Internally, this is implemented with hash partitioning. See <a href="#specifying-keys">keys</a> on how to specify keys.
+            Internally, this is implemented with hash partitioning. See <a href="/dev/api_concepts#specifying-keys">keys</a> on how to specify keys.
             This transformation returns a KeyedDataStream.</p>
     {% highlight java %}
 dataStream.keyBy("someKey") // Key by field "someKey"
@@ -595,7 +597,7 @@ dataStream.filter { _ != 0 }
           <td><strong>KeyBy</strong><br>DataStream &rarr; KeyedStream</td>
           <td>
             <p>Logically partitions a stream into disjoint partitions, each partition containing elements of the same key.
-            Internally, this is implemented with hash partitioning. See <a href="#specifying-keys">keys</a> on how to specify keys.
+            Internally, this is implemented with hash partitioning. See <a href="/dev/api_concepts#specifying-keys">keys</a> on how to specify keys.
             This transformation returns a KeyedDataStream.</p>
     {% highlight scala %}
 dataStream.keyBy("someKey") // Key by field "someKey"
@@ -1408,8 +1410,8 @@ Collection-based:
 
 Custom:
 
-- `addSource` - Attache a new source function. For example, to read from Apache Kafka you can use
-    `addSource(new FlinkKafkaConsumer08<>(...))`. See [connectors]({{ site.baseurl }}/apis/streaming/connectors/) for more details.
+- `addSource` - Attach a new source function. For example, to read from Apache Kafka you can use
+    `addSource(new FlinkKafkaConsumer08<>(...))`. See [connectors]({{ site.baseurl }}/dev/connectors/) for more details.
 
 </div>
 </div>
@@ -1608,7 +1610,7 @@ Execution Parameters
 
 The `StreamExecutionEnvironment` contains the `ExecutionConfig` which allows to set job specific configuration values for the runtime.
 
-Please refer to [execution configuration]({{ site.baseurl }}/dev/api_concepts.html#execution-configuration)
+Please refer to [execution configuration]({{ site.baseurl }}/dev/execution_configuration)
 for an explanation of most parameters. These parameters pertain specifically to the DataStream API:
 
 - `enableTimestamps()` / **`disableTimestamps()`**: Attach a timestamp to each event emitted from a source.

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/event_time.md
----------------------------------------------------------------------
diff --git a/docs/dev/event_time.md b/docs/dev/event_time.md
index 7375a0f..5ab5feb 100644
--- a/docs/dev/event_time.md
+++ b/docs/dev/event_time.md
@@ -2,8 +2,8 @@
 title: "Event Time"
 nav-id: event_time
 nav-show_overview: true
-nav-parent_id: dev
-nav-pos: 4
+nav-parent_id: streaming
+nav-pos: 20
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/execution.md
----------------------------------------------------------------------
diff --git a/docs/dev/execution.md b/docs/dev/execution.md
new file mode 100644
index 0000000..4f613e0
--- /dev/null
+++ b/docs/dev/execution.md
@@ -0,0 +1,24 @@
+---
+title: "Managing Execution"
+nav-id: execution
+nav-parent_id: dev
+nav-pos: 60
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/execution_configuration.md
----------------------------------------------------------------------
diff --git a/docs/dev/execution_configuration.md b/docs/dev/execution_configuration.md
new file mode 100644
index 0000000..1f66058
--- /dev/null
+++ b/docs/dev/execution_configuration.md
@@ -0,0 +1,86 @@
+---
+title: "Execution Configuration"
+nav-parent_id: execution
+nav-pos: 10
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The `StreamExecutionEnvironment` contains the `ExecutionConfig`, which allows you to set job-specific configuration values for the runtime.
+To change the defaults that affect all jobs, see [Configuration]({{ site.baseurl }}/setup/config).
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+ExecutionConfig executionConfig = env.getConfig();
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+var executionConfig = env.getConfig
+{% endhighlight %}
+</div>
+</div>
+
+The following configuration options are available (defaults are shown in bold):
+
+- **`enableClosureCleaner()`** / `disableClosureCleaner()`. The closure cleaner is enabled by default. The closure cleaner removes unneeded references to the surrounding class of anonymous functions inside Flink programs.
+With the closure cleaner disabled, it might happen that an anonymous user function references the surrounding class, which is usually not Serializable. This will lead to exceptions from the serializer.
+
+- `getParallelism()` / `setParallelism(int parallelism)` Set the default parallelism for the job.
+
+- `getMaxParallelism()` / `setMaxParallelism(int parallelism)` Set the default maximum parallelism for the job. This setting determines the maximum degree of parallelism and specifies the upper limit for dynamic scaling.
+
+- `getNumberOfExecutionRetries()` / `setNumberOfExecutionRetries(int numberOfExecutionRetries)` Sets the number of times that failed tasks are re-executed. A value of zero effectively disables fault tolerance. A value of `-1` indicates that the system default value (as defined in the configuration) should be used.
+
+- `getExecutionRetryDelay()` / `setExecutionRetryDelay(long executionRetryDelay)` Sets the delay in milliseconds that the system waits after a job has failed, before re-executing it. The delay starts after all tasks have been successfully stopped on the TaskManagers, and once the delay has passed, the tasks are re-started. This parameter is useful to delay re-execution in order to let certain timeout-related failures surface fully (like broken connections that have not fully timed out) before attempting a re-execution and immediately failing again due to the same problem. This parameter only has an effect if the number of execution retries is one or more.
+
+- `getExecutionMode()` / `setExecutionMode()`. The default execution mode is PIPELINED. Sets the execution mode in which the program is executed. The execution mode defines whether data exchanges are performed in a batched or in a pipelined manner.
+
+- `enableForceKryo()` / **`disableForceKryo`**. Kryo is not forced by default. Forces the GenericTypeInformation to use the Kryo serializer for POJOs even though Flink could analyze them as a POJO. In some cases this might be preferable, for example when Flink's internal serializers fail to handle a POJO properly.
+
+- `enableForceAvro()` / **`disableForceAvro()`**. Avro is not forced by default. Forces the Flink AvroTypeInformation to use the Avro serializer instead of Kryo for serializing Avro POJOs.
+
+- `enableObjectReuse()` / **`disableObjectReuse()`** By default, objects are not reused in Flink. Enabling the object reuse mode will instruct the runtime to reuse user objects for better performance. Keep in mind that this can lead to bugs when the user-code function of an operation is not aware of this behavior.
+
+- **`enableSysoutLogging()`** / `disableSysoutLogging()` JobManager status updates are printed to `System.out` by default. This setting allows you to disable this behavior.
+
+- `getGlobalJobParameters()` / `setGlobalJobParameters()` This method allows users to set custom objects as a global configuration for the job. Since the `ExecutionConfig` is accessible in all user defined functions, this is an easy method for making configuration globally available in a job.
+
+- `addDefaultKryoSerializer(Class<?> type, Serializer<?> serializer)` Register a Kryo serializer instance for the given `type`.
+
+- `addDefaultKryoSerializer(Class<?> type, Class<? extends Serializer<?>> serializerClass)` Register a Kryo serializer class for the given `type`.
+
+- `registerTypeWithKryoSerializer(Class<?> type, Serializer<?> serializer)` Register the given type with Kryo and specify a serializer for it. By registering a type with Kryo, the serialization of the type will be much more efficient.
+
+- `registerKryoType(Class<?> type)` If the type ends up being serialized with Kryo, then it will be registered with Kryo to make sure that only tags (integer IDs) are written. If a type is not registered with Kryo, its entire class name will be serialized with every instance, leading to much higher I/O costs.
+
+- `registerPojoType(Class<?> type)` Registers the given type with the serialization stack. If the type is eventually serialized as a POJO, then the type is registered with the POJO serializer. If the type ends up being serialized with Kryo, then it will be registered with Kryo to make sure that only tags are written. If a type is not registered with Kryo, its entire class name will be serialized with every instance, leading to much higher I/O costs.
+
+Note that types registered with `registerKryoType()` are not available to Flink's Kryo serializer instance.
+
+- `disableAutoTypeRegistration()` Automatic type registration is enabled by default. It registers all types (including subtypes) used by the user code with Kryo and the POJO serializer.
+
+- `setTaskCancellationInterval(long interval)` Sets the interval (in milliseconds) to wait between consecutive attempts to cancel a running task. When a task is cancelled and the task thread does not terminate within a certain time, a new thread is created which periodically calls `interrupt()` on the task thread. This parameter refers to the time between consecutive calls to `interrupt()` and is set by default to **30000** milliseconds, or **30 seconds**.
+
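+For example, a few of these options set together (a sketch; `MyCustomType` is a
+hypothetical user class):
+
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+ExecutionConfig config = env.getConfig();
+
+// reuse objects between user function calls (mind the caveats above)
+config.enableObjectReuse();
+
+// register a frequently used type so Kryo writes only an integer tag for it
+config.registerKryoType(MyCustomType.class);
+
+// re-execute failed tasks up to three times
+config.setNumberOfExecutionRetries(3);
+{% endhighlight %}
+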
+The `RuntimeContext`, which is accessible in `Rich*` functions through the `getRuntimeContext()` method, also allows access to the `ExecutionConfig` in all user-defined functions.
+
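+A sketch of reading the global job parameters from inside a rich function (the class
+name is illustrative):
+
+{% highlight java %}
+import org.apache.flink.api.common.ExecutionConfig;
+import org.apache.flink.api.common.functions.RichMapFunction;
+
+public class ParameterAwareMapper extends RichMapFunction<String, String> {
+
+    @Override
+    public String map(String value) throws Exception {
+        // the ExecutionConfig of the job is available in every rich function
+        ExecutionConfig.GlobalJobParameters params =
+            getRuntimeContext().getExecutionConfig().getGlobalJobParameters();
+        // look up configuration values via params.toMap() as needed
+        return value;
+    }
+}
+{% endhighlight %}
+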
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/execution_plans.md
----------------------------------------------------------------------
diff --git a/docs/dev/execution_plans.md b/docs/dev/execution_plans.md
new file mode 100644
index 0000000..881c54e
--- /dev/null
+++ b/docs/dev/execution_plans.md
@@ -0,0 +1,80 @@
+---
+title: "Execution Plans"
+nav-parent_id: execution
+nav-pos: 40
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+Depending on various parameters such as data size or number of machines in the cluster, Flink's
+optimizer automatically chooses an execution strategy for your program. In many cases, it can be
+useful to know how exactly Flink will execute your program.
+
+__Plan Visualization Tool__
+
+Flink comes packaged with a visualization tool for execution plans. The HTML document containing
+the visualizer is located under ```tools/planVisualizer.html```. It takes a JSON representation of
+the job execution plan and visualizes it as a graph with complete annotations of execution
+strategies.
+
+The following code shows how to print the execution plan JSON from your program:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+...
+
+System.out.println(env.getExecutionPlan());
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = ExecutionEnvironment.getExecutionEnvironment
+
+...
+
+println(env.getExecutionPlan())
+{% endhighlight %}
+</div>
+</div>
+
+
+To visualize the execution plan, do the following:
+
+1. **Open** ```planVisualizer.html``` with your web browser,
+2. **Paste** the JSON string into the text field, and
+3. **Press** the draw button.
+
+After these steps, a detailed execution plan will be visualized.
+
+<img alt="A flink job execution graph." src="{{ site.baseurl }}/fig/plan_visualizer.png" width="80%">
+
+
+__Web Interface__
+
+Flink offers a web interface for submitting and executing jobs. The interface is part of the JobManager's
+web interface for monitoring, which by default runs on port 8081. Job submission via this interface requires
+that you have set `jobmanager.web.submit.enable: true` in `flink-conf.yaml`.
+
+You may specify program arguments before the job is executed. The plan visualization enables you to
+inspect the execution plan before executing the Flink job.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/index.md b/docs/dev/index.md
index 67916c1..8b96672 100644
--- a/docs/dev/index.md
+++ b/docs/dev/index.md
@@ -1,9 +1,9 @@
 ---
 title: "Application Development"
 nav-id: dev
-nav-title: '<i class="fa fa-code" aria-hidden="true"></i> Application Development'
+nav-title: '<i class="fa fa-code title maindish" aria-hidden="true"></i> Application Development'
 nav-parent_id: root
-nav-pos: 3
+nav-pos: 5
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/java8.md
----------------------------------------------------------------------
diff --git a/docs/dev/java8.md b/docs/dev/java8.md
index 3792e27..e98f748 100644
--- a/docs/dev/java8.md
+++ b/docs/dev/java8.md
@@ -1,7 +1,7 @@
 ---
 title: "Java 8"
-nav-parent_id: apis
-nav-pos: 105
+nav-parent_id: api-concepts
+nav-pos: 20
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/libraries.md
----------------------------------------------------------------------
diff --git a/docs/dev/libraries.md b/docs/dev/libraries.md
index dc22e97..586637b 100644
--- a/docs/dev/libraries.md
+++ b/docs/dev/libraries.md
@@ -2,7 +2,7 @@
 title: "Libraries"
 nav-id: libs
 nav-parent_id: dev
-nav-pos: 8
+nav-pos: 80
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/libs/cep.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/cep.md b/docs/dev/libs/cep.md
index d27cf9f..c30d37b 100644
--- a/docs/dev/libs/cep.md
+++ b/docs/dev/libs/cep.md
@@ -37,7 +37,7 @@ because these are used for comparing and matching events.
 
 ## Getting Started
 
-If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/api_concepts.html#linking-with-flink).
+If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink).
 Next, you have to add the FlinkCEP dependency to the `pom.xml` of your project.
 
 <div class="codetabs" markdown="1">
@@ -63,7 +63,7 @@ Next, you have to add the FlinkCEP dependency to the `pom.xml` of your project.
 </div>
 
 Note that FlinkCEP is currently not part of the binary distribution.
-See linking with it for cluster execution [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+See linking with it for cluster execution [here]({{site.baseurl}}/dev/linking).
 
 Now you can start writing your first CEP program using the pattern API.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/libs/gelly/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/gelly/index.md b/docs/dev/libs/gelly/index.md
index db7073f..0877e2f 100644
--- a/docs/dev/libs/gelly/index.md
+++ b/docs/dev/libs/gelly/index.md
@@ -62,7 +62,7 @@ Add the following dependency to your `pom.xml` to use Gelly.
 </div>
 </div>
 
-Note that Gelly is currently not part of the binary distribution. See linking with it for cluster execution [here]({{ site.baseurl }}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+Note that Gelly is currently not part of the binary distribution. See linking with it for cluster execution [here]({{ site.baseurl }}/dev/linking).
 
 The remaining sections provide a description of available methods and present several examples of how to use Gelly and how to mix it with the Flink DataSet API.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/libs/ml/index.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/index.md b/docs/dev/libs/ml/index.md
index d01e18e..dcd3e0a 100644
--- a/docs/dev/libs/ml/index.md
+++ b/docs/dev/libs/ml/index.md
@@ -68,7 +68,7 @@ FlinkML currently supports the following algorithms:
 You can check out our [quickstart guide](quickstart.html) for a comprehensive getting started
 example.
 
-If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/api_concepts.html#linking-with-flink).
+If you want to jump right in, you have to [set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink).
 Next, you have to add the FlinkML dependency to the `pom.xml` of your project.
 
 {% highlight xml %}
@@ -80,7 +80,7 @@ Next, you have to add the FlinkML dependency to the `pom.xml` of your project.
 {% endhighlight %}
 
 Note that FlinkML is currently not part of the binary distribution.
-See linking with it for cluster execution [here]({{site.baseurl}}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).
+See linking with it for cluster execution [here]({{site.baseurl}}/dev/linking).
 
 Now you can start solving your analysis task.
 The following code snippet shows how easy it is to train a multiple linear regression model.

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/libs/ml/quickstart.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/quickstart.md b/docs/dev/libs/ml/quickstart.md
index 7ba3ed5..29f2fec 100644
--- a/docs/dev/libs/ml/quickstart.md
+++ b/docs/dev/libs/ml/quickstart.md
@@ -55,7 +55,7 @@ through [principal components analysis](https://en.wikipedia.org/wiki/Principal_
 ## Linking with FlinkML
 
 In order to use FlinkML in your project, first you have to
-[set up a Flink program]({{ site.baseurl }}/dev/api_concepts.html#linking-with-flink).
+[set up a Flink program]({{ site.baseurl }}/dev/linking_with_flink).
 Next, you have to add the FlinkML dependency to the `pom.xml` of your project:
 
 {% highlight xml %}

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/linking.md
----------------------------------------------------------------------
diff --git a/docs/dev/linking.md b/docs/dev/linking.md
new file mode 100644
index 0000000..0592617
--- /dev/null
+++ b/docs/dev/linking.md
@@ -0,0 +1,94 @@
+---
+nav-title: "Linking with Optional Modules"
+title: "Linking with modules not contained in the binary distribution"
+nav-parent_id: start
+nav-pos: 10
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+The binary distribution contains jar packages in the `lib` folder that are automatically
+provided to the classpath of your distributed programs. Almost all Flink classes are
+located there, with a few exceptions such as the streaming connectors and some freshly
+added modules. To run code that depends on these modules, you need to make them accessible
+at runtime, for which we suggest two options:
+
+1. Either copy the required jar files to the `lib` folder on all of your TaskManagers.
+Note that you have to restart your TaskManagers afterwards.
+2. Or package them with your code.
+
+The latter approach is recommended as it respects the classloader management in Flink.
+
+### Packaging dependencies with your usercode with Maven
+
+To provide these dependencies, which are not included in the Flink distribution, we suggest two options with Maven.
+
+1. The maven assembly plugin builds a so-called uber-jar (executable jar) containing all your dependencies.
+The assembly configuration is straightforward, but the resulting jar might become bulky;
+a minimal configuration is sketched below. See [maven-assembly-plugin](http://maven.apache.org/plugins/maven-assembly-plugin/usage.html) for further information.
+2. The maven unpack plugin unpacks the relevant parts of the dependencies and
+then packages them with your code.
+
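+For the first option, a minimal `maven-assembly-plugin` configuration might look like the
+following sketch (the main class is an illustrative placeholder):
+
+~~~xml
+<plugin>
+    <groupId>org.apache.maven.plugins</groupId>
+    <artifactId>maven-assembly-plugin</artifactId>
+    <version>2.4</version>
+    <configuration>
+        <descriptorRefs>
+            <!-- bundle all dependencies into one executable jar -->
+            <descriptorRef>jar-with-dependencies</descriptorRef>
+        </descriptorRefs>
+        <archive>
+            <manifest>
+                <!-- illustrative entry point -->
+                <mainClass>org.example.MyFlinkJob</mainClass>
+            </manifest>
+        </archive>
+    </configuration>
+    <executions>
+        <execution>
+            <id>make-assembly</id>
+            <phase>package</phase>
+            <goals>
+                <goal>single</goal>
+            </goals>
+        </execution>
+    </executions>
+</plugin>
+~~~
+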
+Using the latter approach to bundle the Kafka connector, `flink-connector-kafka`,
+you would need to add the classes from both the connector and the Kafka API itself. Add
+the following to your plugins section:
+
+~~~xml
+<plugin>
+    <groupId>org.apache.maven.plugins</groupId>
+    <artifactId>maven-dependency-plugin</artifactId>
+    <version>2.9</version>
+    <executions>
+        <execution>
+            <id>unpack</id>
+            <!-- executed just before the package phase -->
+            <phase>prepare-package</phase>
+            <goals>
+                <goal>unpack</goal>
+            </goals>
+            <configuration>
+                <artifactItems>
+                    <!-- For Flink connector classes -->
+                    <artifactItem>
+                        <groupId>org.apache.flink</groupId>
+                        <artifactId>flink-connector-kafka</artifactId>
+                        <version>{{ site.version }}</version>
+                        <type>jar</type>
+                        <overWrite>false</overWrite>
+                        <outputDirectory>${project.build.directory}/classes</outputDirectory>
+                        <includes>org/apache/flink/**</includes>
+                    </artifactItem>
+                    <!-- For Kafka API classes -->
+                    <artifactItem>
+                        <groupId>org.apache.kafka</groupId>
+                        <artifactId>kafka_<YOUR_SCALA_VERSION></artifactId>
+                        <version><YOUR_KAFKA_VERSION></version>
+                        <type>jar</type>
+                        <overWrite>false</overWrite>
+                        <outputDirectory>${project.build.directory}/classes</outputDirectory>
+                        <includes>kafka/**</includes>
+                    </artifactItem>
+                </artifactItems>
+            </configuration>
+        </execution>
+    </executions>
+</plugin>
+~~~
+
+Now when running `mvn clean package` the produced jar includes the required dependencies.

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/linking_with_flink.md
----------------------------------------------------------------------
diff --git a/docs/dev/linking_with_flink.md b/docs/dev/linking_with_flink.md
new file mode 100644
index 0000000..73ca677
--- /dev/null
+++ b/docs/dev/linking_with_flink.md
@@ -0,0 +1,146 @@
+---
+title: "Linking with Flink"
+nav-parent_id: start
+nav-pos: 2
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+To write programs with Flink, you need to include the Flink library corresponding to
+your programming language in your project.
+
+The simplest way to do this is to use one of the quickstart scripts: either for
+[Java]({{ site.baseurl }}/quickstart/java_api_quickstart.html) or for [Scala]({{ site.baseurl }}/quickstart/scala_api_quickstart.html). They
+create a blank project from a template (a Maven Archetype), which sets up everything for you. To
+manually create the project, you can use the archetype and create a project by calling:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight bash %}
+mvn archetype:generate \
+    -DarchetypeGroupId=org.apache.flink \
+    -DarchetypeArtifactId=flink-quickstart-java \
+    -DarchetypeVersion={{site.version }}
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight bash %}
+mvn archetype:generate \
+    -DarchetypeGroupId=org.apache.flink \
+    -DarchetypeArtifactId=flink-quickstart-scala \
+    -DarchetypeVersion={{site.version }}
+{% endhighlight %}
+</div>
+</div>
+
+The archetypes work for stable releases and preview versions (`-SNAPSHOT`).
+
+If you want to add Flink to an existing Maven project, add the following entry to your
+*dependencies* section in the *pom.xml* file of your project:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight xml %}
+<!-- Use this dependency if you are using the DataStream API -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+<!-- Use this dependency if you are using the DataSet API -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-java</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight xml %}
+<!-- Use this dependency if you are using the DataStream API -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-scala{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+<!-- Use this dependency if you are using the DataSet API -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-scala{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+
+**Important:** When working with the Scala API you must have one of these two imports:
+{% highlight scala %}
+import org.apache.flink.api.scala._
+{% endhighlight %}
+
+or
+
+{% highlight scala %}
+import org.apache.flink.api.scala.createTypeInformation
+{% endhighlight %}
+
+The reason is that Flink analyzes the types that are used in a program and generates serializers
+and comparators for them. By having either of those imports you enable an implicit conversion
+that creates the type information for Flink operations.
+
+If you would rather use SBT, see [here]({{ site.baseurl }}/quickstart/scala_api_quickstart.html#sbt).
+</div>
+</div>
+
+#### Scala Dependency Versions
+
+Because Scala 2.10 binaries are not compatible with Scala 2.11 binaries, we provide multiple artifacts
+to support both Scala versions.
+
+Starting from the 0.10 line, we cross-build all Flink modules for both 2.10 and 2.11. If you want
+to run your program on Flink with Scala 2.11, you need to add a `_2.11` suffix to the `artifactId`
+values of the Flink modules in your dependencies section.
+
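+For example, to use the Java DataStream API built against Scala 2.11, the dependency
+from above becomes:
+
+{% highlight xml %}
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java_2.11</artifactId>
+  <version>{{site.version }}</version>
+</dependency>
+{% endhighlight %}
+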
+If you want to build Flink with Scala 2.11 yourself, please check the
+[build guide]({{ site.baseurl }}/setup/building.html#scala-versions).
+
+#### Hadoop Dependency Versions
+
+If you are using Flink together with Hadoop, the version of the dependency may vary depending on the
+version of Hadoop (or more specifically, HDFS) that you want to use Flink with. Please refer to the
+[downloads page](http://flink.apache.org/downloads.html) for a list of available versions, and instructions
+on how to link with custom versions of Hadoop.
+
+In order to link against the latest SNAPSHOT versions of the code, please follow
+[this guide](http://flink.apache.org/how-to-contribute.html#snapshots-nightly-builds).
+
+The *flink-clients* dependency is only necessary to invoke the Flink program locally (for example to
+run it standalone for testing and debugging).  If you intend to only export the program as a JAR
+file and [run it on a cluster]({{ site.baseurl }}/dev/cluster_execution.html), you can skip that dependency.
+
+{% top %}
+

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/local_execution.md
----------------------------------------------------------------------
diff --git a/docs/dev/local_execution.md b/docs/dev/local_execution.md
index a348951..45a39e3 100644
--- a/docs/dev/local_execution.md
+++ b/docs/dev/local_execution.md
@@ -1,7 +1,7 @@
 ---
 title:  "Local Execution"
-nav-parent_id: dev
-nav-pos: 11
+nav-parent_id: batch
+nav-pos: 8
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/packaging.md
----------------------------------------------------------------------
diff --git a/docs/dev/packaging.md b/docs/dev/packaging.md
new file mode 100644
index 0000000..ee351ae
--- /dev/null
+++ b/docs/dev/packaging.md
@@ -0,0 +1,77 @@
+---
+title: "Program Packaging and Distributed Execution"
+nav-title: Program Packaging
+nav-parent_id: execution
+nav-pos: 20
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+
+As described earlier, Flink programs can be executed on
+clusters by using a `remote environment`. Alternatively, programs can be packaged into JAR Files
+(Java Archives) for execution. Packaging a program is a prerequisite to executing it through the
+[command line interface]({{ site.baseurl }}/setup/cli.html).
+
+### Packaging Programs
+
+To support execution from a packaged JAR file via the command line or web interface, a program must
+use the environment obtained by `StreamExecutionEnvironment.getExecutionEnvironment()`. This environment
+will act as the cluster's environment when the JAR is submitted to the command line or web
+interface. If the Flink program is invoked differently than through these interfaces, the
+environment will act like a local environment.
+
+To package the program, simply export all involved classes as a JAR file. The JAR file's manifest
+must point to the class that contains the program's *entry point* (the class with the public
+`main` method). The simplest way to do this is by putting the *main-class* entry into the
+manifest (such as `main-class: org.apache.flink.example.MyProgram`). The *main-class* attribute is
+the same one that is used by the Java Virtual Machine to find the main method when executing a JAR
+file through the command `java -jar pathToTheJarFile`. Most IDEs offer to include that attribute
+automatically when exporting JAR files.
+
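+A minimal entry point might look like the following sketch (class and job names are
+illustrative):
+
+{% highlight java %}
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+public class MyProgram {
+
+    public static void main(String[] args) throws Exception {
+        // acts as the cluster's environment when the JAR is submitted,
+        // and as a local environment otherwise
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+        env.fromElements("to", "be", "or", "not", "to", "be").print();
+
+        env.execute("My Program");
+    }
+}
+{% endhighlight %}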
+
+### Packaging Programs through Plans
+
+Additionally, we support packaging programs as *Plans*. Instead of defining a program in the main
+method and calling
+`execute()` on the environment, plan packaging returns the *Program Plan*, which is a description of
+the program's data flow. To do that, the program must implement the
+`org.apache.flink.api.common.Program` interface, defining the `getPlan(String...)` method. The
+strings passed to that method are the command line arguments. The program's plan can be created from
+the environment via the `ExecutionEnvironment#createProgramPlan()` method. When packaging the
+program's plan, the JAR manifest must point to the class implementing the
+`org.apache.flink.api.common.Program` interface, instead of the class with the main method.
+
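+A sketch of such a plan-packaged program (the output path is illustrative):
+
+{% highlight java %}
+import org.apache.flink.api.common.Plan;
+import org.apache.flink.api.common.Program;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+
+public class MyPlanProgram implements Program {
+
+    @Override
+    public Plan getPlan(String... args) {
+        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+        DataSet<String> data = env.fromElements("a", "b", "c");
+        data.writeAsText("file:///tmp/result");
+
+        // return the plan instead of calling execute()
+        return env.createProgramPlan();
+    }
+}
+{% endhighlight %}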
+
+### Summary
+
+The overall procedure to invoke a packaged program is as follows:
+
+1. The JAR's manifest is searched for a *main-class* or *program-class* attribute. If both
+attributes are found, the *program-class* attribute takes precedence over the *main-class*
+attribute. Both the command line and the web interface support a parameter to pass the entry point
+class name manually for cases where the JAR manifest contains neither attribute.
+
+2. If the entry point class implements the `org.apache.flink.api.common.Program` interface, then the system
+calls the `getPlan(String...)` method to obtain the program plan to execute.
+
+3. If the entry point class does not implement the `org.apache.flink.api.common.Program` interface,
+the system will invoke the main method of the class.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/parallel.md
----------------------------------------------------------------------
diff --git a/docs/dev/parallel.md b/docs/dev/parallel.md
new file mode 100644
index 0000000..8d38884
--- /dev/null
+++ b/docs/dev/parallel.md
@@ -0,0 +1,175 @@
+---
+title: "Parallel Execution"
+nav-parent_id: execution
+nav-pos: 30
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+This section describes how the parallel execution of programs can be configured in Flink. A Flink
+program consists of multiple tasks (transformations/operators, data sources, and sinks). A task is split into
+several parallel instances for execution and each parallel instance processes a subset of the task's
+input data. The number of parallel instances of a task is called its *parallelism*.
+
+
+The parallelism of a task can be specified in Flink on different levels.
+
+## Operator Level
+
+The parallelism of an individual operator, data source, or data sink can be defined by calling its
+`setParallelism()` method.  For example, like this:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+DataStream<String> text = [...]
+DataStream<Tuple2<String, Integer>> wordCounts = text
+    .flatMap(new LineSplitter())
+    .keyBy(0)
+    .timeWindow(Time.seconds(5))
+    .sum(1).setParallelism(5);
+
+wordCounts.print();
+
+env.execute("Word Count Example");
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+
+val text = [...]
+val wordCounts = text
+    .flatMap{ _.split(" ") map { (_, 1) } }
+    .keyBy(0)
+    .timeWindow(Time.seconds(5))
+    .sum(1).setParallelism(5)
+wordCounts.print()
+
+env.execute("Word Count Example")
+{% endhighlight %}
+</div>
+</div>
+
+## Execution Environment Level
+
+As mentioned [here](#anatomy-of-a-flink-program), Flink programs are executed in the context
+of an execution environment. An
+execution environment defines a default parallelism for all operators, data sources, and data sinks
+it executes. Execution environment parallelism can be overridden by explicitly configuring the
+parallelism of an operator.
+
+The default parallelism of an execution environment can be specified by calling the
+`setParallelism()` method. To execute all operators, data sources, and data sinks with a parallelism
+of `3`, set the default parallelism of the execution environment as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.setParallelism(3);
+
+DataStream<String> text = [...]
+DataStream<Tuple2<String, Integer>> wordCounts = [...]
+wordCounts.print();
+
+env.execute("Word Count Example");
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+env.setParallelism(3)
+
+val text = [...]
+val wordCounts = text
+    .flatMap{ _.split(" ") map { (_, 1) } }
+    .keyBy(0)
+    .timeWindow(Time.seconds(5))
+    .sum(1)
+wordCounts.print()
+
+env.execute("Word Count Example")
+{% endhighlight %}
+</div>
+</div>
+
+## Client Level
+
+The parallelism can be set at the Client when submitting jobs to Flink. The
+Client can either be a Java or a Scala program. One example of such a Client is
+Flink's Command-line Interface (CLI).
+
+For the CLI client, the parallelism parameter can be specified with `-p`. For
+example:
+
+    ./bin/flink run -p 10 ../examples/*WordCount-java*.jar
+
+
+In a Java/Scala program, the parallelism is set as follows:
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+
+try {
+    PackagedProgram program = new PackagedProgram(file, args);
+    InetSocketAddress jobManagerAddress = RemoteExecutor.getInetFromHostport("localhost:6123");
+    Configuration config = new Configuration();
+
+    Client client = new Client(jobManagerAddress, config, program.getUserCodeClassLoader());
+
+    // set the parallelism to 10 here
+    client.run(program, 10, true);
+
+} catch (ProgramInvocationException e) {
+    e.printStackTrace();
+}
+
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+try {
+    val program = new PackagedProgram(file, args)
+    val jobManagerAddress = RemoteExecutor.getInetFromHostport("localhost:6123")
+    val config = new Configuration()
+
+    val client = new Client(jobManagerAddress, config, program.getUserCodeClassLoader())
+
+    // set the parallelism to 10 here
+    client.run(program, 10, true)
+
+} catch {
+    case e: ProgramInvocationException => e.printStackTrace()
+}
+{% endhighlight %}
+</div>
+</div>
+
+
+## System Level
+
+A system-wide default parallelism for all execution environments can be defined by setting the
+`parallelism.default` property in `./conf/flink-conf.yaml`. See the
+[Configuration]({{ site.baseurl }}/setup/config.html) documentation for details.
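+
+A minimal sketch of the corresponding entry in `./conf/flink-conf.yaml` (the value of `10` is illustrative):
+
+{% highlight yaml %}
+# default parallelism used when no other level specifies one
+parallelism.default: 10
+{% endhighlight %}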
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/quickstarts.md
----------------------------------------------------------------------
diff --git a/docs/dev/quickstarts.md b/docs/dev/quickstarts.md
deleted file mode 100644
index ef21ca6..0000000
--- a/docs/dev/quickstarts.md
+++ /dev/null
@@ -1,24 +0,0 @@
----
-title: "Quickstarts"
-nav-id: quickstarts
-nav-parent_id: dev
-nav-pos: 1
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/scala_api_extensions.md
----------------------------------------------------------------------
diff --git a/docs/dev/scala_api_extensions.md b/docs/dev/scala_api_extensions.md
index ffa6145..0e54ef1 100644
--- a/docs/dev/scala_api_extensions.md
+++ b/docs/dev/scala_api_extensions.md
@@ -1,7 +1,7 @@
 ---
 title: "Scala API Extensions"
-nav-parent_id: apis
-nav-pos: 104
+nav-parent_id: api-concepts
+nav-pos: 10
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/scala_shell.md
----------------------------------------------------------------------
diff --git a/docs/dev/scala_shell.md b/docs/dev/scala_shell.md
index 0728812..a8d1b74 100644
--- a/docs/dev/scala_shell.md
+++ b/docs/dev/scala_shell.md
@@ -1,7 +1,7 @@
 ---
-title: "Scala Shell"
-nav-parent_id: dev
-nav-pos: 10
+title: "Scala REPL"
+nav-parent_id: start
+nav-pos: 5
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/state.md
----------------------------------------------------------------------
diff --git a/docs/dev/state.md b/docs/dev/state.md
index 37de0a8..6ed20ae 100644
--- a/docs/dev/state.md
+++ b/docs/dev/state.md
@@ -1,7 +1,8 @@
 ---
-title: "Working with State"
-nav-parent_id: dev
-nav-pos: 3
+title: "State & Checkpointing"
+nav-parent_id: streaming
+nav-id: state
+nav-pos: 40
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -45,6 +46,73 @@ about the available state backends and how to configure them.
 * ToC
 {:toc}
 
+## Enabling Checkpointing
+
+Flink has a checkpointing mechanism that recovers streaming jobs after failures. The checkpointing mechanism requires a *persistent* (or *durable*) source that
+can be asked for prior records again (Apache Kafka is a good example of such a source).
+
+The checkpointing mechanism stores the progress in the data sources and data sinks, the state of windows, as well as the user-defined state (see [Working with State]({{ site.baseurl }}/dev/state.html)) consistently to provide *exactly once* processing semantics. Where the checkpoints are stored (e.g., JobManager memory, file system, database) depends on the configured [state backend]({{ site.baseurl }}/dev/state_backends.html).
+
+The [docs on streaming fault tolerance]({{ site.baseurl }}/internals/stream_checkpointing.html) describe in detail the technique behind Flink's streaming fault tolerance mechanism.
+
+By default, checkpointing is disabled. To enable checkpointing, call `enableCheckpointing(n)` on the `StreamExecutionEnvironment`, where *n* is the checkpoint interval in milliseconds.
+
+Other parameters for checkpointing include:
+
+- *Number of retries*: The `setNumberOfExecutionRetries()` method defines how many times the job is restarted after a failure.
+  When checkpointing is activated but this value is not explicitly set, the job is restarted indefinitely. See the sketch after the code example below.
+
+- *exactly-once vs. at-least-once*: You can optionally pass a mode to the `enableCheckpointing(n)` method to choose between the two guarantee levels.
+  Exactly-once is preferable for most applications. At-least-once may be relevant for certain super-low-latency (consistently a few milliseconds) applications.
+
+- *number of concurrent checkpoints*: By default, the system will not trigger another checkpoint while one is still in progress. This ensures that the topology does not spend too much time on checkpoints and fail to make progress with processing the streams. It is possible to allow for multiple overlapping checkpoints, which is interesting for pipelines that have a certain processing delay (for example, because the functions call external services that need some time to respond) but that still want to checkpoint very frequently (hundreds of milliseconds) to re-process very little upon failures.
+
+- *checkpoint timeout*: The time after which an in-progress checkpoint is aborted if it has not completed by then.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// start a checkpoint every 1000 ms
+env.enableCheckpointing(1000);
+
+// advanced options:
+
+// set mode to exactly-once (this is the default)
+env.getCheckpointConfig().setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE);
+
+// checkpoints have to complete within one minute, or are discarded
+env.getCheckpointConfig().setCheckpointTimeout(60000);
+
+// allow only one checkpoint to be in progress at the same time
+env.getCheckpointConfig().setMaxConcurrentCheckpoints(1);
+{% endhighlight %}
+</div>
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+
+// start a checkpoint every 1000 ms
+env.enableCheckpointing(1000)
+
+// advanced options:
+
+// set mode to exactly-once (this is the default)
+env.getCheckpointConfig.setCheckpointingMode(CheckpointingMode.EXACTLY_ONCE)
+
+// checkpoints have to complete within one minute, or are discarded
+env.getCheckpointConfig.setCheckpointTimeout(60000)
+
+// allow only one checkpoint to be in progress at the same time
+env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
+{% endhighlight %}
+</div>
+</div>
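+
+As referenced in the list above, a minimal sketch of setting the number of retries on the same environment (the retry count of `3` is illustrative):
+
+{% highlight java %}
+// restart the job at most 3 times after a failure before giving up
+env.setNumberOfExecutionRetries(3);
+{% endhighlight %}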
+
+{% top %}
+
 ## Using the Key/Value State Interface
 
 The Key/Value state interface provides access to different types of state that are all scoped to
@@ -84,7 +152,7 @@ want to retrieve, you create either a `ValueStateDescriptor`, a `ListStateDescri
 a `ReducingStateDescriptor`.
 
 State is accessed using the `RuntimeContext`, so it is only possible in *rich functions*.
-Please see [here]({{ site.baseurl }}/apis/common/#specifying-transformation-functions) for
+Please see [here]({{ site.baseurl }}/dev/api_concepts#rich-functions) for
 information about that, but we will also see an example shortly. The `RuntimeContext` that
 is available in a `RichFunction` has these methods for accessing state:
 

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/state_backends.md
----------------------------------------------------------------------
diff --git a/docs/dev/state_backends.md b/docs/dev/state_backends.md
index 31ebb6f..af9934d 100644
--- a/docs/dev/state_backends.md
+++ b/docs/dev/state_backends.md
@@ -1,7 +1,7 @@
 ---
 title: "State Backends"
-nav-parent_id: dev
-nav-pos: 5
+nav-parent_id: setup
+nav-pos: 11
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/table_api.md
----------------------------------------------------------------------
diff --git a/docs/dev/table_api.md b/docs/dev/table_api.md
index 9271803..6ffc23e 100644
--- a/docs/dev/table_api.md
+++ b/docs/dev/table_api.md
@@ -1,8 +1,8 @@
 ---
 title: "Table and SQL"
 is_beta: true
-nav-parent_id: apis
-nav-pos: 3
+nav-parent_id: libs
+nav-pos: 0
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -47,7 +47,7 @@ The following dependency must be added to your project in order to use the Table
 </dependency>
 {% endhighlight %}
 
-*Note: The Table API is currently not part of the binary distribution. See linking with it for cluster execution [here]({{ site.baseurl }}/dev/cluster_execution.html#linking-with-modules-not-contained-in-the-binary-distribution).*
+*Note: The Table API is currently not part of the binary distribution. See linking with it for cluster execution [here]({{ site.baseurl }}/dev/linking.html).*
 
 
 Registering Tables

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/types_serialization.md
----------------------------------------------------------------------
diff --git a/docs/dev/types_serialization.md b/docs/dev/types_serialization.md
index 4b8e25f..ea02df0 100644
--- a/docs/dev/types_serialization.md
+++ b/docs/dev/types_serialization.md
@@ -2,7 +2,8 @@
 title: "Data Types & Serialization"
 nav-id: types
 nav-parent_id: dev
-nav-pos: 9
+nav-show_overview: true
+nav-pos: 50
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -61,7 +62,7 @@ The most frequent issues where users need to interact with Flink's data type han
   by itself. Not all types are seamlessly handled by Kryo (and thus by Flink). For example, many Google Guava collection types do not work well
   by default. The solution is to register additional serializers for the types that cause problems.
   Call `.getConfig().addDefaultKryoSerializer(clazz, serializer)` on the `StreamExecutionEnvironment` or `ExecutionEnvironment`.
-  Additional Kryo serializers are available in many libraries.
+  Additional Kryo serializers are available in many libraries. See [Custom Serializers]({{ site.baseurl }}/dev/custom_serializers) for more details on working with custom serializers.
 
 * **Adding Type Hints:** Sometimes, when Flink cannot infer the generic types despite all tricks, a user must pass a *type hint*. That is generally
   only necessary in the Java API. The [Type Hints Section](#type-hints-in-the-java-api) describes that in more detail.

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/dev/windows.md
----------------------------------------------------------------------
diff --git a/docs/dev/windows.md b/docs/dev/windows.md
index d6189d4..1170d0d 100644
--- a/docs/dev/windows.md
+++ b/docs/dev/windows.md
@@ -1,8 +1,8 @@
 ---
 title: "Windows"
-nav-parent_id: dev
+nav-parent_id: streaming
 nav-id: windows
-nav-pos: 3
+nav-pos: 10
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/examples/index.md
----------------------------------------------------------------------
diff --git a/docs/examples/index.md b/docs/examples/index.md
new file mode 100644
index 0000000..d04a1e9
--- /dev/null
+++ b/docs/examples/index.md
@@ -0,0 +1,39 @@
+---
+title: Examples
+nav-id: examples
+nav-title: '<i class="fa fa-file-code-o title appetizer" aria-hidden="true"></i> Examples'
+nav-parent_id: root
+nav-pos: 3
+nav-show_overview: true
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+[Sample Project in Java]({{ site.baseurl }}/quickstart/java_api_quickstart) and [Sample Project in Scala]({{ site.baseurl }}/quickstart/scala_api_quickstart) are guides to setting up Maven and SBT projects and include simple implementations of a word count application.
+
+[Monitoring Wikipedia Edits]({{ site.baseurl }}/quickstart/run_example_quickstart) is a more complete example of a streaming analytics application.
+
+[Building real-time dashboard applications with Apache Flink, Elasticsearch, and Kibana](https://www.elastic.co/blog/building-real-time-dashboard-applications-with-apache-flink-elasticsearch-and-kibana) is a blog post at elastic.co showing how to build a real-time dashboard solution for streaming data analytics using Apache Flink, Elasticsearch, and Kibana.
+
+The [Flink training website](http://dataartisans.github.io/flink-training) from data Artisans has a number of examples. See the hands-on sections and the exercises.
+
+## Bundled Examples
+
+The Flink sources include a number of examples for both **streaming** ( [java](https://github.com/apache/flink/tree/master/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples) / [scala](https://github.com/apache/flink/tree/master/flink-examples/flink-examples-streaming/src/main/scala/org/apache/flink/streaming/scala/examples) ) and **batch** ( [java](https://github.com/apache/flink/tree/master/flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java) / [scala](https://github.com/apache/flink/tree/master/flink-examples/flink-examples-batch/src/main/scala/org/apache/flink/examples/scala) ). These [instructions]({{ site.baseurl }}/dev/batch/examples.html#running-an-example) explain how to run the examples.
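+
+A typical invocation from the root of a Flink distribution looks like this (the exact jar path is illustrative and depends on the distribution you downloaded):
+
+{% highlight bash %}
+./bin/flink run ./examples/batch/WordCount.jar
+{% endhighlight %}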
+

http://git-wip-us.apache.org/repos/asf/flink/blob/79d7e301/docs/index.md
----------------------------------------------------------------------
diff --git a/docs/index.md b/docs/index.md
index c40b17a..75b5328 100644
--- a/docs/index.md
+++ b/docs/index.md
@@ -1,7 +1,7 @@
 ---
 title: "Apache Flink Documentation"
 nav-pos: 0
-nav-title: '<i class="fa fa-home" aria-hidden="true"></i> Home'
+nav-title: '<i class="fa fa-home title" aria-hidden="true"></i> Home'
 nav-parent_id: root
 ---
 <!--
@@ -29,32 +29,8 @@ Apache Flink is an open source platform for distributed stream and batch data pr
 
 ## First Steps
 
-- **Concepts**: Start with the [basic concepts]({{ site.baseurl }}/concepts/index.html) of Flink. This will help you to fully understand the other parts of the documentation, including the setup and programming guides. It is highly recommended to read this first.
+- **Concepts**: Start with the basic concepts of Flink's [Dataflow Programming Model]({{ site.baseurl }}/concepts/programming-model.html) and [Distributed Runtime Environment]({{ site.baseurl }}/concepts/runtime.html). This will help you to fully understand the other parts of the documentation, including the setup and programming guides. It is highly recommended to read these sections first.
 
 - **Quickstarts**: [Run an example program](quickstart/setup_quickstart.html) on your local machine or [write a simple program](quickstart/run_example_quickstart.html) working on live Wikipedia edits.
 
-- **Setup:** The [local]({{ site.baseurl }}/setup/local_setup.html), [cluster](setup/cluster_setup.html), and [cloud](setup/gce_setup.html) setup guides show you how to deploy Flink.
-
 - **Programming Guides**: You can check out our guides about [basic concepts](dev/api_concepts.html) and the [DataStream API](dev/datastream_api.html) or [DataSet API](dev/batch/index.html) to learn how to write your first Flink programs.
-
-## Stack
-
-This is an overview of Flink's stack. Click on any component to go to the respective documentation page.
-
-<center>
-  <img src="{{ site.baseurl }}/fig/stack.png" width="700px" alt="Apache Flink: Stack" usemap="#overview-stack">
-</center>
-
-<map name="overview-stack">
-<area id="lib-datastream-cep" title="CEP: Complex Event Processing" href="{{ site.baseurl }}/dev/libs/cep.html" shape="rect" coords="63,0,143,177" />
-<area id="lib-datastream-table" title="Table: Relational DataStreams" href="{{ site.baseurl }}/dev/table_api.html" shape="rect" coords="143,0,223,177" />
-<area id="lib-dataset-ml" title="FlinkML: Machine Learning" href="{{ site.baseurl }}/dev/libs/ml/index.html" shape="rect" coords="382,2,462,176" />
-<area id="lib-dataset-gelly" title="Gelly: Graph Processing" href="{{ site.baseurl }}/dev/libs/gelly/index.html" shape="rect" coords="461,0,541,177" />
-<area id="lib-dataset-table" title="Table API and SQL" href="{{ site.baseurl }}/dev/table_api.html" shape="rect" coords="544,0,624,177" />
-<area id="datastream" title="DataStream API" href="{{ site.baseurl }}/dev/datastream_api.html" shape="rect" coords="64,177,379,255" />
-<area id="dataset" title="DataSet API" href="{{ site.baseurl }}/dev/batch/index.html" shape="rect" coords="382,177,697,255" />
-<area id="runtime" title="Runtime" href="{{ site.baseurl }}/internals/general_arch.html" shape="rect" coords="63,257,700,335" />
-<area id="local" title="Local" href="{{ site.baseurl }}/setup/local_setup.html" shape="rect" coords="62,337,275,414" />
-<area id="cluster" title="Cluster" href="{{ site.baseurl }}/setup/cluster_setup.html" shape="rect" coords="273,336,486,413" />
-<area id="cloud" title="Cloud" href="{{ site.baseurl }}/setup/gce_setup.html" shape="rect" coords="485,336,700,414" />
-</map>