Posted to commits@flink.apache.org by ch...@apache.org on 2017/10/10 12:05:38 UTC

flink git commit: [FLINK-7744][docs] Add missing top links to documentation

Repository: flink
Updated Branches:
  refs/heads/master ad380463d -> 8c239ac33


[FLINK-7744][docs] Add missing top links to documentation

This closes #4756.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/8c239ac3
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/8c239ac3
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/8c239ac3

Branch: refs/heads/master
Commit: 8c239ac33b40134fc98ddd60f6dafdaa788aa542
Parents: ad38046
Author: sirko bretschneider <si...@innogames.com>
Authored: Sun Oct 1 19:44:20 2017 +0200
Committer: zentol <ch...@apache.org>
Committed: Tue Oct 10 14:05:26 2017 +0200

----------------------------------------------------------------------
 docs/dev/batch/connectors.md                   | 2 ++
 docs/dev/batch/dataset_transformations.md      | 2 ++
 docs/dev/batch/examples.md                     | 2 ++
 docs/dev/batch/hadoop_compatibility.md         | 2 ++
 docs/dev/batch/iterations.md                   | 2 ++
 docs/dev/batch/zip_elements_guide.md           | 2 ++
 docs/dev/best_practices.md                     | 2 +-
 docs/dev/cluster_execution.md                  | 2 ++
 docs/dev/connectors/cassandra.md               | 2 ++
 docs/dev/connectors/elasticsearch.md           | 2 ++
 docs/dev/connectors/filesystem_sink.md         | 2 ++
 docs/dev/connectors/kafka.md                   | 2 ++
 docs/dev/connectors/kinesis.md                 | 2 ++
 docs/dev/connectors/nifi.md                    | 2 ++
 docs/dev/connectors/rabbitmq.md                | 2 ++
 docs/dev/connectors/twitter.md                 | 2 ++
 docs/dev/custom_serializers.md                 | 2 ++
 docs/dev/event_time.md                         | 2 ++
 docs/dev/event_timestamp_extractors.md         | 2 ++
 docs/dev/event_timestamps_watermarks.md        | 2 +-
 docs/dev/java8.md                              | 2 ++
 docs/dev/libs/cep.md                           | 2 ++
 docs/dev/libs/gelly/bipartite_graph.md         | 2 ++
 docs/dev/libs/ml/als.md                        | 2 ++
 docs/dev/libs/ml/contribution_guide.md         | 2 ++
 docs/dev/libs/ml/cross_validation.md           | 2 ++
 docs/dev/libs/ml/distance_metrics.md           | 2 ++
 docs/dev/libs/ml/knn.md                        | 2 ++
 docs/dev/libs/ml/min_max_scaler.md             | 2 ++
 docs/dev/libs/ml/multiple_linear_regression.md | 2 ++
 docs/dev/libs/ml/optimization.md               | 2 ++
 docs/dev/libs/ml/pipelines.md                  | 4 +++-
 docs/dev/libs/ml/polynomial_features.md        | 2 ++
 docs/dev/libs/ml/quickstart.md                 | 2 ++
 docs/dev/libs/ml/sos.md                        | 2 ++
 docs/dev/libs/ml/standard_scaler.md            | 2 ++
 docs/dev/libs/ml/svm.md                        | 2 ++
 docs/dev/libs/storm_compatibility.md           | 2 ++
 docs/dev/linking.md                            | 2 ++
 docs/dev/local_execution.md                    | 2 ++
 docs/dev/migration.md                          | 2 ++
 docs/dev/scala_api_extensions.md               | 2 ++
 docs/dev/scala_shell.md                        | 2 ++
 docs/dev/stream/operators/asyncio.md           | 1 +
 docs/dev/stream/operators/windows.md           | 2 ++
 docs/dev/stream/side_output.md                 | 2 ++
 docs/dev/stream/state/custom_serialization.md  | 2 ++
 docs/dev/stream/state/queryable_state.md       | 2 ++
 docs/dev/stream/state/state.md                 | 4 +++-
 docs/dev/stream/state/state_backends.md        | 2 ++
 docs/dev/stream/testing.md                     | 2 ++
 docs/dev/types_serialization.md                | 2 ++
 docs/internals/components.md                   | 2 ++
 docs/internals/filesystems.md                  | 1 +
 docs/internals/ide_setup.md                    | 2 ++
 docs/internals/job_scheduling.md               | 2 ++
 docs/internals/stream_checkpointing.md         | 2 ++
 docs/internals/task_lifecycle.md               | 2 ++
 docs/monitoring/application_profiling.md       | 2 ++
 docs/monitoring/back_pressure.md               | 2 ++
 docs/monitoring/checkpoint_monitoring.md       | 2 ++
 docs/monitoring/debugging_classloading.md      | 1 +
 docs/monitoring/debugging_event_time.md        | 1 +
 docs/monitoring/historyserver.md               | 2 ++
 docs/monitoring/logging.md                     | 2 ++
 docs/monitoring/rest_api.md                    | 2 ++
 docs/ops/cli.md                                | 2 ++
 docs/ops/config.md                             | 2 ++
 docs/ops/deployment/aws.md                     | 2 ++
 docs/ops/deployment/gce_setup.md               | 2 ++
 docs/ops/deployment/mapr_setup.md              | 2 ++
 docs/ops/deployment/mesos.md                   | 2 ++
 docs/ops/deployment/yarn_setup.md              | 2 ++
 docs/ops/jobmanager_high_availability.md       | 2 ++
 docs/ops/production_ready.md                   | 2 ++
 docs/ops/security-kerberos.md                  | 2 ++
 docs/ops/security-ssl.md                       | 1 +
 docs/ops/state/checkpoints.md                  | 2 ++
 docs/ops/state/large_state_tuning.md           | 2 ++
 docs/ops/state/savepoints.md                   | 2 ++
 docs/ops/state/state_backends.md               | 2 ++
 docs/ops/upgrading.md                          | 2 ++
 docs/quickstart/java_api_quickstart.md         | 2 ++
 docs/quickstart/run_example_quickstart.md      | 2 ++
 docs/quickstart/scala_api_quickstart.md        | 2 ++
 docs/quickstart/setup_quickstart.md            | 2 ++
 docs/search-results.md                         | 2 ++
 87 files changed, 169 insertions(+), 4 deletions(-)
----------------------------------------------------------------------
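The diffstat above shows the same two-line addition repeated across nearly every page: a `{% top %}` Liquid tag appended at the end of the file, which the Flink docs build renders into a "Back to top" link. A quick way to spot pages that are still missing the link could look like the sketch below (hypothetical, not part of the commit; it assumes it is run from the root of a Flink checkout with the `docs/` layout shown above):

```shell
# List markdown docs whose last line is not the {% top %} Liquid tag.
# Hypothetical helper, assuming a Flink checkout with a docs/ directory.
find docs -name '*.md' 2>/dev/null | while read -r f; do
  tail -n 1 "$f" | grep -qF '{% top %}' || echo "$f"
done
```

Pages printed by the loop would be candidates for the same two-line fix this commit applies.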


http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/batch/connectors.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/connectors.md b/docs/dev/batch/connectors.md
index 93bbf72..388b599 100644
--- a/docs/dev/batch/connectors.md
+++ b/docs/dev/batch/connectors.md
@@ -224,3 +224,5 @@ The example shows how to access an Azure table and turn data into Flink's `DataS
 ## Access MongoDB
 
 This [GitHub repository documents how to use MongoDB with Apache Flink (starting from 0.7-incubating)](https://github.com/okkam-it/flink-mongodb-test).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/batch/dataset_transformations.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/dataset_transformations.md b/docs/dev/batch/dataset_transformations.md
index c322f22..d63ee88 100644
--- a/docs/dev/batch/dataset_transformations.md
+++ b/docs/dev/batch/dataset_transformations.md
@@ -2338,3 +2338,5 @@ Not supported.
 
 </div>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/batch/examples.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/examples.md b/docs/dev/batch/examples.md
index fe478f8..508beef 100644
--- a/docs/dev/batch/examples.md
+++ b/docs/dev/batch/examples.md
@@ -517,3 +517,5 @@ CC       = gcc
 ~~~bash
 ./dbgen -T o -s 1
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/batch/hadoop_compatibility.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/hadoop_compatibility.md b/docs/dev/batch/hadoop_compatibility.md
index 9548c29..bbeea09 100644
--- a/docs/dev/batch/hadoop_compatibility.md
+++ b/docs/dev/batch/hadoop_compatibility.md
@@ -246,3 +246,5 @@ result.output(hadoopOF);
 // Execute Program
 env.execute("Hadoop WordCount");
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/batch/iterations.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/iterations.md b/docs/dev/batch/iterations.md
index 73a1d57..67f2615 100644
--- a/docs/dev/batch/iterations.md
+++ b/docs/dev/batch/iterations.md
@@ -208,3 +208,5 @@ We referred to each execution of the step function of an iteration operator as *
 <p class="text-center">
     <img alt="Supersteps" width="50%" src="{{site.baseurl}}/fig/iterations_supersteps.png" />
 </p>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/batch/zip_elements_guide.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/zip_elements_guide.md b/docs/dev/batch/zip_elements_guide.md
index a5c65c5..0ec5f84 100644
--- a/docs/dev/batch/zip_elements_guide.md
+++ b/docs/dev/batch/zip_elements_guide.md
@@ -124,3 +124,5 @@ env.execute()
 may yield the tuples: (0,G), (1,A), (2,H), (3,B), (5,C), (7,D), (9,E), (11,F)
 
 [Back to top](#top)
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/best_practices.md
----------------------------------------------------------------------
diff --git a/docs/dev/best_practices.md b/docs/dev/best_practices.md
index d01ea0f..2a1d32e 100644
--- a/docs/dev/best_practices.md
+++ b/docs/dev/best_practices.md
@@ -317,4 +317,4 @@ Note that you need to explicitly set the `lib/` directory when using a per-job Y
 
 The command to submit Flink on YARN with a custom logger is: `./bin/flink run -yt $FLINK_HOME/lib <... remaining arguments ...>`
 
-
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/cluster_execution.md
----------------------------------------------------------------------
diff --git a/docs/dev/cluster_execution.md b/docs/dev/cluster_execution.md
index 03af637..f1d84e1 100644
--- a/docs/dev/cluster_execution.md
+++ b/docs/dev/cluster_execution.md
@@ -81,3 +81,5 @@ public static void main(String[] args) throws Exception {
 Note that the program contains custom user code and hence requires a JAR file with
 the classes of the code attached. The constructor of the remote environment
 takes the path(s) to the JAR file(s).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/connectors/cassandra.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/cassandra.md b/docs/dev/connectors/cassandra.md
index c897779..12b7ce7 100644
--- a/docs/dev/connectors/cassandra.md
+++ b/docs/dev/connectors/cassandra.md
@@ -152,3 +152,5 @@ public class Pojo implements Serializable {
 {% endhighlight %}
 </div>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/connectors/elasticsearch.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/elasticsearch.md b/docs/dev/connectors/elasticsearch.md
index 3fba7f0..b6ee63c 100644
--- a/docs/dev/connectors/elasticsearch.md
+++ b/docs/dev/connectors/elasticsearch.md
@@ -471,3 +471,5 @@ adding the following to the Maven POM file in the plugins section:
     </executions>
 </plugin>
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/connectors/filesystem_sink.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/filesystem_sink.md b/docs/dev/connectors/filesystem_sink.md
index 2d48876..4e1f68a 100644
--- a/docs/dev/connectors/filesystem_sink.md
+++ b/docs/dev/connectors/filesystem_sink.md
@@ -135,3 +135,5 @@ because of the batch size.
 
 For in-depth information, please refer to the JavaDoc for
 [BucketingSink](http://flink.apache.org/docs/latest/api/java/org/apache/flink/streaming/connectors/fs/bucketing/BucketingSink.html).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/connectors/kafka.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kafka.md b/docs/dev/connectors/kafka.md
index aabb1ba..5785ceb 100644
--- a/docs/dev/connectors/kafka.md
+++ b/docs/dev/connectors/kafka.md
@@ -676,3 +676,5 @@ When using standalone Flink deployment, you can also use `SASL_SSL`; please see
 
 For more information on Flink configuration for Kerberos security, please see [here]({{ site.baseurl}}/ops/config.html).
 You can also find [here]({{ site.baseurl}}/ops/security-kerberos.html) further details on how Flink internally setups Kerberos-based security.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/connectors/kinesis.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kinesis.md b/docs/dev/connectors/kinesis.md
index aa20d3f..2c8b88a 100644
--- a/docs/dev/connectors/kinesis.md
+++ b/docs/dev/connectors/kinesis.md
@@ -367,3 +367,5 @@ producerConfig.put(AWSConfigConstants.AWS_ENDPOINT, "http://localhost:4567");
 {% endhighlight %}
 </div>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/connectors/nifi.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/nifi.md b/docs/dev/connectors/nifi.md
index dbc1e8a..392b173 100644
--- a/docs/dev/connectors/nifi.md
+++ b/docs/dev/connectors/nifi.md
@@ -136,3 +136,5 @@ streamExecEnv.addSink(nifiSink)
 </div>      
 
 More information about [Apache NiFi](https://nifi.apache.org) Site-to-Site Protocol can be found [here](https://nifi.apache.org/docs/nifi-docs/html/user-guide.html#site-to-site)
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/connectors/rabbitmq.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/rabbitmq.md b/docs/dev/connectors/rabbitmq.md
index 5780892..c3ad4b7 100644
--- a/docs/dev/connectors/rabbitmq.md
+++ b/docs/dev/connectors/rabbitmq.md
@@ -171,3 +171,5 @@ stream.addSink(new RMQSink[String](
 </div>
 
 More about RabbitMQ can be found [here](http://www.rabbitmq.com/).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/connectors/twitter.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/twitter.md b/docs/dev/connectors/twitter.md
index 0cded6a..a563be6 100644
--- a/docs/dev/connectors/twitter.md
+++ b/docs/dev/connectors/twitter.md
@@ -83,3 +83,5 @@ The `TwitterExample` class in the `flink-examples-streaming` package shows a ful
 
 By default, the `TwitterSource` uses the `StatusesSampleEndpoint`. This endpoint returns a random sample of Tweets.
 There is a `TwitterSource.EndpointInitializer` interface allowing users to provide a custom endpoint.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/custom_serializers.md
----------------------------------------------------------------------
diff --git a/docs/dev/custom_serializers.md b/docs/dev/custom_serializers.md
index ddfc2ee..2c12e6e 100644
--- a/docs/dev/custom_serializers.md
+++ b/docs/dev/custom_serializers.md
@@ -122,3 +122,5 @@ that makes sure the user code classloader is used.
 
 Please refer to [FLINK-6025](https://issues.apache.org/jira/browse/FLINK-6025)
 for more details.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/event_time.md
----------------------------------------------------------------------
diff --git a/docs/dev/event_time.md b/docs/dev/event_time.md
index 70a7812..a3e697d 100644
--- a/docs/dev/event_time.md
+++ b/docs/dev/event_time.md
@@ -213,3 +213,5 @@ with late elements in event time windows.
 
 Please refer to the [Debugging Windows & Event Time]({{ site.baseurl }}/monitoring/debugging_event_time.html) section for debugging
 watermarks at runtime.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/event_timestamp_extractors.md
----------------------------------------------------------------------
diff --git a/docs/dev/event_timestamp_extractors.md b/docs/dev/event_timestamp_extractors.md
index b270491..01b3634 100644
--- a/docs/dev/event_timestamp_extractors.md
+++ b/docs/dev/event_timestamp_extractors.md
@@ -105,3 +105,5 @@ val withTimestampsAndWatermarks = stream.assignTimestampsAndWatermarks(new Bound
 {% endhighlight %}
 </div>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/event_timestamps_watermarks.md
----------------------------------------------------------------------
diff --git a/docs/dev/event_timestamps_watermarks.md b/docs/dev/event_timestamps_watermarks.md
index f58f705..961948f 100644
--- a/docs/dev/event_timestamps_watermarks.md
+++ b/docs/dev/event_timestamps_watermarks.md
@@ -371,4 +371,4 @@ val stream: DataStream[MyType] = env.addSource(kafkaSource)
 
 <img src="{{ site.baseurl }}/fig/parallel_kafka_watermarks.svg" alt="Generating Watermarks with awareness for Kafka-partitions" class="center" width="80%" />
 
-
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/java8.md
----------------------------------------------------------------------
diff --git a/docs/dev/java8.md b/docs/dev/java8.md
index 1912fb1..df1e088 100644
--- a/docs/dev/java8.md
+++ b/docs/dev/java8.md
@@ -194,3 +194,5 @@ final ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
 env.fromElements(1, 2, 3).map((in) -> new Tuple1<String>(" " + in)).print();
 env.execute();
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/cep.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/cep.md b/docs/dev/libs/cep.md
index 8b16ac4..0373155 100644
--- a/docs/dev/libs/cep.md
+++ b/docs/dev/libs/cep.md
@@ -1639,3 +1639,5 @@ the looping patterns, multiple input events can match a single (looping) pattern
 3. The `followedBy()` in Flink 1.1 and 1.2 implied `non-deterministic relaxed contiguity` (see
 [here](#conditions-on-contiguity)). In Flink 1.3 this has changed and `followedBy()` implies `relaxed contiguity`,
 while `followedByAny()` should be used if `non-deterministic relaxed contiguity` is required.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/gelly/bipartite_graph.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/gelly/bipartite_graph.md b/docs/dev/libs/gelly/bipartite_graph.md
index ac57e3b..3aec8ec 100644
--- a/docs/dev/libs/gelly/bipartite_graph.md
+++ b/docs/dev/libs/gelly/bipartite_graph.md
@@ -183,3 +183,5 @@ Graph<String, String, Projection<Long, String, String, String>> graph bipartiteG
 {% endhighlight %}
 </div>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/als.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/als.md b/docs/dev/libs/ml/als.md
index a0ef78a..87c80f8 100644
--- a/docs/dev/libs/ml/als.md
+++ b/docs/dev/libs/ml/als.md
@@ -173,3 +173,5 @@ val testingDS: DataSet[(Int, Int)] = env.readCsvFile[(Int, Int)](pathToData)
 // Calculate the ratings according to the matrix factorization
 val predictedRatings = als.predict(testingDS)
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/contribution_guide.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/contribution_guide.md b/docs/dev/libs/ml/contribution_guide.md
index 992232f..b30c53e 100644
--- a/docs/dev/libs/ml/contribution_guide.md
+++ b/docs/dev/libs/ml/contribution_guide.md
@@ -104,3 +104,5 @@ See `docs/_include/latex_commands.html` for the complete list of predefined late
 
 Once you have implemented the algorithm with adequate test coverage and added documentation, you are ready to open a pull request.
 Details of how to open a pull request can be found [here](http://flink.apache.org/how-to-contribute.html#contributing-code--documentation).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/cross_validation.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/cross_validation.md b/docs/dev/libs/ml/cross_validation.md
index 943c492..ef3d2ff 100644
--- a/docs/dev/libs/ml/cross_validation.md
+++ b/docs/dev/libs/ml/cross_validation.md
@@ -169,3 +169,5 @@ val dataKFolded: Array[TrainTestDataSet] =  Splitter.kFoldSplit(data, 10)
 // create an array of 5 datasets
 val dataMultiRandom: Array[DataSet[T]] = Splitter.multiRandomSplit(data, Array(0.5, 0.1, 0.1, 0.1, 0.1))
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/distance_metrics.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/distance_metrics.md b/docs/dev/libs/ml/distance_metrics.md
index 1dbd002..3119479 100644
--- a/docs/dev/libs/ml/distance_metrics.md
+++ b/docs/dev/libs/ml/distance_metrics.md
@@ -105,3 +105,5 @@ object MyDistance {
 
 val myMetric = MyDistance()
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/knn.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/knn.md b/docs/dev/libs/ml/knn.md
index 0d3ca9a..43f8d13 100644
--- a/docs/dev/libs/ml/knn.md
+++ b/docs/dev/libs/ml/knn.md
@@ -142,3 +142,5 @@ val result = knn.predict(testingSet).collect()
 {% endhighlight %}
 
 For more details on the computing KNN with and without and quadtree, here is a presentation: [http://danielblazevski.github.io/](http://danielblazevski.github.io/)
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/min_max_scaler.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/min_max_scaler.md b/docs/dev/libs/ml/min_max_scaler.md
index 35376c3..c44a875 100644
--- a/docs/dev/libs/ml/min_max_scaler.md
+++ b/docs/dev/libs/ml/min_max_scaler.md
@@ -110,3 +110,5 @@ minMaxscaler.fit(dataSet)
 // Scale the provided data set to have min=-1.0 and max=1.0
 val scaledDS = minMaxscaler.transform(dataSet)
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/multiple_linear_regression.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/multiple_linear_regression.md b/docs/dev/libs/ml/multiple_linear_regression.md
index a5737eb..c6b7ed6 100644
--- a/docs/dev/libs/ml/multiple_linear_regression.md
+++ b/docs/dev/libs/ml/multiple_linear_regression.md
@@ -150,3 +150,5 @@ mlr.fit(trainingDS)
 // Calculate the predictions for the test data
 val predictions = mlr.predict(testingDS)
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/optimization.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/optimization.md b/docs/dev/libs/ml/optimization.md
index 739d912..1e3bd2a 100644
--- a/docs/dev/libs/ml/optimization.md
+++ b/docs/dev/libs/ml/optimization.md
@@ -417,3 +417,5 @@ val trainingDS: DataSet[LabeledVector] = ...
 // Optimize the weights, according to the provided data
 val weightDS = sgd.optimize(trainingDS)
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/pipelines.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/pipelines.md b/docs/dev/libs/ml/pipelines.md
index e0f7d82..514d557 100644
--- a/docs/dev/libs/ml/pipelines.md
+++ b/docs/dev/libs/ml/pipelines.md
@@ -438,4 +438,6 @@ object MeanTransformer {
 {% endhighlight %}
 
 If we wanted to implement a `Predictor` instead of a `Transformer`, then we would have to provide a `FitOperation`, too.
-Moreover, a `Predictor` requires a `PredictOperation` which implements how predictions are calculated from testing data.  
+Moreover, a `Predictor` requires a `PredictOperation` which implements how predictions are calculated from testing data.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/polynomial_features.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/polynomial_features.md b/docs/dev/libs/ml/polynomial_features.md
index 676c132..5654ec7 100644
--- a/docs/dev/libs/ml/polynomial_features.md
+++ b/docs/dev/libs/ml/polynomial_features.md
@@ -106,3 +106,5 @@ val pipeline = polyFeatures.chainPredictor(mlr)
 // train the model
 pipeline.fit(trainingDS)
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/quickstart.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/quickstart.md b/docs/dev/libs/ml/quickstart.md
index 5dff6bb..ea6f804 100644
--- a/docs/dev/libs/ml/quickstart.md
+++ b/docs/dev/libs/ml/quickstart.md
@@ -241,3 +241,5 @@ coordinate ascent.* Advances in Neural Information Processing Systems. 2014.
 
 <a name="hsu"></a>[3] Hsu, Chih-Wei, Chih-Chung Chang, and Chih-Jen Lin.
  *A practical guide to support vector classification.* 2003.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/sos.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/sos.md b/docs/dev/libs/ml/sos.md
index 22f4c30..6f117e0 100644
--- a/docs/dev/libs/ml/sos.md
+++ b/docs/dev/libs/ml/sos.md
@@ -118,3 +118,5 @@ outputVector.foreach(output => expectedOutputVector(output._1) should be(output.
 
 <a name="janssens"></a>[1]J.H.M. Janssens, F. Huszar, E.O. Postma, and H.J. van den Herik. 
 *Stochastic Outlier Selection*. Technical Report TiCC TR 2012-001, Tilburg University, Tilburg, the Netherlands, 2012.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/standard_scaler.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/standard_scaler.md b/docs/dev/libs/ml/standard_scaler.md
index 5104d3c..cdfc6a0 100644
--- a/docs/dev/libs/ml/standard_scaler.md
+++ b/docs/dev/libs/ml/standard_scaler.md
@@ -111,3 +111,5 @@ scaler.fit(dataSet)
 // Scale the provided data set to have mean=10.0 and std=2.0
 val scaledDS = scaler.transform(dataSet)
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/ml/svm.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/svm.md b/docs/dev/libs/ml/svm.md
index 34fa1ec..2fa9e0a 100644
--- a/docs/dev/libs/ml/svm.md
+++ b/docs/dev/libs/ml/svm.md
@@ -218,3 +218,5 @@ val testingDS: DataSet[Vector] = env.readLibSVM(pathToTestingFile).map(_.vector)
 val predictionDS: DataSet[(Vector, Double)] = svm.predict(testingDS)
 
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/libs/storm_compatibility.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/storm_compatibility.md b/docs/dev/libs/storm_compatibility.md
index 6b24dc0..4f499f1 100644
--- a/docs/dev/libs/storm_compatibility.md
+++ b/docs/dev/libs/storm_compatibility.md
@@ -285,3 +285,5 @@ Compare `pom.xml` to see how both jars are built.
 Furthermore, there is one example for whole Storm topologies (`WordCount-StormTopology.jar`).
 
 You can run each of those examples via `bin/flink run <jarname>.jar`. The correct entry point class is contained in each jar's manifest file.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/linking.md
----------------------------------------------------------------------
diff --git a/docs/dev/linking.md b/docs/dev/linking.md
index 0592617..78ef544 100644
--- a/docs/dev/linking.md
+++ b/docs/dev/linking.md
@@ -92,3 +92,5 @@ the following to your plugins section.
 ~~~
 
 Now when running `mvn clean package` the produced jar includes the required dependencies.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/local_execution.md
----------------------------------------------------------------------
diff --git a/docs/dev/local_execution.md b/docs/dev/local_execution.md
index cf89956..326d515 100644
--- a/docs/dev/local_execution.md
+++ b/docs/dev/local_execution.md
@@ -123,3 +123,5 @@ public static void main(String[] args) throws Exception {
 The `flink-examples-batch` module contains a full example, called `CollectionExecutionExample`.
 
 Please note that the execution of the collection-based Flink programs is only possible on small data, which fits into the JVM heap. The execution on collections is not multi-threaded, only one thread is used.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/migration.md
----------------------------------------------------------------------
diff --git a/docs/dev/migration.md b/docs/dev/migration.md
index 3369a2c..5ac6961 100644
--- a/docs/dev/migration.md
+++ b/docs/dev/migration.md
@@ -476,3 +476,5 @@ val window2 = source
 {% endhighlight %}
 </div>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/scala_api_extensions.md
----------------------------------------------------------------------
diff --git a/docs/dev/scala_api_extensions.md b/docs/dev/scala_api_extensions.md
index 283f50b..41836f9 100644
--- a/docs/dev/scala_api_extensions.md
+++ b/docs/dev/scala_api_extensions.md
@@ -406,3 +406,5 @@ object Main {
   }
 }
 {% endhighlight %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/scala_shell.md
----------------------------------------------------------------------
diff --git a/docs/dev/scala_shell.md b/docs/dev/scala_shell.md
index a8d1b74..bfd3133 100644
--- a/docs/dev/scala_shell.md
+++ b/docs/dev/scala_shell.md
@@ -191,3 +191,5 @@ Starts Flink scala shell connecting to a yarn cluster
   -h | --help
         Prints this usage text
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/stream/operators/asyncio.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/operators/asyncio.md b/docs/dev/stream/operators/asyncio.md
index c5bafa1..32945e4 100644
--- a/docs/dev/stream/operators/asyncio.md
+++ b/docs/dev/stream/operators/asyncio.md
@@ -251,3 +251,4 @@ For example, the following patterns result in a blocking `asyncInvoke(...)` func
 
   - Blocking/waiting on the future-type objects returned by an aynchronous client inside the `asyncInvoke(...)` method
 
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/stream/operators/windows.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/operators/windows.md b/docs/dev/stream/operators/windows.md
index b825876..0ecae0c 100644
--- a/docs/dev/stream/operators/windows.md
+++ b/docs/dev/stream/operators/windows.md
@@ -1165,3 +1165,5 @@ Windows can be defined over long periods of time (such as days, weeks, or months
 2. `FoldFunction` and `ReduceFunction` can significantly reduce the storage requirements, as they eagerly aggregate elements and store only one value per window. In contrast, just using a `WindowFunction` requires accumulating all elements.
 
 3. Using an `Evictor` prevents any pre-aggregation, as all the elements of a window have to be passed through the evictor before applying the computation (see [Evictors](#evictors)).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/stream/side_output.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/side_output.md b/docs/dev/stream/side_output.md
index da76af4..0f144d4 100644
--- a/docs/dev/stream/side_output.md
+++ b/docs/dev/stream/side_output.md
@@ -136,3 +136,5 @@ val sideOutputStream: DataStream[String] = mainDataStream.getSideOutput(outputTa
 {% endhighlight %}
 </div>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/stream/state/custom_serialization.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/state/custom_serialization.md b/docs/dev/stream/state/custom_serialization.md
index fbb7b83..ca6b07d 100644
--- a/docs/dev/stream/state/custom_serialization.md
+++ b/docs/dev/stream/state/custom_serialization.md
@@ -186,3 +186,5 @@ fundamental components to compatibility checks on upgraded serializers and would
 is not present. Since configuration snapshots are written to checkpoints using custom serialization, the implementation
 of the class is free to be changed, as long as compatibility of the configuration change is handled using the versioning
 mechanisms in `TypeSerializerConfigSnapshot`.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/stream/state/queryable_state.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/state/queryable_state.md b/docs/dev/stream/state/queryable_state.md
index bd0d7fb..4bbc043 100644
--- a/docs/dev/stream/state/queryable_state.md
+++ b/docs/dev/stream/state/queryable_state.md
@@ -290,3 +290,5 @@ more robust with asks and acknowledgements.
 * The server and client keep track of statistics for queries. These are currently disabled by
 default as they would not be exposed anywhere. As soon as there is better support to publish these
 numbers via the Metrics system, we should enable the stats.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/stream/state/state.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/state/state.md b/docs/dev/stream/state/state.md
index f280ceb..0f80a9d 100644
--- a/docs/dev/stream/state/state.md
+++ b/docs/dev/stream/state/state.md
@@ -593,4 +593,6 @@ class CounterSource
 </div>
 </div>
 
-Some operators might need the information when a checkpoint is fully acknowledged by Flink to communicate that with the outside world. In this case see the `org.apache.flink.runtime.state.CheckpointListener` interface.
\ No newline at end of file
+Some operators might need the information when a checkpoint is fully acknowledged by Flink to communicate that with the outside world. In this case see the `org.apache.flink.runtime.state.CheckpointListener` interface.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/stream/state/state_backends.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/state/state_backends.md b/docs/dev/stream/state/state_backends.md
index 1357f2e..8e32f8e 100644
--- a/docs/dev/stream/state/state_backends.md
+++ b/docs/dev/stream/state/state_backends.md
@@ -44,3 +44,5 @@ env.setStateBackend(...)
 {% endhighlight %}
 </div>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/stream/testing.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/testing.md b/docs/dev/stream/testing.md
index 44f5cfd..e5bc024 100644
--- a/docs/dev/stream/testing.md
+++ b/docs/dev/stream/testing.md
@@ -261,3 +261,5 @@ Another approach is to write a unit test using the Flink internal testing utilit
 For an example of how to do that, please have a look at the `org.apache.flink.streaming.runtime.operators.windowing.WindowOperatorTest`, also in the `flink-streaming-java` module.
 
 Be aware that `AbstractStreamOperatorTestHarness` is currently not a part of public API and can be subject to change.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/dev/types_serialization.md
----------------------------------------------------------------------
diff --git a/docs/dev/types_serialization.md b/docs/dev/types_serialization.md
index 0d68a51..00d0363 100644
--- a/docs/dev/types_serialization.md
+++ b/docs/dev/types_serialization.md
@@ -360,3 +360,5 @@ The parameters provide additional information about the type itself as well as t
 If your type contains generic parameters that might need to be derived from the input type of a Flink function, make sure to also 
 implement `org.apache.flink.api.common.typeinfo.TypeInformation#getGenericParameters` for a bidirectional mapping of generic 
 parameters to type information.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/internals/components.md
----------------------------------------------------------------------
diff --git a/docs/internals/components.md b/docs/internals/components.md
index cf3b659..e85183b 100644
--- a/docs/internals/components.md
+++ b/docs/internals/components.md
@@ -57,3 +57,5 @@ You can click on the components in the figure to learn more.
 <area id="cluster" title="Cluster" href="{{ site.baseurl }}/ops/deployment/cluster_setup.html" shape="rect" coords="273,336,486,413" />
 <area id="cloud" title="Cloud" href="{{ site.baseurl }}/ops/deployment/gce_setup.html" shape="rect" coords="485,336,700,414" />
 </map>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/internals/filesystems.md
----------------------------------------------------------------------
diff --git a/docs/internals/filesystems.md b/docs/internals/filesystems.md
index 427251a..5ffd766 100644
--- a/docs/internals/filesystems.md
+++ b/docs/internals/filesystems.md
@@ -136,3 +136,4 @@ The `FSDataInputStream` and `FSDataOutputStream` implementations are strictly *
 Instances of the streams should also not be passed between threads in between read or write operations, because there are no guarantees
 about the visibility of operations across threads (many operations do not create memory fences).
 
+{% top %}
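Because the streams are not thread-safe and give no cross-thread visibility guarantees, a caller that must share one stream between threads has to serialize access itself. A minimal plain-Java sketch of such a guard (the `GuardedStream` wrapper is illustrative, not a Flink class):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class GuardedStream {
    private final OutputStream out;
    private final Object lock = new Object();

    public GuardedStream(OutputStream out) {
        this.out = out;
    }

    // all writes go through one lock: this both serializes the calls and
    // creates the memory fences the raw stream does not guarantee
    public void write(byte[] data) throws IOException {
        synchronized (lock) {
            out.write(data);
        }
    }

    public static void main(String[] args) throws Exception {
        ByteArrayOutputStream sink = new ByteArrayOutputStream();
        GuardedStream guarded = new GuardedStream(sink);
        Thread a = new Thread(() -> { try { guarded.write("ab".getBytes()); } catch (IOException ignored) {} });
        Thread b = new Thread(() -> { try { guarded.write("cd".getBytes()); } catch (IOException ignored) {} });
        a.start(); b.start(); a.join(); b.join();
        System.out.println(sink.size()); // prints 4: both writes arrived intact
    }
}
```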

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/internals/ide_setup.md
----------------------------------------------------------------------
diff --git a/docs/internals/ide_setup.md b/docs/internals/ide_setup.md
index 31ad6b8..02d54e7 100644
--- a/docs/internals/ide_setup.md
+++ b/docs/internals/ide_setup.md
@@ -119,3 +119,5 @@ due to deficiencies of the old Eclipse version bundled with Scala IDE 3.0.3 or
 due to version incompatibilities with the bundled Scala version in Scala IDE 4.4.1.
 
 **We recommend using IntelliJ instead (see [above](#intellij-idea))**
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/internals/job_scheduling.md
----------------------------------------------------------------------
diff --git a/docs/internals/job_scheduling.md b/docs/internals/job_scheduling.md
index 74062c6..668dfa3 100644
--- a/docs/internals/job_scheduling.md
+++ b/docs/internals/job_scheduling.md
@@ -101,3 +101,5 @@ For that reason, the execution of an ExecutionVertex is tracked in an {% gh_link
 <div style="text-align: center;">
 <img src="{{ site.baseurl }}/fig/state_machine.svg" alt="States and Transitions of Task Executions" height="300px" style="text-align: center;"/>
 </div>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/internals/stream_checkpointing.md
----------------------------------------------------------------------
diff --git a/docs/internals/stream_checkpointing.md b/docs/internals/stream_checkpointing.md
index 330b0aa..8fc96cc 100644
--- a/docs/internals/stream_checkpointing.md
+++ b/docs/internals/stream_checkpointing.md
@@ -169,3 +169,5 @@ Operators that checkpoint purely synchronously return an already completed `Futu
 If an asynchronous operation needs to be performed, it is executed in the `run()` method of that `FutureTask`.
 
 The tasks are cancelable, so that streams and other resource consuming handles can be released.
+
+{% top %}
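The completed-versus-asynchronous distinction above maps directly onto plain `java.util.concurrent.FutureTask`. The following is a self-contained sketch of the mechanism, not Flink's actual checkpointing code (the method names are made up for illustration):

```java
import java.util.concurrent.FutureTask;

public class SnapshotFuture {
    // a purely synchronous "snapshot": the result is already known, so
    // hand back a FutureTask that is completed immediately
    static FutureTask<String> completed(String result) {
        FutureTask<String> task = new FutureTask<>(() -> result);
        task.run(); // runs inline; a later get() will not block
        return task;
    }

    // an asynchronous "snapshot": the work happens when run() executes,
    // typically on another thread, and the task remains cancelable
    static FutureTask<String> async() {
        return new FutureTask<>(() -> "written-async");
    }

    public static void main(String[] args) throws Exception {
        System.out.println(completed("written-sync").get());
        FutureTask<String> pending = async();
        pending.cancel(true); // releases the handle without ever running it
        System.out.println(pending.isCancelled());
    }
}
```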

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/internals/task_lifecycle.md
----------------------------------------------------------------------
diff --git a/docs/internals/task_lifecycle.md b/docs/internals/task_lifecycle.md
index fed2cb9..182c99c 100644
--- a/docs/internals/task_lifecycle.md
+++ b/docs/internals/task_lifecycle.md
@@ -190,3 +190,5 @@ In the previous sections we described the lifecycle of a task that runs till com
 at any point, then the normal execution is interrupted and the only operations performed from that point on are the timer 
 service shutdown, the task-specific cleanup, the disposal of the operators, and the general task cleanup, as described 
 above.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/monitoring/application_profiling.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/application_profiling.md b/docs/monitoring/application_profiling.md
index 65ef45e..721bc31 100644
--- a/docs/monitoring/application_profiling.md
+++ b/docs/monitoring/application_profiling.md
@@ -52,3 +52,5 @@ compiler used to inspect inlining decisions, hot methods, bytecode, and assembly
 ~~~
 env.java.opts: "-XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation -XX:LogFile=${FLINK_LOG_PREFIX}.jit -XX:+PrintAssembly"
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/monitoring/back_pressure.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/back_pressure.md b/docs/monitoring/back_pressure.md
index f047066..28a4dd2 100644
--- a/docs/monitoring/back_pressure.md
+++ b/docs/monitoring/back_pressure.md
@@ -79,3 +79,5 @@ If you see status **OK** for the tasks, there is no indication of back pressure.
 <img src="{{ site.baseurl }}/fig/back_pressure_sampling_ok.png" class="img-responsive">
 
 <img src="{{ site.baseurl }}/fig/back_pressure_sampling_high.png" class="img-responsive">
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/monitoring/checkpoint_monitoring.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/checkpoint_monitoring.md b/docs/monitoring/checkpoint_monitoring.md
index 7cf06d1..6c2c289 100644
--- a/docs/monitoring/checkpoint_monitoring.md
+++ b/docs/monitoring/checkpoint_monitoring.md
@@ -113,3 +113,5 @@ When you click on a *More details* link for a checkpoint, you get a Minimum/Aver
 <center>
   <img src="{{ site.baseurl }}/fig/checkpoint_monitoring-details_subtasks.png" width="700px" alt="Checkpoint Monitoring: Subtasks">
 </center>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/monitoring/debugging_classloading.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/debugging_classloading.md b/docs/monitoring/debugging_classloading.md
index c072777..4f57c10 100644
--- a/docs/monitoring/debugging_classloading.md
+++ b/docs/monitoring/debugging_classloading.md
@@ -140,3 +140,4 @@ This documentation page explains [relocating classes using the shade plugin](htt
 
 Note that some of Flink's dependencies, such as `guava`, are shaded away by the maintainers of Flink, so users usually don't have to worry about them.
 
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/monitoring/debugging_event_time.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/debugging_event_time.md b/docs/monitoring/debugging_event_time.md
index 8355b62..10a3fb2 100644
--- a/docs/monitoring/debugging_event_time.md
+++ b/docs/monitoring/debugging_event_time.md
@@ -54,3 +54,4 @@ For local setups, we recommend using the JMX metric reporter and a tool like [Vi
   - Approach 1: Watermark stays late (indicated completeness), windows fire early
   - Approach 2: Watermark heuristic with maximum lateness, windows accept late data
 
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/monitoring/historyserver.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/historyserver.md b/docs/monitoring/historyserver.md
index e109512..61660a5 100644
--- a/docs/monitoring/historyserver.md
+++ b/docs/monitoring/historyserver.md
@@ -95,3 +95,5 @@ Values in angle brackets are variables, for example `http://hostname:port/jobs/<
   - `/jobs/<jobid>/vertices/<vertexid>/subtasks/<subtasknum>/attempts/<attempt>`
   - `/jobs/<jobid>/vertices/<vertexid>/subtasks/<subtasknum>/attempts/<attempt>/accumulators`
   - `/jobs/<jobid>/plan`
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/monitoring/logging.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/logging.md b/docs/monitoring/logging.md
index ee6a316..b548d41 100644
--- a/docs/monitoring/logging.md
+++ b/docs/monitoring/logging.md
@@ -96,3 +96,5 @@ catch(Exception exception){
 	LOG.error("An {} occurred.", "error", exception);
 }
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/monitoring/rest_api.md
----------------------------------------------------------------------
diff --git a/docs/monitoring/rest_api.md b/docs/monitoring/rest_api.md
index c5efcc2..4202886 100644
--- a/docs/monitoring/rest_api.md
+++ b/docs/monitoring/rest_api.md
@@ -700,3 +700,5 @@ Response:
 ~~~
 {"jobid": "869a9868d49c679e7355700e0857af85"}
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/cli.md
----------------------------------------------------------------------
diff --git a/docs/ops/cli.md b/docs/ops/cli.md
index 7b36177..11c8caf 100644
--- a/docs/ops/cli.md
+++ b/docs/ops/cli.md
@@ -353,3 +353,5 @@ Action "savepoint" triggers savepoints for a running job or disposes existing on
   Options for yarn-cluster mode:
      -yid,--yarnapplicationId <arg>   Attach to running YARN session
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/config.md
----------------------------------------------------------------------
diff --git a/docs/ops/config.md b/docs/ops/config.md
index 9d2405e..64ef48f 100644
--- a/docs/ops/config.md
+++ b/docs/ops/config.md
@@ -723,3 +723,5 @@ Each Flink TaskManager provides processing slots in the cluster. The number of s
 When starting a Flink application, users can supply the default number of slots to use for that job. The corresponding command-line option is therefore called `-p` (for parallelism). In addition, it is possible to [set the number of slots in the programming APIs]({{site.baseurl}}/dev/parallel.html) for the whole application and for individual operators.
 
 <img src="{{ site.baseurl }}/fig/slots_parallelism.svg" class="img-responsive" />
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/deployment/aws.md
----------------------------------------------------------------------
diff --git a/docs/ops/deployment/aws.md b/docs/ops/deployment/aws.md
index 9c6e302..1a05bfd 100644
--- a/docs/ops/deployment/aws.md
+++ b/docs/ops/deployment/aws.md
@@ -372,3 +372,5 @@ o.a.f.runtime.fs.hdfs.HadoopFileSystem.create(HadoopFileSystem.java:404) at
 o.a.f.runtime.fs.hdfs.HadoopFileSystem.create(HadoopFileSystem.java:48) at
 ... 25 more
 ```
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/deployment/gce_setup.md
----------------------------------------------------------------------
diff --git a/docs/ops/deployment/gce_setup.md b/docs/ops/deployment/gce_setup.md
index 2925737..0b9482c 100644
--- a/docs/ops/deployment/gce_setup.md
+++ b/docs/ops/deployment/gce_setup.md
@@ -91,3 +91,5 @@ To bring up the Flink cluster on Google Compute Engine, execute:
 Shutting down a cluster is as simple as executing
 
     ./bdutil -e extensions/flink/flink_env.sh delete
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/deployment/mapr_setup.md
----------------------------------------------------------------------
diff --git a/docs/ops/deployment/mapr_setup.md b/docs/ops/deployment/mapr_setup.md
index 7575bdc..19920ad 100644
--- a/docs/ops/deployment/mapr_setup.md
+++ b/docs/ops/deployment/mapr_setup.md
@@ -130,3 +130,5 @@ java.lang.Exception: unable to establish the security context
 Caused by: o.a.f.r.security.modules.SecurityModule$SecurityInstallException: Unable to set the Hadoop login user
 Caused by: java.io.IOException: failure to login: Unable to obtain MapR credentials
 ```
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/deployment/mesos.md
----------------------------------------------------------------------
diff --git a/docs/ops/deployment/mesos.md b/docs/ops/deployment/mesos.md
index 2fa340d..36b6df7 100644
--- a/docs/ops/deployment/mesos.md
+++ b/docs/ops/deployment/mesos.md
@@ -267,3 +267,5 @@ May be set to -1 to disable this feature.
 `mesos.resourcemanager.tasks.hostname`: Optional value to define the TaskManager's hostname. The pattern `_TASK_` is replaced by the actual id of the Mesos task. This can be used to configure the TaskManager to use Mesos DNS (e.g. `_TASK_.flink-service.mesos`) for name lookups. (**NO DEFAULT**)
 
 `mesos.resourcemanager.tasks.bootstrap-cmd`: A command which is executed before the TaskManager is started (**NO DEFAULT**).
+
+{% top %}
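Putting the two options above together, a `flink-conf.yaml` fragment might look like the following sketch (the service domain and the bootstrap script path are made-up examples, not defaults):

~~~
mesos.resourcemanager.tasks.hostname: _TASK_.flink-service.mesos
mesos.resourcemanager.tasks.bootstrap-cmd: ./bootstrap-taskmanager.sh
~~~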

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/deployment/yarn_setup.md
----------------------------------------------------------------------
diff --git a/docs/ops/deployment/yarn_setup.md b/docs/ops/deployment/yarn_setup.md
index 8c435f7..0fb5bf6 100644
--- a/docs/ops/deployment/yarn_setup.md
+++ b/docs/ops/deployment/yarn_setup.md
@@ -336,3 +336,5 @@ The next step of the client is to request (step 2) a YARN container to start the
 The *JobManager* and AM run in the same container. Once they have successfully started, the AM knows the address of the JobManager (its own host). It generates a new Flink configuration file for the TaskManagers (so that they can connect to the JobManager); this file is also uploaded to HDFS. Additionally, the *AM* container serves Flink's web interface. All ports allocated by the YARN code are *ephemeral ports*, which allows users to execute multiple Flink YARN sessions in parallel.
 
 After that, the AM starts allocating the containers for Flink's TaskManagers, which download the jar file and the modified configuration from HDFS. Once these steps are completed, Flink is set up and ready to accept jobs.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/jobmanager_high_availability.md
----------------------------------------------------------------------
diff --git a/docs/ops/jobmanager_high_availability.md b/docs/ops/jobmanager_high_availability.md
index 7dd7d4c..e73353b 100644
--- a/docs/ops/jobmanager_high_availability.md
+++ b/docs/ops/jobmanager_high_availability.md
@@ -237,3 +237,5 @@ server.Y=addressY:peerPort:leaderPort
 </pre>
 
 The script `bin/start-zookeeper-quorum.sh` starts a ZooKeeper server on each of the configured hosts. Each of these processes launches the server via a Flink wrapper, which reads the configuration from `conf/zoo.cfg` and sets some required configuration values for convenience. In production setups, it is recommended to manage your own ZooKeeper installation.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/production_ready.md
----------------------------------------------------------------------
diff --git a/docs/ops/production_ready.md b/docs/ops/production_ready.md
index c58ce5b..303e7a7 100644
--- a/docs/ops/production_ready.md
+++ b/docs/ops/production_ready.md
@@ -86,3 +86,5 @@ stream processing. However, RocksDB can have worse performance than, for example
 you are sure that your state will never exceed main memory and blocking the stream processing to write it is not an issue,
 you **could consider** not using the RocksDB backend. However, at this point, we **strongly recommend** using RocksDB
 for production.
+
+{% top %}
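Following the recommendation above, switching to RocksDB is a small configuration change; a sketch of the relevant `flink-conf.yaml` entries (the checkpoint directory below is a placeholder):

~~~
state.backend: rocksdb
state.backend.fs.checkpointdir: hdfs://namenode:40010/flink/checkpoints
~~~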

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/security-kerberos.md
----------------------------------------------------------------------
diff --git a/docs/ops/security-kerberos.md b/docs/ops/security-kerberos.md
index eac72f1..5589057 100644
--- a/docs/ops/security-kerberos.md
+++ b/docs/ops/security-kerberos.md
@@ -117,3 +117,5 @@ Steps to run a secure Flink cluster using `kinit`:
 Each component that uses Kerberos is independently responsible for renewing the Kerberos ticket-granting-ticket (TGT).
 Hadoop, ZooKeeper, and Kafka all renew the TGT automatically when provided a keytab.  In the delegation token scenario,
 YARN itself renews the token (up to its maximum lifespan).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/security-ssl.md
----------------------------------------------------------------------
diff --git a/docs/ops/security-ssl.md b/docs/ops/security-ssl.md
index 7c7268a..8c7bf2b 100644
--- a/docs/ops/security-ssl.md
+++ b/docs/ops/security-ssl.md
@@ -142,3 +142,4 @@ flink run -m yarn-cluster -yt deploy-keys/ TestJob.jar
 
 When deployed using YARN, Flink's web dashboard is accessible through the YARN proxy's Tracking URL. To ensure that the YARN proxy is able to access Flink's HTTPS URL, you need to configure the YARN proxy to accept Flink's SSL certificates. Add the custom CA certificate into Java's default truststore on the YARN proxy node.
 
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/state/checkpoints.md
----------------------------------------------------------------------
diff --git a/docs/ops/state/checkpoints.md b/docs/ops/state/checkpoints.md
index 96c7a20..690680f 100644
--- a/docs/ops/state/checkpoints.md
+++ b/docs/ops/state/checkpoints.md
@@ -99,3 +99,5 @@ above).
 ```sh
 $ bin/flink run -s :checkpointMetaDataPath [:runArgs]
 ```
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/state/large_state_tuning.md
----------------------------------------------------------------------
diff --git a/docs/ops/state/large_state_tuning.md b/docs/ops/state/large_state_tuning.md
index aa673a4..dd3e404 100644
--- a/docs/ops/state/large_state_tuning.md
+++ b/docs/ops/state/large_state_tuning.md
@@ -235,3 +235,5 @@ Compression can be activated through the `ExecutionConfig`:
 
 **Notice:** The compression option has no impact on incremental snapshots, because they use RocksDB's internal
 format, which always uses snappy compression out of the box.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/state/savepoints.md
----------------------------------------------------------------------
diff --git a/docs/ops/state/savepoints.md b/docs/ops/state/savepoints.md
index 1d82d2b..d6d4c53 100644
--- a/docs/ops/state/savepoints.md
+++ b/docs/ops/state/savepoints.md
@@ -196,3 +196,5 @@ If you did not assign IDs, the auto generated IDs of the stateful operators will
 If the savepoint was triggered with Flink >= 1.2.0 and using no deprecated state API like `Checkpointed`, you can simply restore the program from a savepoint and specify a new parallelism.
 
 If you are resuming from a savepoint triggered with Flink < 1.2.0 or using now-deprecated APIs, you first have to migrate your job and savepoint to Flink >= 1.2.0 before being able to change the parallelism. See the [upgrading jobs and Flink versions guide]({{ site.baseurl }}/ops/upgrading.html).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/state/state_backends.md
----------------------------------------------------------------------
diff --git a/docs/ops/state/state_backends.md b/docs/ops/state/state_backends.md
index b53bcef..422df3e 100644
--- a/docs/ops/state/state_backends.md
+++ b/docs/ops/state/state_backends.md
@@ -167,3 +167,5 @@ state.backend: filesystem
 
 state.backend.fs.checkpointdir: hdfs://namenode:40010/flink/checkpoints
 ~~~
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/ops/upgrading.md
----------------------------------------------------------------------
diff --git a/docs/ops/upgrading.md b/docs/ops/upgrading.md
index 12d15ea..2a34c17 100644
--- a/docs/ops/upgrading.md
+++ b/docs/ops/upgrading.md
@@ -240,3 +240,5 @@ Savepoints are compatible across Flink versions as indicated by the table below:
     </tr>
   </tbody>
 </table>
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/quickstart/java_api_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/quickstart/java_api_quickstart.md b/docs/quickstart/java_api_quickstart.md
index c21e06e..109240b 100644
--- a/docs/quickstart/java_api_quickstart.md
+++ b/docs/quickstart/java_api_quickstart.md
@@ -192,3 +192,5 @@ public static final class LineSplitter implements FlatMapFunction<String, Tuple2
 {% gh_link /flink-examples/flink-examples-batch/src/main/java/org/apache/flink/examples/java/wordcount/WordCount.java "Check GitHub" %} for the full example code.
 
 For a complete overview over our API, have a look at the [DataStream API]({{ site.baseurl }}/dev/datastream_api.html) and [DataSet API]({{ site.baseurl }}/dev/batch/index.html) sections. If you have any trouble, ask on our [Mailing List](http://mail-archives.apache.org/mod_mbox/flink-dev/). We are happy to provide help.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/quickstart/run_example_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/quickstart/run_example_quickstart.md b/docs/quickstart/run_example_quickstart.md
index 123e265..683c5ce 100644
--- a/docs/quickstart/run_example_quickstart.md
+++ b/docs/quickstart/run_example_quickstart.md
@@ -392,3 +392,5 @@ and, for example, see the number of processed elements:
 <a href="{{ site.baseurl }}/page/img/quickstart-example/jobmanager-job.png" ><img class="img-responsive" src="{{ site.baseurl }}/page/img/quickstart-example/jobmanager-job.png" alt="Example Job View"/></a>
 
 This concludes our little tour of Flink. If you have any questions, please don't hesitate to ask on our [Mailing Lists](http://flink.apache.org/community.html#mailing-lists).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/quickstart/scala_api_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/quickstart/scala_api_quickstart.md b/docs/quickstart/scala_api_quickstart.md
index 9e563ed..33013e4 100644
--- a/docs/quickstart/scala_api_quickstart.md
+++ b/docs/quickstart/scala_api_quickstart.md
@@ -260,3 +260,5 @@ For a complete overview over our API, have a look at the
 sections. If you have any trouble, ask on our
 [Mailing List](http://mail-archives.apache.org/mod_mbox/flink-dev/).
 We are happy to provide help.
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/quickstart/setup_quickstart.md
----------------------------------------------------------------------
diff --git a/docs/quickstart/setup_quickstart.md b/docs/quickstart/setup_quickstart.md
index 0e4f3d8..3d40ddf 100644
--- a/docs/quickstart/setup_quickstart.md
+++ b/docs/quickstart/setup_quickstart.md
@@ -299,3 +299,5 @@ window of processing time, as long as words are floating in.
 ## Next Steps
 
 Check out some more [examples]({{ site.baseurl }}/examples) to get a better feel for Flink's programming APIs. When you are done with that, go ahead and read the [streaming guide]({{ site.baseurl }}/dev/datastream_api.html).
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/8c239ac3/docs/search-results.md
----------------------------------------------------------------------
diff --git a/docs/search-results.md b/docs/search-results.md
index 2c37b44..5d8de99 100644
--- a/docs/search-results.md
+++ b/docs/search-results.md
@@ -32,3 +32,5 @@ under the License.
 </script>
 <!-- add the keyword flink to every search -->
 <gcse:search as_oq="flink"></gcse:search>
+
+{% top %}