Posted to commits@flink.apache.org by uc...@apache.org on 2015/05/18 10:37:07 UTC

flink-web git commit: Fix broken links in blog posts

Repository: flink-web
Updated Branches:
  refs/heads/asf-site 6a518fdc5 -> c4ce2d7c5


Fix broken links in blog posts


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/c4ce2d7c
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/c4ce2d7c
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/c4ce2d7c

Branch: refs/heads/asf-site
Commit: c4ce2d7c55e637eb47c2675d170d7104a4ecc420
Parents: 6a518fd
Author: Ufuk Celebi <uc...@apache.org>
Authored: Mon May 18 10:37:01 2015 +0200
Committer: Ufuk Celebi <uc...@apache.org>
Committed: Mon May 18 10:37:01 2015 +0200

----------------------------------------------------------------------
 _posts/2014-11-04-release-0.7.0.md              |  6 +--
 _posts/2014-11-18-hadoop-compatibility.md       |  4 +-
 _posts/2015-01-21-release-0.8.md                |  4 +-
 _posts/2015-02-09-streaming-example.md          | 12 ++---
 _posts/2015-03-02-february-2015-in-flink.md     |  2 +-
 ...13-peeking-into-Apache-Flinks-Engine-Room.md | 10 ++--
 _posts/2015-04-07-march-in-flink.md             |  5 +-
 _posts/2015-04-13-release-0.9.0-milestone1.md   | 12 ++---
 content/blog/feed.xml                           | 54 +++++++++-----------
 content/news/2014/11/04/release-0.7.0.html      |  6 +--
 .../news/2014/11/18/hadoop-compatibility.html   |  4 +-
 content/news/2015/01/21/release-0.8.html        |  4 +-
 content/news/2015/02/09/streaming-example.html  | 12 ++---
 .../news/2015/03/02/february-2015-in-flink.html |  2 +-
 .../peeking-into-Apache-Flinks-Engine-Room.html |  9 ++--
 content/news/2015/04/07/march-in-flink.html     |  5 +-
 .../2015/04/13/release-0.9.0-milestone1.html    | 12 ++---
 17 files changed, 75 insertions(+), 88 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/_posts/2014-11-04-release-0.7.0.md
----------------------------------------------------------------------
diff --git a/_posts/2014-11-04-release-0.7.0.md b/_posts/2014-11-04-release-0.7.0.md
index 3783eba..cad1432 100644
--- a/_posts/2014-11-04-release-0.7.0.md
+++ b/_posts/2014-11-04-release-0.7.0.md
@@ -13,11 +13,11 @@ See the release changelog [here](https://issues.apache.org/jira/secure/ReleaseNo
 
 ## Overview of major new features
 
-**Flink Streaming:** The gem of the 0.7.0 release is undoubtedly Flink Streaming. Available currently in alpha, Flink Streaming provides a Java API on top of Apache Flink that can consume streaming data sources (e.g., from Apache Kafka, Apache Flume, and others) and process them in real time. A dedicated blog post on Flink Streaming and its performance is coming up here soon. You can check out the Streaming programming guide [here](http://flink.incubator.apache.org/docs/0.7-incubating/streaming_guide.html).
+**Flink Streaming:** The gem of the 0.7.0 release is undoubtedly Flink Streaming. Available currently in alpha, Flink Streaming provides a Java API on top of Apache Flink that can consume streaming data sources (e.g., from Apache Kafka, Apache Flume, and others) and process them in real time. A dedicated blog post on Flink Streaming and its performance is coming up here soon. You can check out the Streaming programming guide [here](http://ci.apache.org/projects/flink/flink-docs-release-0.7/streaming_guide.html).
 
-**New Scala API:** The Scala API has been completely rewritten. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release. See the new Scala API [here](http://flink.incubator.apache.org/docs/0.7-incubating/programming_guide.html).
+**New Scala API:** The Scala API has been completely rewritten. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release. See the new Scala API [here](http://ci.apache.org/projects/flink/flink-docs-release-0.7/programming_guide.html).
 
-**Logical key expressions:** You can now specify grouping and joining keys with logical names for member variables of POJO data types. For example, you can join two data sets as ``persons.join(cities).where(“zip”).equalTo(“zipcode”)``. Read more [here](http://flink.incubator.apache.org/docs/0.7-incubating/programming_guide.html#specifying-keys).
+**Logical key expressions:** You can now specify grouping and joining keys with logical names for member variables of POJO data types. For example, you can join two data sets as ``persons.join(cities).where(“zip”).equalTo(“zipcode”)``. Read more [here](http://ci.apache.org/projects/flink/flink-docs-release-0.7/programming_guide.html#specifying-keys).
 
 **Hadoop MapReduce compatibility:** You can run unmodified Hadoop Mappers and Reducers (mapred API) in Flink, use all Hadoop data types, and read data with all Hadoop InputFormats.
 
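[Editor's sketch] The logical key expressions changed above can be illustrated with a minimal, self-contained Java example. The `Person`/`City` POJOs and sample values are purely illustrative; only the `join(...).where("zip").equalTo("zipcode")` pattern comes from the post itself:

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;

public class KeyExpressionJoin {
    // Illustrative POJOs; the string keys below refer to these field names.
    public static class Person { public String name; public String zip; }
    public static class City { public String zipcode; public String city; }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        Person p = new Person(); p.name = "Ada"; p.zip = "12345";
        City c = new City(); c.zipcode = "12345"; c.city = "Springfield";

        DataSet<Person> persons = env.fromElements(p);
        DataSet<City> cities = env.fromElements(c);

        // Join on logical field names instead of key-selector functions.
        persons.join(cities).where("zip").equalTo("zipcode").print();
    }
}
```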

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/_posts/2014-11-18-hadoop-compatibility.md
----------------------------------------------------------------------
diff --git a/_posts/2014-11-18-hadoop-compatibility.md b/_posts/2014-11-18-hadoop-compatibility.md
index 214049c..154c92a 100644
--- a/_posts/2014-11-18-hadoop-compatibility.md
+++ b/_posts/2014-11-18-hadoop-compatibility.md
@@ -81,10 +81,10 @@ Hadoop functions can be used at any position within a Flink program and of cours
 
 ## What comes next?
 
-While the Hadoop compatibility package is already very useful, we are currently working on a dedicated Hadoop Job operation to embed and execute Hadoop jobs as a whole in Flink programs, including their custom partitioning, sorting, and grouping code. With this feature, you will be able to chain multiple Hadoop jobs, mix them with Flink functions, and other operations such as [Spargel]({{ site.baseurl }}/docs/0.7-incubating/spargel_guide.html) operations (Pregel/Giraph-style jobs).
+While the Hadoop compatibility package is already very useful, we are currently working on a dedicated Hadoop Job operation to embed and execute Hadoop jobs as a whole in Flink programs, including their custom partitioning, sorting, and grouping code. With this feature, you will be able to chain multiple Hadoop jobs, mix them with Flink functions, and other operations such as [Spargel](http://ci.apache.org/projects/flink/flink-docs-release-0.7/spargel_guide.html) operations (Pregel/Giraph-style jobs).
 
 ## Summary
 
 Flink lets you reuse a lot of the code you wrote for Hadoop MapReduce, including all data types, all Input- and OutputFormats, and Mapper and Reducers of the mapred-API. Hadoop functions can be used within Flink programs and mixed with all other Flink functions. Due to Flink’s pipelined execution, Hadoop functions can arbitrarily be assembled without data exchange via HDFS. Moreover, the Flink community is currently working on a dedicated Hadoop Job operation to supporting the execution of Hadoop jobs as a whole.
 
-If you want to use Flink’s Hadoop compatibility package checkout our [documentation]({{ site.baseurl }}/docs/0.7-incubating/hadoop_compatibility.html).
\ No newline at end of file
+If you want to use Flink’s Hadoop compatibility package checkout our [documentation](http://ci.apache.org/projects/flink/flink-docs-release-0.7/hadoop_compatibility.html).
\ No newline at end of file
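
[Editor's sketch] As a companion to the compatibility package this post describes, here is a minimal word-count sketch that runs an unmodified Hadoop Mapper and Reducer (mapred API) inside a Flink job. It assumes the `flink-hadoop-compatibility` dependency is on the classpath and reuses Hadoop's stock `TokenCountMapper`/`LongSumReducer`; the wrapper class names are as documented for this era of Flink and may differ in other releases:

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.hadoopcompatibility.mapred.HadoopMapFunction;
import org.apache.flink.hadoopcompatibility.mapred.HadoopReduceFunction;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapred.lib.LongSumReducer;
import org.apache.hadoop.mapred.lib.TokenCountMapper;

public class HadoopCompatSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Stand-in for a Hadoop InputFormat source: (offset, line) pairs.
        DataSet<Tuple2<LongWritable, Text>> input = env.fromElements(
                new Tuple2<>(new LongWritable(0), new Text("to be or not to be")));

        // Run unmodified Hadoop mapred functions inside the Flink program.
        DataSet<Tuple2<Text, LongWritable>> counts = input
                .flatMap(new HadoopMapFunction<LongWritable, Text, Text, LongWritable>(
                        new TokenCountMapper<LongWritable>()))
                .groupBy(0)
                .reduceGroup(new HadoopReduceFunction<Text, LongWritable, Text, LongWritable>(
                        new LongSumReducer<Text>()));

        counts.print();
    }
}
```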

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/_posts/2015-01-21-release-0.8.md
----------------------------------------------------------------------
diff --git a/_posts/2015-01-21-release-0.8.md b/_posts/2015-01-21-release-0.8.md
index 740c6c6..a03d209 100644
--- a/_posts/2015-01-21-release-0.8.md
+++ b/_posts/2015-01-21-release-0.8.md
@@ -16,11 +16,11 @@ We are pleased to announce the availability of Flink 0.8.0. This release include
 
 
  - **Extended filesystem support**: The former `DistributedFileSystem` interface has been generalized to `HadoopFileSystem` now supporting all sub classes of `org.apache.hadoop.fs.FileSystem`. This allows users to use all file systems supported by Hadoop with Apache Flink.
-[See connecting to other systems](http://flink.incubator.apache.org/docs/0.8/example_connectors.html)
+[See connecting to other systems](http://ci.apache.org/projects/flink/flink-docs-release-0.8/example_connectors.html)
 
  - **Streaming Scala API**: As an alternative to the existing Java API Streaming is now also programmable in Scala. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release.
 
- - **Streaming windowing semantics**: The new windowing api offers an expressive way to define custom logic for triggering the execution of a stream window and removing elements. The new features include out-of-the-box support for windows based in logical or physical time and data-driven properties on the events themselves among others. [Read more here](http://flink.apache.org/docs/0.8/streaming_guide.html#window-operators)
+ - **Streaming windowing semantics**: The new windowing api offers an expressive way to define custom logic for triggering the execution of a stream window and removing elements. The new features include out-of-the-box support for windows based in logical or physical time and data-driven properties on the events themselves among others. [Read more here](http://ci.apache.org/projects/flink/flink-docs-release-0.8/streaming_guide.html#window-operators)
 
  - **Mutable and immutable objects in runtime** All Flink versions before 0.8.0 were always passing the same objects to functions written by users. This is a common performance optimization, also used in other systems such as Hadoop.
  However, this is error-prone for new users because one has to carefully check that references to the object aren’t kept in the user function. Starting from 0.8.0, Flink allows to configure a mode which is disabling that mechanism.
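
[Editor's sketch] The "mutable and immutable objects in runtime" switch above corresponds to the object-reuse mode on the execution config. A minimal sketch, assuming the `enableObjectReuse()`/`disableObjectReuse()` method names of later Flink releases (the exact 0.8.0-era switch may have been spelled differently):

```java
import org.apache.flink.api.java.ExecutionEnvironment;

public class ObjectReuseSketch {
    public static void main(String[] args) {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Safe default: user functions receive fresh objects on every call.
        env.getConfig().disableObjectReuse();

        // Opt back in to the pre-0.8 behavior: Flink may pass the same object
        // repeatedly, so user code must not hold references across calls.
        env.getConfig().enableObjectReuse();
    }
}
```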

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/_posts/2015-02-09-streaming-example.md
----------------------------------------------------------------------
diff --git a/_posts/2015-02-09-streaming-example.md b/_posts/2015-02-09-streaming-example.md
index 5fcba01..b1ecf96 100644
--- a/_posts/2015-02-09-streaming-example.md
+++ b/_posts/2015-02-09-streaming-example.md
@@ -15,7 +15,7 @@ In this post, we go through an example that uses the Flink Streaming
 API to compute statistics on stock market data that arrive
 continuously and combine the stock market data with Twitter streams.
 See the [Streaming Programming
-Guide](http://flink.apache.org/docs/latest/streaming_guide.html) for a
+Guide](http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html) for a
 detailed presentation of the Streaming API.
 
 First, we read a bunch of stock price streams and combine them into
@@ -115,11 +115,11 @@ public static void main(String[] args) throws Exception {
 </div>
 
 See
-[here](http://flink.apache.org/docs/latest/streaming_guide.html#sources)
+[here](http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#sources)
 on how you can create streaming sources for Flink Streaming
 programs. Flink, of course, has support for reading in streams from
 [external
-sources](http://flink.apache.org/docs/latest/streaming_guide.html#stream-connectors)
+sources](http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#stream-connectors)
 such as Apache Kafka, Apache Flume, RabbitMQ, and others. For the sake
 of this example, the data streams are simply generated using the
 `generateStock` method:
@@ -230,7 +230,7 @@ Window aggregations
 ---------------
 
 We first compute aggregations on time-based windows of the
-data. Flink provides [flexible windowing semantics](http://flink.apache.org/docs/latest/streaming_guide.html#window-operators) where windows can
+data. Flink provides [flexible windowing semantics](http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#window-operators) where windows can
 also be defined based on count of records or any custom user defined
 logic.
 
@@ -432,7 +432,7 @@ Combining with a Twitter stream
 
 Next, we will read a Twitter stream and correlate it with our stock
 price stream. Flink has support for connecting to [Twitter's
-API](http://flink.apache.org/docs/latest/streaming_guide.html#twitter-streaming-api),
+API](http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#twitter-streaming-api),
 but for the sake of this example we generate dummy tweet data.
 
 <img alt="Social media analytics" src="{{ site.baseurl }}/img/blog/blog_social_media.png" width="100%" class="img-responsive center-block">
@@ -666,7 +666,7 @@ public static final class WindowCorrelation
 Other things to try
 ---------------
 
-For a full feature overview please check the [Streaming Guide](http://flink.apache.org/docs/latest/streaming_guide.html), which describes all the available API features.
+For a full feature overview please check the [Streaming Guide](http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html), which describes all the available API features.
 You are very welcome to try out our features for different use-cases we are looking forward to your experiences. Feel free to [contact us](http://flink.apache.org/community.html#mailing-lists).
 
 Upcoming for streaming
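
[Editor's sketch] A minimal version of the windowed stock aggregation this post builds up. Since the 0.9-era streaming API was later reworked, the sketch uses the `keyBy`/`timeWindow` syntax of later Flink releases, and the `StockPrice` POJO with its sample values is illustrative:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.windowing.time.Time;

public class StockWindowSketch {
    // Illustrative POJO standing in for the post's stock price events.
    public static class StockPrice {
        public String symbol;
        public double price;
        public StockPrice() {}
        public StockPrice(String symbol, double price) { this.symbol = symbol; this.price = price; }
    }

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        DataStream<StockPrice> stocks = env.fromElements(
                new StockPrice("SPX", 2100.0),
                new StockPrice("FTSE", 6800.0),
                new StockPrice("SPX", 2102.5));

        // Maximum price per symbol over 10-second processing-time windows.
        stocks.keyBy("symbol")
              .timeWindow(Time.seconds(10))
              .maxBy("price")
              .print();

        env.execute("windowed stock stats");
    }
}
```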

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/_posts/2015-03-02-february-2015-in-flink.md
----------------------------------------------------------------------
diff --git a/_posts/2015-03-02-february-2015-in-flink.md b/_posts/2015-03-02-february-2015-in-flink.md
index 980d8ee..a9fe325 100644
--- a/_posts/2015-03-02-february-2015-in-flink.md
+++ b/_posts/2015-03-02-february-2015-in-flink.md
@@ -75,7 +75,7 @@ See more Gelly examples
 ### Flink Expressions
 
 The newly merged
-[flink-expressions](https://github.com/apache/flink/tree/master/flink-staging/flink-expressions)
+[flink-table](https://github.com/apache/flink/tree/master/flink-staging/flink-table)
 module is the first step in Flink’s roadmap towards logical queries
 and SQL support. Here’s a preview on how you can read two CSV file,
 assign a logical schema to, and apply transformations like filters and
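
[Editor's sketch] The flink-table expression syntax previewed here was still in flux at the time, so rather than guess at it, here is the equivalent pipeline in the stable DataSet API: read two CSV files, treat the tuple positions as a schema, filter, and join. The file paths are placeholders:

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class CsvJoinSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Each CSV row becomes a (String, Integer) tuple.
        DataSet<Tuple2<String, Integer>> left =
                env.readCsvFile("/tmp/left.csv").types(String.class, Integer.class);
        DataSet<Tuple2<String, Integer>> right =
                env.readCsvFile("/tmp/right.csv").types(String.class, Integer.class);

        // Filter on the second column, then join on the first.
        left.filter(t -> t.f1 > 10)
            .join(right).where(0).equalTo(0)
            .print();
    }
}
```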

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/_posts/2015-03-13-peeking-into-Apache-Flinks-Engine-Room.md
----------------------------------------------------------------------
diff --git a/_posts/2015-03-13-peeking-into-Apache-Flinks-Engine-Room.md b/_posts/2015-03-13-peeking-into-Apache-Flinks-Engine-Room.md
index 72f56cb..8156780 100644
--- a/_posts/2015-03-13-peeking-into-Apache-Flinks-Engine-Room.md
+++ b/_posts/2015-03-13-peeking-into-Apache-Flinks-Engine-Room.md
@@ -122,7 +122,7 @@ The Hybrid-Hash-Join distinguishes its inputs as build-side and probe-side input
 
 Ship and local strategies do not depend on each other and can be independently chosen. Therefore, Flink can execute a join of two data sets R and S in nine different ways by combining any of the three ship strategies (RR, BF with R being broadcasted, BF with S being broadcasted) with any of the three local strategies (SM, HH with R being build-side, HH with S being build-side). Each of these strategy combinations results in different execution performance depending on the data sizes and the available amount of working memory. In case of a small data set R and a much larger data set S, broadcasting R and using it as build-side input of a Hybrid-Hash-Join is usually a good choice because the much larger data set S is not shipped and not materialized (given that the hash table completely fits into memory). If both data sets are rather large or the join is performed on many parallel instances, repartitioning both inputs is a robust choice.
 
-Flink features a cost-based optimizer which automatically chooses the execution strategies for all operators including joins. Without going into the details of cost-based optimization, this is done by computing cost estimates for execution plans with different strategies and picking the plan with the least estimated costs. Thereby, the optimizer estimates the amount of data which is shipped over the the network and written to disk. If no reliable size estimates for the input data can be obtained, the optimizer falls back to robust default choices. A key feature of the optimizer is to reason about existing data properties. For example, if the data of one input is already partitioned in a suitable way, the generated candidate plans will not repartition this input. Hence, the choice of a RR ship strategy becomes more likely. The same applies for previously sorted data and the Sort-Merge-Join strategy. Flink programs can help the optimizer to reason about existing data properties by providing semantic information about  user-defined functions [[4]](http://ci.apache.org/projects/flink/flink-docs-master/programming_guide.html#semantic-annotations). While the optimizer is a killer feature of Flink, it can happen that a user knows better than the optimizer how to execute a specific join. Similar to relational database systems, Flink offers optimizer hints to tell the optimizer which join strategies to pick [[5]](http://ci.apache.org/projects/flink/flink-docs-master/dataset_transformations.html#join-algorithm-hints).
+Flink features a cost-based optimizer which automatically chooses the execution strategies for all operators including joins. Without going into the details of cost-based optimization, this is done by computing cost estimates for execution plans with different strategies and picking the plan with the least estimated costs. Thereby, the optimizer estimates the amount of data which is shipped over the the network and written to disk. If no reliable size estimates for the input data can be obtained, the optimizer falls back to robust default choices. A key feature of the optimizer is to reason about existing data properties. For example, if the data of one input is already partitioned in a suitable way, the generated candidate plans will not repartition this input. Hence, the choice of a RR ship strategy becomes more likely. The same applies for previously sorted data and the Sort-Merge-Join strategy. Flink programs can help the optimizer to reason about existing data properties by providing semantic information about  user-defined functions [[4]](http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#semantic-annotations). While the optimizer is a killer feature of Flink, it can happen that a user knows better than the optimizer how to execute a specific join. Similar to relational database systems, Flink offers optimizer hints to tell the optimizer which join strategies to pick [[5]](http://ci.apache.org/projects/flink/flink-docs-master/apis/dataset_transformations.html#join-algorithm-hints).
 
 ### How is Flink’s join performance?
 
@@ -173,9 +173,5 @@ We have seen that off-the-shelf distributed joins work really well in Flink. But
 [1] [“MapReduce: Simplified data processing on large clusters”](), Dean, Ghemawat, 2004 <br>
 [2] [Flink 0.8.1 documentation: Data Transformations](http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html) <br>
 [3] [Flink 0.8.1 documentation: Joins](http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html#join) <br>
-[4] [Flink 0.9-SNAPSHOT documentation: Semantic annotations](http://ci.apache.org/projects/flink/flink-docs-master/programming_guide.html#semantic-annotations) <br>
-[5] [Flink 0.9-SNAPSHOT documentation: Optimizer join hints](http://ci.apache.org/projects/flink/flink-docs-master/dataset_transformations.html#join-algorithm-hints) <br>
-
-
-<br>
-<small>Written by Fabian Hueske ([@fhueske](https://twitter.com/fhueske)).</small>
+[4] [Flink 0.9-SNAPSHOT documentation: Semantic annotations](http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#semantic-annotations) <br>
+[5] [Flink 0.9-SNAPSHOT documentation: Optimizer join hints](http://ci.apache.org/projects/flink/flink-docs-master/apis/dataset_transformations.html#join-algorithm-hints) <br>
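
[Editor's sketch] The optimizer hints in [5] can be expressed through the size-hint variants of `join`, which have been stable across these releases. A minimal example with placeholder inputs R and S:

```java
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.api.java.tuple.Tuple2;

public class JoinHintSketch {
    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        DataSet<Tuple2<Integer, String>> R = env.fromElements(new Tuple2<>(1, "r"));
        DataSet<Tuple2<Integer, String>> S = env.fromElements(new Tuple2<>(1, "s"));

        // Hint that R (the argument) is small: broadcast it and use it as the
        // build side of a hybrid hash join (the BF + HH(R) strategy above).
        S.joinWithTiny(R).where(0).equalTo(0).print();

        // The opposite hint, declaring the argument to be the larger input:
        // S.joinWithHuge(R).where(0).equalTo(0).print();
    }
}
```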

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/_posts/2015-04-07-march-in-flink.md
----------------------------------------------------------------------
diff --git a/_posts/2015-04-07-march-in-flink.md b/_posts/2015-04-07-march-in-flink.md
index 61aefa2..cf0c8e6 100644
--- a/_posts/2015-04-07-march-in-flink.md
+++ b/_posts/2015-04-07-march-in-flink.md
@@ -11,8 +11,7 @@ March has been a busy month in the Flink community.
 
 A Flink runner for Google Cloud Dataflow was announced. See the blog
 posts by [data Artisans](http://data-artisans.com/dataflow.html) and
-the [Google Cloud Platform Blog]
-(http://googlecloudplatform.blogspot.de/2015/03/announcing-Google-Cloud-Dataflow-runner-for-Apache-Flink.html).
+the [Google Cloud Platform Blog](http://googlecloudplatform.blogspot.de/2015/03/announcing-Google-Cloud-Dataflow-runner-for-Apache-Flink.html).
 Google Cloud Dataflow programs can be written using and open-source
 SDK and run in multiple backends, either as a managed service inside
 Google's infrastructure, or leveraging open source runners,
@@ -75,4 +74,4 @@ programs.
 
 A new execution environment enables non-iterative Flink jobs to use
 Tez as an execution backend instead of Flink's own network stack. Learn more
-[here](http://ci.apache.org/projects/flink/flink-docs-master/flink_on_tez_guide.html).
\ No newline at end of file
+[here](http://ci.apache.org/projects/flink/flink-docs-master/setup/flink_on_tez.html).
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/_posts/2015-04-13-release-0.9.0-milestone1.md
----------------------------------------------------------------------
diff --git a/_posts/2015-04-13-release-0.9.0-milestone1.md b/_posts/2015-04-13-release-0.9.0-milestone1.md
index 67199db..f10823d 100644
--- a/_posts/2015-04-13-release-0.9.0-milestone1.md
+++ b/_posts/2015-04-13-release-0.9.0-milestone1.md
@@ -45,7 +45,7 @@ for Flink programs. Tables are available for both static and streaming
 data sources (DataSet and DataStream APIs).
 
 Check out the Table guide for Java and Scala
-[here](http://ci.apache.org/projects/flink/flink-docs-master/table.html).
+[here](http://ci.apache.org/projects/flink/flink-docs-master/libs/table.html).
 
 ### Gelly Graph Processing API
 
@@ -60,13 +60,13 @@ algorithms, including PageRank, SSSP, label propagation, and community
 detection.
 
 Gelly internally builds on top of Flink’s [delta
-iterations](http://ci.apache.org/projects/flink/flink-docs-master/iterations.html). Iterative
+iterations](http://ci.apache.org/projects/flink/flink-docs-master/apis/iterations.html). Iterative
 graph algorithms are executed leveraging mutable state, achieving
 similar performance with specialized graph processing systems.
 
 Gelly will eventually subsume Spargel, Flink’s Pregel-like API. Check
 out the Gelly guide
-[here](http://ci.apache.org/projects/flink/flink-docs-master/gelly_guide.html).
+[here](http://ci.apache.org/projects/flink/flink-docs-master/libs/gelly_guide.html).
 
 ### Flink Machine Learning Library
 
@@ -112,7 +112,7 @@ algorithms, Tez focuses on scalability and elastic resource usage in
 shared YARN clusters.
 
 Get started with Flink on Tez
-[here](http://ci.apache.org/projects/flink/flink-docs-master/flink_on_tez_guide.html).
+[here](http://ci.apache.org/projects/flink/flink-docs-master/setup/flink_on_tez.html).
 
 ### Reworked Distributed Runtime on Akka
 
@@ -135,7 +135,7 @@ system is internally tracking the Kafka offsets to ensure that Flink
 can pick up data from Kafka where it left off in case of an failure.
 
 Read
-[here](http://ci.apache.org/projects/flink/flink-docs-master/streaming_guide.html#apache-kafka)
+[here](http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#apache-kafka)
 on how to use the persistent Kafka source.
 
 ### Improved YARN support
@@ -152,7 +152,7 @@ integrators to easily control Flink on YARN within their Hadoop 2
 cluster.
 
 See the YARN docs
-[here](http://ci.apache.org/projects/flink/flink-docs-master/yarn_setup.html).
+[here](http://ci.apache.org/projects/flink/flink-docs-master/setup/yarn_setup.html).
 
 ## More Improvements and Fixes
 
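[Editor's sketch] The 0.9-milestone1 class was called `PersistentKafkaSource` and its constructor changed over time, so this sketch uses the later `FlinkKafkaConsumer` API to illustrate the same idea: a Kafka source whose offsets are tracked so the job can resume where it left off after a failure. The broker address, group id, and topic name are placeholders:

```java
import java.util.Properties;
import org.apache.flink.api.common.serialization.SimpleStringSchema;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

public class KafkaSourceSketch {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "flink-demo");              // placeholder

        // Offsets are tracked by Flink, so the stream resumes after failures.
        DataStream<String> stream = env.addSource(
                new FlinkKafkaConsumer<>("my-topic", new SimpleStringSchema(), props));

        stream.print();
        env.execute("kafka source");
    }
}
```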

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/blog/feed.xml
----------------------------------------------------------------------
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 514cd67..627f2fd 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -266,7 +266,7 @@ for Flink programs. Tables are available for both static and streaming
 data sources (DataSet and DataStream APIs).&lt;/p&gt;
 
 &lt;p&gt;Check out the Table guide for Java and Scala
-&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/table.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/libs/table.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h3 id=&quot;gelly-graph-processing-api&quot;&gt;Gelly Graph Processing API&lt;/h3&gt;
 
@@ -280,14 +280,14 @@ vertex-centric graph processing, as well as a library of common graph
 algorithms, including PageRank, SSSP, label propagation, and community
 detection.&lt;/p&gt;
 
-&lt;p&gt;Gelly internally builds on top of Flink’s &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/iterations.html&quot;&gt;delta
+&lt;p&gt;Gelly internally builds on top of Flink’s &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/iterations.html&quot;&gt;delta
 iterations&lt;/a&gt;. Iterative
 graph algorithms are executed leveraging mutable state, achieving
 similar performance with specialized graph processing systems.&lt;/p&gt;
 
 &lt;p&gt;Gelly will eventually subsume Spargel, Flink’s Pregel-like API. Check
 out the Gelly guide
-&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/gelly_guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/libs/gelly_guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h3 id=&quot;flink-machine-learning-library&quot;&gt;Flink Machine Learning Library&lt;/h3&gt;
 
@@ -333,7 +333,7 @@ algorithms, Tez focuses on scalability and elastic resource usage in
 shared YARN clusters.&lt;/p&gt;
 
 &lt;p&gt;Get started with Flink on Tez
-&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/flink_on_tez_guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/setup/flink_on_tez.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h3 id=&quot;reworked-distributed-runtime-on-akka&quot;&gt;Reworked Distributed Runtime on Akka&lt;/h3&gt;
 
@@ -356,7 +356,7 @@ system is internally tracking the Kafka offsets to ensure that Flink
 can pick up data from Kafka where it left off in case of an failure.&lt;/p&gt;
 
 &lt;p&gt;Read
-&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/streaming_guide.html#apache-kafka&quot;&gt;here&lt;/a&gt;
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#apache-kafka&quot;&gt;here&lt;/a&gt;
 on how to use the persistent Kafka source.&lt;/p&gt;
 
 &lt;h3 id=&quot;improved-yarn-support&quot;&gt;Improved YARN support&lt;/h3&gt;
@@ -373,7 +373,7 @@ integrators to easily control Flink on YARN within their Hadoop 2
 cluster.&lt;/p&gt;
 
 &lt;p&gt;See the YARN docs
-&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/yarn_setup.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/setup/yarn_setup.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h2 id=&quot;more-improvements-and-fixes&quot;&gt;More Improvements and Fixes&lt;/h2&gt;
 
@@ -486,8 +486,7 @@ Improve usability of command line interface&lt;/p&gt;
 
 &lt;p&gt;A Flink runner for Google Cloud Dataflow was announced. See the blog
 posts by &lt;a href=&quot;http://data-artisans.com/dataflow.html&quot;&gt;data Artisans&lt;/a&gt; and
-the [Google Cloud Platform Blog]
-(http://googlecloudplatform.blogspot.de/2015/03/announcing-Google-Cloud-Dataflow-runner-for-Apache-Flink.html).
+the &lt;a href=&quot;http://googlecloudplatform.blogspot.de/2015/03/announcing-Google-Cloud-Dataflow-runner-for-Apache-Flink.html&quot;&gt;Google Cloud Platform Blog&lt;/a&gt;.
 Google Cloud Dataflow programs can be written using and open-source
 SDK and run in multiple backends, either as a managed service inside
 Google’s infrastructure, or leveraging open source runners,
@@ -550,7 +549,7 @@ programs.&lt;/p&gt;
 
 &lt;p&gt;A new execution environment enables non-iterative Flink jobs to use
 Tez as an execution backend instead of Flink’s own network stack. Learn more
-&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/flink_on_tez_guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/setup/flink_on_tez.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
 <pubDate>Tue, 07 Apr 2015 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/04/07/march-in-flink.html</link>
@@ -679,7 +678,7 @@ Tez as an execution backend instead of Flink’s own network stack. Learn more
 
 &lt;p&gt;Ship and local strategies do not depend on each other and can be independently chosen. Therefore, Flink can execute a join of two data sets R and S in nine different ways by combining any of the three ship strategies (RR, BF with R being broadcasted, BF with S being broadcasted) with any of the three local strategies (SM, HH with R being build-side, HH with S being build-side). Each of these strategy combinations results in different execution performance depending on the data sizes and the available amount of working memory. In case of a small data set R and a much larger data set S, broadcasting R and using it as build-side input of a Hybrid-Hash-Join is usually a good choice because the much larger data set S is not shipped and not materialized (given that the hash table completely fits into memory). If both data sets are rather large or the join is performed on many parallel instances, repartitioning both inputs is a robust choice.&lt;/p&gt;
 
-&lt;p&gt;Flink features a cost-based optimizer which automatically chooses the execution strategies for all operators including joins. Without going into the details of cost-based optimization, this is done by computing cost estimates for execution plans with different strategies and picking the plan with the least estimated costs. Thereby, the optimizer estimates the amount of data which is shipped over the the network and written to disk. If no reliable size estimates for the input data can be obtained, the optimizer falls back to robust default choices. A key feature of the optimizer is to reason about existing data properties. For example, if the data of one input is already partitioned in a suitable way, the generated candidate plans will not repartition this input. Hence, the choice of a RR ship strategy becomes more likely. The same applies for previously sorted data and the Sort-Merge-Join strategy. Flink programs can help the optimizer to reason about existing data properties by providing semantic information about  user-defined functions &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/programming_guide.html#semantic-annotations&quot;&gt;[4]&lt;/a&gt;. While the optimizer is a killer feature of Flink, it can happen that a user knows better than the optimizer how to execute a specific join. Similar to relational database systems, Flink offers optimizer hints to tell the optimizer which join strategies to pick &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/dataset_transformations.html#join-algorithm-hints&quot;&gt;[5]&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;Flink features a cost-based optimizer which automatically chooses the execution strategies for all operators including joins. Without going into the details of cost-based optimization, this is done by computing cost estimates for execution plans with different strategies and picking the plan with the least estimated costs. Thereby, the optimizer estimates the amount of data which is shipped over the the network and written to disk. If no reliable size estimates for the input data can be obtained, the optimizer falls back to robust default choices. A key feature of the optimizer is to reason about existing data properties. For example, if the data of one input is already partitioned in a suitable way, the generated candidate plans will not repartition this input. Hence, the choice of a RR ship strategy becomes more likely. The same applies for previously sorted data and the Sort-Merge-Join strategy. Flink programs can help the optimizer to reason about existing data properties by providing semantic information about  user-defined functions &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#semantic-annotations&quot;&gt;[4]&lt;/a&gt;. While the optimizer is a killer feature of Flink, it can happen that a user knows better than the optimizer how to execute a specific join. Similar to relational database systems, Flink offers optimizer hints to tell the optimizer which join strategies to pick &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/dataset_transformations.html#join-algorithm-hints&quot;&gt;[5]&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h3 id=&quot;how-is-flinks-join-performance&quot;&gt;How is Flink’s join performance?&lt;/h3&gt;
 
@@ -736,11 +735,8 @@ Tez as an execution backend instead of Flink’s own network stack. Learn more
 &lt;p&gt;[1] &lt;a href=&quot;&quot;&gt;“MapReduce: Simplified data processing on large clusters”&lt;/a&gt;, Dean, Ghemawat, 2004 &lt;br /&gt;
 [2] &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html&quot;&gt;Flink 0.8.1 documentation: Data Transformations&lt;/a&gt; &lt;br /&gt;
 [3] &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html#join&quot;&gt;Flink 0.8.1 documentation: Joins&lt;/a&gt; &lt;br /&gt;
-[4] &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/programming_guide.html#semantic-annotations&quot;&gt;Flink 0.9-SNAPSHOT documentation: Semantic annotations&lt;/a&gt; &lt;br /&gt;
-[5] &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/dataset_transformations.html#join-algorithm-hints&quot;&gt;Flink 0.9-SNAPSHOT documentation: Optimizer join hints&lt;/a&gt; &lt;br /&gt;&lt;/p&gt;
-
-&lt;p&gt;&lt;br /&gt;
-&lt;small&gt;Written by Fabian Hueske (&lt;a href=&quot;https://twitter.com/fhueske&quot;&gt;@fhueske&lt;/a&gt;).&lt;/small&gt;&lt;/p&gt;
+[4] &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#semantic-annotations&quot;&gt;Flink 0.9-SNAPSHOT documentation: Semantic annotations&lt;/a&gt; &lt;br /&gt;
+[5] &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/dataset_transformations.html#join-algorithm-hints&quot;&gt;Flink 0.9-SNAPSHOT documentation: Optimizer join hints&lt;/a&gt; &lt;br /&gt;&lt;/p&gt;
 </description>
 <pubDate>Fri, 13 Mar 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html</link>
@@ -824,7 +820,7 @@ graph:&lt;/p&gt;
 &lt;h3 id=&quot;flink-expressions&quot;&gt;Flink Expressions&lt;/h3&gt;
 
 &lt;p&gt;The newly merged
-&lt;a href=&quot;https://github.com/apache/flink/tree/master/flink-staging/flink-expressions&quot;&gt;flink-expressions&lt;/a&gt;
+&lt;a href=&quot;https://github.com/apache/flink/tree/master/flink-staging/flink-table&quot;&gt;flink-table&lt;/a&gt;
 module is the first step in Flink’s roadmap towards logical queries
 and SQL support. Here’s a preview on how you can read two CSV file,
 assign a logical schema to, and apply transformations like filters and
@@ -874,7 +870,7 @@ and offers a new API including definition of flexible windows.&lt;/p&gt;
 &lt;p&gt;In this post, we go through an example that uses the Flink Streaming
 API to compute statistics on stock market data that arrive
 continuously and combine the stock market data with Twitter streams.
-See the &lt;a href=&quot;http://flink.apache.org/docs/latest/streaming_guide.html&quot;&gt;Streaming Programming
+See the &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html&quot;&gt;Streaming Programming
 Guide&lt;/a&gt; for a
 detailed presentation of the Streaming API.&lt;/p&gt;
 
@@ -974,10 +970,10 @@ found &lt;a href=&quot;https://github.com/apache/flink/blob/master/flink-staging
 &lt;/div&gt;
 
 &lt;p&gt;See
-&lt;a href=&quot;http://flink.apache.org/docs/latest/streaming_guide.html#sources&quot;&gt;here&lt;/a&gt;
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#sources&quot;&gt;here&lt;/a&gt;
 on how you can create streaming sources for Flink Streaming
 programs. Flink, of course, has support for reading in streams from
-&lt;a href=&quot;http://flink.apache.org/docs/latest/streaming_guide.html#stream-connectors&quot;&gt;external
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#stream-connectors&quot;&gt;external
 sources&lt;/a&gt;
 such as Apache Kafka, Apache Flume, RabbitMQ, and others. For the sake
 of this example, the data streams are simply generated using the
@@ -1085,7 +1081,7 @@ INFO    Custom Source(1/1) switched to DEPLOYING
 &lt;h2 id=&quot;window-aggregations&quot;&gt;Window aggregations&lt;/h2&gt;
 
 &lt;p&gt;We first compute aggregations on time-based windows of the
-data. Flink provides &lt;a href=&quot;http://flink.apache.org/docs/latest/streaming_guide.html#window-operators&quot;&gt;flexible windowing semantics&lt;/a&gt; where windows can
+data. Flink provides &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#window-operators&quot;&gt;flexible windowing semantics&lt;/a&gt; where windows can
 also be defined based on count of records or any custom user defined
 logic.&lt;/p&gt;
 
@@ -1273,7 +1269,7 @@ every 30 seconds.&lt;/p&gt;
 &lt;h2 id=&quot;combining-with-a-twitter-stream&quot;&gt;Combining with a Twitter stream&lt;/h2&gt;
 
 &lt;p&gt;Next, we will read a Twitter stream and correlate it with our stock
-price stream. Flink has support for connecting to &lt;a href=&quot;http://flink.apache.org/docs/latest/streaming_guide.html#twitter-streaming-api&quot;&gt;Twitter’s
+price stream. Flink has support for connecting to &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#twitter-streaming-api&quot;&gt;Twitter’s
 API&lt;/a&gt;,
 but for the sake of this example we generate dummy tweet data.&lt;/p&gt;
 
@@ -1490,7 +1486,7 @@ these data streams are potentially infinite, we apply the join on a
 
 &lt;h2 id=&quot;other-things-to-try&quot;&gt;Other things to try&lt;/h2&gt;
 
-&lt;p&gt;For a full feature overview please check the &lt;a href=&quot;http://flink.apache.org/docs/latest/streaming_guide.html&quot;&gt;Streaming Guide&lt;/a&gt;, which describes all the available API features.
+&lt;p&gt;For a full feature overview please check the &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html&quot;&gt;Streaming Guide&lt;/a&gt;, which describes all the available API features.
 You are very welcome to try out our features for different use-cases we are looking forward to your experiences. Feel free to &lt;a href=&quot;http://flink.apache.org/community.html#mailing-lists&quot;&gt;contact us&lt;/a&gt;.&lt;/p&gt;
 
 &lt;h2 id=&quot;upcoming-for-streaming&quot;&gt;Upcoming for streaming&lt;/h2&gt;
@@ -1574,13 +1570,13 @@ internally, fault tolerance, and performance measurements!&lt;/p&gt;
 &lt;ul&gt;
   &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;Extended filesystem support&lt;/strong&gt;: The former &lt;code&gt;DistributedFileSystem&lt;/code&gt; interface has been generalized to &lt;code&gt;HadoopFileSystem&lt;/code&gt; now supporting all sub classes of &lt;code&gt;org.apache.hadoop.fs.FileSystem&lt;/code&gt;. This allows users to use all file systems supported by Hadoop with Apache Flink.
-&lt;a href=&quot;http://flink.incubator.apache.org/docs/0.8/example_connectors.html&quot;&gt;See connecting to other systems&lt;/a&gt;&lt;/p&gt;
+&lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/example_connectors.html&quot;&gt;See connecting to other systems&lt;/a&gt;&lt;/p&gt;
   &lt;/li&gt;
   &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;Streaming Scala API&lt;/strong&gt;: As an alternative to the existing Java API Streaming is now also programmable in Scala. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release.&lt;/p&gt;
   &lt;/li&gt;
   &lt;li&gt;
-    &lt;p&gt;&lt;strong&gt;Streaming windowing semantics&lt;/strong&gt;: The new windowing api offers an expressive way to define custom logic for triggering the execution of a stream window and removing elements. The new features include out-of-the-box support for windows based in logical or physical time and data-driven properties on the events themselves among others. &lt;a href=&quot;http://flink.apache.org/docs/0.8/streaming_guide.html#window-operators&quot;&gt;Read more here&lt;/a&gt;&lt;/p&gt;
+    &lt;p&gt;&lt;strong&gt;Streaming windowing semantics&lt;/strong&gt;: The new windowing api offers an expressive way to define custom logic for triggering the execution of a stream window and removing elements. The new features include out-of-the-box support for windows based in logical or physical time and data-driven properties on the events themselves among others. &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.8/streaming_guide.html#window-operators&quot;&gt;Read more here&lt;/a&gt;&lt;/p&gt;
   &lt;/li&gt;
   &lt;li&gt;
     &lt;p&gt;&lt;strong&gt;Mutable and immutable objects in runtime&lt;/strong&gt; All Flink versions before 0.8.0 were always passing the same objects to functions written by users. This is a common performance optimization, also used in other systems such as Hadoop.
@@ -1783,13 +1779,13 @@ Flink serialization system improved a lot over time and by now surpasses the cap
 
 &lt;h2 id=&quot;what-comes-next&quot;&gt;What comes next?&lt;/h2&gt;
 
-&lt;p&gt;While the Hadoop compatibility package is already very useful, we are currently working on a dedicated Hadoop Job operation to embed and execute Hadoop jobs as a whole in Flink programs, including their custom partitioning, sorting, and grouping code. With this feature, you will be able to chain multiple Hadoop jobs, mix them with Flink functions, and other operations such as &lt;a href=&quot;/docs/0.7-incubating/spargel_guide.html&quot;&gt;Spargel&lt;/a&gt; operations (Pregel/Giraph-style jobs).&lt;/p&gt;
+&lt;p&gt;While the Hadoop compatibility package is already very useful, we are currently working on a dedicated Hadoop Job operation to embed and execute Hadoop jobs as a whole in Flink programs, including their custom partitioning, sorting, and grouping code. With this feature, you will be able to chain multiple Hadoop jobs, mix them with Flink functions, and other operations such as &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.7/spargel_guide.html&quot;&gt;Spargel&lt;/a&gt; operations (Pregel/Giraph-style jobs).&lt;/p&gt;
 
 &lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;
 
 &lt;p&gt;Flink lets you reuse a lot of the code you wrote for Hadoop MapReduce, including all data types, all Input- and OutputFormats, and Mapper and Reducers of the mapred-API. Hadoop functions can be used within Flink programs and mixed with all other Flink functions. Due to Flink’s pipelined execution, Hadoop functions can arbitrarily be assembled without data exchange via HDFS. Moreover, the Flink community is currently working on a dedicated Hadoop Job operation to supporting the execution of Hadoop jobs as a whole.&lt;/p&gt;
 
-&lt;p&gt;If you want to use Flink’s Hadoop compatibility package checkout our &lt;a href=&quot;/docs/0.7-incubating/hadoop_compatibility.html&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;If you want to use Flink’s Hadoop compatibility package checkout our &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.7/hadoop_compatibility.html&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
 </description>
 <pubDate>Tue, 18 Nov 2014 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2014/11/18/hadoop-compatibility.html</link>
@@ -1806,11 +1802,11 @@ Flink serialization system improved a lot over time and by now surpasses the cap
 
 &lt;h2 id=&quot;overview-of-major-new-features&quot;&gt;Overview of major new features&lt;/h2&gt;
 
-&lt;p&gt;&lt;strong&gt;Flink Streaming:&lt;/strong&gt; The gem of the 0.7.0 release is undoubtedly Flink Streaming. Available currently in alpha, Flink Streaming provides a Java API on top of Apache Flink that can consume streaming data sources (e.g., from Apache Kafka, Apache Flume, and others) and process them in real time. A dedicated blog post on Flink Streaming and its performance is coming up here soon. You can check out the Streaming programming guide &lt;a href=&quot;http://flink.incubator.apache.org/docs/0.7-incubating/streaming_guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Flink Streaming:&lt;/strong&gt; The gem of the 0.7.0 release is undoubtedly Flink Streaming. Available currently in alpha, Flink Streaming provides a Java API on top of Apache Flink that can consume streaming data sources (e.g., from Apache Kafka, Apache Flume, and others) and process them in real time. A dedicated blog post on Flink Streaming and its performance is coming up here soon. You can check out the Streaming programming guide &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.7/streaming_guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
-&lt;p&gt;&lt;strong&gt;New Scala API:&lt;/strong&gt; The Scala API has been completely rewritten. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release. See the new Scala API &lt;a href=&quot;http://flink.incubator.apache.org/docs/0.7-incubating/programming_guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;New Scala API:&lt;/strong&gt; The Scala API has been completely rewritten. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release. See the new Scala API &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.7/programming_guide.html&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
-&lt;p&gt;&lt;strong&gt;Logical key expressions:&lt;/strong&gt; You can now specify grouping and joining keys with logical names for member variables of POJO data types. For example, you can join two data sets as &lt;code&gt;persons.join(cities).where(“zip”).equalTo(“zipcode”)&lt;/code&gt;. Read more &lt;a href=&quot;http://flink.incubator.apache.org/docs/0.7-incubating/programming_guide.html#specifying-keys&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;&lt;strong&gt;Logical key expressions:&lt;/strong&gt; You can now specify grouping and joining keys with logical names for member variables of POJO data types. For example, you can join two data sets as &lt;code&gt;persons.join(cities).where(“zip”).equalTo(“zipcode”)&lt;/code&gt;. Read more &lt;a href=&quot;http://ci.apache.org/projects/flink/flink-docs-release-0.7/programming_guide.html#specifying-keys&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;&lt;strong&gt;Hadoop MapReduce compatibility:&lt;/strong&gt; You can run unmodified Hadoop Mappers and Reducers (mapred API) in Flink, use all Hadoop data types, and read data with all Hadoop InputFormats.&lt;/p&gt;
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/news/2014/11/04/release-0.7.0.html
----------------------------------------------------------------------
diff --git a/content/news/2014/11/04/release-0.7.0.html b/content/news/2014/11/04/release-0.7.0.html
index a3f8fc8..6b5b2b6 100644
--- a/content/news/2014/11/04/release-0.7.0.html
+++ b/content/news/2014/11/04/release-0.7.0.html
@@ -141,11 +141,11 @@
 
 <h2 id="overview-of-major-new-features">Overview of major new features</h2>
 
-<p><strong>Flink Streaming:</strong> The gem of the 0.7.0 release is undoubtedly Flink Streaming. Available currently in alpha, Flink Streaming provides a Java API on top of Apache Flink that can consume streaming data sources (e.g., from Apache Kafka, Apache Flume, and others) and process them in real time. A dedicated blog post on Flink Streaming and its performance is coming up here soon. You can check out the Streaming programming guide <a href="http://flink.incubator.apache.org/docs/0.7-incubating/streaming_guide.html">here</a>.</p>
+<p><strong>Flink Streaming:</strong> The gem of the 0.7.0 release is undoubtedly Flink Streaming. Available currently in alpha, Flink Streaming provides a Java API on top of Apache Flink that can consume streaming data sources (e.g., from Apache Kafka, Apache Flume, and others) and process them in real time. A dedicated blog post on Flink Streaming and its performance is coming up here soon. You can check out the Streaming programming guide <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.7/streaming_guide.html">here</a>.</p>
 
-<p><strong>New Scala API:</strong> The Scala API has been completely rewritten. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release. See the new Scala API <a href="http://flink.incubator.apache.org/docs/0.7-incubating/programming_guide.html">here</a>.</p>
+<p><strong>New Scala API:</strong> The Scala API has been completely rewritten. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release. See the new Scala API <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.7/programming_guide.html">here</a>.</p>
 
-<p><strong>Logical key expressions:</strong> You can now specify grouping and joining keys with logical names for member variables of POJO data types. For example, you can join two data sets as <code>persons.join(cities).where(“zip”).equalTo(“zipcode”)</code>. Read more <a href="http://flink.incubator.apache.org/docs/0.7-incubating/programming_guide.html#specifying-keys">here</a>.</p>
+<p><strong>Logical key expressions:</strong> You can now specify grouping and joining keys with logical names for member variables of POJO data types. For example, you can join two data sets as <code>persons.join(cities).where(“zip”).equalTo(“zipcode”)</code>. Read more <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.7/programming_guide.html#specifying-keys">here</a>.</p>
 
 <p><strong>Hadoop MapReduce compatibility:</strong> You can run unmodified Hadoop Mappers and Reducers (mapred API) in Flink, use all Hadoop data types, and read data with all Hadoop InputFormats.</p>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/news/2014/11/18/hadoop-compatibility.html
----------------------------------------------------------------------
diff --git a/content/news/2014/11/18/hadoop-compatibility.html b/content/news/2014/11/18/hadoop-compatibility.html
index df24619..b7ff435 100644
--- a/content/news/2014/11/18/hadoop-compatibility.html
+++ b/content/news/2014/11/18/hadoop-compatibility.html
@@ -205,13 +205,13 @@
 
 <h2 id="what-comes-next">What comes next?</h2>
 
-<p>While the Hadoop compatibility package is already very useful, we are currently working on a dedicated Hadoop Job operation to embed and execute Hadoop jobs as a whole in Flink programs, including their custom partitioning, sorting, and grouping code. With this feature, you will be able to chain multiple Hadoop jobs, mix them with Flink functions, and other operations such as <a href="/docs/0.7-incubating/spargel_guide.html">Spargel</a> operations (Pregel/Giraph-style jobs).</p>
+<p>While the Hadoop compatibility package is already very useful, we are currently working on a dedicated Hadoop Job operation to embed and execute Hadoop jobs as a whole in Flink programs, including their custom partitioning, sorting, and grouping code. With this feature, you will be able to chain multiple Hadoop jobs, mix them with Flink functions, and other operations such as <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.7/spargel_guide.html">Spargel</a> operations (Pregel/Giraph-style jobs).</p>
 
 <h2 id="summary">Summary</h2>
 
 <p>Flink lets you reuse a lot of the code you wrote for Hadoop MapReduce, including all data types, all Input- and OutputFormats, and Mapper and Reducers of the mapred-API. Hadoop functions can be used within Flink programs and mixed with all other Flink functions. Due to Flink’s pipelined execution, Hadoop functions can arbitrarily be assembled without data exchange via HDFS. Moreover, the Flink community is currently working on a dedicated Hadoop Job operation to supporting the execution of Hadoop jobs as a whole.</p>
 
-<p>If you want to use Flink’s Hadoop compatibility package checkout our <a href="/docs/0.7-incubating/hadoop_compatibility.html">documentation</a>.</p>
+<p>If you want to use Flink’s Hadoop compatibility package checkout our <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.7/hadoop_compatibility.html">documentation</a>.</p>
 
       </article>
     </div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/news/2015/01/21/release-0.8.html
----------------------------------------------------------------------
diff --git a/content/news/2015/01/21/release-0.8.html b/content/news/2015/01/21/release-0.8.html
index c3ae95f..887d3ad 100644
--- a/content/news/2015/01/21/release-0.8.html
+++ b/content/news/2015/01/21/release-0.8.html
@@ -144,13 +144,13 @@
 <ul>
   <li>
     <p><strong>Extended filesystem support</strong>: The former <code>DistributedFileSystem</code> interface has been generalized to <code>HadoopFileSystem</code> now supporting all sub classes of <code>org.apache.hadoop.fs.FileSystem</code>. This allows users to use all file systems supported by Hadoop with Apache Flink.
-<a href="http://flink.incubator.apache.org/docs/0.8/example_connectors.html">See connecting to other systems</a></p>
+<a href="http://ci.apache.org/projects/flink/flink-docs-release-0.8/example_connectors.html">See connecting to other systems</a></p>
   </li>
   <li>
     <p><strong>Streaming Scala API</strong>: As an alternative to the existing Java API Streaming is now also programmable in Scala. The Java and Scala APIs have now the same syntax and transformations and will be kept from now on in sync in every future release.</p>
   </li>
   <li>
-    <p><strong>Streaming windowing semantics</strong>: The new windowing api offers an expressive way to define custom logic for triggering the execution of a stream window and removing elements. The new features include out-of-the-box support for windows based in logical or physical time and data-driven properties on the events themselves among others. <a href="http://flink.apache.org/docs/0.8/streaming_guide.html#window-operators">Read more here</a></p>
+    <p><strong>Streaming windowing semantics</strong>: The new windowing api offers an expressive way to define custom logic for triggering the execution of a stream window and removing elements. The new features include out-of-the-box support for windows based in logical or physical time and data-driven properties on the events themselves among others. <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.8/streaming_guide.html#window-operators">Read more here</a></p>
   </li>
   <li>
     <p><strong>Mutable and immutable objects in runtime</strong> All Flink versions before 0.8.0 were always passing the same objects to functions written by users. This is a common performance optimization, also used in other systems such as Hadoop.

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/news/2015/02/09/streaming-example.html
----------------------------------------------------------------------
diff --git a/content/news/2015/02/09/streaming-example.html b/content/news/2015/02/09/streaming-example.html
index 5428a77..4f6d1d4 100644
--- a/content/news/2015/02/09/streaming-example.html
+++ b/content/news/2015/02/09/streaming-example.html
@@ -142,7 +142,7 @@ and offers a new API including definition of flexible windows.</p>
 <p>In this post, we go through an example that uses the Flink Streaming
 API to compute statistics on stock market data that arrive
 continuously and combine the stock market data with Twitter streams.
-See the <a href="http://flink.apache.org/docs/latest/streaming_guide.html">Streaming Programming
+See the <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html">Streaming Programming
 Guide</a> for a
 detailed presentation of the Streaming API.</p>
 
@@ -242,10 +242,10 @@ found <a href="https://github.com/apache/flink/blob/master/flink-staging/flink-s
 </div>
 
 <p>See
-<a href="http://flink.apache.org/docs/latest/streaming_guide.html#sources">here</a>
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#sources">here</a>
 for how you can create streaming sources for Flink Streaming
 programs. Flink, of course, has support for reading in streams from
-<a href="http://flink.apache.org/docs/latest/streaming_guide.html#stream-connectors">external
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#stream-connectors">external
 sources</a>
 such as Apache Kafka, Apache Flume, RabbitMQ, and others. For the sake
 of this example, the data streams are simply generated using the
@@ -353,7 +353,7 @@ INFO    Custom Source(1/1) switched to DEPLOYING
 <h2 id="window-aggregations">Window aggregations</h2>
 
 <p>We first compute aggregations on time-based windows of the
-data. Flink provides <a href="http://flink.apache.org/docs/latest/streaming_guide.html#window-operators">flexible windowing semantics</a> where windows can
+data. Flink provides <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#window-operators">flexible windowing semantics</a> where windows can
 also be defined based on a count of records or any custom user-defined
 logic.</p>
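
For instance, a sliding time window over the stock stream of this example can be defined with a window size and a trigger interval. This is a sketch against the windowing API current at the time of writing:

    // A window over the last 10 seconds of data, evaluated every 5 seconds.
    // Count-based windows can be expressed analogously, e.g. via Count.of(...).
    WindowedDataStream<StockPrice> windowedStream = stockStream
        .window(Time.of(10, TimeUnit.SECONDS))
        .every(Time.of(5, TimeUnit.SECONDS));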
 
@@ -541,7 +541,7 @@ every 30 seconds.</p>
 <h2 id="combining-with-a-twitter-stream">Combining with a Twitter stream</h2>
 
 <p>Next, we will read a Twitter stream and correlate it with our stock
-price stream. Flink has support for connecting to <a href="http://flink.apache.org/docs/latest/streaming_guide.html#twitter-streaming-api">Twitter’s
+price stream. Flink has support for connecting to <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#twitter-streaming-api">Twitter’s
 API</a>,
 but for the sake of this example we generate dummy tweet data.</p>
 
@@ -758,7 +758,7 @@ these data streams are potentially infinite, we apply the join on a
 
 <h2 id="other-things-to-try">Other things to try</h2>
 
-<p>For a full feature overview please check the <a href="http://flink.apache.org/docs/latest/streaming_guide.html">Streaming Guide</a>, which describes all the available API features.
+<p>For a full feature overview, please check the <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html">Streaming Guide</a>, which describes all the available API features.
 You are very welcome to try out our features for different use cases; we are looking forward to your experiences. Feel free to <a href="http://flink.apache.org/community.html#mailing-lists">contact us</a>.</p>
 
 <h2 id="upcoming-for-streaming">Upcoming for streaming</h2>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/news/2015/03/02/february-2015-in-flink.html
----------------------------------------------------------------------
diff --git a/content/news/2015/03/02/february-2015-in-flink.html b/content/news/2015/03/02/february-2015-in-flink.html
index c41d9ee..fe4ca30 100644
--- a/content/news/2015/03/02/february-2015-in-flink.html
+++ b/content/news/2015/03/02/february-2015-in-flink.html
@@ -208,7 +208,7 @@ graph:</p>
 <h3 id="flink-expressions">Flink Expressions</h3>
 
 <p>The newly merged
-<a href="https://github.com/apache/flink/tree/master/flink-staging/flink-expressions">flink-expressions</a>
+<a href="https://github.com/apache/flink/tree/master/flink-staging/flink-table">flink-table</a>
 module is the first step in Flink’s roadmap towards logical queries
 and SQL support. Here’s a preview of how you can read two CSV files,
 assign a logical schema to them, and apply transformations like filters and

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
----------------------------------------------------------------------
diff --git a/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html b/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
index e1112fb..9224190 100644
--- a/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
+++ b/content/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html
@@ -253,7 +253,7 @@
 
 <p>Ship and local strategies do not depend on each other and can be independently chosen. Therefore, Flink can execute a join of two data sets R and S in nine different ways by combining any of the three ship strategies (RR, BF with R being broadcasted, BF with S being broadcasted) with any of the three local strategies (SM, HH with R being build-side, HH with S being build-side). Each of these strategy combinations results in different execution performance depending on the data sizes and the available amount of working memory. In case of a small data set R and a much larger data set S, broadcasting R and using it as build-side input of a Hybrid-Hash-Join is usually a good choice because the much larger data set S is not shipped and not materialized (given that the hash table completely fits into memory). If both data sets are rather large or the join is performed on many parallel instances, repartitioning both inputs is a robust choice.</p>
 
-<p>Flink features a cost-based optimizer which automatically chooses the execution strategies for all operators including joins. Without going into the details of cost-based optimization, this is done by computing cost estimates for execution plans with different strategies and picking the plan with the least estimated costs. Thereby, the optimizer estimates the amount of data which is shipped over the the network and written to disk. If no reliable size estimates for the input data can be obtained, the optimizer falls back to robust default choices. A key feature of the optimizer is to reason about existing data properties. For example, if the data of one input is already partitioned in a suitable way, the generated candidate plans will not repartition this input. Hence, the choice of a RR ship strategy becomes more likely. The same applies for previously sorted data and the Sort-Merge-Join strategy. Flink programs can help the optimizer to reason about existing data properties by 
 providing semantic information about  user-defined functions <a href="http://ci.apache.org/projects/flink/flink-docs-master/programming_guide.html#semantic-annotations">[4]</a>. While the optimizer is a killer feature of Flink, it can happen that a user knows better than the optimizer how to execute a specific join. Similar to relational database systems, Flink offers optimizer hints to tell the optimizer which join strategies to pick <a href="http://ci.apache.org/projects/flink/flink-docs-master/dataset_transformations.html#join-algorithm-hints">[5]</a>.</p>
+<p>Flink features a cost-based optimizer which automatically chooses the execution strategies for all operators including joins. Without going into the details of cost-based optimization, this is done by computing cost estimates for execution plans with different strategies and picking the plan with the least estimated costs. To do this, the optimizer estimates the amount of data which is shipped over the network and written to disk. If no reliable size estimates for the input data can be obtained, the optimizer falls back to robust default choices. A key feature of the optimizer is to reason about existing data properties. For example, if the data of one input is already partitioned in a suitable way, the generated candidate plans will not repartition this input. Hence, the choice of a RR ship strategy becomes more likely. The same applies for previously sorted data and the Sort-Merge-Join strategy. Flink programs can help the optimizer to reason about existing data properties by 
providing semantic information about user-defined functions <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#semantic-annotations">[4]</a>. While the optimizer is a killer feature of Flink, it can happen that a user knows better than the optimizer how to execute a specific join. Similar to relational database systems, Flink offers optimizer hints to tell the optimizer which join strategies to pick <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/dataset_transformations.html#join-algorithm-hints">[5]</a>.</p>
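
As a sketch of such a hint (the inputs and key positions below are made up for illustration), a user could force broadcasting the first input and using it as the build side of a hash join:

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    DataSet<Tuple2<Integer, String>> r = env.fromElements(
        new Tuple2<Integer, String>(1, "a"));   // small input R
    DataSet<Tuple2<Integer, Double>> s = env.fromElements(
        new Tuple2<Integer, Double>(1, 0.5));   // large input S

    // Hint: broadcast R and use it as the hash-table build side,
    // overriding the optimizer's own size estimates.
    DataSet<Tuple2<Tuple2<Integer, String>, Tuple2<Integer, Double>>> joined =
        r.join(s, JoinHint.BROADCAST_HASH_FIRST)
         .where(0)
         .equalTo(0);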
 
 <h3 id="how-is-flinks-join-performance">How is Flink’s join performance?</h3>
 
@@ -310,11 +310,8 @@
 <p>[1] “MapReduce: Simplified data processing on large clusters”, Dean, Ghemawat, 2004 <br />
 [2] <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html">Flink 0.8.1 documentation: Data Transformations</a> <br />
 [3] <a href="http://ci.apache.org/projects/flink/flink-docs-release-0.8/dataset_transformations.html#join">Flink 0.8.1 documentation: Joins</a> <br />
-[4] <a href="http://ci.apache.org/projects/flink/flink-docs-master/programming_guide.html#semantic-annotations">Flink 0.9-SNAPSHOT documentation: Semantic annotations</a> <br />
-[5] <a href="http://ci.apache.org/projects/flink/flink-docs-master/dataset_transformations.html#join-algorithm-hints">Flink 0.9-SNAPSHOT documentation: Optimizer join hints</a> <br /></p>
-
-<p><br />
-<small>Written by Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>).</small></p>
+[4] <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/programming_guide.html#semantic-annotations">Flink 0.9-SNAPSHOT documentation: Semantic annotations</a> <br />
+[5] <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/dataset_transformations.html#join-algorithm-hints">Flink 0.9-SNAPSHOT documentation: Optimizer join hints</a> <br /></p>
 
       </article>
     </div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/news/2015/04/07/march-in-flink.html
----------------------------------------------------------------------
diff --git a/content/news/2015/04/07/march-in-flink.html b/content/news/2015/04/07/march-in-flink.html
index 6c7003d..b85c11c 100644
--- a/content/news/2015/04/07/march-in-flink.html
+++ b/content/news/2015/04/07/march-in-flink.html
@@ -139,8 +139,7 @@
 
 <p>A Flink runner for Google Cloud Dataflow was announced. See the blog
 posts by <a href="http://data-artisans.com/dataflow.html">data Artisans</a> and
-the [Google Cloud Platform Blog]
-(http://googlecloudplatform.blogspot.de/2015/03/announcing-Google-Cloud-Dataflow-runner-for-Apache-Flink.html).
+the <a href="http://googlecloudplatform.blogspot.de/2015/03/announcing-Google-Cloud-Dataflow-runner-for-Apache-Flink.html">Google Cloud Platform Blog</a>.
 Google Cloud Dataflow programs can be written using an open-source
 SDK and run on multiple backends, either as a managed service inside
 Google’s infrastructure, or leveraging open source runners,
@@ -203,7 +202,7 @@ programs.</p>
 
 <p>A new execution environment enables non-iterative Flink jobs to use
 Tez as an execution backend instead of Flink’s own network stack. Learn more
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/flink_on_tez_guide.html">here</a>.</p>
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/flink_on_tez.html">here</a>.</p>
 
       </article>
     </div>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/c4ce2d7c/content/news/2015/04/13/release-0.9.0-milestone1.html
----------------------------------------------------------------------
diff --git a/content/news/2015/04/13/release-0.9.0-milestone1.html b/content/news/2015/04/13/release-0.9.0-milestone1.html
index 55b8c55..1322b30 100644
--- a/content/news/2015/04/13/release-0.9.0-milestone1.html
+++ b/content/news/2015/04/13/release-0.9.0-milestone1.html
@@ -171,7 +171,7 @@ for Flink programs. Tables are available for both static and streaming
 data sources (DataSet and DataStream APIs).</p>
 
 <p>Check out the Table guide for Java and Scala
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/table.html">here</a>.</p>
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/libs/table.html">here</a>.</p>
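
For a first impression of the API, here is a minimal Java sketch; the WC POJO with word and count fields is assumed for illustration:

    // Lift a DataSet into a Table, apply relational expressions,
    // and convert the result back into a DataSet.
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
    TableEnvironment tableEnv = new TableEnvironment();

    DataSet<WC> input = env.fromElements(new WC("hello", 1), new WC("hello", 1));

    Table table = tableEnv.fromDataSet(input);
    Table wordCounts = table
        .groupBy("word")
        .select("word, count.sum as count");

    DataSet<WC> result = tableEnv.toDataSet(wordCounts, WC.class);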
 
 <h3 id="gelly-graph-processing-api">Gelly Graph Processing API</h3>
 
@@ -185,14 +185,14 @@ vertex-centric graph processing, as well as a library of common graph
 algorithms, including PageRank, SSSP, label propagation, and community
 detection.</p>
 
-<p>Gelly internally builds on top of Flink’s <a href="http://ci.apache.org/projects/flink/flink-docs-master/iterations.html">delta
+<p>Gelly internally builds on top of Flink’s <a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/iterations.html">delta
 iterations</a>. Iterative
 graph algorithms are executed leveraging mutable state, achieving
 performance similar to that of specialized graph processing systems.</p>
 
 <p>Gelly will eventually subsume Spargel, Flink’s Pregel-like API. Check
 out the Gelly guide
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/gelly_guide.html">here</a>.</p>
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/libs/gelly_guide.html">here</a>.</p>
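
As a rough sketch of the API (the vertex initialization and the shortest-paths parameters are illustrative; see the guide above for the authoritative signatures):

    // Build a graph from an edge DataSet, initialize vertex values,
    // and run a library algorithm such as single-source shortest paths.
    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    DataSet<Edge<Long, Double>> edges = env.fromElements(
        new Edge<Long, Double>(1L, 2L, 1.0),
        new Edge<Long, Double>(2L, 3L, 2.0));

    Graph<Long, Double, Double> graph = Graph.fromDataSet(edges,
        new MapFunction<Long, Double>() {
            public Double map(Long vertexId) { return Double.MAX_VALUE; }
        }, env);

    Graph<Long, Double, Double> shortestPaths =
        graph.run(new SingleSourceShortestPaths<Long>(1L, 10));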
 
 <h3 id="flink-machine-learning-library">Flink Machine Learning Library</h3>
 
@@ -238,7 +238,7 @@ algorithms, Tez focuses on scalability and elastic resource usage in
 shared YARN clusters.</p>
 
 <p>Get started with Flink on Tez
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/flink_on_tez_guide.html">here</a>.</p>
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/flink_on_tez.html">here</a>.</p>
 
 <h3 id="reworked-distributed-runtime-on-akka">Reworked Distributed Runtime on Akka</h3>
 
@@ -261,7 +261,7 @@ system is internally tracking the Kafka offsets to ensure that Flink
 can pick up data from Kafka where it left off in case of a failure.</p>
 
 <p>Read
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/streaming_guide.html#apache-kafka">here</a>
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/apis/streaming_guide.html#apache-kafka">here</a>
 to learn how to use the persistent Kafka source.</p>
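
A minimal sketch of consuming a topic with the persistent source might look as follows; the topic name, the properties, and the exact constructor signature are assumptions here, so consult the linked documentation for the authoritative form:

    // Consume a Kafka topic with the offset-tracking persistent source.
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    Properties props = new Properties();
    props.setProperty("zookeeper.connect", "localhost:2181"); // placeholder address
    props.setProperty("group.id", "flink-demo");              // placeholder group

    DataStream<String> kafkaStream = env.addSource(
        new PersistentKafkaSource<String>(
            "my-topic",                  // assumed topic name
            new SimpleStringSchema(),    // deserialize bytes to String
            new ConsumerConfig(props))); // constructor args are an assumption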
 
 <h3 id="improved-yarn-support">Improved YARN support</h3>
@@ -278,7 +278,7 @@ integrators to easily control Flink on YARN within their Hadoop 2
 cluster.</p>
 
 <p>See the YARN docs
-<a href="http://ci.apache.org/projects/flink/flink-docs-master/yarn_setup.html">here</a>.</p>
+<a href="http://ci.apache.org/projects/flink/flink-docs-master/setup/yarn_setup.html">here</a>.</p>
 
 <h2 id="more-improvements-and-fixes">More Improvements and Fixes</h2>