Posted to commits@flink.apache.org by nk...@apache.org on 2019/07/23 15:48:17 UTC

[flink-web] 01/05: [hotfix] use site.baseurl and site.DOCS_BASE_URL instead of manual URLs in 2019 posts

This is an automated email from the ASF dual-hosted git repository.

nkruber pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 0266dc92dc246b6ca2f768c8c965d26e597ba168
Author: Nico Kruber <ni...@ververica.com>
AuthorDate: Wed Jul 17 11:46:36 2019 +0200

    [hotfix] use site.baseurl and site.DOCS_BASE_URL instead of manual URLs in 2019 posts
    
    This replaces most uses of http[s]://flink.apache.org with {{ site.baseurl }}
    and http[s]://ci.apache.org/projects/flink/ with {{ site.DOCS_BASE_URL }},
    allowing a better local build, smaller .md files, and safer URLs.
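
For readers unfamiliar with the mechanism: Jekyll exposes everything defined in `_config.yml` under the `site` namespace, which is what makes these Liquid substitutions work. A minimal sketch of the assumed definitions — the authoritative values live in the flink-web repository's own `_config.yml`:

```yaml
# _config.yml — assumed values, for illustration only
baseurl: https://flink.apache.org
DOCS_BASE_URL: https://ci.apache.org/projects/flink/
```

At build time, `[Downloads page]({{ site.baseurl }}/downloads.html)` then expands to the full production URL, while a local build can override `baseurl` so links resolve against the local server. Note that the posts use `{{ site.DOCS_BASE_URL }}` with no separating slash, so the value must keep its trailing `/`.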
---
 _posts/2019-02-15-release-1.7.2.md             |  2 +-
 _posts/2019-02-21-monitoring-best-practices.md | 18 +++++++++---------
 _posts/2019-02-25-release-1.6.4.md             |  2 +-
 _posts/2019-03-11-prometheus-monitoring.md     | 10 +++++-----
 _posts/2019-04-09-release-1.8.0.md             | 14 +++++++-------
 _posts/2019-04-17-sod.md                       | 24 ++++++++++++------------
 _posts/2019-05-03-pulsar-flink.md              |  4 ++--
 _posts/2019-05-14-temporal-tables.md           |  8 ++++----
 _posts/2019-05-17-state-ttl.md                 |  4 ++--
 _posts/2019-06-05-flink-network-stack.md       | 26 +++++++++++++-------------
 _posts/2019-06-26-broadcast-state.md           |  2 +-
 _posts/2019-07-02-release-1.8.1.md             |  2 +-
 content/2019/05/03/pulsar-flink.html           |  4 ++--
 content/2019/05/14/temporal-tables.html        |  2 +-
 content/2019/06/05/flink-network-stack.html    |  2 +-
 content/blog/feed.xml                          | 22 +++++++++++-----------
 content/news/2019/02/15/release-1.7.2.html     |  2 +-
 content/news/2019/02/25/release-1.6.4.html     |  2 +-
 content/news/2019/04/09/release-1.8.0.html     |  8 ++++----
 content/news/2019/07/02/release-1.8.1.html     |  2 +-
 20 files changed, 80 insertions(+), 80 deletions(-)

diff --git a/_posts/2019-02-15-release-1.7.2.md b/_posts/2019-02-15-release-1.7.2.md
index be5b0f6..a2df9ca 100644
--- a/_posts/2019-02-15-release-1.7.2.md
+++ b/_posts/2019-02-15-release-1.7.2.md
@@ -33,7 +33,7 @@ Updated Maven dependencies:
 </dependency>
 ```
 
-You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html).
+You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
 
 List of resolved issues:
 
diff --git a/_posts/2019-02-21-monitoring-best-practices.md b/_posts/2019-02-21-monitoring-best-practices.md
index 153625f..f7b4ad2 100644
--- a/_posts/2019-02-21-monitoring-best-practices.md
+++ b/_posts/2019-02-21-monitoring-best-practices.md
@@ -40,7 +40,7 @@ any given point in time.
 ## Flink’s Metrics System
 
 The foundation for monitoring Flink jobs is its [metrics
-system](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html>)
+system](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html>)
 which consists of two components: `Metrics` and `MetricsReporters`.
 
 ### Metrics
@@ -61,7 +61,7 @@ the number of records temporarily buffered in managed state. Besides counters,
 Flink offers additional metrics types like gauges and histograms. For
 instructions on how to register your own metrics with Flink’s metrics system
 please check out [Flink’s
-documentation](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#registering-metrics>).
+documentation](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#registering-metrics>).
 In this blog post, we will focus on how to get the most out of Flink’s built-in
 metrics.
 
@@ -72,7 +72,7 @@ MetricsReporters to send the metrics to external systems. Apache Flink provides
 reporters to the most common monitoring tools out-of-the-box including JMX,
 Prometheus, Datadog, Graphite and InfluxDB. For information about how to
 configure a reporter check out Flink’s [MetricsReporter
-documentation](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#reporter>).
+documentation](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#reporter>).
 
 In the remaining part of this blog post, we will go over some of the most
 important metrics to monitor your Apache Flink application.
@@ -132,7 +132,7 @@ keeping up with the upstream systems.
 
 Flink provides multiple metrics to measure the throughput of our application.
 For each operator or task (remember: a task can contain multiple [chained
-tasks](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/#task-chaining-and-resource-groups>)),
+tasks](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/dev/stream/operators/#task-chaining-and-resource-groups>)),
 Flink counts the number of records and bytes going in and out. Out of those
 metrics, the rate of outgoing records per operator is often the most intuitive
 and easiest to reason about.
@@ -261,7 +261,7 @@ inside the Flink topology and cannot be attributed to transactional sinks or
 events being buffered for functional reasons (4.).
 
 To this end, Flink comes with a feature called [Latency
-Tracking](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#latency-tracking>).
+Tracking](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#latency-tracking>).
 When enabled, Flink will insert so-called latency markers periodically at all
 sources. For each sub-task, a latency distribution from each source to this
 operator will be reported. The granularity of these histograms can be further
@@ -309,7 +309,7 @@ metric to watch. This is especially true when using Flink’s filesystem
 statebackend as it keeps all state objects on the JVM Heap. If the size of
 long-living objects on the Heap increases significantly, this can usually be
 attributed to the size of your application state (check the 
-[checkpointing metrics](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#checkpointing>)
+[checkpointing metrics](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#checkpointing>)
 for an estimated size of the on-heap state). The possible reasons for growing
 state are very application-specific. Typically, an increasing number of keys, a
 large event-time skew between different input streams or simply missing state
@@ -322,7 +322,7 @@ to 250 megabyte by default.
 
 * The biggest driver of Direct memory is by far the
 number of Flink’s network buffers, which can be
-[configured](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/config.html#configuring-the-network-buffers>).
+[configured](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/ops/config.html#configuring-the-network-buffers>).
 
 * Mapped memory is usually close to zero as Flink does not use memory-mapped files.
 
@@ -414,7 +414,7 @@ system to gather insights about system resources, i.e. memory, CPU &
 network-related metrics for the whole machine as opposed to the Flink processes
 alone. System resource monitoring is disabled by default and requires additional
 dependencies on the classpath. Please check out the 
-[Flink system resource metrics documentation](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#system-resources>) for
+[Flink system resource metrics documentation](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#system-resources>) for
 additional guidance and details. System resource monitoring in Flink can be very
 helpful in setups without existing host monitoring capabilities.
 
@@ -432,5 +432,5 @@ Flink’s internals early on.
 
 Last but not least, this post only scratches the surface of the overall metrics
 and monitoring capabilities of Apache Flink. I highly recommend going over
-[Flink’s metrics documentation](<https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html>)
+[Flink’s metrics documentation](<{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html>)
 for a full reference of Flink’s metrics system.
\ No newline at end of file
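Several hunks above point at the metrics-registration docs; for orientation, here is a minimal sketch of what registering a custom counter looks like against the 1.7-era API. The operator class `MyMapper` and the metric name `events` are illustrative, not taken from the post:

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Hypothetical operator that registers a user-defined counter next to
// Flink's built-in numRecordsIn/numRecordsOut metrics discussed in the post.
public class MyMapper extends RichMapFunction<String, String> {

    private transient Counter eventCounter;

    @Override
    public void open(Configuration parameters) {
        this.eventCounter = getRuntimeContext()
            .getMetricGroup()
            .counter("events");
    }

    @Override
    public String map(String value) {
        eventCounter.inc();  // exported through whichever MetricsReporter is configured
        return value;
    }
}
```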
diff --git a/_posts/2019-02-25-release-1.6.4.md b/_posts/2019-02-25-release-1.6.4.md
index 5fc54b7..dd46e29 100644
--- a/_posts/2019-02-25-release-1.6.4.md
+++ b/_posts/2019-02-25-release-1.6.4.md
@@ -31,7 +31,7 @@ Updated Maven dependencies:
 </dependency>
 ```
 
-You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html).
+You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
 
 List of resolved issues:
 
diff --git a/_posts/2019-03-11-prometheus-monitoring.md b/_posts/2019-03-11-prometheus-monitoring.md
index dc47eb9..20f554f 100644
--- a/_posts/2019-03-11-prometheus-monitoring.md
+++ b/_posts/2019-03-11-prometheus-monitoring.md
@@ -10,7 +10,7 @@ category: features
 excerpt: This blog post describes how developers can leverage Apache Flink's built-in metrics system together with Prometheus to observe and monitor streaming applications in an effective way.
 ---
 
-This blog post describes how developers can leverage Apache Flink's built-in [metrics system](https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html) together with [Prometheus](https://prometheus.io/) to observe and monitor streaming applications in an effective way. This is a follow-up post from my [Flink Forward](https://flink-forward.org/) Berlin 2018 talk ([slides](https://www.slideshare.net/MaximilianBode1/monitoring-flink-with-prometheus), [video](https [...]
+This blog post describes how developers can leverage Apache Flink's built-in [metrics system]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html) together with [Prometheus](https://prometheus.io/) to observe and monitor streaming applications in an effective way. This is a follow-up post from my [Flink Forward](https://flink-forward.org/) Berlin 2018 talk ([slides](https://www.slideshare.net/MaximilianBode1/monitoring-flink-with-prometheus), [video](https://www.verver [...]
 
 ## Why Prometheus?
 
@@ -24,7 +24,7 @@ Prometheus is a metrics-based monitoring system that was originally created in 2
 
 * **PromQL** is Prometheus' [query language](https://prometheus.io/docs/prometheus/latest/querying/basics/). It can be used for both building dashboards and setting up alert rules that will trigger when specific conditions are met.
 
-When considering metrics and monitoring systems for your Flink jobs, there are many [options](https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html). Flink offers native support for exposing data to Prometheus via the `PrometheusReporter` configuration. Setting up this integration is very easy.
+When considering metrics and monitoring systems for your Flink jobs, there are many [options]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html). Flink offers native support for exposing data to Prometheus via the `PrometheusReporter` configuration. Setting up this integration is very easy.
 
 Prometheus is a great choice as usually Flink jobs are not running in isolation but in a greater context of microservices. For making metrics available to Prometheus from other parts of a larger system, there are two options: There exist [libraries for all major languages](https://prometheus.io/docs/instrumenting/clientlibs/) to instrument other applications. Additionally, there is a wide variety of [exporters](https://prometheus.io/docs/instrumenting/exporters/), which are tools that ex [...]
 
@@ -36,7 +36,7 @@ We have provided a [GitHub repository](https://github.com/mbode/flink-prometheus
 ./gradlew composeUp
 ```
 
-This builds a Flink job using the build tool [Gradle](https://gradle.org/) and starts up a local environment based on [Docker Compose](https://docs.docker.com/compose/) running the job in a [Flink job cluster](https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/deployment/docker.html#flink-job-cluster) (reachable at [http://localhost:8081](http://localhost:8081/)) as well as a Prometheus instance ([http://localhost:9090](http://localhost:9090/)).
+This builds a Flink job using the build tool [Gradle](https://gradle.org/) and starts up a local environment based on [Docker Compose](https://docs.docker.com/compose/) running the job in a [Flink job cluster]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/ops/deployment/docker.html#flink-job-cluster) (reachable at [http://localhost:8081](http://localhost:8081/)) as well as a Prometheus instance ([http://localhost:9090](http://localhost:9090/)).
 
 <center>
 <img src="{{ site.baseurl }}/img/blog/2019-03-11-prometheus-monitoring/prometheusexamplejob.png" width="600px" alt="PrometheusExampleJob in Flink Web UI"/>
@@ -73,7 +73,7 @@ To start monitoring Flink with Prometheus, the following steps are necessary:
 
         cp /opt/flink/opt/flink-metrics-prometheus-1.7.2.jar /opt/flink/lib
 
-2. [Configure the reporter](https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#reporter) in Flink's _flink-conf.yaml_. All job managers and task managers will expose the metrics on the configured port.
+2. [Configure the reporter]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#reporter) in Flink's _flink-conf.yaml_. All job managers and task managers will expose the metrics on the configured port.
 
         metrics.reporters: prom
         metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
@@ -105,7 +105,7 @@ To test Prometheus' alerting feature, kill one of the Flink task managers via
 docker kill taskmanager1
 ```
 
-Our Flink job can recover from this partial failure via the mechanism of [Checkpointing](https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/state/checkpointing.html). Nevertheless, after roughly one minute (as configured in the alert rule) the following alert will fire:
+Our Flink job can recover from this partial failure via the mechanism of [Checkpointing]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/dev/stream/state/checkpointing.html). Nevertheless, after roughly one minute (as configured in the alert rule) the following alert will fire:
 
 <center>
 <img src="{{ site.baseurl }}/img/blog/2019-03-11-prometheus-monitoring/prometheusalerts.png" width="600px" alt="Prometheus web UI with example alert"/>
diff --git a/_posts/2019-04-09-release-1.8.0.md b/_posts/2019-04-09-release-1.8.0.md
index 6ff8107..b072bf6 100644
--- a/_posts/2019-04-09-release-1.8.0.md
+++ b/_posts/2019-04-09-release-1.8.0.md
@@ -17,15 +17,15 @@ for more details.
 
 Flink 1.8.0 is API-compatible with previous 1.x.y releases for APIs annotated
 with the `@Public` annotation.  The release is available now and we encourage
-everyone to [download the release](http://flink.apache.org/downloads.html) and
+everyone to [download the release]({{ site.baseurl }}/downloads.html) and
 check out the updated
-[documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.8/).
+[documentation]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/).
 Feedback through the Flink [mailing
-lists](http://flink.apache.org/community.html#mailing-lists) or
+lists]({{ site.baseurl }}/community.html#mailing-lists) or
 [JIRA](https://issues.apache.org/jira/projects/FLINK/summary) is, as always,
 very much appreciated!
 
-You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html) on the Flink project site.
+You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html) on the Flink project site.
 
 {% toc %}
 
@@ -43,7 +43,7 @@ addition of the Blink enhancements
 Nevertheless, this release includes some important new features and bug fixes.
 The most interesting of those are highlighted below. Please consult the
 [complete changelog](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&version=12344274)
-and the [release notes](https://ci.apache.org/projects/flink/flink-docs-release-1.8/release-notes/flink-1.8.html)
+and the [release notes]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/release-notes/flink-1.8.html)
 for more details.
 
 
@@ -190,7 +190,7 @@ for more details.
   If a deployment relies on `flink-shaded-hadoop2` being included in
   `flink-dist`, then you must manually download a pre-packaged Hadoop
   jar from the optional components section of the [download
-  page](https://flink.apache.org/downloads.html) and copy it into the
+  page]({{ site.baseurl }}/downloads.html) and copy it into the
   `/lib` directory.  Alternatively, a Flink distribution that includes
   hadoop can be built by packaging `flink-dist` and activating the
   `include-hadoop` maven profile.
@@ -239,7 +239,7 @@ for more details.
 ## Release Notes
 
 Please review the [release
-notes](https://ci.apache.org/projects/flink/flink-docs-release-1.8/release-notes/flink-1.8.html)
+notes]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/release-notes/flink-1.8.html)
 for a more detailed list of changes and new features if you plan to upgrade
 your Flink setup to Flink 1.8.
 
diff --git a/_posts/2019-04-17-sod.md b/_posts/2019-04-17-sod.md
index 301d211..5b81ba6 100644
--- a/_posts/2019-04-17-sod.md
+++ b/_posts/2019-04-17-sod.md
@@ -9,7 +9,7 @@ authors:
   twitter: "snntrable"
 ---
 
-The Apache Flink community is happy to announce its application to the first edition of [Season of Docs](https://developers.google.com/season-of-docs/) by Google. The program is bringing together Open Source projects and technical writers to raise awareness for and improve documentation of Open Source projects. While the community is continuously looking for new contributors to collaborate on our documentation, we would like to take this chance to work with one or two technical writers t [...]
+The Apache Flink community is happy to announce its application to the first edition of [Season of Docs](https://developers.google.com/season-of-docs/) by Google. The program is bringing together Open Source projects and technical writers to raise awareness for and improve documentation of Open Source projects. While the community is continuously looking for new contributors to collaborate on our documentation, we would like to take this chance to work with one or two technical writers t [...]
 
 The community has discussed this opportunity on the [dev mailinglist](https://lists.apache.org/thread.html/3c789b6187da23ad158df59bbc598543b652e3cfc1010a14e294e16a@%3Cdev.flink.apache.org%3E) and agreed on three project ideas to submit to the program. We have a great team of mentors (Stephan, Fabian, David, Jark & Konstantin) lined up and are very much looking forward to the first proposals by potential technical writers (given we are admitted to the program ;)). In case of questions fee [...]
 
@@ -24,11 +24,11 @@ In this project, we would like to restructure, consolidate and extend the concep
 
 **Related material:**
 
-1. [https://ci.apache.org/projects/flink/flink-docs-release-1.8/](https://ci.apache.org/projects/flink/flink-docs-release-1.8/)
-2. [https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev)
-3. [https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops](https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops)
-4. [https://ci.apache.org/projects/flink/flink-docs-release-1.8/concepts/programming-model.html#time](https://ci.apache.org/projects/flink/flink-docs-release-1.8/concepts/programming-model.html#time)
-5. [https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/event_time.html](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/event_time.html)
+1. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/)
+2. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev)
+3. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops)
+4. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/concepts/programming-model.html#time]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/concepts/programming-model.html#time)
+5. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/event_time.html]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/event_time.html)
 
 ### Project 2: Improve Documentation of Flink Deployments & Operations
 
@@ -39,8 +39,8 @@ In this project, we would like to restructure this part of the documentation and
 
 **Related material:**
 
-1. [https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops](https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/)
-2. [https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring](https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring)
+1. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/)
+2. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring)
 
 ### Project 3: Improve Documentation for Relational APIs (Table API & SQL)
 
@@ -51,8 +51,8 @@ The existing documentation could be reorganized to prepare for covering the new
 
 **Related material:**
 
-1. [Table API & SQL docs main page](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table)
-2. [Built-in functions](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/functions.html)
-3. [Concepts](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/common.html)
-4. [Streaming Concepts](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/)
+1. [Table API & SQL docs main page]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table)
+2. [Built-in functions]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/functions.html)
+3. [Concepts]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/common.html)
+4. [Streaming Concepts]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/streaming/)
 
diff --git a/_posts/2019-05-03-pulsar-flink.md b/_posts/2019-05-03-pulsar-flink.md
index b9059e9..14e0365 100644
--- a/_posts/2019-05-03-pulsar-flink.md
+++ b/_posts/2019-05-03-pulsar-flink.md
@@ -41,7 +41,7 @@ Finally, Pulsar’s flexible messaging framework unifies the streaming and queui
 
 ## Pulsar’s view on data: Segmented data streams
 
-Apache Flink is a streaming-first computation framework that perceives [batch processing as a special case of streaming](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html). Flink’s view on data streams distinguishes batch and stream processing between bounded and unbounded data streams, assuming that for batch workloads the data stream is finite, with a beginning and an end.
+Apache Flink is a streaming-first computation framework that perceives [batch processing as a special case of streaming]({{ site.baseurl }}/news/2019/02/13/unified-batch-streaming-blink.html). Flink’s view on data streams distinguishes batch and stream processing between bounded and unbounded data streams, assuming that for batch workloads the data stream is finite, with a beginning and an end.
 
 Apache Pulsar has a similar perspective to that of Apache Flink with regards to the data layer. The framework also uses streams as a unified view on all data, while its layered architecture allows traditional pub-sub messaging for streaming workloads and continuous data processing or usage of *Segmented Streams* and bounded data stream for batch and static workloads. 
 
@@ -155,4 +155,4 @@ wc.output(pulsarOutputFormat);
 
 ## Conclusion
 
-Both Pulsar and Flink share a similar view on how the data and the computation level of an application can be *“streaming-first”* with batch as a special case of streaming. With Pulsar’s Segmented Streams approach and Flink’s steps to unify batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies together to provide elastic data processing at massive scale. Subscribe to the [Apache Flink](https://flink.apache.org/community.html#m [...]
+Both Pulsar and Flink share a similar view on how the data and the computation level of an application can be *“streaming-first”* with batch as a special case of streaming. With Pulsar’s Segmented Streams approach and Flink’s steps to unify batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies together to provide elastic data processing at massive scale. Subscribe to the [Apache Flink]({{ site.baseurl }}/community.html#mailing [...]
diff --git a/_posts/2019-05-14-temporal-tables.md b/_posts/2019-05-14-temporal-tables.md
index d630807..432765a 100644
--- a/_posts/2019-05-14-temporal-tables.md
+++ b/_posts/2019-05-14-temporal-tables.md
@@ -27,7 +27,7 @@ In the 1.7 release, Flink has introduced the concept of **temporal tables** into
 
 * Exposing the stream as a **temporal table function** that maps each point in time to a static relation.
 
-Going back to our example use case, a temporal table is just what we need to model the conversion rate data such as to make it useful for point-in-time querying. Temporal table functions are implemented as an extension of Flink’s generic [table function](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/udfs.html#table-functions) class and can be defined in the same straightforward way to be used with the Table API or SQL parser.
+Going back to our example use case, a temporal table is just what we need to model the conversion rate data such as to make it useful for point-in-time querying. Temporal table functions are implemented as an extension of Flink’s generic [table function]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/udfs.html#table-functions) class and can be defined in the same straightforward way to be used with the Table API or SQL parser.
 
 ```java
 import org.apache.flink.table.functions.TemporalTableFunction;
@@ -97,10 +97,10 @@ Each record from the append-only table on the probe side (```Taxi Fare```) is jo
 </center>
 <br>
 
-Temporal table joins support both [processing](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/joins.html#processing-time-temporal-joins) and [event time](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/joins.html#event-time-temporal-joins) semantics and effectively limit the amount of data kept in state while also allowing records on the build side to be arbitrarily old, as opposed to time-windowed joins. Probe-side records [...]
+Temporal table joins support both [processing]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/streaming/joins.html#processing-time-temporal-joins) and [event time]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/streaming/joins.html#event-time-temporal-joins) semantics and effectively limit the amount of data kept in state while also allowing records on the build side to be arbitrarily old, as opposed to time-windowed joins. Probe-side records only need to be kept in s [...]
 
 * Narrowing the **scope** of the join: only the time-matching version of ```ratesHistory``` is visible for a given ```taxiFare.time```;
-* Pruning **unneeded records** from state: for cases using event time, records between current time and the [watermark](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/event_time.html#event-time-and-watermarks) delay are persisted for both the probe and build side. These are discarded as soon as the watermark arrives and the results are emitted — allowing the join operation to move forward in time and the build table to “refresh” its version in state.
+* Pruning **unneeded records** from state: for cases using event time, records between current time and the [watermark]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/event_time.html#event-time-and-watermarks) delay are persisted for both the probe and build side. These are discarded as soon as the watermark arrives and the results are emitted — allowing the join operation to move forward in time and the build table to “refresh” its version in state.
 
 ## Conclusion
 
@@ -108,4 +108,4 @@ All this means it is now possible to express continuous stream enrichment in rel
 
 If you'd like to get some **hands-on practice in joining streams with Flink SQL** (and Flink SQL in general), check out this [free training for Flink SQL](https://github.com/ververica/sql-training/wiki). The training environment is based on Docker and set up in just a few minutes.
 
-Subscribe to the [Apache Flink mailing lists](https://flink.apache.org/community.html#mailing-lists) to stay up-to-date with the latest developments in this space.
+Subscribe to the [Apache Flink mailing lists]({{ site.baseurl }}/community.html#mailing-lists) to stay up-to-date with the latest developments in this space.
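The first hunk above references the post's temporal table function; for orientation, a sketch of how one is created and registered with the 1.8-era Table API. The `RatesHistory` table (fields `r_currency`, `r_rate`, `r_proctime`) is assumed to be registered beforehand:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.api.java.StreamTableEnvironment;
import org.apache.flink.table.functions.TemporalTableFunction;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
StreamTableEnvironment tEnv = TableEnvironment.getTableEnvironment(env);

// Assumed: an append-only table "RatesHistory" was registered earlier.
Table ratesHistory = tEnv.scan("RatesHistory");

// Map each point in time to the rate versions valid at that time, keyed by currency.
TemporalTableFunction rates =
    ratesHistory.createTemporalTableFunction("r_proctime", "r_currency");
tEnv.registerFunction("Rates", rates);
// Usable from SQL as: ... FROM TaxiFare f, LATERAL TABLE (Rates(f.time)) r ...
```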
diff --git a/_posts/2019-05-17-state-ttl.md b/_posts/2019-05-17-state-ttl.md
index 592f180..a53e7d4 100644
--- a/_posts/2019-05-17-state-ttl.md
+++ b/_posts/2019-05-17-state-ttl.md
@@ -33,7 +33,7 @@ Both requirements can be addressed by a feature that periodically, yet continuou
 
 The 1.6.0 release of Apache Flink introduced the State TTL feature. It enabled developers of stream processing applications to configure the state of operators to expire and be cleaned up after a defined timeout (time-to-live). In Flink 1.8.0 the feature was extended with continuous cleanup of old entries (according to the TTL setting) for both the RocksDB and the heap state backends (FSStateBackend and MemoryStateBackend).
 
-In Flink’s DataStream API, application state is defined by a [state descriptor](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/state.html#using-managed-keyed-state). State TTL is configured by passing a `StateTtlConfiguration` object to a state descriptor. The following Java example shows how to create a state TTL configuration and provide it to the state descriptor that holds the last login time of a user as a `Long` value:
+In Flink’s DataStream API, application state is defined by a [state descriptor]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/stream/state/state.html#using-managed-keyed-state). State TTL is configured by passing a `StateTtlConfiguration` object to a state descriptor. The following Java example shows how to create a state TTL configuration and provide it to the state descriptor that holds the last login time of a user as a `Long` value:
 
 ```java
 import org.apache.flink.api.common.state.StateTtlConfig;
@@ -63,7 +63,7 @@ State TTL employs a lazy strategy to clean up expired state. This can lead to th
 * **Which time semantics are used for the Time-to-Live timers?** 
 With Flink 1.8.0, users can only define a state TTL in terms of processing time. The support for event time is planned for future Apache Flink releases.
 
-You can read more about how to use state TTL in the [Apache Flink documentation](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#state-time-to-live-ttl).
+You can read more about how to use state TTL in the [Apache Flink documentation]({{ site.DOCS_BASE_URL }}flink-docs-stable/dev/stream/state/state.html#state-time-to-live-ttl).
 
 Internally, the State TTL feature is implemented by storing an additional timestamp of the last relevant state access, along with the actual state value. While this approach adds some storage overhead, it allows Flink to check for the expired state during state access, checkpointing, recovery, or dedicated storage cleanup procedures.
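
The hunk above cuts the post's Java example off right after the import; for orientation, a sketch of the configuration it goes on to build (1.8-era API, following the post's last-login scenario; the seven-day TTL and descriptor name are illustrative):

```java
import org.apache.flink.api.common.state.StateTtlConfig;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.time.Time;

StateTtlConfig ttlConfig = StateTtlConfig
    .newBuilder(Time.days(7))                                   // time-to-live
    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)  // reset TTL on writes
    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
    .build();

ValueStateDescriptor<Long> lastUserLogin =
    new ValueStateDescriptor<>("lastUserLogin", Long.class);
lastUserLogin.enableTimeToLive(ttlConfig);  // entries expire per the config above
```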
 
diff --git a/_posts/2019-06-05-flink-network-stack.md b/_posts/2019-06-05-flink-network-stack.md
index 0888a15..6e26fbd 100644
--- a/_posts/2019-06-05-flink-network-stack.md
+++ b/_posts/2019-06-05-flink-network-stack.md
@@ -97,17 +97,17 @@ The following table summarises the valid combinations:
 
 
 <sup>1</sup> Currently not used by Flink. <br>
-<sup>2</sup> This may become applicable to streaming jobs once the [Batch/Streaming unification](https://flink.apache.org/roadmap.html#batch-and-streaming-unification) is done.
+<sup>2</sup> This may become applicable to streaming jobs once the [Batch/Streaming unification]({{ site.baseurl }}/roadmap.html#batch-and-streaming-unification) is done.
 
 
 <br>
-Additionally, for subtasks with more than one input, scheduling can start in two ways: after *all* or after *any* of the input producers have produced a record/their complete dataset. For tuning the output types and scheduling decisions in batch jobs, please have a look at [ExecutionConfig#setExecutionMode()](https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/api/common/ExecutionConfig.html#setExecutionMode-org.apache.flink.api.common.ExecutionMode-) - and [Exe [...]
+Additionally, for subtasks with more than one input, scheduling can start in two ways: after *all* or after *any* of the input producers have produced a record/their complete dataset. For tuning the output types and scheduling decisions in batch jobs, please have a look at [ExecutionConfig#setExecutionMode()]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/api/common/ExecutionConfig.html#setExecutionMode-org.apache.flink.api.common.ExecutionMode-) - and [ExecutionMode]({ [...]
 
 <br>
 
 # Physical Transport
 
-In order to understand the physical data connections, please recall that, in Flink, different tasks may share the same slot via [slot sharing groups](https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/operators/#task-chaining-and-resource-groups). TaskManagers may also provide more than one slot to allow multiple subtasks of the same task to be scheduled onto the same TaskManager.
+In order to understand the physical data connections, please recall that, in Flink, different tasks may share the same slot via [slot sharing groups]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/stream/operators/#task-chaining-and-resource-groups). TaskManagers may also provide more than one slot to allow multiple subtasks of the same task to be scheduled onto the same TaskManager.
 
 For the example pictured below, we will assume a parallelism of 4 and a deployment with two task managers offering 2 slots each. TaskManager 1 executes subtasks A.1, A.2, B.1, and B.2 and TaskManager 2 executes subtasks A.3, A.4, B.3, and B.4. In a shuffle-type connection between task A and task B, for example from a `keyBy()`, there are 2x4 logical connections to handle on each TaskManager, some of which are local, some remote:
 <br>
@@ -158,11 +158,11 @@ Each (remote) network connection between different tasks will get its own TCP ch
 </center>
 <br>
 
-The results of each subtask are called [ResultPartition](https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/partition/ResultPartition.html), each split into separate [ResultSubpartitions](https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/partition/ResultSubpartition.html) — one for each logical channel. At this point in the stack, Flink is not dealing with individual records anymo [...]
+The results of each subtask are called [ResultPartition]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/partition/ResultPartition.html), each split into separate [ResultSubpartitions]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/partition/ResultSubpartition.html) — one for each logical channel. At this point in the stack, Flink is not dealing with individual records anymore but instead with a grou [...]
 
     #channels * buffers-per-channel + floating-buffers-per-gate
 
-The total number of buffers on a single TaskManager usually does not need configuration. See the [Configuring the Network Buffers](https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#configuring-the-network-buffers) documentation for details on how to do so if needed.
+The total number of buffers on a single TaskManager usually does not need configuration. See the [Configuring the Network Buffers]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#configuring-the-network-buffers) documentation for details on how to do so if needed.
 
 ## Inflicting Backpressure (1)
 
@@ -191,7 +191,7 @@ Receivers will announce the availability of buffers as **credits** to the sender
 </center>
 <br>
 
-Credit-based flow control will use [buffers-per-channel](https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-buffers-per-channel) to specify how many buffers are exclusive (mandatory) and [floating-buffers-per-gate](https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-floating-buffers-per-gate) for the local buffer pool (optional<sup>3</sup>) thus achieving the same buffer limit as withou [...]
+Credit-based flow control will use [buffers-per-channel]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-buffers-per-channel) to specify how many buffers are exclusive (mandatory) and [floating-buffers-per-gate]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-floating-buffers-per-gate) for the local buffer pool (optional<sup>3</sup>) thus achieving the same buffer limit as without flow control. The defaul [...]
 <br>
 
 <sup>3</sup>If there are not enough buffers available, each buffer pool will get the same share of the globally available ones (± 1).
@@ -204,11 +204,11 @@ As opposed to the receiver's backpressure mechanisms without flow control, credi
 
 <img align="right" src="{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack5.png" width="300" height="200" alt="Physical-transport-credit-flow-checkpoints-Flink's Network Stack"/>
 
-Since, with flow control, a channel in a multiplex cannot block another of its logical channels, the overall resource utilisation should increase. In addition, by having full control over how much data is “on the wire”, we are also able to improve [checkpoint alignments](https://ci.apache.org/projects/flink/flink-docs-release-1.8/internals/stream_checkpointing.html#checkpointing): without flow control, it would take a while for the channel to fill the network stack’s internal buffers and [...]
+Since, with flow control, a channel in a multiplex cannot block another of its logical channels, the overall resource utilisation should increase. In addition, by having full control over how much data is “on the wire”, we are also able to improve [checkpoint alignments]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/internals/stream_checkpointing.html#checkpointing): without flow control, it would take a while for the channel to fill the network stack’s internal buffers and propagate th [...]
 
-However, the additional announce messages from the receiver may come at some additional cost, especially in setups using SSL-encrypted channels. Also, a single input channel cannot make use of all buffers in the buffer pool because exclusive buffers are not shared. It also cannot start right away with sending as much data as is available, so during ramp-up (if you are producing data faster than announcing credits in return) it may take longer to send data through. While this may aff [...]
+However, the additional announce messages from the receiver may come at some additional cost, especially in setups using SSL-encrypted channels. Also, a single input channel cannot make use of all buffers in the buffer pool because exclusive buffers are not shared. It also cannot start right away with sending as much data as is available, so during ramp-up (if you are producing data faster than announcing credits in return) it may take longer to send data through. While this may aff [...]
 
-There is one more thing you may notice when using credit-based flow control: since we buffer less data between the sender and receiver, you may experience backpressure earlier. This is, however, desired and you do not really get any advantage by buffering more data. If you want to buffer more but keep flow control, you could consider increasing the number of floating buffers via [floating-buffers-per-gate](https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#taskma [...]
+There is one more thing you may notice when using credit-based flow control: since we buffer less data between the sender and receiver, you may experience backpressure earlier. This is, however, desired and you do not really get any advantage by buffering more data. If you want to buffer more but keep flow control, you could consider increasing the number of floating buffers via [floating-buffers-per-gate]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#taskmanager-network [...]
 
 <br>
 
@@ -252,9 +252,9 @@ The following picture extends the slightly more high-level view from above with
 </center>
 <br>
 
-After creating a record and passing it along, for example via `Collector#collect()`, it is given to the [RecordWriter](https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.html) which serialises the record from a Java object into a sequence of bytes which eventually ends up in a network buffer that is handed along as described above. The RecordWriter first serialises the record to a flexible on-heap byte array us [...]
+After creating a record and passing it along, for example via `Collector#collect()`, it is given to the [RecordWriter]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.html) which serialises the record from a Java object into a sequence of bytes which eventually ends up in a network buffer that is handed along as described above. The RecordWriter first serialises the record to a flexible on-heap byte array using the [Span [...]
 
-On the receiver’s side, the lower network stack (netty) is writing received buffers into the appropriate input channels. The (stream) task’s thread eventually reads from these queues and tries to deserialise the accumulated bytes into Java objects with the help of the [RecordReader](https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/api/reader/RecordReader.html) and going through the [SpillingAdaptiveSpanningRecordDeserializer](https [...]
+On the receiver’s side, the lower network stack (netty) is writing received buffers into the appropriate input channels. The (stream) task’s thread eventually reads from these queues and tries to deserialise the accumulated bytes into Java objects with the help of the [RecordReader]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/api/reader/RecordReader.html) and going through the [SpillingAdaptiveSpanningRecordDeserializer]({{ site.DOCS_BASE_ [...]
 <br>
 
 ## Flushing Buffers to Netty
@@ -283,7 +283,7 @@ The RecordWriter works with a local serialisation buffer for the current record
 
 ### Flush after Buffer Timeout
 
-In order to support low-latency use cases, we cannot rely only on buffers being full in order to send data downstream. There may be cases where a certain communication channel does not have many records flowing through, which would unnecessarily increase the latency of the few records you actually have. Therefore, a periodic process will flush whatever data is available down the stack: the output flusher. The periodic interval can be configured via [StreamExecutionEnvironment#setBufferTimeout [...]
+In order to support low-latency use cases, we cannot rely only on buffers being full in order to send data downstream. There may be cases where a certain communication channel does not have many records flowing through, which would unnecessarily increase the latency of the few records you actually have. Therefore, a periodic process will flush whatever data is available down the stack: the output flusher. The periodic interval can be configured via [StreamExecutionEnvironment#setBufferTimeout [...]
 <br>
 
 <center>
@@ -312,7 +312,7 @@ However, you may notice an increased CPU use and TCP packet rate during low load
 
 ## Buffer Builder & Buffer Consumer
 
-If you want to dig deeper into how the producer-consumer mechanics are implemented in Flink, please take a closer look at the [BufferBuilder](https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.html) and [BufferConsumer](https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.html) classes which have been introduced in Flink 1.5. While reading i [...]
+If you want to dig deeper into how the producer-consumer mechanics are implemented in Flink, please take a closer look at the [BufferBuilder]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.html) and [BufferConsumer]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.html) classes which have been introduced in Flink 1.5. While reading is potentially only *per bu [...]
 
 <br>
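Two of the knobs discussed in this post are easy to miss in the truncated hunks. First, the per-gate buffer formula above: assuming the 1.8 defaults of 2 exclusive buffers per channel and 8 floating buffers per gate, an input gate with 4 channels needs 4 * 2 + 8 = 16 buffers. Second, the output flusher interval is set per job; a minimal sketch:

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
// Flush pending (non-full) network buffers every 10 ms; the default is 100 ms.
// 0 would flush after every record (lowest latency), -1 only on full buffers.
env.setBufferTimeout(10);
```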
 
diff --git a/_posts/2019-06-26-broadcast-state.md b/_posts/2019-06-26-broadcast-state.md
index 7c34421..5616d11 100644
--- a/_posts/2019-06-26-broadcast-state.md
+++ b/_posts/2019-06-26-broadcast-state.md
@@ -208,4 +208,4 @@ The `KeyedBroadcastProcessFunction` has full access to Flink state and time feat
 
 In this blog post, we walked you through an example application to explain what Apache Flink’s broadcast state is and how it can be used to evaluate dynamic patterns on event streams. We’ve also discussed the API and showed the source code of our example application. 
 
-We invite you to check the [documentation](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html) of this feature and provide feedback or suggestions for further improvements through our [mailing list](http://mail-archives.apache.org/mod_mbox/flink-community/).
+We invite you to check the [documentation]({{ site.DOCS_BASE_URL }}flink-docs-stable/dev/stream/state/broadcast_state.html) of this feature and provide feedback or suggestions for further improvements through our [mailing list](http://mail-archives.apache.org/mod_mbox/flink-community/).
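For context on the `KeyedBroadcastProcessFunction` mentioned above, a sketch of how the post's pattern stream would be wired up; the `Pattern` type, the `patterns`/`actions` streams, and the `PatternEvaluator` function follow the post's example and are assumed here:

```java
import org.apache.flink.api.common.state.MapStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.streaming.api.datastream.BroadcastStream;
import org.apache.flink.streaming.api.datastream.DataStream;

// Broadcast state descriptor: a single (Void-keyed) slot holding the latest pattern.
MapStateDescriptor<Void, Pattern> bcStateDescriptor =
    new MapStateDescriptor<>("patterns", Types.VOID, Types.POJO(Pattern.class));

// Broadcast the pattern stream to all parallel instances of the evaluator ...
BroadcastStream<Pattern> bcedPatterns = patterns.broadcast(bcStateDescriptor);

// ... and connect it with the keyed action stream.
DataStream<Tuple2<Long, Pattern>> matches = actions
    .keyBy(a -> a.userId)
    .connect(bcedPatterns)
    .process(new PatternEvaluator());  // extends KeyedBroadcastProcessFunction
```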
diff --git a/_posts/2019-07-02-release-1.8.1.md b/_posts/2019-07-02-release-1.8.1.md
index 2d70fef..acd02ce 100644
--- a/_posts/2019-07-02-release-1.8.1.md
+++ b/_posts/2019-07-02-release-1.8.1.md
@@ -35,7 +35,7 @@ Updated Maven dependencies:
 </dependency>
 ```
 
-You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html).
+You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
 
 List of resolved issues:
     
diff --git a/content/2019/05/03/pulsar-flink.html b/content/2019/05/03/pulsar-flink.html
index b42b22e..98a9eab 100644
--- a/content/2019/05/03/pulsar-flink.html
+++ b/content/2019/05/03/pulsar-flink.html
@@ -192,7 +192,7 @@
 
 <h2 id="pulsars-view-on-data-segmented-data-streams">Pulsar’s view on data: Segmented data streams</h2>
 
-<p>Apache Flink is a streaming-first computation framework that perceives <a href="https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html">batch processing as a special case of streaming</a>. Flink’s view on data streams distinguishes batch and stream processing between bounded and unbounded data streams, assuming that for batch workloads the data stream is finite, with a beginning and an end.</p>
+<p>Apache Flink is a streaming-first computation framework that perceives <a href="/news/2019/02/13/unified-batch-streaming-blink.html">batch processing as a special case of streaming</a>. Flink’s view on data streams distinguishes batch and stream processing between bounded and unbounded data streams, assuming that for batch workloads the data stream is finite, with a beginning and an end.</p>
 
 <p>Apache Pulsar has a similar perspective to that of Apache Flink with regards to the data layer. The framework also uses streams as a unified view on all data, while its layered architecture allows traditional pub-sub messaging for streaming workloads and continuous data processing or usage of <em>Segmented Streams</em> and bounded data stream for batch and static workloads.</p>
 
@@ -298,7 +298,7 @@
 
 <h2 id="conclusion">Conclusion</h2>
 
-<p>Both Pulsar and Flink share a similar view on how the data and the computation level of an application can be <em>“streaming-first”</em> with batch as a special case of streaming. With Pulsar’s Segmented Streams approach and Flink’s steps to unify batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies together to provide elastic data processing at massive scale. Subscribe to the <a href="https://flink.apache.org/community.ht [...]
+<p>Both Pulsar and Flink share a similar view on how the data and the computation level of an application can be <em>“streaming-first”</em> with batch as a special case of streaming. With Pulsar’s Segmented Streams approach and Flink’s steps to unify batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies together to provide elastic data processing at massive scale. Subscribe to the <a href="/community.html#mailing-lists">Apache [...]
 
       </article>
     </div>
diff --git a/content/2019/05/14/temporal-tables.html b/content/2019/05/14/temporal-tables.html
index ae56dc7..009966f 100644
--- a/content/2019/05/14/temporal-tables.html
+++ b/content/2019/05/14/temporal-tables.html
@@ -264,7 +264,7 @@
 
 <p>If you’d like to get some <strong>hands-on practice in joining streams with Flink SQL</strong> (and Flink SQL in general), check out this <a href="https://github.com/ververica/sql-training/wiki">free training for Flink SQL</a>. The training environment is based on Docker and set up in just a few minutes.</p>
 
-<p>Subscribe to the <a href="https://flink.apache.org/community.html#mailing-lists">Apache Flink mailing lists</a> to stay up-to-date with the latest developments in this space.</p>
+<p>Subscribe to the <a href="/community.html#mailing-lists">Apache Flink mailing lists</a> to stay up-to-date with the latest developments in this space.</p>
 
       </article>
     </div>
diff --git a/content/2019/06/05/flink-network-stack.html b/content/2019/06/05/flink-network-stack.html
index 2c67a21..2445ec5 100644
--- a/content/2019/06/05/flink-network-stack.html
+++ b/content/2019/06/05/flink-network-stack.html
@@ -268,7 +268,7 @@
 <p><br /></p>
 
 <p><sup>1</sup> Currently not used by Flink. <br />
-<sup>2</sup> This may become applicable to streaming jobs once the <a href="https://flink.apache.org/roadmap.html#batch-and-streaming-unification">Batch/Streaming unification</a> is done.</p>
+<sup>2</sup> This may become applicable to streaming jobs once the <a href="/roadmap.html#batch-and-streaming-unification">Batch/Streaming unification</a> is done.</p>
 
 <p><br />
 Additionally, for subtasks with more than one input, scheduling can start in two ways: after <em>all</em> or after <em>any</em> of the input producers have produced a record/their complete dataset. For tuning the output types and scheduling decisions in batch jobs, please have a look at <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/api/common/ExecutionConfig.html#setExecutionMode-org.apache.flink.api.common.ExecutionMode-">ExecutionConfig#setExecu [...]
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 1c51569..d4d033b 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -32,7 +32,7 @@
   &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.1&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
 &lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;List of resolved issues:&lt;/p&gt;
 
@@ -454,7 +454,7 @@ The website implements a streaming application that detects a pattern on the str
 &lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
 &lt;p&gt;&lt;sup&gt;1&lt;/sup&gt; Currently not used by Flink. &lt;br /&gt;
-&lt;sup&gt;2&lt;/sup&gt; This may become applicable to streaming jobs once the &lt;a href=&quot;https://flink.apache.org/roadmap.html#batch-and-streaming-unification&quot;&gt;Batch/Streaming unification&lt;/a&gt; is done.&lt;/p&gt;
+&lt;sup&gt;2&lt;/sup&gt; This may become applicable to streaming jobs once the &lt;a href=&quot;/roadmap.html#batch-and-streaming-unification&quot;&gt;Batch/Streaming unification&lt;/a&gt; is done.&lt;/p&gt;
 
 &lt;p&gt;&lt;br /&gt;
 Additionally, for subtasks with more than one input, scheduling can start in two ways: after &lt;em&gt;all&lt;/em&gt; or after &lt;em&gt;any&lt;/em&gt; of the input producers have produced a record/their complete dataset. For tuning the output types and scheduling decisions in batch jobs, please have a look at &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/api/common/ExecutionConfig.html#setExecutionMode-org.apache.flink.api.common.Executio [...]
@@ -939,7 +939,7 @@ With Flink 1.8.0, users can only define a state TTL in terms of processing time.
 
 &lt;p&gt;If you’d like to get some &lt;strong&gt;hands-on practice in joining streams with Flink SQL&lt;/strong&gt; (and Flink SQL in general), check out this &lt;a href=&quot;https://github.com/ververica/sql-training/wiki&quot;&gt;free training for Flink SQL&lt;/a&gt;. The training environment is based on Docker and set up in just a few minutes.&lt;/p&gt;
 
-&lt;p&gt;Subscribe to the &lt;a href=&quot;https://flink.apache.org/community.html#mailing-lists&quot;&gt;Apache Flink mailing lists&lt;/a&gt; to stay up-to-date with the latest developments in this space.&lt;/p&gt;
+&lt;p&gt;Subscribe to the &lt;a href=&quot;/community.html#mailing-lists&quot;&gt;Apache Flink mailing lists&lt;/a&gt; to stay up-to-date with the latest developments in this space.&lt;/p&gt;
 </description>
 <pubDate>Tue, 14 May 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/2019/05/14/temporal-tables.html</link>
@@ -979,7 +979,7 @@ With Flink 1.8.0, users can only define a state TTL in terms of processing time.
 
 &lt;h2 id=&quot;pulsars-view-on-data-segmented-data-streams&quot;&gt;Pulsar’s view on data: Segmented data streams&lt;/h2&gt;
 
-&lt;p&gt;Apache Flink is a streaming-first computation framework that perceives &lt;a href=&quot;https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;batch processing as a special case of streaming&lt;/a&gt;. Flink’s view on data streams distinguishes batch and stream processing between bounded and unbounded data streams, assuming that for batch workloads the data stream is finite, with a beginning and an end.&lt;/p&gt;
+&lt;p&gt;Apache Flink is a streaming-first computation framework that perceives &lt;a href=&quot;/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;batch processing as a special case of streaming&lt;/a&gt;. Flink’s view on data streams distinguishes batch and stream processing between bounded and unbounded data streams, assuming that for batch workloads the data stream is finite, with a beginning and an end.&lt;/p&gt;
 
 &lt;p&gt;Apache Pulsar has a similar perspective to that of Apache Flink with regards to the data layer. The framework also uses streams as a unified view on all data, while its layered architecture allows traditional pub-sub messaging for streaming workloads and continuous data processing or usage of &lt;em&gt;Segmented Streams&lt;/em&gt; and bounded data stream for batch and static workloads.&lt;/p&gt;
 
@@ -1085,7 +1085,7 @@ With Flink 1.8.0, users can only define a state TTL in terms of processing time.
 
 &lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
 
-&lt;p&gt;Both Pulsar and Flink share a similar view on how the data and the computation level of an application can be &lt;em&gt;“streaming-first”&lt;/em&gt; with batch as a special case of streaming. With Pulsar’s Segmented Streams approach and Flink’s steps to unify batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies together to provide elastic data processing at massive scale. Subscribe to the &lt;a href=&quot;https://fli [...]
+&lt;p&gt;Both Pulsar and Flink share a similar view on how the data and the computation level of an application can be &lt;em&gt;“streaming-first”&lt;/em&gt; with batch as a special case of streaming. With Pulsar’s Segmented Streams approach and Flink’s steps to unify batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies together to provide elastic data processing at massive scale. Subscribe to the &lt;a href=&quot;/community. [...]
 </description>
 <pubDate>Fri, 03 May 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/2019/05/03/pulsar-flink.html</link>
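
The hunk context repeated above quotes the state TTL post: with Flink 1.8.0, a state TTL can only be defined in terms of processing time. A minimal sketch of such a TTL configuration, assuming Flink 1.8 (the state name and expiration time are illustrative):

    import org.apache.flink.api.common.state.StateTtlConfig;
    import org.apache.flink.api.common.state.ValueStateDescriptor;
    import org.apache.flink.api.common.time.Time;

    public class StateTtlSketch {
        public static void main(String[] args) {
            // The TTL duration is interpreted in processing time, the only
            // time characteristic supported for state TTL in Flink 1.8.
            StateTtlConfig ttlConfig = StateTtlConfig
                .newBuilder(Time.hours(1))
                .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
                .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
                .build();

            ValueStateDescriptor<Long> lastSeen =
                new ValueStateDescriptor<>("lastSeen", Long.class);
            lastSeen.enableTimeToLive(ttlConfig);
        }
    }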
@@ -1163,15 +1163,15 @@ for more details.&lt;/p&gt;
 
 &lt;p&gt;Flink 1.8.0 is API-compatible with previous 1.x.y releases for APIs annotated
 with the &lt;code&gt;@Public&lt;/code&gt; annotation.  The release is available now and we encourage
-everyone to &lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;download the release&lt;/a&gt; and
+everyone to &lt;a href=&quot;/downloads.html&quot;&gt;download the release&lt;/a&gt; and
 check out the updated
 &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/&quot;&gt;documentation&lt;/a&gt;.
-Feedback through the Flink &lt;a href=&quot;http://flink.apache.org/community.html#mailing-lists&quot;&gt;mailing
+Feedback through the Flink &lt;a href=&quot;/community.html#mailing-lists&quot;&gt;mailing
 lists&lt;/a&gt; or
 &lt;a href=&quot;https://issues.apache.org/jira/projects/FLINK/summary&quot;&gt;JIRA&lt;/a&gt; is, as always,
 very much appreciated!&lt;/p&gt;
 
-&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;Downloads page&lt;/a&gt; on the Flink project site.&lt;/p&gt;
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt; on the Flink project site.&lt;/p&gt;
 
 &lt;div class=&quot;page-toc&quot;&gt;
 &lt;ul id=&quot;markdown-toc&quot;&gt;
@@ -1360,7 +1360,7 @@ Convenience binaries that include hadoop are no longer released.&lt;/p&gt;
 
     &lt;p&gt;If a deployment relies on &lt;code&gt;flink-shaded-hadoop2&lt;/code&gt; being included in
 &lt;code&gt;flink-dist&lt;/code&gt;, then you must manually download a pre-packaged Hadoop
-jar from the optional components section of the &lt;a href=&quot;https://flink.apache.org/downloads.html&quot;&gt;download
+jar from the optional components section of the &lt;a href=&quot;/downloads.html&quot;&gt;download
 page&lt;/a&gt; and copy it into the
 &lt;code&gt;/lib&lt;/code&gt; directory.  Alternatively, a Flink distribution that includes
 hadoop can be built by packaging &lt;code&gt;flink-dist&lt;/code&gt; and activating the
@@ -2235,7 +2235,7 @@ for a full reference of Flink’s metrics system.&lt;/p&gt;
   &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.6.4&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
 &lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;List of resolved issues:&lt;/p&gt;
 
@@ -2334,7 +2334,7 @@ We highly recommend all users to upgrade to Flink 1.7.2.&lt;/p&gt;
   &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.7.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
 &lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
 &lt;p&gt;List of resolved issues:&lt;/p&gt;
 
diff --git a/content/news/2019/02/15/release-1.7.2.html b/content/news/2019/02/15/release-1.7.2.html
index e079b6d..5e47199 100644
--- a/content/news/2019/02/15/release-1.7.2.html
+++ b/content/news/2019/02/15/release-1.7.2.html
@@ -188,7 +188,7 @@ We highly recommend all users to upgrade to Flink 1.7.2.</p>
   <span class="nt">&lt;version&gt;</span>1.7.2<span class="nt">&lt;/version&gt;</span>
 <span class="nt">&lt;/dependency&gt;</span></code></pre></div>
 
-<p>You can find the binaries on the updated <a href="http://flink.apache.org/downloads.html">Downloads page</a>.</p>
+<p>You can find the binaries on the updated <a href="/downloads.html">Downloads page</a>.</p>
 
 <p>List of resolved issues:</p>
 
diff --git a/content/news/2019/02/25/release-1.6.4.html b/content/news/2019/02/25/release-1.6.4.html
index 07aff42..8c54e82 100644
--- a/content/news/2019/02/25/release-1.6.4.html
+++ b/content/news/2019/02/25/release-1.6.4.html
@@ -186,7 +186,7 @@
   <span class="nt">&lt;version&gt;</span>1.6.4<span class="nt">&lt;/version&gt;</span>
 <span class="nt">&lt;/dependency&gt;</span></code></pre></div>
 
-<p>You can find the binaries on the updated <a href="http://flink.apache.org/downloads.html">Downloads page</a>.</p>
+<p>You can find the binaries on the updated <a href="/downloads.html">Downloads page</a>.</p>
 
 <p>List of resolved issues:</p>
 
diff --git a/content/news/2019/04/09/release-1.8.0.html b/content/news/2019/04/09/release-1.8.0.html
index 619808a..de67886 100644
--- a/content/news/2019/04/09/release-1.8.0.html
+++ b/content/news/2019/04/09/release-1.8.0.html
@@ -170,15 +170,15 @@ for more details.</p>
 
 <p>Flink 1.8.0 is API-compatible with previous 1.x.y releases for APIs annotated
 with the <code>@Public</code> annotation.  The release is available now and we encourage
-everyone to <a href="http://flink.apache.org/downloads.html">download the release</a> and
+everyone to <a href="/downloads.html">download the release</a> and
 check out the updated
 <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.8/">documentation</a>.
-Feedback through the Flink <a href="http://flink.apache.org/community.html#mailing-lists">mailing
+Feedback through the Flink <a href="/community.html#mailing-lists">mailing
 lists</a> or
 <a href="https://issues.apache.org/jira/projects/FLINK/summary">JIRA</a> is, as always,
 very much appreciated!</p>
 
-<p>You can find the binaries on the updated <a href="http://flink.apache.org/downloads.html">Downloads page</a> on the Flink project site.</p>
+<p>You can find the binaries on the updated <a href="/downloads.html">Downloads page</a> on the Flink project site.</p>
 
 <div class="page-toc">
 <ul id="markdown-toc">
@@ -367,7 +367,7 @@ Convenience binaries that include hadoop are no longer released.</p>
 
     <p>If a deployment relies on <code>flink-shaded-hadoop2</code> being included in
 <code>flink-dist</code>, then you must manually download a pre-packaged Hadoop
-jar from the optional components section of the <a href="https://flink.apache.org/downloads.html">download
+jar from the optional components section of the <a href="/downloads.html">download
 page</a> and copy it into the
 <code>/lib</code> directory.  Alternatively, a Flink distribution that includes
 hadoop can be built by packaging <code>flink-dist</code> and activating the
diff --git a/content/news/2019/07/02/release-1.8.1.html b/content/news/2019/07/02/release-1.8.1.html
index 328d4a7..03d8701 100644
--- a/content/news/2019/07/02/release-1.8.1.html
+++ b/content/news/2019/07/02/release-1.8.1.html
@@ -186,7 +186,7 @@
   <span class="nt">&lt;version&gt;</span>1.8.1<span class="nt">&lt;/version&gt;</span>
 <span class="nt">&lt;/dependency&gt;</span></code></pre></div>
 
-<p>You can find the binaries on the updated <a href="http://flink.apache.org/downloads.html">Downloads page</a>.</p>
+<p>You can find the binaries on the updated <a href="/downloads.html">Downloads page</a>.</p>
 
 <p>List of resolved issues:</p>