Posted to commits@flink.apache.org by rm...@apache.org on 2020/03/30 09:53:20 UTC

[flink-web] branch asf-site updated: regenerate to fix links on index.html

This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 15ac3c8  regenerate to fix links on index.html
15ac3c8 is described below

commit 15ac3c80fecf14b97f8baa1f61d576157fccacad
Author: Robert Metzger <rm...@apache.org>
AuthorDate: Mon Mar 30 11:53:08 2020 +0200

    regenerate to fix links on index.html
---
 content/blog/feed.xml | 19314 +++++++++++++++++++++++++-----------------------
 content/index.html    |     2 +-
 content/zh/index.html |     2 +-
 3 files changed, 9988 insertions(+), 9330 deletions(-)

diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index e204548..6d57b1b 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -8,2008 +8,2007 @@
 
 <item>
 <title>Flink as Unified Engine for Modern Data Warehousing: Production-Ready Hive Integration</title>
-<description>In this blog post, you will learn our motivation behind the Flink-Hive integration, and how Flink 1.10 can help modernize your data warehouse.
-
-{% toc %}
-
-
-## Introduction 
-
-What are some of the latest requirements for your data warehouse and data infrastructure in 2020?
-
-We’ve come up with some for you.
-
-Firstly, today’s business is shifting to a more real-time fashion, and thus demands the ability to process online streaming data with low latency for near-real-time or even real-time analytics. People are becoming less and less tolerant of delays between when data is generated and when it arrives at their hands, ready to use. Hours or even days of delay are not acceptable anymore. Users expect minutes, or even seconds, of end-to-end latency for data in their warehouse, to get quicker-than- [...]
-
-Secondly, the infrastructure should be able to handle both offline batch data for offline analytics and exploration, and online streaming data for more timely analytics. Both are indispensable, as each has very valid use cases. Apart from the real-time processing mentioned above, batch processing still has its place, as it’s good for ad hoc queries, exploration, and full-size calculations. Your modern infrastructure should not force users to choose between one or the other; it shou [...]
-
-Thirdly, data practitioners, including data engineers, data scientists, analysts, and operations engineers, are calling for a more unified infrastructure than ever before, for easier ramp-up and higher working efficiency. The big data landscape has been fragmented for years: companies may have one set of infrastructure for real-time processing, one set for batch, one set for OLAP, etc. That, oftentimes, is a legacy of the lambda architecture, which was popular in the era when stream processors [...]
-
-If any of these resonate with you, you have found the right post to read: by strengthening Flink’s integration with Hive to production grade, we have never been closer to that vision.
+<description>&lt;p&gt;In this blog post, you will learn our motivation behind the Flink-Hive integration, and how Flink 1.10 can help modernize your data warehouse.&lt;/p&gt;
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#introduction&quot; id=&quot;markdown-toc-introduction&quot;&gt;Introduction&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#flink-and-its-integration-with-hive-comes-into-the-scene&quot; id=&quot;markdown-toc-flink-and-its-integration-with-hive-comes-into-the-scene&quot;&gt;Flink and Its Integration With Hive Comes into the Scene&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#unified-metadata-management&quot; id=&quot;markdown-toc-unified-metadata-management&quot;&gt;Unified Metadata Management&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#stream-processing&quot; id=&quot;markdown-toc-stream-processing&quot;&gt;Stream Processing&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#compatible-with-more-hive-versions&quot; id=&quot;markdown-toc-compatible-with-more-hive-versions&quot;&gt;Compatible with More Hive Versions&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#reuse-hive-user-defined-functions-udfs&quot; id=&quot;markdown-toc-reuse-hive-user-defined-functions-udfs&quot;&gt;Reuse Hive User Defined Functions (UDFs)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#enhanced-read-and-write-on-hive-data&quot; id=&quot;markdown-toc-enhanced-read-and-write-on-hive-data&quot;&gt;Enhanced Read and Write on Hive Data&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#formats&quot; id=&quot;markdown-toc-formats&quot;&gt;Formats&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#more-data-types&quot; id=&quot;markdown-toc-more-data-types&quot;&gt;More Data Types&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#roadmap&quot; id=&quot;markdown-toc-roadmap&quot;&gt;Roadmap&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#summary&quot; id=&quot;markdown-toc-summary&quot;&gt;Summary&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
+&lt;/div&gt;
 
-## Flink and Its Integration With Hive Comes into the Scene
+&lt;h2 id=&quot;introduction&quot;&gt;Introduction&lt;/h2&gt;
 
-Apache Flink has proven itself as a scalable system that handles extremely high volumes of streaming data at very low latency at many large tech companies.
+&lt;p&gt;What are some of the latest requirements for your data warehouse and data infrastructure in 2020?&lt;/p&gt;
 
-Despite its huge success in the real-time processing domain, at its core Flink has faithfully followed its founding philosophy of being [a unified data processing engine for both batch and streaming](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html), taking a streaming-first approach in its architecture to batch processing. By making batch a special case of streaming, Flink leverages its cutting-edge streaming capabilities and applies t [...]
+&lt;p&gt;We’ve come up with some for you.&lt;/p&gt;
 
-On the other hand, Apache Hive has established itself as a focal point of the data warehousing ecosystem. It serves not only as a SQL engine for big data analytics and ETL, but also as a data management platform, where data is discovered and defined. As businesses evolve, they put new requirements on the data warehouse.
+&lt;p&gt;Firstly, today’s business is shifting to a more real-time fashion, and thus demands the ability to process online streaming data with low latency for near-real-time or even real-time analytics. People are becoming less and less tolerant of delays between when data is generated and when it arrives at their hands, ready to use. Hours or even days of delay are not acceptable anymore. Users expect minutes, or even seconds, of end-to-end latency for data in their warehouse, to get quic [...]
 
-Thus we started integrating Flink and Hive as a beta version in Flink 1.9. Over the past few months, we have been listening to users’ requests and feedback, extensively enhancing our product, and running rigorous benchmarks (which will be published soon separately). I’m glad to announce that the integration between Flink and Hive is at production grade in [Flink 1.10](https://flink.apache.org/news/2020/02/11/release-1.10.0.html) and we can’t wait to walk you through the details.
+&lt;p&gt;Secondly, the infrastructure should be able to handle both offline batch data for offline analytics and exploration, and online streaming data for more timely analytics. Both are indispensable, as each has very valid use cases. Apart from the real-time processing mentioned above, batch processing still has its place, as it’s good for ad hoc queries, exploration, and full-size calculations. Your modern infrastructure should not force users to choose between one or the other [...]
 
+&lt;p&gt;Thirdly, data practitioners, including data engineers, data scientists, analysts, and operations engineers, are calling for a more unified infrastructure than ever before, for easier ramp-up and higher working efficiency. The big data landscape has been fragmented for years: companies may have one set of infrastructure for real-time processing, one set for batch, one set for OLAP, etc. That, oftentimes, is a legacy of the lambda architecture, which was popular in the era when stream pr [...]
 
-### Unified Metadata Management 
+&lt;p&gt;If any of these resonate with you, you have found the right post to read: by strengthening Flink’s integration with Hive to production grade, we have never been closer to that vision.&lt;/p&gt;
 
-Hive Metastore has evolved over the years into the de facto metadata hub of the Hadoop, and even the cloud, ecosystem. Many companies have a single Hive Metastore service instance in production to manage all of their schemas, whether Hive or non-Hive metadata, as the single source of truth.
+&lt;h2 id=&quot;flink-and-its-integration-with-hive-comes-into-the-scene&quot;&gt;Flink and Its Integration With Hive Comes into the Scene&lt;/h2&gt;
 
-In 1.9 we introduced Flink’s [HiveCatalog](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_catalog.html), connecting Flink to users’ rich metadata pool. The meaning of `HiveCatalog` is two-fold here. First, it allows Apache Flink users to utilize Hive Metastore to store and manage Flink’s metadata, including tables, UDFs, and statistics of data. Second, it enables Flink to access Hive’s existing metadata, so that Flink itself can read and write Hive tables.
+&lt;p&gt;Apache Flink has proven itself as a scalable system that handles extremely high volumes of streaming data at very low latency at many large tech companies.&lt;/p&gt;
 
-In Flink 1.10, users can store Flink&#39;s own tables, views, UDFs, and statistics in Hive Metastore on all of the compatible Hive versions mentioned above. [Here’s an end-to-end example](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_catalog.html#example) of how to store a Flink Kafka source table in Hive Metastore and later query the table in Flink SQL.
+&lt;p&gt;Despite its huge success in the real-time processing domain, at its core Flink has faithfully followed its founding philosophy of being &lt;a href=&quot;https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;a unified data processing engine for both batch and streaming&lt;/a&gt;, taking a streaming-first approach in its architecture to batch processing. By making batch a special case of streaming, Flink leverages its cutting [...]
 
+&lt;p&gt;On the other hand, Apache Hive has established itself as a focal point of the data warehousing ecosystem. It serves not only as a SQL engine for big data analytics and ETL, but also as a data management platform, where data is discovered and defined. As businesses evolve, they put new requirements on the data warehouse.&lt;/p&gt;
 
-### Stream Processing
+&lt;p&gt;Thus we started integrating Flink and Hive as a beta version in Flink 1.9. Over the past few months, we have been listening to users’ requests and feedback, extensively enhancing our product, and running rigorous benchmarks (which will be published soon separately). I’m glad to announce that the integration between Flink and Hive is at production grade in &lt;a href=&quot;https://flink.apache.org/news/2020/02/11/release-1.10.0.html&quot;&gt;Flink 1.10&lt;/a&gt; and we can’t wait [...]
 
-The Hive integration feature in Flink 1.10 empowers users to re-imagine what they can accomplish with their Hive data and unlock stream processing use cases:
+&lt;h3 id=&quot;unified-metadata-management&quot;&gt;Unified Metadata Management&lt;/h3&gt;
 
-- join real-time streaming data in Flink with offline Hive data for more complex data processing
-- backfill Hive data with Flink directly in a unified fashion
-- leverage Flink to move real-time data into Hive more quickly, greatly shortening the end-to-end latency between when data is generated and when it arrives at your data warehouse for analytics, from hours — or even days — to minutes
+&lt;p&gt;Hive Metastore has evolved over the years into the de facto metadata hub of the Hadoop, and even the cloud, ecosystem. Many companies have a single Hive Metastore service instance in production to manage all of their schemas, whether Hive or non-Hive metadata, as the single source of truth.&lt;/p&gt;
 
+&lt;p&gt;In 1.9 we introduced Flink’s &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_catalog.html&quot;&gt;HiveCatalog&lt;/a&gt;, connecting Flink to users’ rich metadata pool. The meaning of &lt;code&gt;HiveCatalog&lt;/code&gt; is two-fold here. First, it allows Apache Flink users to utilize Hive Metastore to store and manage Flink’s metadata, including tables, UDFs, and statistics of data. Second, it enables Flink to access Hive’s exis [...]
 
-### Compatible with More Hive Versions
+&lt;p&gt;In Flink 1.10, users can store Flink’s own tables, views, UDFs, and statistics in Hive Metastore on all of the compatible Hive versions mentioned above. &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_catalog.html#example&quot;&gt;Here’s an end-to-end example&lt;/a&gt; of how to store a Flink Kafka source table in Hive Metastore and later query the table in Flink SQL.&lt;/p&gt;
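As a hedged illustration of that flow (not the linked end-to-end example), here is a minimal sketch of registering a `HiveCatalog` in Flink 1.10 and querying a table through it. The catalog name, default database, configuration directory, Hive version, and table name are all hypothetical, and it assumes the four-argument `HiveCatalog` constructor.

```java
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;
import org.apache.flink.table.catalog.hive.HiveCatalog;

public class HiveCatalogSketch {
  public static void main(String[] args) {
    // Blink planner in batch mode; streaming mode works analogously.
    EnvironmentSettings settings =
        EnvironmentSettings.newInstance().useBlinkPlanner().inBatchMode().build();
    TableEnvironment tableEnv = TableEnvironment.create(settings);

    // Hypothetical catalog name, default database, Hive conf dir, and Hive version.
    HiveCatalog hive = new HiveCatalog("myhive", "default", "/opt/hive-conf", "2.3.4");
    tableEnv.registerCatalog("myhive", hive);
    tableEnv.useCatalog("myhive");

    // Tables kept in the Hive Metastore are now visible to Flink SQL.
    Table result = tableEnv.sqlQuery("SELECT * FROM mydb.some_table LIMIT 10");
  }
}
```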
 
-In Flink 1.10, we brought full coverage to most Hive versions including 1.0, 1.1, 1.2, 2.0, 2.1, 2.2, 2.3, and 3.1. Take a look [here](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/#supported-hive-versions).
+&lt;h3 id=&quot;stream-processing&quot;&gt;Stream Processing&lt;/h3&gt;
 
+&lt;p&gt;The Hive integration feature in Flink 1.10 empowers users to re-imagine what they can accomplish with their Hive data and unlock stream processing use cases:&lt;/p&gt;
 
-### Reuse Hive User Defined Functions (UDFs)
+&lt;ul&gt;
+  &lt;li&gt;join real-time streaming data in Flink with offline Hive data for more complex data processing&lt;/li&gt;
+  &lt;li&gt;backfill Hive data with Flink directly in a unified fashion&lt;/li&gt;
+  &lt;li&gt;leverage Flink to move real-time data into Hive more quickly, greatly shortening the end-to-end latency between when data is generated and when it arrives at your data warehouse for analytics, from hours — or even days — to minutes&lt;/li&gt;
+&lt;/ul&gt;
 
-Users can [reuse all kinds of Hive UDFs in Flink](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_functions.html#hive-user-defined-functions) since Flink 1.9.
+&lt;h3 id=&quot;compatible-with-more-hive-versions&quot;&gt;Compatible with More Hive Versions&lt;/h3&gt;
 
-This is a great win for Flink users with a history in the Hive ecosystem, as they may have developed custom business logic in their Hive UDFs. Being able to run these functions without any rewrite saves users a lot of time and brings them a much smoother experience when they migrate to Flink.
+&lt;p&gt;In Flink 1.10, we brought full coverage to most Hive versions including 1.0, 1.1, 1.2, 2.0, 2.1, 2.2, 2.3, and 3.1. Take a look &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/#supported-hive-versions&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
-To take it a step further, Flink 1.10 introduces [compatibility of Hive built-in functions via HiveModule](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_functions.html#use-hive-built-in-functions-via-hivemodule). Over the years, the Hive community has developed a few hundred built-in functions that are super handy for users. For those built-in functions that don&#39;t exist in Flink yet, users are now able to leverage the existing Hive built-in func [...]
+&lt;h3 id=&quot;reuse-hive-user-defined-functions-udfs&quot;&gt;Reuse Hive User Defined Functions (UDFs)&lt;/h3&gt;
 
+&lt;p&gt;Users can &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_functions.html#hive-user-defined-functions&quot;&gt;reuse all kinds of Hive UDFs in Flink&lt;/a&gt; since Flink 1.9.&lt;/p&gt;
 
-### Enhanced Read and Write on Hive Data
+&lt;p&gt;This is a great win for Flink users with a history in the Hive ecosystem, as they may have developed custom business logic in their Hive UDFs. Being able to run these functions without any rewrite saves users a lot of time and brings them a much smoother experience when they migrate to Flink.&lt;/p&gt;
 
-Flink 1.10 extends its read and write capabilities on Hive data to all the common use cases with better performance. 
+&lt;p&gt;To take it a step further, Flink 1.10 introduces &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/hive_functions.html#use-hive-built-in-functions-via-hivemodule&quot;&gt;compatibility of Hive built-in functions via HiveModule&lt;/a&gt;. Over the years, the Hive community has developed a few hundred built-in functions that are super handy for users. For those built-in functions that don’t exist in Flink yet, users are now able to le [...]
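A minimal sketch of what loading those built-in functions can look like, continuing from a `TableEnvironment` like the one in the previous sketch; the Hive version, table, and column names are hypothetical, and it assumes the `HiveModule` constructor that takes a Hive version string.

```java
import org.apache.flink.table.module.hive.HiveModule;

// Load Hive's built-in functions next to Flink's core module
// (hypothetical Hive version; pick the one matching your Metastore).
tableEnv.loadModule("hive", new HiveModule("2.3.4"));

// A Hive built-in that Flink lacks, for example get_json_object, can now be
// used directly in Flink SQL (hypothetical table and column names).
tableEnv.sqlQuery("SELECT get_json_object(json_col, '$.name') FROM mydb.some_table");
```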
 
-On the reading side, Flink can now read regular Hive tables, partitioned tables, and views. Many optimization techniques have been developed around reading, including partition pruning and projection pushdown to transfer less data from file storage, limit pushdown for faster experimentation and exploration, and a vectorized reader for ORC files.
+&lt;h3 id=&quot;enhanced-read-and-write-on-hive-data&quot;&gt;Enhanced Read and Write on Hive Data&lt;/h3&gt;
 
-On the writing side, Flink 1.10 introduces “INSERT INTO” and “INSERT OVERWRITE” to its syntax, and can write not only to Hive’s regular tables, but also to partitioned tables with either static or dynamic partitions.
+&lt;p&gt;Flink 1.10 extends its read and write capabilities on Hive data to all the common use cases with better performance.&lt;/p&gt;
 
-### Formats
+&lt;p&gt;On the reading side, Flink can now read regular Hive tables, partitioned tables, and views. Many optimization techniques have been developed around reading, including partition pruning and projection pushdown to transfer less data from file storage, limit pushdown for faster experimentation and exploration, and a vectorized reader for ORC files.&lt;/p&gt;
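For example, a hedged sketch of a query where those optimizations can kick in, reusing the `tableEnv` from the earlier sketch and assuming a hypothetical Hive table `mydb.events` partitioned by `dt`: the `dt` predicate enables partition pruning, selecting a single column enables projection pushdown, and `LIMIT` enables limit pushdown for quick exploration.

```java
// Hypothetical partitioned table: the planner can prune partitions via the dt
// predicate, push the single-column projection down to the reader, and push
// the LIMIT down for fast, exploratory reads.
Table sample = tableEnv.sqlQuery(
    "SELECT user_id FROM mydb.events WHERE dt = '2020-03-26' LIMIT 100");
```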
 
-Your engine should be able to handle all common file formats, giving you the freedom to choose one over another to fit your business needs. Flink is no exception. We have tested the following table storage formats: text, csv, SequenceFile, ORC, and Parquet.
+&lt;p&gt;On the writing side, Flink 1.10 introduces “INSERT INTO” and “INSERT OVERWRITE” to its syntax, and can write not only to Hive’s regular tables, but also to partitioned tables with either static or dynamic partitions.&lt;/p&gt;
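A hedged sketch of what that looks like in code, again reusing the `tableEnv` from the earlier sketch and using hypothetical database, table, and partition names; `sqlUpdate` and the per-environment `execute` call are the 1.10-era APIs for submitting such statements.

```java
// Append into a Hive table registered in the catalog (hypothetical names).
tableEnv.sqlUpdate("INSERT INTO mydb.orders SELECT * FROM mydb.staging_orders");

// Overwrite one static partition of a partitioned Hive table.
tableEnv.sqlUpdate(
    "INSERT OVERWRITE mydb.orders_partitioned PARTITION (dt='2020-03-27') "
        + "SELECT order_id, amount FROM mydb.staging_orders");

tableEnv.execute("write-to-hive");
```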
 
-### More Data Types
+&lt;h3 id=&quot;formats&quot;&gt;Formats&lt;/h3&gt;
 
-In Flink 1.10, we added support for a few more frequently-used Hive data types that were not covered by Flink 1.9. Flink users should now have a full, smooth experience querying and manipulating Hive data from Flink.
+&lt;p&gt;Your engine should be able to handle all common file formats, giving you the freedom to choose one over another to fit your business needs. Flink is no exception. We have tested the following table storage formats: text, csv, SequenceFile, ORC, and Parquet.&lt;/p&gt;
 
+&lt;h3 id=&quot;more-data-types&quot;&gt;More Data Types&lt;/h3&gt;
 
-### Roadmap
+&lt;p&gt;In Flink 1.10, we added support for a few more frequently-used Hive data types that were not covered by Flink 1.9. Flink users should now have a full, smooth experience querying and manipulating Hive data from Flink.&lt;/p&gt;
 
-Integration between any two systems is a never-ending story. 
+&lt;h3 id=&quot;roadmap&quot;&gt;Roadmap&lt;/h3&gt;
 
-We are constantly improving Flink itself, and the Flink-Hive integration also keeps improving as we collect user feedback and work with folks in this vibrant community.
+&lt;p&gt;Integration between any two systems is a never-ending story.&lt;/p&gt;
 
-After careful consideration and prioritization of the feedback we received, we have scheduled many of the requests below for the next Flink release, 1.11.
+&lt;p&gt;We are constantly improving Flink itself, and the Flink-Hive integration also keeps improving as we collect user feedback and work with folks in this vibrant community.&lt;/p&gt;
 
-- Hive streaming sink so that Flink can stream data into Hive tables, bringing a real streaming experience to Hive
-- Native Parquet reader for better performance
-- Additional interoperability - support creating Hive tables, views, functions in Flink
-- Better out-of-the-box experience with built-in dependencies, including documentation
-- JDBC driver so that users can reuse their existing tooling to run SQL jobs on Flink
-- Hive syntax and semantic compatible mode
+&lt;p&gt;After careful consideration and prioritization of the feedback we received, we have scheduled many of the requests below for the next Flink release, 1.11.&lt;/p&gt;
 
-If you have more feature requests or discover bugs, please reach out to the community through the mailing lists and JIRA.
+&lt;ul&gt;
+  &lt;li&gt;Hive streaming sink so that Flink can stream data into Hive tables, bringing a real streaming experience to Hive&lt;/li&gt;
+  &lt;li&gt;Native Parquet reader for better performance&lt;/li&gt;
+  &lt;li&gt;Additional interoperability - support creating Hive tables, views, functions in Flink&lt;/li&gt;
+  &lt;li&gt;Better out-of-the-box experience with built-in dependencies, including documentation&lt;/li&gt;
+  &lt;li&gt;JDBC driver so that users can reuse their existing tooling to run SQL jobs on Flink&lt;/li&gt;
+  &lt;li&gt;Hive syntax and semantic compatible mode&lt;/li&gt;
+&lt;/ul&gt;
 
+&lt;p&gt;If you have more feature requests or discover bugs, please reach out to the community through the mailing lists and JIRA.&lt;/p&gt;
 
-## Summary
+&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;
 
-Data warehousing is shifting to a more real-time fashion, and Apache Flink can make a difference for your organization in this space.
+&lt;p&gt;Data warehousing is shifting to a more real-time fashion, and Apache Flink can make a difference for your organization in this space.&lt;/p&gt;
 
-Flink 1.10 brings production-ready Hive integration and empowers users to achieve more in both metadata management and unified/batch data processing.
+&lt;p&gt;Flink 1.10 brings production-ready Hive integration and empowers users to achieve more in both metadata management and unified/batch data processing.&lt;/p&gt;
 
-We encourage all our users to get their hands on Flink 1.10. You are very welcome to join the community in development, discussions, and all other kinds of collaboration on this topic.
+&lt;p&gt;We encourage all our users to get their hands on Flink 1.10. You are very welcome to join the community in development, discussions, and all other kinds of collaboration on this topic.&lt;/p&gt;
 
 </description>
-<pubDate>Fri, 27 Mar 2020 02:30:00 +0000</pubDate>
+<pubDate>Fri, 27 Mar 2020 03:30:00 +0100</pubDate>
 <link>https://flink.apache.org/features/2020/03/27/flink-for-data-warehouse.html</link>
 <guid isPermaLink="true">/features/2020/03/27/flink-for-data-warehouse.html</guid>
 </item>
 
 <item>
 <title>Advanced Flink Application Patterns Vol.2: Dynamic Updates of Application Logic</title>
-<description>In the [first article](https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html) of the series, we gave a high-level description of the objectives and required functionality of a Fraud Detection engine. We also described how to make data partitioning in Apache Flink customizable based on modifiable rules instead of using a hardcoded `KeysExtractor` implementation.
+<description>&lt;p&gt;In the &lt;a href=&quot;https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html&quot;&gt;first article&lt;/a&gt; of the series, we gave a high-level description of the objectives and required functionality of a Fraud Detection engine. We also described how to make data partitioning in Apache Flink customizable based on modifiable rules instead of using a hardcoded &lt;code&gt;KeysExtractor&lt;/code&gt; implementation.&lt;/p&gt;
 
-We intentionally omitted details of how the applied rules are initialized and what possibilities exist for updating them at runtime. In this post, we will address exactly these details. You will learn how the approach to data partitioning described in [Part 1](https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html) can be applied in combination with a dynamic configuration. These two patterns, when used together, can eliminate the need to recompile the code and redeploy your  [...]
+&lt;p&gt;We intentionally omitted details of how the applied rules are initialized and what possibilities exist for updating them at runtime. In this post, we will address exactly these details. You will learn how the approach to data partitioning described in &lt;a href=&quot;https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html&quot;&gt;Part 1&lt;/a&gt; can be applied in combination with a dynamic configuration. These two patterns, when used together, can eliminate the nee [...]
 
-## Rules Broadcasting
+&lt;h2 id=&quot;rules-broadcasting&quot;&gt;Rules Broadcasting&lt;/h2&gt;
 
-Let&#39;s first have a look at the [previously-defined](https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html#dynamic-data-partitioning) data-processing pipeline:
+&lt;p&gt;Let’s first have a look at the &lt;a href=&quot;https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html#dynamic-data-partitioning&quot;&gt;previously-defined&lt;/a&gt; data-processing pipeline:&lt;/p&gt;
 
-```java
-DataStream&lt;Alert&gt; alerts =
-    transactions
-        .process(new DynamicKeyFunction())
-        .keyBy((keyed) -&gt; keyed.getKey())
-        .process(new DynamicAlertFunction());
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Alert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;alerts&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;transactions&lt;/span&gt;
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;DynamicKeyFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;keyBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt; [...]
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;DynamicAlertFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-`DynamicKeyFunction` provides dynamic data partitioning while `DynamicAlertFunction` is responsible for executing the main logic of processing transactions and sending alert messages according to defined rules.
+&lt;p&gt;&lt;code&gt;DynamicKeyFunction&lt;/code&gt; provides dynamic data partitioning while &lt;code&gt;DynamicAlertFunction&lt;/code&gt; is responsible for executing the main logic of processing transactions and sending alert messages according to defined rules.&lt;/p&gt;
 
-Vol.1 of this series simplified the use case and assumed that the applied set of rules is pre-initialized and accessible via the `List&lt;Rule&gt;` within `DynamicKeyFunction`.
+&lt;p&gt;Vol.1 of this series simplified the use case and assumed that the applied set of rules is pre-initialized and accessible via the &lt;code&gt;List&amp;lt;Rule&amp;gt;&lt;/code&gt; within &lt;code&gt;DynamicKeyFunction&lt;/code&gt;.&lt;/p&gt;
 
-```java
-public class DynamicKeyFunction
-    extends ProcessFunction&lt;Transaction, Keyed&lt;Transaction, String, Integer&gt;&gt; {
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;DynamicKeyFunction&lt;/span&gt;
+    &lt;span class=&quot;kd&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span cla [...]
 
-  /* Simplified */
-  List&lt;Rule&gt; rules = /* Rules that are initialized somehow.*/;
-  ...
-}
-```
+  &lt;span class=&quot;cm&quot;&gt;/* Simplified */&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;List&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rules&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;cm&quot;&gt;/* Rules that are initialized somehow.*/&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;...&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-Adding rules to this list is obviously possible directly inside the code of the Flink Job at the stage of its initialization (create a `List` object and use its `add` method). A major drawback of doing so is that it will require recompilation of the job with each rule modification. In a real Fraud Detection system, rules are expected to change on a frequent basis, making this approach unacceptable from the point of view of business and operational requirements. A different approach is needed.
+&lt;p&gt;Adding rules to this list is obviously possible directly inside the code of the Flink Job at the stage of its initialization (create a &lt;code&gt;List&lt;/code&gt; object and use its &lt;code&gt;add&lt;/code&gt; method). A major drawback of doing so is that it will require recompilation of the job with each rule modification. In a real Fraud Detection system, rules are expected to change on a frequent basis, making this approach unacceptable from the point of view of business and [...]
 
-Next, let&#39;s take a look at a sample rule definition that we introduced in the previous post of the series:
+&lt;p&gt;Next, let’s take a look at a sample rule definition that we introduced in the previous post of the series:&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/patterns-blog-2/rule-dsl.png&quot; width=&quot;800px&quot; alt=&quot;Figure 1: Rule definition&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/patterns-blog-2/rule-dsl.png&quot; width=&quot;800px&quot; alt=&quot;Figure 1: Rule definition&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Figure 1: Rule definition&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-The previous post covered use of `groupingKeyNames` by `DynamicKeyFunction` to extract message keys. Parameters from the second part of this rule are used by `DynamicAlertFunction`: they define the actual logic of the performed operations and their parameters (such as the alert-triggering limit). This means that the same rule must be present in both `DynamicKeyFunction` and `DynamicAlertFunction`. To achieve this result, we will use the [broadcast data distribution mechanism](https://ci. [...]
+&lt;p&gt;The previous post covered use of &lt;code&gt;groupingKeyNames&lt;/code&gt; by &lt;code&gt;DynamicKeyFunction&lt;/code&gt; to extract message keys. Parameters from the second part of this rule are used by &lt;code&gt;DynamicAlertFunction&lt;/code&gt;: they define the actual logic of the performed operations and their parameters (such as the alert-triggering limit). This means that the same rule must be present in both &lt;code&gt;DynamicKeyFunction&lt;/code&gt; and &lt;code&gt;Dy [...]
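To make the two halves of such a rule concrete, here is a hypothetical, simplified sketch of a `Rule` POJO with the fields discussed above; the exact class in the reference implementation may differ.

```java
import java.math.BigDecimal;
import java.util.List;

// Hypothetical simplified Rule: the grouping keys drive DynamicKeyFunction,
// the aggregation parameters drive DynamicAlertFunction.
public class Rule {
  private Integer ruleId;
  private List<String> groupingKeyNames; // used to build the dynamic message key
  private String aggregateFieldName;     // field the aggregation runs on
  private BigDecimal limit;              // alert-triggering threshold
  private Integer windowMinutes;         // size of the evaluated time window

  public Integer getRuleId() { return ruleId; }
  public List<String> getGroupingKeyNames() { return groupingKeyNames; }
  // remaining getters and setters omitted
}
```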
 
-Figure 2 presents the final job graph of the system that we are building:
+&lt;p&gt;Figure 2 presents the final job graph of the system that we are building:&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/patterns-blog-2/job-graph.png&quot; width=&quot;800px&quot; alt=&quot;Figure 2: Job Graph of the Fraud Detection Flink Job&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/patterns-blog-2/job-graph.png&quot; width=&quot;800px&quot; alt=&quot;Figure 2: Job Graph of the Fraud Detection Flink Job&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Figure 2: Job Graph of the Fraud Detection Flink Job&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
-
-The main blocks of the Transactions processing pipeline are:&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-* **Transaction Source** that consumes transaction messages from Kafka partitions in parallel. &lt;br&gt;
+&lt;p&gt;The main blocks of the Transactions processing pipeline are:&lt;br /&gt;&lt;/p&gt;
 
-* **Dynamic Key Function** that performs data enrichment with a dynamic key. The subsequent `keyBy` hashes this dynamic key and partitions the data accordingly among all parallel instances of the following operator.
-
-* **Dynamic Alert Function** that accumulates a data window and creates Alerts based on it.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Transaction Source&lt;/strong&gt; that consumes transaction messages from Kafka partitions in parallel. &lt;br /&gt;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Dynamic Key Function&lt;/strong&gt; that performs data enrichment with a dynamic key. The subsequent &lt;code&gt;keyBy&lt;/code&gt; hashes this dynamic key and partitions the data accordingly among all parallel instances of the following operator.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Dynamic Alert Function&lt;/strong&gt; that accumulates a data window and creates Alerts based on it.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-## Data Exchange inside Apache Flink
+&lt;h2 id=&quot;data-exchange-inside-apache-flink&quot;&gt;Data Exchange inside Apache Flink&lt;/h2&gt;
 
-The job graph above also indicates various data exchange patterns between the operators. In order to understand how the broadcast pattern works, let&#39;s take a short detour and discuss what methods of message propagation exist in Apache Flink&#39;s distributed runtime.
+&lt;p&gt;The job graph above also indicates various data exchange patterns between the operators. In order to understand how the broadcast pattern works, let’s take a short detour and discuss what methods of message propagation exist in Apache Flink’s distributed runtime.&lt;/p&gt;
 
-* The __FORWARD__ connection after the Transaction Source means that all data consumed by one of the parallel instances of the Transaction Source operator is transferred to exactly one instance of the subsequent `DynamicKeyFunction` operator. It also indicates the same level of parallelism of the two connected operators (12 in the above case). This communication pattern is illustrated in Figure 3. Orange circles represent transactions, and dotted rectangles depict parallel instances of t [...]
+&lt;ul&gt;
+  &lt;li&gt;The &lt;strong&gt;FORWARD&lt;/strong&gt; connection after the Transaction Source means that all data consumed by one of the parallel instances of the Transaction Source operator is transferred to exactly one instance of the subsequent &lt;code&gt;DynamicKeyFunction&lt;/code&gt; operator. It also indicates the same level of parallelism of the two connected operators (12 in the above case). This communication pattern is illustrated in Figure 3. Orange circles represent transact [...]
+&lt;/ul&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/patterns-blog-2/forward.png&quot; width=&quot;800px&quot; alt=&quot;Figure 3: FORWARD message passing across operator instances&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/patterns-blog-2/forward.png&quot; width=&quot;800px&quot; alt=&quot;Figure 3: FORWARD message passing across operator instances&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Figure 3: FORWARD message passing across operator instances&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-* The __HASH__ connection between `DynamicKeyFunction` and `DynamicAlertFunction` means that for each message a hash code is calculated and messages are evenly distributed among available parallel instances of the next operator. Such a connection needs to be explicitly &quot;requested&quot; from Flink by using `keyBy`.
+&lt;ul&gt;
+  &lt;li&gt;The &lt;strong&gt;HASH&lt;/strong&gt; connection between &lt;code&gt;DynamicKeyFunction&lt;/code&gt; and &lt;code&gt;DynamicAlertFunction&lt;/code&gt; means that for each message a hash code is calculated and messages are evenly distributed among available parallel instances of the next operator. Such a connection needs to be explicitly “requested” from Flink by using &lt;code&gt;keyBy&lt;/code&gt;.&lt;/li&gt;
+&lt;/ul&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/patterns-blog-2/hash.png&quot; width=&quot;800px&quot; alt=&quot;Figure 4: HASHED message passing across operator instances (via `keyBy`)&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/patterns-blog-2/hash.png&quot; width=&quot;800px&quot; alt=&quot;Figure 4: HASHED message passing across operator instances (via `keyBy`)&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Figure 4: HASHED message passing across operator instances (via `keyBy`)&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-* A __REBALANCE__ distribution is either caused by an explicit call to `rebalance()` or by a change of parallelism (12 -&gt; 1 in the case of the job graph from Figure 2). Calling `rebalance()` causes data to be repartitioned in a round-robin fashion and can help to mitigate data skew in certain scenarios.
+&lt;ul&gt;
+  &lt;li&gt;A &lt;strong&gt;REBALANCE&lt;/strong&gt; distribution is either caused by an explicit call to &lt;code&gt;rebalance()&lt;/code&gt; or by a change of parallelism (12 -&amp;gt; 1 in the case of the job graph from Figure 2). Calling &lt;code&gt;rebalance()&lt;/code&gt; causes data to be repartitioned in a round-robin fashion and can help to mitigate data skew in certain scenarios.&lt;/li&gt;
+&lt;/ul&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/patterns-blog-2/rebalance.png&quot; width=&quot;800px&quot; alt=&quot;Figure 5: REBALANCE message passing across operator instances&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/patterns-blog-2/rebalance.png&quot; width=&quot;800px&quot; alt=&quot;Figure 5: REBALANCE message passing across operator instances&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Figure 5: REBALANCE message passing across operator instances&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-The Fraud Detection job graph in Figure 2 contains an additional data source: _Rules Source_. It also consumes from Kafka. Rules are &quot;mixed into&quot; the main processing data flow through the __BROADCAST__ channel. Unlike other methods of transmitting data between operators, such as `forward`, `hash` or `rebalance` that make each message available for processing in only one of the parallel instances of the receiving operator, `broadcast` makes each message available at the input of [...]
+&lt;p&gt;The Fraud Detection job graph in Figure 2 contains an additional data source: &lt;em&gt;Rules Source&lt;/em&gt;. It also consumes from Kafka. Rules are “mixed into” the main processing data flow through the &lt;strong&gt;BROADCAST&lt;/strong&gt; channel. Unlike other methods of transmitting data between operators, such as &lt;code&gt;forward&lt;/code&gt;, &lt;code&gt;hash&lt;/code&gt; or &lt;code&gt;rebalance&lt;/code&gt; that make each message available for processing in only o [...]
 
 &lt;center&gt;
- &lt;img src=&quot;{{ site.baseurl }}/img/blog/patterns-blog-2/broadcast.png&quot; width=&quot;800px&quot; alt=&quot;Figure 6: BROADCAST message passing across operator instances&quot;/&gt;
- &lt;br/&gt;
+ &lt;img src=&quot;/img/blog/patterns-blog-2/broadcast.png&quot; width=&quot;800px&quot; alt=&quot;Figure 6: BROADCAST message passing across operator instances&quot; /&gt;
+ &lt;br /&gt;
  &lt;i&gt;&lt;small&gt;Figure 6: BROADCAST message passing across operator instances&lt;/small&gt;&lt;/i&gt;
  &lt;/center&gt;
- &lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
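To make the contrast concrete, here is a minimal sketch of how each exchange is requested on a `DataStream`, reusing the `transactions` stream from the pipeline above and the rules stream and state descriptor introduced in the next section; the `getPayeeId` key selector is hypothetical.

```java
// HASH: explicitly requested with keyBy; all records with the same key
// go to the same parallel instance of the downstream operator.
transactions.keyBy(t -> t.getPayeeId());

// REBALANCE: explicit round-robin redistribution, e.g. to mitigate skew
// or after a change of parallelism.
transactions.rebalance();

// BROADCAST: every rule update is replicated to all parallel instances
// of the downstream operator.
BroadcastStream<Rule> rulesStream = rulesUpdateStream.broadcast(RULES_STATE_DESCRIPTOR);
```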
 
-&lt;div class=&quot;alert alert-info&quot; markdown=&quot;1&quot;&gt;
-&lt;span class=&quot;label label-info&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
-There are actually a few more specialized data partitioning schemes in Flink which we did not mention here. If you want to find out more, please refer to Flink&#39;s documentation on __[stream partitioning](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/#physical-partitioning)__.
+&lt;div class=&quot;alert alert-info&quot;&gt;
+  &lt;p&gt;&lt;span class=&quot;label label-info&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
+There are actually a few more specialized data partitioning schemes in Flink which we did not mention here. If you want to find out more, please refer to Flink’s documentation on &lt;strong&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/#physical-partitioning&quot;&gt;stream partitioning&lt;/a&gt;&lt;/strong&gt;.&lt;/p&gt;
 &lt;/div&gt;
 
-## Broadcast State Pattern
+&lt;h2 id=&quot;broadcast-state-pattern&quot;&gt;Broadcast State Pattern&lt;/h2&gt;
 
-In order to make use of the Rules Source, we need to &quot;connect&quot; it to the main data stream:
+&lt;p&gt;In order to make use of the Rules Source, we need to “connect” it to the main data stream:&lt;/p&gt;
 
-```java
-// Streams setup
-DataStream&lt;Transaction&gt; transactions = [...]
-DataStream&lt;Rule&gt; rulesUpdateStream = [...]
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// Streams setup&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;transactions&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[...]&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rulesUpdateStream&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;[...]&lt;/span&gt;
 
-BroadcastStream&lt;Rule&gt; rulesStream = rulesUpdateStream.broadcast(RULES_STATE_DESCRIPTOR);
+&lt;span class=&quot;n&quot;&gt;BroadcastStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rulesStream&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rulesUpdateStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;broadcast&lt;/span&gt;&lt;span  [...]
 
-// Processing pipeline setup
- DataStream&lt;Alert&gt; alerts =
-     transactions
-         .connect(rulesStream)
-         .process(new DynamicKeyFunction())
-         .keyBy((keyed) -&gt; keyed.getKey())
-         .connect(rulesStream)
-         .process(new DynamicAlertFunction());
-```
+&lt;span class=&quot;c1&quot;&gt;// Processing pipeline setup&lt;/span&gt;
+ &lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Alert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;alerts&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;
+     &lt;span class=&quot;n&quot;&gt;transactions&lt;/span&gt;
+         &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;connect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;rulesStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+         &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;DynamicKeyFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;
+         &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;keyBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt; [...]
+         &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;connect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;rulesStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+         &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;DynamicAlertFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-As you can see, the broadcast stream can be created from any regular stream by calling the `broadcast` method and specifying a state descriptor. Flink assumes that broadcasted data needs to be stored and retrieved while processing events of the main data flow and, therefore, always automatically creates a corresponding _broadcast state_ from this state descriptor. This is different from any other Apache Flink state type in which you need to initialize it in the `open()` method of the  pr [...]
+&lt;p&gt;As you can see, the broadcast stream can be created from any regular stream by calling the &lt;code&gt;broadcast&lt;/code&gt; method and specifying a state descriptor. Flink assumes that broadcasted data needs to be stored and retrieved while processing events of the main data flow and, therefore, always automatically creates a corresponding &lt;em&gt;broadcast state&lt;/em&gt; from this state descriptor. This is different from any other Apache Flink state type in which you need [...]
 
-```java
-public static final MapStateDescriptor&lt;Integer, Rule&gt; RULES_STATE_DESCRIPTOR =
-        new MapStateDescriptor&lt;&gt;(&quot;rules&quot;, Integer.class, Rule.class);
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;static&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;final&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;MapStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&qu [...]
+        &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;MapStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;rules&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;class&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&g [...]
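For contrast, a hedged sketch of how a regular (non-broadcast) keyed state would be initialized by hand inside a rich function's `open()` method; the `totalSpent` state and its name are hypothetical.

```java
// Fragment of a keyed rich function: regular keyed state must be created
// explicitly, typically in open(), unlike the broadcast state above.
private transient ValueState<BigDecimal> totalSpent;

@Override
public void open(Configuration parameters) {
  totalSpent = getRuntimeContext().getState(
      new ValueStateDescriptor<>("total-spent", BigDecimal.class));
}
```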
 
-Connecting to `rulesStream` causes some changes in the signature of the processing functions. The previous article presented it in a slightly simplified way as a `ProcessFunction`. However, `DynamicKeyFunction` is actually a `BroadcastProcessFunction`.
+&lt;p&gt;Connecting to &lt;code&gt;rulesStream&lt;/code&gt; causes some changes in the signature of the processing functions. The previous article presented it in a slightly simplified way as a &lt;code&gt;ProcessFunction&lt;/code&gt;. However, &lt;code&gt;DynamicKeyFunction&lt;/code&gt; is actually a &lt;code&gt;BroadcastProcessFunction&lt;/code&gt;.&lt;/p&gt;
 
-```java
-public abstract class BroadcastProcessFunction&lt;IN1, IN2, OUT&gt; {
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;abstract&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;BroadcastProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;IN1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot [...]
 
-    public abstract void processElement(IN1 value,
-                                        ReadOnlyContext ctx,
-                                        Collector&lt;OUT&gt; out) throws Exception;
+    &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;abstract&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;IN1&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+                                        &lt;span class=&quot;n&quot;&gt;ReadOnlyContext&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+                                        &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;OUT&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
 
-    public abstract void processBroadcastElement(IN2 value,
-                                                 Context ctx,
-                                                 Collector&lt;OUT&gt; out) throws Exception;
+    &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;abstract&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;processBroadcastElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;IN2&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+                                                 &lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+                                                 &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;OUT&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt;&lt;span class=&quot;o&quot;&gt; [...]
 
-}
-```
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-The difference is the addition of the `processBroadcastElement` method through which messages of the rules stream will arrive. The following new version of `DynamicKeyFunction` allows modifying the list of data-distribution keys at runtime through this stream:
+&lt;p&gt;The difference is the addition of the &lt;code&gt;processBroadcastElement&lt;/code&gt; method through which messages of the rules stream will arrive. The following new version of &lt;code&gt;DynamicKeyFunction&lt;/code&gt; allows modifying the list of data-distribution keys at runtime through this stream:&lt;/p&gt;
 
-```java
-public class DynamicKeyFunction
-    extends BroadcastProcessFunction&lt;Transaction, Rule, Keyed&lt;Transaction, String, Integer&gt;&gt; {
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;DynamicKeyFunction&lt;/span&gt;
+    &lt;span class=&quot;kd&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;BroadcastProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span c [...]
 
 
-  @Override
-  public void processBroadcastElement(Rule rule,
-                                     Context ctx,
-                                     Collector&lt;Keyed&lt;Transaction, String, Integer&gt;&gt; out) {
-    BroadcastState&lt;Integer, Rule&gt; broadcastState = ctx.getBroadcastState(RULES_STATE_DESCRIPTOR);
-    broadcastState.put(rule.getRuleId(), rule);
-  }
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;processBroadcastElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+                                     &lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+                                     &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/ [...]
+    &lt;span class=&quot;n&quot;&gt;BroadcastState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;broadcastState&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quo [...]
+    &lt;span class=&quot;n&quot;&gt;broadcastState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;put&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getRuleId&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
 
-  @Override
-  public void processElement(Transaction event,
-                           ReadOnlyContext ctx,
-                           Collector&lt;Keyed&lt;Transaction, String, Integer&gt;&gt; out){
-    ReadOnlyBroadcastState&lt;Integer, Rule&gt; rulesState =
-                                  ctx.getBroadcastState(RULES_STATE_DESCRIPTOR);
-    for (Map.Entry&lt;Integer, Rule&gt; entry : rulesState.immutableEntries()) {
-        final Rule rule = entry.getValue();
-        out.collect(
-          new Keyed&lt;&gt;(
-            event, KeysExtractor.getKey(rule.getGroupingKeyNames(), event), rule.getRuleId()));
-    }
-  }
-}
-```
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;event&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+                           &lt;span class=&quot;n&quot;&gt;ReadOnlyContext&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+                           &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&l [...]
+    &lt;span class=&quot;n&quot;&gt;ReadOnlyBroadcastState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rulesState&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;
+                                  &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getBroadcastState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;RULES_STATE_DESCRIPTOR&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;Entry&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/sp [...]
+        &lt;span class=&quot;kd&quot;&gt;final&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rule&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;entry&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getValue&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+        &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+          &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;
+            &lt;span class=&quot;n&quot;&gt;event&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeysExtractor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getGroupingKeyNames&lt;/span&gt;&lt;span class=&quot; [...]
+    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-In the above code, `processElement()` receives Transactions, and `processBroadcastElement()` receives Rule updates. When a new rule is created, it is distributed as depicted in Figure 6 and saved in all parallel instances of the operator using `processBroadcastState`. We use a Rule&#39;s ID as the key to store and reference individual rules. Instead of iterating over a hardcoded `List&lt;Rules&gt;`, we iterate over entries in the dynamically-updated broadcast state.
+&lt;p&gt;In the above code, &lt;code&gt;processElement()&lt;/code&gt; receives Transactions, and &lt;code&gt;processBroadcastElement()&lt;/code&gt; receives Rule updates. When a new rule is created, it is distributed as depicted in Figure 6 and saved in all parallel instances of the operator via the broadcast state in &lt;code&gt;processBroadcastElement()&lt;/code&gt;. We use a Rule’s ID as the key to store and reference individual rules. Instead of iterating over a hardcoded &lt;code&gt;List&amp;lt;Rules&amp;gt [...]
 
-`DynamicAlertFunction` follows the same logic with respect to storing the rules in the broadcast `MapState`. As described in [Part 1](https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html), each message in the `processElement` input is intended to be processed by one specific rule and comes &quot;pre-marked&quot; with a corresponding ID by  `DynamicKeyFunction`. All we need to do is retrieve the definition of the corresponding rule from `BroadcastState` by using the provided [...]
+&lt;p&gt;&lt;code&gt;DynamicAlertFunction&lt;/code&gt; follows the same logic with respect to storing the rules in the broadcast &lt;code&gt;MapState&lt;/code&gt;. As described in &lt;a href=&quot;https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html&quot;&gt;Part 1&lt;/a&gt;, each message in the &lt;code&gt;processElement&lt;/code&gt; input is intended to be processed by one specific rule and comes “pre-marked” with a corresponding ID by  &lt;code&gt;DynamicKeyFunction&lt;/ [...]
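+&lt;p&gt;For reference, the broadcast state descriptor and the wiring of the two streams could look roughly like the following sketch. The identifier names and the type information shown here are illustrative assumptions rather than the exact code of the reference implementation:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Descriptor of the broadcast MapState that holds all active rules, keyed by rule ID.
+MapStateDescriptor&amp;lt;Integer, Rule&amp;gt; RULES_STATE_DESCRIPTOR =
+    new MapStateDescriptor&amp;lt;&amp;gt;(&amp;quot;rules&amp;quot;, Types.INT, Types.POJO(Rule.class));
+
+// Broadcast the stream of rule updates and connect it to the main transaction stream.
+BroadcastStream&amp;lt;Rule&amp;gt; rulesStream = ruleUpdates.broadcast(RULES_STATE_DESCRIPTOR);
+
+DataStream&amp;lt;Keyed&amp;lt;Transaction, String, Integer&amp;gt;&amp;gt; keyed =
+    transactions
+        .connect(rulesStream)
+        .process(new DynamicKeyFunction());&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;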
 
-# Summary
+&lt;h1 id=&quot;summary&quot;&gt;Summary&lt;/h1&gt;
 
-In this blog post, we continued our investigation of the use case of a Fraud Detection System built with Apache Flink. We looked into different ways in which data can be distributed between parallel operator instances and, most importantly, examined broadcast state. We demonstrated how dynamic partitioning — a pattern described in the [first part](https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html) of the series — can be combined and enhanced by the functionality provided [...]
+&lt;p&gt;In this blog post, we continued our investigation of the use case of a Fraud Detection System built with Apache Flink. We looked into different ways in which data can be distributed between parallel operator instances and, most importantly, examined broadcast state. We demonstrated how dynamic partitioning — a pattern described in the &lt;a href=&quot;https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html&quot;&gt;first part&lt;/a&gt; of the series — can be combined  [...]
 </description>
-<pubDate>Tue, 24 Mar 2020 12:00:00 +0000</pubDate>
+<pubDate>Tue, 24 Mar 2020 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2020/03/24/demo-fraud-detection-2.html</link>
 <guid isPermaLink="true">/news/2020/03/24/demo-fraud-detection-2.html</guid>
 </item>
 
 <item>
 <title>Apache Beam: How Beam Runs on Top of Flink</title>
-<description>Note: This blog post is based on the talk [&quot;Beam on Flink: How Does It Actually Work?&quot;](https://www.youtube.com/watch?v=hxHGLrshnCY).
+<description>&lt;p&gt;Note: This blog post is based on the talk &lt;a href=&quot;https://www.youtube.com/watch?v=hxHGLrshnCY&quot;&gt;“Beam on Flink: How Does It Actually Work?”&lt;/a&gt;.&lt;/p&gt;
 
-[Apache Flink](https://flink.apache.org/) and [Apache Beam](https://beam.apache.org/) are open-source frameworks for parallel, distributed data processing at scale. Unlike Flink, Beam does not come with a full-blown execution engine of its own but plugs into other execution engines, such as Apache Flink, Apache Spark, or Google Cloud Dataflow. In this blog post we discuss the reasons to use Flink together with Beam for your batch and stream processing needs. We also take a closer look at [...]
+&lt;p&gt;&lt;a href=&quot;https://flink.apache.org/&quot;&gt;Apache Flink&lt;/a&gt; and &lt;a href=&quot;https://beam.apache.org/&quot;&gt;Apache Beam&lt;/a&gt; are open-source frameworks for parallel, distributed data processing at scale. Unlike Flink, Beam does not come with a full-blown execution engine of its own but plugs into other execution engines, such as Apache Flink, Apache Spark, or Google Cloud Dataflow. In this blog post we discuss the reasons to use Flink together with Bea [...]
 
+&lt;h1 id=&quot;what-is-apache-beam&quot;&gt;What is Apache Beam&lt;/h1&gt;
 
-# What is Apache Beam
+&lt;p&gt;&lt;a href=&quot;https://beam.apache.org/&quot;&gt;Apache Beam&lt;/a&gt; is an open-source, unified model for defining batch and streaming data-parallel processing pipelines. It is unified in the sense that you use a single API, in contrast to using a separate API for batch and streaming like it is the case in Flink. Beam was originally developed by Google which released it in 2014 as the Cloud Dataflow SDK. In 2016, it was donated to &lt;a href=&quot;https://www.apache.org/&quo [...]
 
-[Apache Beam](https://beam.apache.org/) is an open-source, unified model for defining batch and streaming data-parallel processing pipelines. It is unified in the sense that you use a single API, in contrast to using a separate API for batch and streaming like it is the case in Flink. Beam was originally developed by Google which released it in 2014 as the Cloud Dataflow SDK. In 2016, it was donated to [the Apache Software Foundation](https://www.apache.org/) with the name of Beam. It ha [...]
+&lt;p&gt;The execution model, as well as the API of Apache Beam, are similar to Flink’s. Both frameworks are inspired by the &lt;a href=&quot;https://static.googleusercontent.com/media/research.google.com/en//archive/mapreduce-osdi04.pdf&quot;&gt;MapReduce&lt;/a&gt;, &lt;a href=&quot;https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41378.pdf&quot;&gt;MillWheel&lt;/a&gt;, and &lt;a href=&quot;https://research.google/pubs/pub43864/&quot;&gt;Dataflow&lt;/a&gt; [...]
 
-The execution model, as well as the API of Apache Beam, are similar to Flink&#39;s. Both frameworks are inspired by the [MapReduce](https://static.googleusercontent.com/media/research.google.com/en//archive/mapreduce-osdi04.pdf), [MillWheel](https://static.googleusercontent.com/media/research.google.com/en//pubs/archive/41378.pdf), and [Dataflow](https://research.google/pubs/pub43864/) papers. Like Flink, Beam is designed for parallel, distributed data processing. Both have similar trans [...]
-
-One of the most exciting developments in the Beam technology is the framework’s support for multiple programming languages including Java, Python, Go, Scala and SQL. Essentially, developers can write their applications in a programming language of their choice. Beam, with the help of the Runners, translates the program to one of the execution engines, as shown in the diagram below.
+&lt;p&gt;One of the most exciting developments in the Beam technology is the framework’s support for multiple programming languages including Java, Python, Go, Scala and SQL. Essentially, developers can write their applications in a programming language of their choice. Beam, with the help of the Runners, translates the program to one of the execution engines, as shown in the diagram below.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-beam-vision.png&quot; width=&quot;600px&quot; alt=&quot;The vision of Apache Beam&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-beam-vision.png&quot; width=&quot;600px&quot; alt=&quot;The vision of Apache Beam&quot; /&gt;
 &lt;/center&gt;
 
+&lt;h1 id=&quot;reasons-to-use-beam-with-flink&quot;&gt;Reasons to use Beam with Flink&lt;/h1&gt;
 
-# Reasons to use Beam with Flink
-
-Why would you want to use Beam with Flink instead of directly using Flink? Ultimately, Beam and Flink complement each other and provide additional value to the user. The main reasons for using Beam with Flink are the following: 
+&lt;p&gt;Why would you want to use Beam with Flink instead of directly using Flink? Ultimately, Beam and Flink complement each other and provide additional value to the user. The main reasons for using Beam with Flink are the following:&lt;/p&gt;
 
-* Beam provides a unified API for both batch and streaming scenarios.
-* Beam comes with native support for different programming languages, like Python or Go with all their libraries like Numpy, Pandas, Tensorflow, or TFX.
-* You get the power of Apache Flink like its exactly-once semantics, strong memory management and robustness.
-* Beam programs run on your existing Flink infrastructure or infrastructure for other supported Runners, like Spark or Google Cloud Dataflow. 
-* You get additional features like side inputs and cross-language pipelines that are not supported natively in Flink but only supported when using Beam with Flink. 
-
-
-# The Flink Runner in Beam
+&lt;ul&gt;
+  &lt;li&gt;Beam provides a unified API for both batch and streaming scenarios.&lt;/li&gt;
+  &lt;li&gt;Beam comes with native support for different programming languages, like Python or Go with all their libraries like Numpy, Pandas, Tensorflow, or TFX.&lt;/li&gt;
+  &lt;li&gt;You get the power of Apache Flink like its exactly-once semantics, strong memory management and robustness.&lt;/li&gt;
+  &lt;li&gt;Beam programs run on your existing Flink infrastructure or infrastructure for other supported Runners, like Spark or Google Cloud Dataflow.&lt;/li&gt;
+  &lt;li&gt;You get additional features like side inputs and cross-language pipelines that are not supported natively in Flink but only supported when using Beam with Flink.&lt;/li&gt;
+&lt;/ul&gt;
 
-The Flink Runner in Beam translates Beam pipelines into Flink jobs. The translation can be parameterized using Beam&#39;s pipeline options which are parameters for settings like configuring the job name, parallelism, checkpointing, or metrics reporting.
+&lt;h1 id=&quot;the-flink-runner-in-beam&quot;&gt;The Flink Runner in Beam&lt;/h1&gt;
 
-If you are familiar with a DataSet or a DataStream, you will have no problems understanding what a PCollection is. PCollection stands for parallel collection in Beam and is exactly what DataSet/DataStream would be in Flink. Due to Beam&#39;s unified API we only have one type of results of transformation: PCollection.
+&lt;p&gt;The Flink Runner in Beam translates Beam pipelines into Flink jobs. The translation can be parameterized using Beam’s pipeline options, which control settings such as the job name, parallelism, checkpointing, or metrics reporting.&lt;/p&gt;
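+&lt;p&gt;As a rough illustration of this parameterization (the concrete option values below are made up for the example), a Java pipeline can be pointed at the Flink Runner like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.beam.runners.flink.FlinkPipelineOptions;
+import org.apache.beam.runners.flink.FlinkRunner;
+import org.apache.beam.sdk.Pipeline;
+import org.apache.beam.sdk.options.PipelineOptionsFactory;
+
+public class BeamOnFlinkExample {
+  public static void main(String[] args) {
+    // Parse command-line arguments into Flink-specific pipeline options.
+    FlinkPipelineOptions options =
+        PipelineOptionsFactory.fromArgs(args).withValidation().as(FlinkPipelineOptions.class);
+    options.setRunner(FlinkRunner.class);
+    options.setJobName(&amp;quot;beam-on-flink-example&amp;quot;); // job name shown in the Flink UI
+    options.setParallelism(4);                         // default operator parallelism
+    options.setCheckpointingInterval(60000L);          // enable Flink checkpointing (ms)
+
+    // The pipeline itself is built with the regular, engine-agnostic Beam API.
+    Pipeline pipeline = Pipeline.create(options);
+    // ... apply transforms here ...
+    pipeline.run().waitUntilFinish();
+  }
+}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;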
 
-Beam pipelines are composed of transforms. Transforms are like operators in Flink and come in two flavors: primitive and composite transforms. The beauty of all this is that Beam only comes with a small set of primitive transforms which are:
+&lt;p&gt;If you are familiar with a DataSet or a DataStream, you will have no problems understanding what a PCollection is. PCollection stands for parallel collection in Beam and is exactly what DataSet/DataStream would be in Flink. Due to Beam’s unified API, there is only one type of transformation result: the PCollection.&lt;/p&gt;
 
-- `Source` (for loading data)
-- `ParDo` (think of a flat map operator on steroids)
-- `GroupByKey` (think of keyBy() in Flink)
-- `AssignWindows` (windows can be assigned at any point in time in Beam)
-- `Flatten` (like a union() operation in Flink)
+&lt;p&gt;Beam pipelines are composed of transforms. Transforms are like operators in Flink and come in two flavors: primitive and composite transforms. The beauty of all this is that Beam only comes with a small set of primitive transforms which are:&lt;/p&gt;
 
-Composite transforms are built by combining the above primitive transforms. For example, `Combine = GroupByKey + ParDo`.
+&lt;ul&gt;
+  &lt;li&gt;&lt;code&gt;Source&lt;/code&gt; (for loading data)&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;ParDo&lt;/code&gt; (think of a flat map operator on steroids)&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;GroupByKey&lt;/code&gt; (think of keyBy() in Flink)&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;AssignWindows&lt;/code&gt; (windows can be assigned at any point in time in Beam)&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;Flatten&lt;/code&gt; (like a union() operation in Flink)&lt;/li&gt;
+&lt;/ul&gt;
 
+&lt;p&gt;Composite transforms are built by combining the above primitive transforms. For example, &lt;code&gt;Combine = GroupByKey + ParDo&lt;/code&gt;.&lt;/p&gt;
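+&lt;p&gt;To make that concrete, here is a small hand-written sketch of a per-key sum expressed only with the primitives above, that is, a &lt;code&gt;GroupByKey&lt;/code&gt; followed by a &lt;code&gt;ParDo&lt;/code&gt; (in a real pipeline you would simply use the composite &lt;code&gt;Combine&lt;/code&gt; or &lt;code&gt;Sum&lt;/code&gt; transforms); the input collection is assumed to exist:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Assume the variable input is a PCollection&amp;lt;KV&amp;lt;String, Long&amp;gt;&amp;gt; of (key, value) pairs.
+PCollection&amp;lt;KV&amp;lt;String, Iterable&amp;lt;Long&amp;gt;&amp;gt;&amp;gt; grouped =
+    input.apply(GroupByKey.&amp;lt;String, Long&amp;gt;create());
+
+// The ParDo unpacks each group and emits one (key, sum) pair per key and window.
+PCollection&amp;lt;KV&amp;lt;String, Long&amp;gt;&amp;gt; sums = grouped.apply(
+    ParDo.of(new DoFn&amp;lt;KV&amp;lt;String, Iterable&amp;lt;Long&amp;gt;&amp;gt;, KV&amp;lt;String, Long&amp;gt;&amp;gt;() {
+      @ProcessElement
+      public void processElement(ProcessContext c) {
+        long sum = 0;
+        for (long value : c.element().getValue()) {
+          sum += value;
+        }
+        c.output(KV.of(c.element().getKey(), sum));
+      }
+    }));&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;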
 
-# Flink Runner Internals
+&lt;h1 id=&quot;flink-runner-internals&quot;&gt;Flink Runner Internals&lt;/h1&gt;
 
-Although using the Flink Runner in Beam has no prerequisite to understanding its internals, we provide more details of how the Flink runner works in Beam to share knowledge of how the two frameworks can integrate and work together to provide state-of-the-art streaming data pipelines.
+&lt;p&gt;Although using the Flink Runner in Beam does not require understanding its internals, we provide more details of how the Flink Runner works in Beam to share knowledge of how the two frameworks can integrate and work together to provide state-of-the-art streaming data pipelines.&lt;/p&gt;
 
-The Flink Runner has two translation paths. Depending on whether we execute in batch or streaming mode, the Runner either translates into Flink&#39;s DataSet or into Flink&#39;s DataStream API. Since multi-language support has been added to Beam, another two translation paths have been added. To summarize the four modes:
+&lt;p&gt;The Flink Runner has two translation paths. Depending on whether we execute in batch or streaming mode, the Runner either translates into Flink’s DataSet or into Flink’s DataStream API. Since multi-language support has been added to Beam, another two translation paths have been added. To summarize the four modes:&lt;/p&gt;
 
-1. **The Classic Flink Runner for batch jobs:** Executes batch Java pipelines
-2. **The Classic Flink Runner for streaming jobs:** Executes streaming Java pipelines
-3. **The Portable Flink Runner for batch jobs:** Executes Java as well as Python, Go and other supported SDK pipelines for batch scenarios
-4. **The Portable Flink Runner for streaming jobs:** Executes Java as well as Python, Go and other supported SDK pipelines for streaming scenarios
+&lt;ol&gt;
+  &lt;li&gt;&lt;strong&gt;The Classic Flink Runner for batch jobs:&lt;/strong&gt; Executes batch Java pipelines&lt;/li&gt;
+  &lt;li&gt;&lt;strong&gt;The Classic Flink Runner for streaming jobs:&lt;/strong&gt; Executes streaming Java pipelines&lt;/li&gt;
+  &lt;li&gt;&lt;strong&gt;The Portable Flink Runner for batch jobs:&lt;/strong&gt; Executes Java as well as Python, Go and other supported SDK pipelines for batch scenarios&lt;/li&gt;
+  &lt;li&gt;&lt;strong&gt;The Portable Flink Runner for streaming jobs:&lt;/strong&gt; Executes Java as well as Python, Go and other supported SDK pipelines for streaming scenarios&lt;/li&gt;
+&lt;/ol&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-runner-translation-paths.png&quot; width=&quot;300px&quot; alt=&quot;The 4 translation paths in the Beam&#39;s Flink Runner&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-runner-translation-paths.png&quot; width=&quot;300px&quot; alt=&quot;The 4 translation paths in the Beam&#39;s Flink Runner&quot; /&gt;
 &lt;/center&gt;
 
+&lt;h2 id=&quot;the-classic-flink-runner-in-beam&quot;&gt;The “Classic” Flink Runner in Beam&lt;/h2&gt;
 
-## The “Classic” Flink Runner in Beam
-
-The classic Flink Runner was the initial version of the Runner, hence the &quot;classic&quot; name. Beam pipelines are represented as a graph in Java which is composed of the aforementioned composite and primitive transforms. Beam provides translators which traverse the graph in topological order. Topological order means that we start from all the sources first as we iterate through the graph. Presented with a transform from the graph, the Flink Runner generates the API calls as you woul [...]
+&lt;p&gt;The classic Flink Runner was the initial version of the Runner, hence the “classic” name. Beam pipelines are represented as a graph in Java which is composed of the aforementioned composite and primitive transforms. Beam provides translators which traverse the graph in topological order. Topological order means that we start from all the sources first as we iterate through the graph. Presented with a transform from the graph, the Flink Runner generates the API calls as you would [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-02-22-beam-on-flink/classic-flink-runner-beam.png&quot; width=&quot;600px&quot; alt=&quot;The Classic Flink Runner in Beam&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-02-22-beam-on-flink/classic-flink-runner-beam.png&quot; width=&quot;600px&quot; alt=&quot;The Classic Flink Runner in Beam&quot; /&gt;
 &lt;/center&gt;
 
-While Beam and Flink share very similar concepts, there are enough differences between the two frameworks that make Beam pipelines impossible to be translated 1:1 into a Flink program. In the following sections, we will present the key differences:
+&lt;p&gt;While Beam and Flink share very similar concepts, there are enough differences between the two frameworks that Beam pipelines cannot be translated 1:1 into a Flink program. In the following sections, we will present the key differences:&lt;/p&gt;
 
-### Serializers vs Coders
+&lt;h3 id=&quot;serializers-vs-coders&quot;&gt;Serializers vs Coders&lt;/h3&gt;
 
-When data is transferred over the wire in Flink, it has to be turned into bytes. This is done with the help of serializers. Flink has a type system to instantiate the correct coder for a given type, e.g. `StringTypeSerializer` for a String. Apache Beam also has its own type system which is similar to Flink&#39;s but uses slightly different interfaces. Serializers are called Coders in Beam. In order to make a Beam Coder run in Flink, we have to make the two serializer types compatible. Th [...]
+&lt;p&gt;When data is transferred over the wire in Flink, it has to be turned into bytes. This is done with the help of serializers. Flink has a type system to instantiate the correct coder for a given type, e.g. &lt;code&gt;StringTypeSerializer&lt;/code&gt; for a String. Apache Beam also has its own type system which is similar to Flink’s but uses slightly different interfaces. Serializers are called Coders in Beam. In order to make a Beam Coder run in Flink, we have to make the two ser [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-serializers-coders.png&quot; width=&quot;300px&quot; alt=&quot;Serializers vs Coders&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-serializers-coders.png&quot; width=&quot;300px&quot; alt=&quot;Serializers vs Coders&quot; /&gt;
 &lt;/center&gt;
 
-### Read
-
-The `Read` transform provides a way to read data into your pipeline in Beam. The Read transform is supported by two wrappers in Beam, the `SourceInputFormat` for batch processing and the `UnboundedSourceWrapper` for stream processing.
+&lt;h3 id=&quot;read&quot;&gt;Read&lt;/h3&gt;
 
-### ParDo
+&lt;p&gt;The &lt;code&gt;Read&lt;/code&gt; transform provides a way to read data into your pipeline in Beam. The Read transform is supported by two wrappers in Beam, the &lt;code&gt;SourceInputFormat&lt;/code&gt; for batch processing and the &lt;code&gt;UnboundedSourceWrapper&lt;/code&gt; for stream processing.&lt;/p&gt;
 
-`ParDo` is the swiss army knife of Beam and can be compared to a `RichFlatMapFunction` in Flink with additional features such as `SideInputs`, `SideOutputs`, State and Timers. `ParDo` is essentially translated by the Flink runner using the `FlinkDoFnFunction` for batch processing or the `FlinkStatefulDoFnFunction`, while for streaming scenarios the translation is executed with the `DoFnOperator` that takes care of checkpointing and buffering of data during checkpoints, watermark emission [...]
+&lt;h3 id=&quot;pardo&quot;&gt;ParDo&lt;/h3&gt;
 
-### Side Inputs
+&lt;p&gt;&lt;code&gt;ParDo&lt;/code&gt; is the swiss army knife of Beam and can be compared to a &lt;code&gt;RichFlatMapFunction&lt;/code&gt; in Flink with additional features such as &lt;code&gt;SideInputs&lt;/code&gt;, &lt;code&gt;SideOutputs&lt;/code&gt;, State and Timers. &lt;code&gt;ParDo&lt;/code&gt; is essentially translated by the Flink runner using the &lt;code&gt;FlinkDoFnFunction&lt;/code&gt; for batch processing or the &lt;code&gt;FlinkStatefulDoFnFunction&lt;/code&gt;, while [...]
 
-In addition to the main input, ParDo transforms can have a number of side inputs. A side input can be a static set of data that you want to have available at all parallel instances. However, it is more flexible than that. You can have keyed and even windowed side input which updates based on the window size. This is a very powerful concept which does not exist in Flink but is added on top of Flink using Beam.
+&lt;h3 id=&quot;side-inputs&quot;&gt;Side Inputs&lt;/h3&gt;
 
-### AssignWindows
+&lt;p&gt;In addition to the main input, ParDo transforms can have a number of side inputs. A side input can be a static set of data that you want to have available at all parallel instances. However, it is more flexible than that. You can have keyed and even windowed side input which updates based on the window size. This is a very powerful concept which does not exist in Flink but is added on top of Flink using Beam.&lt;/p&gt;
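+&lt;p&gt;A minimal sketch of the idea is shown below, assuming a main collection of orders and a small map of currency rates that every parallel instance should be able to read; the &lt;code&gt;Order&lt;/code&gt; type and the field names are made up for the example:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Turn the rates collection (a PCollection&amp;lt;KV&amp;lt;String, Double&amp;gt;&amp;gt;) into a side input view.
+final PCollectionView&amp;lt;Map&amp;lt;String, Double&amp;gt;&amp;gt; ratesView =
+    rates.apply(View.&amp;lt;String, Double&amp;gt;asMap());
+
+PCollection&amp;lt;Double&amp;gt; amountsInEuro = orders.apply(
+    ParDo.of(new DoFn&amp;lt;Order, Double&amp;gt;() {
+      @ProcessElement
+      public void processElement(ProcessContext c) {
+        // The side input is read through the context; Beam resolves the
+        // side-input window that matches the current main-input element.
+        Map&amp;lt;String, Double&amp;gt; rateMap = c.sideInput(ratesView);
+        Order order = c.element();
+        c.output(order.getAmount() * rateMap.getOrDefault(order.getCurrency(), 1.0));
+      }
+    }).withSideInputs(ratesView));&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;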
 
-In Flink, windows are assigned by the `WindowOperator` when you use the `window()` in the API. In Beam, windows can be assigned at any point in time. Any element is implicitly part of a window. If no window is assigned explicitly, the element is part of the `GlobalWindow`. Window information is stored for each element in a wrapper called `WindowedValue`. The window information is only used once we issue a `GroupByKey`.
+&lt;h3 id=&quot;assignwindows&quot;&gt;AssignWindows&lt;/h3&gt;
 
-### GroupByKey
+&lt;p&gt;In Flink, windows are assigned by the &lt;code&gt;WindowOperator&lt;/code&gt; when you use the &lt;code&gt;window()&lt;/code&gt; in the API. In Beam, windows can be assigned at any point in time. Any element is implicitly part of a window. If no window is assigned explicitly, the element is part of the &lt;code&gt;GlobalWindow&lt;/code&gt;. Window information is stored for each element in a wrapper called &lt;code&gt;WindowedValue&lt;/code&gt;. The window information is only use [...]
 
-Most of the time it is useful to partition the data by a key. In Flink, this is done via the `keyBy()` API call. In Beam the `GroupByKey` transform can only be applied if the input is of the form `KV&lt;Key, Value&gt;`. Unlike Flink where the key can even be nested inside the data, Beam enforces the key to always be explicit. The `GroupByKey` transform then groups the data by key and by window which is similar to what `keyBy(..).window(..)` would give us in Flink. Beam has its own set of [...]
+&lt;h3 id=&quot;groupbykey&quot;&gt;GroupByKey&lt;/h3&gt;
 
-### Flatten
+&lt;p&gt;Most of the time it is useful to partition the data by a key. In Flink, this is done via the &lt;code&gt;keyBy()&lt;/code&gt; API call. In Beam the &lt;code&gt;GroupByKey&lt;/code&gt; transform can only be applied if the input is of the form &lt;code&gt;KV&amp;lt;Key, Value&amp;gt;&lt;/code&gt;. Unlike Flink where the key can even be nested inside the data, Beam enforces the key to always be explicit. The &lt;code&gt;GroupByKey&lt;/code&gt; transform then groups the data by key  [...]
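+&lt;p&gt;Put together with the window assignment described in the previous section, a keyed and windowed grouping can be sketched roughly as follows; the element types are assumptions made for the example:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Assume clicksPerUser is a PCollection&amp;lt;KV&amp;lt;String, Long&amp;gt;&amp;gt; of (user, click count) pairs.
+// Assign one-minute fixed (tumbling) windows; the window travels with every element
+// and is only acted upon once the GroupByKey is applied.
+PCollection&amp;lt;KV&amp;lt;String, Iterable&amp;lt;Long&amp;gt;&amp;gt;&amp;gt; grouped = clicksPerUser
+    .apply(Window.&amp;lt;KV&amp;lt;String, Long&amp;gt;&amp;gt;into(FixedWindows.of(Duration.standardMinutes(1))))
+    .apply(GroupByKey.&amp;lt;String, Long&amp;gt;create());&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;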
 
-The Flatten operator takes multiple DataSet/DataStreams, called P[arallel]Collections in Beam, and combines them into one collection. This is equivalent to Flink&#39;s `union()` operation.
+&lt;h3 id=&quot;flatten&quot;&gt;Flatten&lt;/h3&gt;
 
+&lt;p&gt;The Flatten operator takes multiple DataSet/DataStreams, called P[arallel]Collections in Beam, and combines them into one collection. This is equivalent to Flink’s &lt;code&gt;union()&lt;/code&gt; operation.&lt;/p&gt;
 
-## The “Portable” Flink Runner in Beam
+&lt;h2 id=&quot;the-portable-flink-runner-in-beam&quot;&gt;The “Portable” Flink Runner in Beam&lt;/h2&gt;
 
-The portable Flink Runner in Beam is the evolution of the classic Runner. Classic Runners are tied to the JVM ecosystem, but the Beam community wanted to move past this and also execute Python, Go and other languages. This adds another dimension to Beam in terms of portability because, like previously mentioned, Beam already had portability across execution engines. It was necessary to change the translation logic of the Runner to be able to support language portability.
+&lt;p&gt;The portable Flink Runner in Beam is the evolution of the classic Runner. Classic Runners are tied to the JVM ecosystem, but the Beam community wanted to move past this and also execute Python, Go and other languages. This adds another dimension to Beam in terms of portability because, like previously mentioned, Beam already had portability across execution engines. It was necessary to change the translation logic of the Runner to be able to support language portability.&lt;/p&gt;
 
-There are two important building blocks for portable Runners: 
+&lt;p&gt;There are two important building blocks for portable Runners:&lt;/p&gt;
 
-1. A common pipeline format across all the languages: The Runner API
-2. A common interface during execution for the communication between the Runner and the code written in any language: The Fn API
+&lt;ol&gt;
+  &lt;li&gt;A common pipeline format across all the languages: The Runner API&lt;/li&gt;
+  &lt;li&gt;A common interface during execution for the communication between the Runner and the code written in any language: The Fn API&lt;/li&gt;
+&lt;/ol&gt;
 
-The Runner API provides a universal representation of the pipeline as Protobuf which contains the transforms, types, and user code. Protobuf was chosen as the format because every language has libraries available for it. Similarly, for the execution part, Beam introduced the Fn API interface to handle the communication between the Runner/execution engine and the user code that may be written in a different language and executes in a different process. Fn API is pronounced &quot;fun API&q [...]
+&lt;p&gt;The Runner API provides a universal representation of the pipeline as Protobuf which contains the transforms, types, and user code. Protobuf was chosen as the format because every language has libraries available for it. Similarly, for the execution part, Beam introduced the Fn API interface to handle the communication between the Runner/execution engine and the user code that may be written in a different language and executes in a different process. Fn API is pronounced “fun A [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-language-portability.png&quot; width=&quot;600px&quot; alt=&quot;Language Portability in Apache Beam&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-language-portability.png&quot; width=&quot;600px&quot; alt=&quot;Language Portability in Apache Beam&quot; /&gt;
 &lt;/center&gt;
 
+&lt;h2 id=&quot;how-are-beam-programs-translated-in-language-portability&quot;&gt;How Are Beam Programs Translated In Language Portability?&lt;/h2&gt;
 
-## How Are Beam Programs Translated In Language Portability?
+&lt;p&gt;Users write their Beam pipelines in one language, but they may get executed in an environment based on a completely different language. How does that work? To explain that, let’s follow the lifecycle of a pipeline. Let’s suppose we use the Python SDK to write the pipeline. Before submitting the pipeline via the Job API to Beam’s JobServer, Beam would convert it to the Runner API, the language-agnostic format we described before. The JobServer is also a Beam component that handle [...]
 
-Users write their Beam pipelines in one language, but they may get executed in an environment based on a completely different language. How does that work? To explain that, let&#39;s follow the lifecycle of a pipeline. Let&#39;s suppose we use the Python SDK to write the pipeline. Before submitting the pipeline via the Job API to Beam&#39;s JobServer, Beam would convert it to the Runner API, the language-agnostic format we described before. The JobServer is also a Beam component that han [...]
-
-- Docker-based (the default)
-- Process-based (a simple process is started)
-- Externally-provided (K8s or other schedulers)
-- Embedded (intended for testing and only works with Java)
+&lt;ul&gt;
+  &lt;li&gt;Docker-based (the default)&lt;/li&gt;
+  &lt;li&gt;Process-based (a simple process is started)&lt;/li&gt;
+  &lt;li&gt;Externally-provided (K8s or other schedulers)&lt;/li&gt;
+  &lt;li&gt;Embedded (intended for testing and only works with Java)&lt;/li&gt;
+&lt;/ul&gt;
 
-Environments hold the _SDK Harness_ which is the code that handles the execution and the communication with the Runner over the Fn API. For example, when Flink executes Python code, it sends the data to the Python environment containing the Python SDK Harness. Sending data to an external process involves a minor overhead which we have measured to be 5-10% slower than the classic Java pipelines. However, Beam uses a fusion of transforms to execute as many transforms as possible in the sam [...]
+&lt;p&gt;Environments hold the &lt;em&gt;SDK Harness&lt;/em&gt; which is the code that handles the execution and the communication with the Runner over the Fn API. For example, when Flink executes Python code, it sends the data to the Python environment containing the Python SDK Harness. Sending data to an external process involves a minor overhead which we have measured to be 5-10% slower than the classic Java pipelines. However, Beam uses a fusion of transforms to execute as many trans [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-language-portability-architecture.png&quot; width=&quot;600px&quot; alt=&quot;Language Portability Architecture in beam&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-02-22-beam-on-flink/flink-runner-beam-language-portability-architecture.png&quot; width=&quot;600px&quot; alt=&quot;Language Portability Architecture in beam&quot; /&gt;
 &lt;/center&gt;
 
+&lt;p&gt;Environments can be present for many languages. This opens up an entirely new type of pipelines: cross-language pipelines. In cross-language pipelines we can combine transforms of two or more languages, e.g. a machine learning pipeline with the feature generation written in Java and the learning written in Python. All this can be run on top of Flink.&lt;/p&gt;
 
-Environments can be present for many languages. This opens up an entirely new type of pipelines: cross-language pipelines. In cross-language pipelines we can combine transforms of two or more languages, e.g. a machine learning pipeline with the feature generation written in Java and the learning written in Python. All this can be run on top of Flink.
-
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
 
-## Conclusion
+&lt;p&gt;Using Apache Beam with Apache Flink combines  (a.) the power of Flink with (b.) the flexibility of Beam. All it takes to run Beam is a Flink cluster, which you may already have. Apache Beam’s fully-fledged Python API is probably the most compelling argument for using Beam with Flink, but the unified API which allows to “write-once” and “execute-anywhere” is also very appealing to Beam users. On top of this, features like side inputs and a rich connector ecosystem are also reason [...]
 
-Using Apache Beam with Apache Flink combines  (a.) the power of Flink with (b.) the flexibility of Beam. All it takes to run Beam is a Flink cluster, which you may already have. Apache Beam&#39;s fully-fledged Python API is probably the most compelling argument for using Beam with Flink, but the unified API which allows to &quot;write-once&quot; and &quot;execute-anywhere&quot; is also very appealing to Beam users. On top of this, features like side inputs and a rich connector ecosystem  [...]
+&lt;p&gt;With the introduction of schemas, a new format for handling type information, Beam is heading in a similar direction as Flink with its type system which is essential for the Table API or SQL. Speaking of, the next Flink release will include a Python version of the Table API which is based on the language portability of Beam. Looking ahead, the Beam community plans to extend the support for interactive programs like notebooks. TFX, which is built with Beam, is a very powerful way [...]
 
-With the introduction of schemas, a new format for handling type information, Beam is heading in a similar direction as Flink with its type system which is essential for the Table API or SQL. Speaking of, the next Flink release will include a Python version of the Table API which is based on the language portability of Beam. Looking ahead, the Beam community plans to extend the support for interactive programs like notebooks. TFX, which is built with Beam, is a very powerful way to solve [...]
-
-For many years, Beam and Flink have inspired and learned from each other. With the Python support being based on Beam in Flink, they only seem to come closer to each other. That&#39;s all the better for the community, and also users have more options and functionality to choose from.
+&lt;p&gt;For many years, Beam and Flink have inspired and learned from each other. With the Python support being based on Beam in Flink, they only seem to come closer to each other. That’s all the better for the community, and also users have more options and functionality to choose from.&lt;/p&gt;
 </description>
-<pubDate>Sat, 22 Feb 2020 12:00:00 +0000</pubDate>
+<pubDate>Sat, 22 Feb 2020 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/ecosystem/2020/02/22/apache-beam-how-beam-runs-on-top-of-flink.html</link>
 <guid isPermaLink="true">/ecosystem/2020/02/22/apache-beam-how-beam-runs-on-top-of-flink.html</guid>
 </item>
 
 <item>
 <title>No Java Required: Configuring Sources and Sinks in SQL</title>
-<description># Introduction
+<description>&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;
 
-The recent [Apache Flink 1.10 release](https://flink.apache.org/news/2020/02/11/release-1.10.0.html) includes many exciting features.
-In particular, it marks the end of the community&#39;s year-long effort to merge in the [Blink SQL contribution](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html) from Alibaba.
+&lt;p&gt;The recent &lt;a href=&quot;https://flink.apache.org/news/2020/02/11/release-1.10.0.html&quot;&gt;Apache Flink 1.10 release&lt;/a&gt; includes many exciting features.
+In particular, it marks the end of the community’s year-long effort to merge in the &lt;a href=&quot;https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;Blink SQL contribution&lt;/a&gt; from Alibaba.
 The reason the community chose to spend so much time on the contribution is that SQL works.
 It allows Flink to offer a truly unified interface over batch and streaming and makes stream processing accessible to a broad audience of developers and analysts.
-Best of all, Flink SQL is ANSI-SQL compliant, which means if you&#39;ve ever used a database in the past, you already know it[^1]!
+Best of all, Flink SQL is ANSI-SQL compliant, which means if you’ve ever used a database in the past, you already know it&lt;sup id=&quot;fnref:1&quot;&gt;&lt;a href=&quot;#fn:1&quot; class=&quot;footnote&quot;&gt;1&lt;/a&gt;&lt;/sup&gt;!&lt;/p&gt;
 
-A lot of work focused on improving runtime performance and progressively extending its coverage of the SQL standard.
+&lt;p&gt;A lot of work focused on improving runtime performance and progressively extending its coverage of the SQL standard.
 Flink now supports the full TPC-DS query set for batch queries, reflecting the readiness of its SQL engine to address the needs of modern data warehouse-like workloads.
-Its streaming SQL supports an almost equal set of features - those that are well defined on a streaming runtime - including [complex joins](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/streaming/joins.html) and [MATCH_RECOGNIZE](https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/match_recognize.html).
+Its streaming SQL supports an almost equal set of features - those that are well defined on a streaming runtime - including &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/streaming/joins.html&quot;&gt;complex joins&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/table/streaming/match_recognize.html&quot;&gt;MATCH_RECOGNIZE&lt;/a&gt;.&lt;/p&gt;
 
-As important as this work is, the community also strives to make these features generally accessible to the broadest audience possible.
-That is why the Flink community is excited in 1.10 to offer production-ready DDL syntax (e.g., `CREATE TABLE`, `DROP TABLE`) and a refactored catalog interface.
+&lt;p&gt;As important as this work is, the community also strives to make these features generally accessible to the broadest audience possible.
+That is why the Flink community is excited in 1.10 to offer production-ready DDL syntax (e.g., &lt;code&gt;CREATE TABLE&lt;/code&gt;, &lt;code&gt;DROP TABLE&lt;/code&gt;) and a refactored catalog interface.&lt;/p&gt;
 
-# Accessing Your Data Where It Lives
+&lt;h1 id=&quot;accessing-your-data-where-it-lives&quot;&gt;Accessing Your Data Where It Lives&lt;/h1&gt;
 
-Flink does not store data at rest; it is a compute engine and requires other systems to consume input from and write its output.
-Those that have used Flink&#39;s `DataStream` API in the past will be familiar with connectors that allow for interacting with external systems. 
-Flink has a vast connector ecosystem that includes all major message queues, filesystems, and databases.
+&lt;p&gt;Flink does not store data at rest; it is a compute engine and requires other systems to consume input from and write its output.
+Those that have used Flink’s &lt;code&gt;DataStream&lt;/code&gt; API in the past will be familiar with connectors that allow for interacting with external systems. 
+Flink has a vast connector ecosystem that includes all major message queues, filesystems, and databases.&lt;/p&gt;
 
 &lt;div class=&quot;alert alert-info&quot;&gt;
 If your favorite system does not have a connector maintained in the central Apache Flink repository, check out the &lt;a href=&quot;https://flink-packages.org&quot;&gt;flink packages website&lt;/a&gt;, which has a growing number of community-maintained components.
 &lt;/div&gt;
 
-While these connectors are battle-tested and production-ready, they are written in Java and configured in code, which means they are not amenable to pure SQL or Table applications.
-For a holistic SQL experience, not only queries need to be written in SQL, but also table definitions. 
+&lt;p&gt;While these connectors are battle-tested and production-ready, they are written in Java and configured in code, which means they are not amenable to pure SQL or Table applications.
+For a holistic SQL experience, not only the queries but also the table definitions need to be written in SQL.&lt;/p&gt;
 
-# CREATE TABLE Statements
+&lt;h1 id=&quot;create-table-statements&quot;&gt;CREATE TABLE Statements&lt;/h1&gt;
 
-While Flink SQL has long provided table abstractions atop some of Flink&#39;s most popular connectors, configurations were not always so straightforward.
-Beginning in 1.10, Flink supports defining tables through `CREATE TABLE` statements.
-With this feature, users can now create logical tables, backed by various external systems, in pure SQL. 
+&lt;p&gt;While Flink SQL has long provided table abstractions atop some of Flink’s most popular connectors, configurations were not always so straightforward.
+Beginning in 1.10, Flink supports defining tables through &lt;code&gt;CREATE TABLE&lt;/code&gt; statements.
+With this feature, users can now create logical tables, backed by various external systems, in pure SQL.&lt;/p&gt;
 
-By defining tables in SQL, developers can write queries against logical schemas that are abstracted away from the underlying physical data store. Coupled with Flink SQL&#39;s unified approach to batch and stream processing, Flink provides a straight line from discovery to production.
+&lt;p&gt;By defining tables in SQL, developers can write queries against logical schemas that are abstracted away from the underlying physical data store. Coupled with Flink SQL’s unified approach to batch and stream processing, Flink provides a straight line from discovery to production.&lt;/p&gt;
 
-Users can define tables over static data sets, anything from a local CSV file to a full-fledged data lake or even Hive.
-Leveraging Flink&#39;s efficient batch processing capabilities, they can perform ad-hoc queries searching for exciting insights.
+&lt;p&gt;Users can define tables over static data sets, anything from a local CSV file to a full-fledged data lake or even Hive.
+Leveraging Flink’s efficient batch processing capabilities, they can perform ad-hoc queries searching for exciting insights.
 Once something interesting is identified, businesses can gain real-time and continuous insights by merely altering the table so that it is powered by a message queue such as Kafka.
-Because Flink guarantees SQL queries have unified semantics over batch and streaming, users can be confident that redeploying this query as a continuous streaming application over a message queue will output identical results.
-
-{% highlight sql %}
--- Define a table called orders that is backed by a Kafka topic
--- The definition includes all relevant Kafka properties,
--- the underlying format (JSON) and even defines a
--- watermarking algorithm based on one of the fields
--- so that this table can be used with event time.
-CREATE TABLE orders (
-	user_id    BIGINT,
-	product    STRING,
-	order_time TIMESTAMP(3),
-	WATERMARK FOR order_time AS order_time - &#39;5&#39; SECONDS
-) WITH (
-	&#39;connector.type&#39;    	 = &#39;kafka&#39;,
-	&#39;connector.version&#39; 	 = &#39;universal&#39;,
-	&#39;connector.topic&#39;   	 = &#39;orders&#39;,
-	&#39;connector.startup-mode&#39; = &#39;earliest-offset&#39;,
-	&#39;connector.properties.bootstrap.servers&#39; = &#39;localhost:9092&#39;,
-	&#39;format.type&#39; = &#39;json&#39; 
-);
-
--- Define a table called product_analysis
--- on top of ElasticSearch 7 where we 
--- can write the results of our query. 
-CREATE TABLE product_analysis (
-	product 	STRING,
-	tracking_time 	TIMESTAMP(3),
-	units_sold 	BIGINT
-) WITH (
-	&#39;connector.type&#39;    = &#39;elasticsearch&#39;,
-	&#39;connector.version&#39; = &#39;7&#39;,
-	&#39;connector.hosts&#39;   = &#39;localhost:9200&#39;,
-	&#39;connector.index&#39;   = &#39;ProductAnalysis&#39;,
-	&#39;connector.document.type&#39; = &#39;analysis&#39; 
-);
-
--- A simple query that analyzes order data
--- from Kafka and writes results into 
--- ElasticSearch. 
-INSERT INTO product_analysis
-SELECT
-	product_id,
-	TUMBLE_START(order_time, INTERVAL &#39;1&#39; DAY) as tracking_time,
-	COUNT(*) as units_sold
-FROM orders
-GROUP BY
-	product_id,
-	TUMBLE(order_time, INTERVAL &#39;1&#39; DAY);
-{% endhighlight %}
-
-# Catalogs
-
-While being able to create tables is important, it often isn&#39;t enough.
-A business analyst, for example, shouldn&#39;t have to know what properties to set for Kafka, or even have to know what the underlying data source is, to be able to write a query.
-
-To solve this problem, Flink 1.10 also ships with a revamped catalog system for managing metadata about tables and user definined functions.
+Because Flink guarantees SQL queries have unified semantics over batch and streaming, users can be confident that redeploying this query as a continuous streaming application over a message queue will output identical results.&lt;/p&gt;
+
+&lt;figure class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot; data-lang=&quot;sql&quot;&gt;&lt;span class=&quot;c1&quot;&gt;-- Define a table called orders that is backed by a Kafka topic&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- The definition includes all relevant Kafka properties,&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- the underlying format (JSON) and even defines a&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- watermarking algorithm based on one of the fields&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- so that this table can be used with event time.&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;orders&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;user_id&lt;/span&gt;    &lt;span class=&quot;nb&quot;&gt;BIGINT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;product&lt;/span&gt;    &lt;span class=&quot;n&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;order_time&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TIMESTAMP&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;WATERMARK&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;FOR&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;order_time&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;AS&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;order_time&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;5&amp;#39;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;SECONDS&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;WITH&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.type&amp;#39;&lt;/span&gt;    	 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;kafka&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.version&amp;#39;&lt;/span&gt; 	 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;universal&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.topic&amp;#39;&lt;/span&gt;   	 &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;orders&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.startup-mode&amp;#39;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;earliest-offset&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.properties.bootstrap.servers&amp;#39;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;localhost:9092&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;format.type&amp;#39;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;json&amp;#39;&lt;/span&gt; 
+&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;-- Define a table called product_analysis&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- on top of ElasticSearch 7 where we &lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- can write the results of our query. &lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;product_analysis&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;product&lt;/span&gt; 	&lt;span class=&quot;n&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;tracking_time&lt;/span&gt; 	&lt;span class=&quot;k&quot;&gt;TIMESTAMP&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;3&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;),&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;units_sold&lt;/span&gt; 	&lt;span class=&quot;nb&quot;&gt;BIGINT&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;WITH&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.type&amp;#39;&lt;/span&gt;    &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;elasticsearch&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.version&amp;#39;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;7&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.hosts&amp;#39;&lt;/span&gt;   &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;localhost:9200&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.index&amp;#39;&lt;/span&gt;   &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;ProductAnalysis&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;s1&quot;&gt;&amp;#39;connector.document.type&amp;#39;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;analysis&amp;#39;&lt;/span&gt; 
+&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;-- A simple query that analyzes order data&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- from Kafka and writes results into &lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;-- ElasticSearch. &lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;INSERT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INTO&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;product_analysis&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;SELECT&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;product&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;TUMBLE_START&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;order_time&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;INTERVAL&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;1&amp;#39;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DAY&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n [...]
+	&lt;span class=&quot;k&quot;&gt;COUNT&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;*&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;units_sold&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;orders&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;GROUP&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;BY&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;product_id&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+	&lt;span class=&quot;n&quot;&gt;TUMBLE&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;order_time&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nb&quot;&gt;INTERVAL&lt;/span&gt; &lt;span class=&quot;s1&quot;&gt;&amp;#39;1&amp;#39;&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;DAY&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/figure&gt;
+
+&lt;h1 id=&quot;catalogs&quot;&gt;Catalogs&lt;/h1&gt;
+
+&lt;p&gt;While being able to create tables is important, it often isn’t enough.
+A business analyst, for example, shouldn’t have to know what properties to set for Kafka, or even have to know what the underlying data source is, to be able to write a query.&lt;/p&gt;
+
+&lt;p&gt;To solve this problem, Flink 1.10 also ships with a revamped catalog system for managing metadata about tables and user-defined functions.
 With catalogs, users can create tables once and reuse them across Jobs and Sessions.
-Now, the team managing a data set can create a table and immediately make it accessible to other groups within their organization.
+Now, the team managing a data set can create a table and immediately make it accessible to other groups within their organization.&lt;/p&gt;
 
-The most notable catalog that Flink integrates with today is Hive Metastore.
+&lt;p&gt;The most notable catalog that Flink integrates with today is Hive Metastore.
 The Hive catalog allows Flink to fully interoperate with Hive and serve as a more efficient query engine.
-Flink supports reading and writing Hive tables, using Hive UDFs, and even leveraging Hive&#39;s metastore catalog to persist Flink specific metadata.
+Flink supports reading and writing Hive tables, using Hive UDFs, and even leveraging Hive’s metastore catalog to persist Flink-specific metadata.&lt;/p&gt;
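+
+&lt;p&gt;As a purely hypothetical sketch (the catalog name &lt;code&gt;hive&lt;/code&gt; and its configuration are assumptions, not something defined in this post), reusing tables registered in a Hive catalog from the SQL Client might look like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;-- Hypothetical sketch: assumes a Hive catalog named 'hive' has been
+-- configured for the SQL Client. Tables created in it once are visible
+-- to every job and session afterwards.
+USE CATALOG hive;
+SHOW TABLES;
+-- e.g. the orders table from above, assuming it was created in this catalog
+SELECT * FROM orders;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;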
 
-# Looking Ahead
+&lt;h1 id=&quot;looking-ahead&quot;&gt;Looking Ahead&lt;/h1&gt;
 
-Flink SQL has made enormous strides to democratize stream processing, and 1.10 marks a significant milestone in that development.
+&lt;p&gt;Flink SQL has made enormous strides to democratize stream processing, and 1.10 marks a significant milestone in that development.
 However, we are not ones to rest on our laurels, and the community is committed to raising the bar on standards while lowering the barriers to entry.
 The community is looking to add more catalogs, such as JDBC and Apache Pulsar.
-We encourage you to sign up for the [mailing list](https://flink.apache.org/community.html) and stay on top of the announcements and new features in upcoming releases.
+We encourage you to sign up for the &lt;a href=&quot;https://flink.apache.org/community.html&quot;&gt;mailing list&lt;/a&gt; and stay on top of the announcements and new features in upcoming releases.&lt;/p&gt;
 
----
+&lt;hr /&gt;
 
-[^1]: My colleague Timo, who has worked on Flink SQL from the beginning, has the entire SQL standard printed on his desk and references it before any changes are merged. It&#39;s enormous.
+&lt;div class=&quot;footnotes&quot;&gt;
+  &lt;ol&gt;
+    &lt;li id=&quot;fn:1&quot;&gt;
+      &lt;p&gt;My colleague Timo, who has worked on Flink SQL from the beginning, has the entire SQL standard printed on his desk and references it before any changes are merged. It’s enormous. &lt;a href=&quot;#fnref:1&quot; class=&quot;reversefootnote&quot;&gt;&amp;#8617;&lt;/a&gt;&lt;/p&gt;
+    &lt;/li&gt;
+  &lt;/ol&gt;
+&lt;/div&gt;
 </description>
-<pubDate>Thu, 20 Feb 2020 12:00:00 +0000</pubDate>
+<pubDate>Thu, 20 Feb 2020 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2020/02/20/ddl.html</link>
 <guid isPermaLink="true">/news/2020/02/20/ddl.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.10.0 Release Announcement</title>
-<description>The Apache Flink community is excited to hit the double digits and announce the release of Flink 1.10.0! As a result of the biggest community effort to date, with over 1.2k issues implemented and more than 200 contributors, this release introduces significant improvements to the overall performance and stability of Flink jobs, a preview of native Kubernetes integration and great advances in Python support (PyFlink). 
-
-Flink 1.10 also marks the completion of the [Blink integration](https://flink.apache.org/news/2019/08/22/release-1.9.0.html#preview-of-the-new-blink-sql-query-processor), hardening streaming SQL and bringing mature batch processing to Flink with production-ready Hive integration and TPC-DS coverage. This blog post describes all major new features and improvements, important changes to be aware of and what to expect moving forward.
-
-{% toc %}
-
-The binary distribution and source artifacts are now available on the updated [Downloads page]({{ site.baseurl }}/downloads.html) of the Flink website. For more details, check the complete [release changelog](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;version=12345845) and the [updated documentation]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/). We encourage you to download the release and share your feedback with the community through the [Flink m [...]
-
-
-## New Features and Improvements
+<description>&lt;p&gt;The Apache Flink community is excited to hit the double digits and announce the release of Flink 1.10.0! As a result of the biggest community effort to date, with over 1.2k issues implemented and more than 200 contributors, this release introduces significant improvements to the overall performance and stability of Flink jobs, a preview of native Kubernetes integration and great advances in Python support (PyFlink).&lt;/p&gt;
+
+&lt;p&gt;Flink 1.10 also marks the completion of the &lt;a href=&quot;https://flink.apache.org/news/2019/08/22/release-1.9.0.html#preview-of-the-new-blink-sql-query-processor&quot;&gt;Blink integration&lt;/a&gt;, hardening streaming SQL and bringing mature batch processing to Flink with production-ready Hive integration and TPC-DS coverage. This blog post describes all major new features and improvements, important changes to be aware of and what to expect moving forward.&lt;/p&gt;
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#new-features-and-improvements&quot; id=&quot;markdown-toc-new-features-and-improvements&quot;&gt;New Features and Improvements&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#improved-memory-management-and-configuration&quot; id=&quot;markdown-toc-improved-memory-management-and-configuration&quot;&gt;Improved Memory Management and Configuration&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#unified-logic-for-job-submission&quot; id=&quot;markdown-toc-unified-logic-for-job-submission&quot;&gt;Unified Logic for Job Submission&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#native-kubernetes-integration-beta&quot; id=&quot;markdown-toc-native-kubernetes-integration-beta&quot;&gt;Native Kubernetes Integration (Beta)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#table-apisql-production-ready-hive-integration&quot; id=&quot;markdown-toc-table-apisql-production-ready-hive-integration&quot;&gt;Table API/SQL: Production-ready Hive Integration&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#other-improvements-to-the-table-apisql&quot; id=&quot;markdown-toc-other-improvements-to-the-table-apisql&quot;&gt;Other Improvements to the Table API/SQL&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#pyflink-support-for-native-user-defined-functions-udfs&quot; id=&quot;markdown-toc-pyflink-support-for-native-user-defined-functions-udfs&quot;&gt;PyFlink: Support for Native User Defined Functions (UDFs)&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#important-changes&quot; id=&quot;markdown-toc-important-changes&quot;&gt;Important Changes&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#release-notes&quot; id=&quot;markdown-toc-release-notes&quot;&gt;Release Notes&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#list-of-contributors&quot; id=&quot;markdown-toc-list-of-contributors&quot;&gt;List of Contributors&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
+&lt;/div&gt;
 
-### Improved Memory Management and Configuration
+&lt;p&gt;The binary distribution and source artifacts are now available on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt; of the Flink website. For more details, check the complete &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;amp;version=12345845&quot;&gt;release changelog&lt;/a&gt; and the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/&quot;&gt;updated documentation&lt;/a&gt [...]
 
-The current `TaskExecutor` memory configuration in Flink has some shortcomings that make it hard to reason about or optimize resource utilization, such as: 
+&lt;h2 id=&quot;new-features-and-improvements&quot;&gt;New Features and Improvements&lt;/h2&gt;
 
-* Different configuration models for memory footprint in Streaming and Batch execution; 
+&lt;h3 id=&quot;improved-memory-management-and-configuration&quot;&gt;Improved Memory Management and Configuration&lt;/h3&gt;
 
-* Complex and user-dependent configuration of off-heap state backends (i.e. RocksDB) in Streaming execution.
+&lt;p&gt;The current &lt;code&gt;TaskExecutor&lt;/code&gt; memory configuration in Flink has some shortcomings that make it hard to reason about or optimize resource utilization, such as:&lt;/p&gt;
 
-To make memory options more explicit and intuitive to users, Flink 1.10 introduces significant changes to the `TaskExecutor` memory model and configuration logic ([FLIP-49](https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors)). These changes make Flink more adaptable to all kinds of deployment environments (e.g. Kubernetes, Yarn, Mesos), giving users strict control over its memory consumption.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Different configuration models for memory footprint in Streaming and Batch execution;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Complex and user-dependent configuration of off-heap state backends (i.e. RocksDB) in Streaming execution.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-**Managed Memory Extension**
+&lt;p&gt;To make memory options more explicit and intuitive to users, Flink 1.10 introduces significant changes to the &lt;code&gt;TaskExecutor&lt;/code&gt; memory model and configuration logic (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-49%3A+Unified+Memory+Configuration+for+TaskExecutors&quot;&gt;FLIP-49&lt;/a&gt;). These changes make Flink more adaptable to all kinds of deployment environments (e.g. Kubernetes, Yarn, Mesos), giving users strict control ove [...]
 
-Managed memory was extended to also account for memory usage of `RocksDBStateBackend`. While batch jobs can use either on-heap or off-heap memory, streaming jobs with `RocksDBStateBackend` can use off-heap memory only. Therefore, to allow users to switch between Streaming and Batch execution without having to modify cluster configurations, managed memory is now always off-heap.
+&lt;p&gt;&lt;strong&gt;Managed Memory Extension&lt;/strong&gt;&lt;/p&gt;
 
-**Simplified RocksDB Configuration**
+&lt;p&gt;Managed memory was extended to also account for memory usage of &lt;code&gt;RocksDBStateBackend&lt;/code&gt;. While batch jobs can use either on-heap or off-heap memory, streaming jobs with &lt;code&gt;RocksDBStateBackend&lt;/code&gt; can use off-heap memory only. Therefore, to allow users to switch between Streaming and Batch execution without having to modify cluster configurations, managed memory is now always off-heap.&lt;/p&gt;
 
-Configuring an off-heap state backend like RocksDB used to involve a good deal of manual tuning, like decreasing the JVM heap size or setting Flink to use off-heap memory. This can now be achieved through Flink&#39;s out-of-box configuration, and adjusting the memory budget for `RocksDBStateBackend` is as simple as resizing the managed memory size. 
+&lt;p&gt;&lt;strong&gt;Simplified RocksDB Configuration&lt;/strong&gt;&lt;/p&gt;
 
-Another important improvement was to allow Flink to bind RocksDB native memory usage ([FLINK-7289](https://issues.apache.org/jira/browse/FLINK-7289)), preventing it from exceeding its total memory budget — this is especially relevant in containerized environments like Kubernetes. For details on how to enable and tune this feature, refer to [Tuning RocksDB]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/ops/state/large_state_tuning.html#tuning-rocksdb).
+&lt;p&gt;Configuring an off-heap state backend like RocksDB used to involve a good deal of manual tuning, like decreasing the JVM heap size or setting Flink to use off-heap memory. This can now be achieved through Flink’s out-of-the-box configuration, and adjusting the memory budget for &lt;code&gt;RocksDBStateBackend&lt;/code&gt; is as simple as resizing the managed memory size.&lt;/p&gt;
 
-&lt;span class=&quot;label label-danger&quot;&gt;Note&lt;/span&gt; FLIP-49 changes the process of cluster resource configuration, which may require tuning your clusters for upgrades from previous Flink versions. For a comprehensive overview of the changes introduced and tuning guidance, consult [this setup]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/ops/memory/mem_setup.html).
+&lt;p&gt;Another important improvement was to allow Flink to bind RocksDB native memory usage (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-7289&quot;&gt;FLINK-7289&lt;/a&gt;), preventing it from exceeding its total memory budget — this is especially relevant in containerized environments like Kubernetes. For details on how to enable and tune this feature, refer to &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/state/large_state_tuning.h [...]
 
+&lt;p&gt;&lt;span class=&quot;label label-danger&quot;&gt;Note&lt;/span&gt; FLIP-49 changes the process of cluster resource configuration, which may require tuning your clusters for upgrades from previous Flink versions. For a comprehensive overview of the changes introduced and tuning guidance, consult &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/memory/mem_setup.html&quot;&gt;this setup&lt;/a&gt;.&lt;/p&gt;
 
-### Unified Logic for Job Submission
+&lt;h3 id=&quot;unified-logic-for-job-submission&quot;&gt;Unified Logic for Job Submission&lt;/h3&gt;
 
-Prior to this release, job submission was part of the duties of the Execution Environments and closely tied to the different deployment targets (e.g. Yarn, Kubernetes, Mesos). This led to a poor separation of concerns and, over time, to a growing number of customized environments that users needed to configure and manage separately.
+&lt;p&gt;Prior to this release, job submission was part of the duties of the Execution Environments and closely tied to the different deployment targets (e.g. Yarn, Kubernetes, Mesos). This led to a poor separation of concerns and, over time, to a growing number of customized environments that users needed to configure and manage separately.&lt;/p&gt;
 
-In Flink 1.10, job submission logic is abstracted into the generic `Executor` interface ([FLIP-73](https://cwiki.apache.org/confluence/display/FLINK/FLIP-73%3A+Introducing+Executors+for+job+submission)). The addition of the `ExecutorCLI` ([FLIP-81](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=133631524)) introduces a unified way to specify configuration parameters for **any** [execution target]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/ops/cli.html#deployment-ta [...]
+&lt;p&gt;In Flink 1.10, job submission logic is abstracted into the generic &lt;code&gt;Executor&lt;/code&gt; interface (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-73%3A+Introducing+Executors+for+job+submission&quot;&gt;FLIP-73&lt;/a&gt;). The addition of the &lt;code&gt;ExecutorCLI&lt;/code&gt; (&lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=133631524&quot;&gt;FLIP-81&lt;/a&gt;) introduces a unified way to specify configura [...]
 
-&lt;span&gt;
+&lt;p&gt;&lt;span&gt;
 	&lt;center&gt;
-	&lt;img vspace=&quot;8&quot; style=&quot;width:100%&quot; src=&quot;{{site.baseurl}}/img/blog/2020-02-11-release-1.10.0/flink_1.10_zeppelin.png&quot; /&gt;
+	&lt;img vspace=&quot;8&quot; style=&quot;width:100%&quot; src=&quot;/img/blog/2020-02-11-release-1.10.0/flink_1.10_zeppelin.png&quot; /&gt;
 	&lt;/center&gt;
-&lt;/span&gt;
+&lt;/span&gt;&lt;/p&gt;
 
-In particular, these changes make it much easier to programmatically use Flink in downstream frameworks — for example, Apache Beam or Zeppelin interactive notebooks — by providing users with a unified entry point to Flink. For users working with Flink across multiple target environments, the transition to a configuration-based execution process also significantly reduces boilerplate code and maintainability overhead.
+&lt;p&gt;In particular, these changes make it much easier to programmatically use Flink in downstream frameworks — for example, Apache Beam or Zeppelin interactive notebooks — by providing users with a unified entry point to Flink. For users working with Flink across multiple target environments, the transition to a configuration-based execution process also significantly reduces boilerplate code and maintainability overhead.&lt;/p&gt;
 
-### Native Kubernetes Integration (Beta)
+&lt;h3 id=&quot;native-kubernetes-integration-beta&quot;&gt;Native Kubernetes Integration (Beta)&lt;/h3&gt;
 
-For users looking to get started with Flink on a containerized environment, deploying and managing a standalone cluster on top of Kubernetes requires some upfront knowledge about containers, operators and environment-specific tools like `kubectl`.
+&lt;p&gt;For users looking to get started with Flink on a containerized environment, deploying and managing a standalone cluster on top of Kubernetes requires some upfront knowledge about containers, operators and environment-specific tools like &lt;code&gt;kubectl&lt;/code&gt;.&lt;/p&gt;
 
-In Flink 1.10, we rolled out the first phase of **Active Kubernetes Integration** ([FLINK-9953](https://jira.apache.org/jira/browse/FLINK-9953)) with support for session clusters (with per-job planned). In this context, “active” means that Flink’s ResourceManager (`K8sResMngr`) natively communicates with Kubernetes to allocate new pods on-demand, similar to Flink’s Yarn and Mesos integration. Users can also leverage namespaces to launch Flink clusters for multi-tenant environments with l [...]
+&lt;p&gt;In Flink 1.10, we rolled out the first phase of &lt;strong&gt;Active Kubernetes Integration&lt;/strong&gt; (&lt;a href=&quot;https://jira.apache.org/jira/browse/FLINK-9953&quot;&gt;FLINK-9953&lt;/a&gt;) with support for session clusters (with per-job planned). In this context, “active” means that Flink’s ResourceManager (&lt;code&gt;K8sResMngr&lt;/code&gt;) natively communicates with Kubernetes to allocate new pods on-demand, similar to Flink’s Yarn and Mesos integration. Users  [...]
 
-&lt;span&gt;
+&lt;p&gt;&lt;span&gt;
 	&lt;center&gt;
-	&lt;img vspace=&quot;8&quot; style=&quot;width:75%&quot; src=&quot;{{site.baseurl}}/img/blog/2020-02-11-release-1.10.0/flink_1.10_nativek8s.png&quot;/&gt;
+	&lt;img vspace=&quot;8&quot; style=&quot;width:75%&quot; src=&quot;/img/blog/2020-02-11-release-1.10.0/flink_1.10_nativek8s.png&quot; /&gt;
 	&lt;/center&gt;
-&lt;/span&gt;
+&lt;/span&gt;&lt;/p&gt;
 
-As introduced in [Unified Logic For Job Submission](#unified-logic-for-job-submission), all command-line options in Flink 1.10 are mapped to a unified configuration. For this reason, users can simply refer to the Kubernetes config options and submit a job to an existing Flink session on Kubernetes in the CLI using:
+&lt;p&gt;As introduced in &lt;a href=&quot;#unified-logic-for-job-submission&quot;&gt;Unified Logic For Job Submission&lt;/a&gt;, all command-line options in Flink 1.10 are mapped to a unified configuration. For this reason, users can simply refer to the Kubernetes config options and submit a job to an existing Flink session on Kubernetes in the CLI using:&lt;/p&gt;
 
-```bash
-./bin/flink run -d -e kubernetes-session -Dkubernetes.cluster-id=&lt;ClusterId&gt; examples/streaming/WindowJoin.jar
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;./bin/flink run -d -e kubernetes-session -Dkubernetes.cluster-id&lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;&amp;lt;ClusterId&amp;gt; examples/streaming/WindowJoin.jar&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-If you want to try out this preview feature, we encourage you to walk through the [Native Kubernetes setup]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/ops/deployment/native_kubernetes.html), play around with it and share feedback with the community.
+&lt;p&gt;If you want to try out this preview feature, we encourage you to walk through the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/deployment/native_kubernetes.html&quot;&gt;Native Kubernetes setup&lt;/a&gt;, play around with it and share feedback with the community.&lt;/p&gt;
 
-### Table API/SQL: Production-ready Hive Integration
+&lt;h3 id=&quot;table-apisql-production-ready-hive-integration&quot;&gt;Table API/SQL: Production-ready Hive Integration&lt;/h3&gt;
 
-Hive integration was announced as a preview feature in Flink 1.9. This preview allowed users to persist Flink-specific metadata (e.g. Kafka tables) in Hive Metastore using SQL DDL, call UDFs defined in Hive and use Flink for reading and writing Hive tables. Flink 1.10 rounds up this effort with further developments that bring production-ready Hive integration to Flink with full compatibility of [most Hive versions]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/dev/table/hive/#supported [...]
+&lt;p&gt;Hive integration was announced as a preview feature in Flink 1.9. This preview allowed users to persist Flink-specific metadata (e.g. Kafka tables) in Hive Metastore using SQL DDL, call UDFs defined in Hive and use Flink for reading and writing Hive tables. Flink 1.10 rounds up this effort with further developments that bring production-ready Hive integration to Flink with full compatibility of &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/tab [...]
 
-#### Native Partition Support for Batch SQL
+&lt;h4 id=&quot;native-partition-support-for-batch-sql&quot;&gt;Native Partition Support for Batch SQL&lt;/h4&gt;
 
-So far, only writes to non-partitioned Hive tables were supported. In Flink 1.10, the Flink SQL syntax has been extended with `INSERT OVERWRITE` and `PARTITION` ([FLIP-63](https://cwiki.apache.org/confluence/display/FLINK/FLIP-63%3A+Rework+table+partition+support)), enabling users to write into both static and dynamic partitions in Hive.
+&lt;p&gt;So far, only writes to non-partitioned Hive tables were supported. In Flink 1.10, the Flink SQL syntax has been extended with &lt;code&gt;INSERT OVERWRITE&lt;/code&gt; and &lt;code&gt;PARTITION&lt;/code&gt; (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-63%3A+Rework+table+partition+support&quot;&gt;FLIP-63&lt;/a&gt;), enabling users to write into both static and dynamic partitions in Hive.&lt;/p&gt;
 
-**Static Partition Writing**
+&lt;p&gt;&lt;strong&gt;Static Partition Writing&lt;/strong&gt;&lt;/p&gt;
 
-```sql
-INSERT { INTO | OVERWRITE } TABLE tablename1 [PARTITION (partcol1=val1, partcol2=val2 ...)] select_statement1 FROM from_statement;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;&lt;span class=&quot;k&quot;&gt;INSERT&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INTO&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;|&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;OVERWRITE&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tablename1&lt; [...]
 
-**Dynamic Partition Writing**
+&lt;p&gt;&lt;strong&gt;Dynamic Partition Writing&lt;/strong&gt;&lt;/p&gt;
 
-```sql
-INSERT { INTO | OVERWRITE } TABLE tablename1 select_statement1 FROM from_statement;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;&lt;span class=&quot;k&quot;&gt;INSERT&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;INTO&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;|&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;OVERWRITE&lt;/span&gt; &lt;span class=&quot;err&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tablename1&lt; [...]
 
-Fully supporting partitioned tables allows users to take advantage of partition pruning on read, which significantly increases the performance of these operations by reducing the amount of data that needs to be scanned.
+&lt;p&gt;Fully supporting partitioned tables allows users to take advantage of partition pruning on read, which significantly increases the performance of these operations by reducing the amount of data that needs to be scanned.&lt;/p&gt;
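+
+&lt;p&gt;As an illustrative sketch only (the &lt;code&gt;sales&lt;/code&gt; and &lt;code&gt;staging_sales&lt;/code&gt; tables and their columns are hypothetical, not part of this release announcement), a static partition write followed by a read that benefits from partition pruning could look like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;-- Hypothetical example: write into a fixed (static) partition of a Hive table.
+INSERT OVERWRITE TABLE sales PARTITION (region='EU', ds='2020-02-11')
+SELECT product_id, amount FROM staging_sales;
+
+-- A filter on the partition column allows partitions to be pruned on read,
+-- so only the matching partition is scanned.
+SELECT SUM(amount) FROM sales WHERE ds = '2020-02-11';&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;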
 
-#### Further Optimizations
+&lt;h4 id=&quot;further-optimizations&quot;&gt;Further Optimizations&lt;/h4&gt;
 
-Besides partition pruning, Flink 1.10 introduces more [read optimizations]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/dev/table/hive/read_write_hive.html#optimizations) to Hive integration, such as:
+&lt;p&gt;Besides partition pruning, Flink 1.10 introduces more &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/hive/read_write_hive.html#optimizations&quot;&gt;read optimizations&lt;/a&gt; to Hive integration, such as:&lt;/p&gt;
 
-* **Projection pushdown:** Flink leverages projection pushdown to minimize data transfer between Flink and Hive tables by omitting unnecessary fields from table scans. This is especially beneficial for tables with a large number of columns.
-
-* **LIMIT pushdown:** for queries with the `LIMIT` clause, Flink will limit the number of output records wherever possible to minimize the amount of data transferred across the network.
-
-* **ORC Vectorization on Read:** to boost read performance for ORC files, Flink now uses the native ORC Vectorized Reader by default for Hive versions above 2.0.0 and columns with non-complex data types.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Projection pushdown:&lt;/strong&gt; Flink leverages projection pushdown to minimize data transfer between Flink and Hive tables by omitting unnecessary fields from table scans. This is especially beneficial for tables with a large number of columns.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;LIMIT pushdown:&lt;/strong&gt; for queries with the &lt;code&gt;LIMIT&lt;/code&gt; clause, Flink will limit the number of output records wherever possible to minimize the amount of data transferred across the network.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;ORC Vectorization on Read:&lt;/strong&gt; to boost read performance for ORC files, Flink now uses the native ORC Vectorized Reader by default for Hive versions above 2.0.0 and columns with non-complex data types.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
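+
+&lt;p&gt;For illustration, a hypothetical query (the table and column names are not from this release announcement) that would benefit from the first two optimizations above:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;-- Only the two referenced columns are read from the Hive table (projection
+-- pushdown), and at most 100 records are produced (LIMIT pushdown).
+SELECT user_id, amount
+FROM transactions
+LIMIT 100;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;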
 
-#### Pluggable Modules as Flink System Objects (Beta)
+&lt;h4 id=&quot;pluggable-modules-as-flink-system-objects-beta&quot;&gt;Pluggable Modules as Flink System Objects (Beta)&lt;/h4&gt;
 
-Flink 1.10 introduces a generic mechanism for pluggable modules in the Flink table core, with a first focus on system functions ([FLIP-68](https://cwiki.apache.org/confluence/display/FLINK/FLIP-68%3A+Extend+Core+Table+System+with+Pluggable+Modules)). With modules, users can extend Flink’s system objects — for example use Hive built-in functions that behave like Flink system functions. This release ships with a pre-implemented `HiveModule`, supporting multiple Hive versions, but users are [...]
+&lt;p&gt;Flink 1.10 introduces a generic mechanism for pluggable modules in the Flink table core, with a first focus on system functions (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-68%3A+Extend+Core+Table+System+with+Pluggable+Modules&quot;&gt;FLIP-68&lt;/a&gt;). With modules, users can extend Flink’s system objects — for example use Hive built-in functions that behave like Flink system functions. This release ships with a pre-implemented &lt;code&gt;HiveModu [...]
 
-### Other Improvements to the Table API/SQL
+&lt;h3 id=&quot;other-improvements-to-the-table-apisql&quot;&gt;Other Improvements to the Table API/SQL&lt;/h3&gt;
 
-#### Watermarks and Computed Columns in SQL DDL
+&lt;h4 id=&quot;watermarks-and-computed-columns-in-sql-ddl&quot;&gt;Watermarks and Computed Columns in SQL DDL&lt;/h4&gt;
 
-Flink 1.10 supports stream-specific syntax extensions to define time attributes and watermark generation in Flink SQL DDL ([FLIP-66](https://cwiki.apache.org/confluence/display/FLINK/FLIP-66%3A+Support+Time+Attribute+in+SQL+DDL)). This allows time-based operations, like windowing, and the definition of [watermark strategies]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/dev/table/sql/create.html#create-table) on tables created using DDL statements.
+&lt;p&gt;Flink 1.10 supports stream-specific syntax extensions to define time attributes and watermark generation in Flink SQL DDL (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-66%3A+Support+Time+Attribute+in+SQL+DDL&quot;&gt;FLIP-66&lt;/a&gt;). This allows time-based operations, like windowing, and the definition of &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/sql/create.html#create-table&quot;&gt;watermark strategies [...]
 
-```sql
-CREATE TABLE table_name (
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;TABLE&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;table_name&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
 
-  WATERMARK FOR columnName AS &lt;watermark_strategy_expression&gt;
+  &lt;span class=&quot;n&quot;&gt;WATERMARK&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;FOR&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;columnName&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;AS&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;watermark_strategy_expression&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt;
 
-) WITH (
-  ...
-)
-```
+&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;WITH&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;
+  &lt;span class=&quot;p&quot;&gt;...&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-This release also introduces support for virtual computed columns ([FLIP-70](https://cwiki.apache.org/confluence/display/FLINK/FLIP-70%3A+Flink+SQL+Computed+Column+Design)) that can be derived based on other columns in the same table or deterministic expressions (i.e. literal values, UDFs and built-in functions). In Flink, computed columns are useful to define time attributes [upon table creation]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/dev/table/sql/create.html#create-table).
+&lt;p&gt;This release also introduces support for virtual computed columns (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-70%3A+Flink+SQL+Computed+Column+Design&quot;&gt;FLIP-70&lt;/a&gt;) that can be derived based on other columns in the same table or deterministic expressions (i.e. literal values, UDFs and built-in functions). In Flink, computed columns are useful to define time attributes &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-releas [...]
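+
+&lt;p&gt;As a sketch only (the table, its columns and the chosen watermark strategy below are hypothetical, and the connector properties are elided), a DDL statement that combines a virtual computed column with a watermark declaration might look like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;CREATE TABLE purchases (
+  purchase_id  STRING,
+  price        DECIMAL(10, 2),
+  quantity     INT,
+  -- virtual computed column derived from other columns of the same table
+  total_cost   AS price * quantity,
+  purchase_time TIMESTAMP(3),
+  -- event-time attribute with a bounded out-of-orderness watermark
+  WATERMARK FOR purchase_time AS purchase_time - INTERVAL '5' SECOND
+) WITH (
+  ...
+);&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;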
 
-#### Additional Extensions to SQL DDL
+&lt;h4 id=&quot;additional-extensions-to-sql-ddl&quot;&gt;Additional Extensions to SQL DDL&lt;/h4&gt;
 
-There is now a clear distinction between temporary/persistent and system/catalog functions ([FLIP-57](https://cwiki.apache.org/confluence/display/FLINK/FLIP-57%3A+Rework+FunctionCatalog)). This not only eliminates ambiguity in function reference, but also allows for deterministic function resolution order (i.e. in case of naming collision, system functions will precede catalog functions, with temporary functions taking precedence over persistent functions for both dimensions).
+&lt;p&gt;There is now a clear distinction between temporary/persistent and system/catalog functions (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-57%3A+Rework+FunctionCatalog&quot;&gt;FLIP-57&lt;/a&gt;). This not only eliminates ambiguity in function reference, but also allows for deterministic function resolution order (i.e. in case of naming collision, system functions will precede catalog functions, with temporary functions taking precedence over persistent  [...]
 
-Following the groundwork in FLIP-57, we extended the SQL DDL syntax to support the creation of catalog functions, temporary functions and temporary system functions ([FLIP-79](https://cwiki.apache.org/confluence/display/FLINK/FLIP-79+Flink+Function+DDL+Support)):
+&lt;p&gt;Following the groundwork in FLIP-57, we extended the SQL DDL syntax to support the creation of catalog functions, temporary functions and temporary system functions (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-79+Flink+Function+DDL+Support&quot;&gt;FLIP-79&lt;/a&gt;):&lt;/p&gt;
 
-```sql
-CREATE [TEMPORARY|TEMPORARY SYSTEM] FUNCTION 
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;&lt;span class=&quot;k&quot;&gt;CREATE&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;TEMPORARY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;|&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;TEMPORARY&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;SYSTEM&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;FUNCTION&lt;/span&gt; 
 
-  [IF NOT EXISTS] [catalog_name.][db_name.]function_name 
+  &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;IF&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;NOT&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;EXISTS&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;catalog_name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.][&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;db_name&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;.]&lt;/span&gt [...]
 
-AS identifier [LANGUAGE JAVA|SCALA]
-```
+&lt;span class=&quot;k&quot;&gt;AS&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;identifier&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;LANGUAGE&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;JAVA&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;|&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;SCALA&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;]&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
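+
+&lt;p&gt;A hypothetical instance of this syntax (the function name and the Java class below are illustrative only and do not ship with the release) might be:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;-- Hypothetical example: register a Java UDF as a temporary catalog function.
+CREATE TEMPORARY FUNCTION parse_duration
+AS 'com.example.udfs.ParseDuration' LANGUAGE JAVA;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;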
 
-For a complete overview of the current state of DDL support in Flink SQL, check the [updated documentation]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/dev/table/sql/).
+&lt;p&gt;For a complete overview of the current state of DDL support in Flink SQL, check the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/table/sql/&quot;&gt;updated documentation&lt;/a&gt;.&lt;/p&gt;
 
-&lt;span class=&quot;label label-danger&quot;&gt;Note&lt;/span&gt; In order to correctly handle and guarantee a consistent behavior across meta-objects (tables, views, functions) in the future, some object declaration methods in the Table API have been deprecated in favor of methods that are closer to standard SQL DDL ([FLIP-64](https://cwiki.apache.org/confluence/display/FLINK/FLIP-64%3A+Support+for+Temporary+Objects+in+Table+module)).
+&lt;p&gt;&lt;span class=&quot;label label-danger&quot;&gt;Note&lt;/span&gt; In order to correctly handle and guarantee a consistent behavior across meta-objects (tables, views, functions) in the future, some object declaration methods in the Table API have been deprecated in favor of methods that are closer to standard SQL DDL (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-64%3A+Support+for+Temporary+Objects+in+Table+module&quot;&gt;FLIP-64&lt;/a&gt;).&lt;/p&gt;
 
-#### Full TPC-DS Coverage for Batch
+&lt;h4 id=&quot;full-tpc-ds-coverage-for-batch&quot;&gt;Full TPC-DS Coverage for Batch&lt;/h4&gt;
 
-TPC-DS is a widely used industry-standard decision support benchmark to evaluate and measure the performance of SQL-based data processing engines. In Flink 1.10, all TPC-DS queries are supported end-to-end ([FLINK-11491](https://issues.apache.org/jira/browse/FLINK-11491)), reflecting the readiness of its SQL engine to address the needs of modern data warehouse-like workloads.
+&lt;p&gt;TPC-DS is a widely used industry-standard decision support benchmark to evaluate and measure the performance of SQL-based data processing engines. In Flink 1.10, all TPC-DS queries are supported end-to-end (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11491&quot;&gt;FLINK-11491&lt;/a&gt;), reflecting the readiness of its SQL engine to address the needs of modern data warehouse-like workloads.&lt;/p&gt;
 
-### PyFlink: Support for Native User Defined Functions (UDFs)
+&lt;h3 id=&quot;pyflink-support-for-native-user-defined-functions-udfs&quot;&gt;PyFlink: Support for Native User Defined Functions (UDFs)&lt;/h3&gt;
 
-A preview of PyFlink was introduced in the previous release, making headway towards the goal of full Python support in Flink. For this release, the focus was to enable users to register and use Python User-Defined Functions (UDF, with UDTF/UDAF planned) in the Table API/SQL ([FLIP-58](https://cwiki.apache.org/confluence/display/FLINK/FLIP-58%3A+Flink+Python+User-Defined+Stateless+Function+for+Table)).
+&lt;p&gt;A preview of PyFlink was introduced in the previous release, making headway towards the goal of full Python support in Flink. For this release, the focus was to enable users to register and use Python User-Defined Functions (UDF, with UDTF/UDAF planned) in the Table API/SQL (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-58%3A+Flink+Python+User-Defined+Stateless+Function+for+Table&quot;&gt;FLIP-58&lt;/a&gt;).&lt;/p&gt;
 
-&lt;span&gt;
+&lt;p&gt;&lt;span&gt;
 	&lt;center&gt;
-	&lt;img vspace=&quot;8&quot; hspace=&quot;100&quot; style=&quot;width:75%&quot; src=&quot;{{site.baseurl}}/img/blog/2020-02-11-release-1.10.0/flink_1.10_pyflink.gif&quot;/&gt;
+	&lt;img vspace=&quot;8&quot; hspace=&quot;100&quot; style=&quot;width:75%&quot; src=&quot;/img/blog/2020-02-11-release-1.10.0/flink_1.10_pyflink.gif&quot; /&gt;
 	&lt;/center&gt;
-&lt;/span&gt;
-
-If you are interested in the underlying implementation — leveraging Apache Beam’s [Portability Framework](https://beam.apache.org/roadmap/portability/) — refer to the “Architecture” section of FLIP-58 and also to [FLIP-78](https://cwiki.apache.org/confluence/display/FLINK/FLIP-78%3A+Flink+Python+UDF+Environment+and+Dependency+Management). These data structures lay the required foundation for Pandas support and for PyFlink to eventually reach the DataStream API. 
-
-From Flink 1.10, users can also easily install PyFlink through `pip` using:
-
-```bash
-pip install apache-flink
-```
-
-For a preview of other improvements planned for PyFlink, check [FLINK-14500](https://issues.apache.org/jira/browse/FLINK-14500) and get involved in the [discussion](http://apache-flink.147419.n8.nabble.com/Re-DISCUSS-What-parts-of-the-Python-API-should-we-focus-on-next-td1285.html) for requested user features.
-
-## Important Changes
-
- * [[FLINK-10725](https://issues.apache.org/jira/browse/FLINK-10725)] Flink can now be compiled and run on Java 11.
-
- * [[FLINK-15495](https://jira.apache.org/jira/browse/FLINK-15495)] The Blink planner is now the default in the SQL Client, so that users can benefit from all the latest features and improvements. The switch from the old planner in the Table API is also planned for the next release, so we recommend that users start getting familiar with the Blink planner.
-
- * [[FLINK-13025](https://issues.apache.org/jira/browse/FLINK-13025)] There is a [new Elasticsearch sink connector](https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/connectors/elasticsearch.html#elasticsearch-connector), fully supporting Elasticsearch 7.x versions.
+&lt;/span&gt;&lt;/p&gt;
 
- * [[FLINK-15115](https://issues.apache.org/jira/browse/FLINK-15115)] The connectors for Kafka 0.8 and 0.9 have been marked as deprecated and will no longer be actively supported. If you are still using these versions or have any other related concerns, please reach out to the @dev mailing list.
+&lt;p&gt;If you are interested in the underlying implementation — leveraging Apache Beam’s &lt;a href=&quot;https://beam.apache.org/roadmap/portability/&quot;&gt;Portability Framework&lt;/a&gt; — refer to the “Architecture” section of FLIP-58 and also to &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-78%3A+Flink+Python+UDF+Environment+and+Dependency+Management&quot;&gt;FLIP-78&lt;/a&gt;. These data structures lay the required foundation for Pandas support and for [...]
 
- * [[FLINK-14516](https://issues.apache.org/jira/browse/FLINK-14516)] The non-credit-based network flow control code was removed, along with the configuration option `taskmanager.network.credit.model`. Moving forward, Flink will always use credit-based flow control.
+&lt;p&gt;From Flink 1.10, users can also easily install PyFlink through &lt;code&gt;pip&lt;/code&gt; using:&lt;/p&gt;
 
- * [[FLINK-12122](https://issues.apache.org/jira/browse/FLINK-12122)] [FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077) was rolled out with Flink 1.5.0 and introduced a code regression related to the way slots are allocated from `TaskManagers`. To use a scheduling strategy that is closer to the pre-FLIP behavior, where Flink tries to spread out the workload across all currently available `TaskManagers`, users can set `cluster.evenly-spread-out-slots: tru [...]
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-bash&quot;&gt;pip install apache-flink&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
- * [[FLINK-11956](https://issues.apache.org/jira/browse/FLINK-11956)] `s3-hadoop` and `s3-presto` filesystems no longer use class relocations and should be loaded through [plugins]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/ops/filesystems/#pluggable-file-systems), but now seamlessly integrate with all credential providers. Other filesystems are strongly recommended to be used only as plugins, as we will continue to remove relocations.
+&lt;p&gt;For a preview of other improvements planned for PyFlink, check &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14500&quot;&gt;FLINK-14500&lt;/a&gt; and get involved in the &lt;a href=&quot;http://apache-flink.147419.n8.nabble.com/Re-DISCUSS-What-parts-of-the-Python-API-should-we-focus-on-next-td1285.html&quot;&gt;discussion&lt;/a&gt; for requested user features.&lt;/p&gt;
 
- * Flink 1.9 shipped with a refactored Web UI, with the legacy one being kept around as backup in case something wasn’t working as expected. No issues have been reported so far, so [the community voted](http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Remove-old-WebUI-td35218.html) to drop the legacy Web UI in Flink 1.10.
+&lt;h2 id=&quot;important-changes&quot;&gt;Important Changes&lt;/h2&gt;
 
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10725&quot;&gt;FLINK-10725&lt;/a&gt;] Flink can now be compiled and run on Java 11.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://jira.apache.org/jira/browse/FLINK-15495&quot;&gt;FLINK-15495&lt;/a&gt;] The Blink planner is now the default in the SQL Client, so that users can benefit from all the latest features and improvements. The switch from the old planner in the Table API is also planned for the next release, so we recommend that users start getting familiar with the Blink planner.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13025&quot;&gt;FLINK-13025&lt;/a&gt;] There is a &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/dev/connectors/elasticsearch.html#elasticsearch-connector&quot;&gt;new Elasticsearch sink connector&lt;/a&gt;, fully supporting Elasticsearch 7.x versions.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15115&quot;&gt;FLINK-15115&lt;/a&gt;] The connectors for Kafka 0.8 and 0.9 have been marked as deprecated and will no longer be actively supported. If you are still using these versions or have any other related concerns, please reach out to the @dev mailing list.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14516&quot;&gt;FLINK-14516&lt;/a&gt;] The non-credit-based network flow control code was removed, along with the configuration option &lt;code&gt;taskmanager.network.credit.model&lt;/code&gt;. Moving forward, Flink will always use credit-based flow control.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12122&quot;&gt;FLINK-12122&lt;/a&gt;] &lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077&quot;&gt;FLIP-6&lt;/a&gt; was rolled out with Flink 1.5.0 and introduced a code regression related to the way slots are allocated from &lt;code&gt;TaskManagers&lt;/code&gt;. To use a scheduling strategy that is closer to the pre-FLIP behavior, where Flink tries to spread out the workload [...]
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11956&quot;&gt;FLINK-11956&lt;/a&gt;] &lt;code&gt;s3-hadoop&lt;/code&gt; and &lt;code&gt;s3-presto&lt;/code&gt; filesystems no longer use class relocations and should be loaded through &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/ops/filesystems/#pluggable-file-systems&quot;&gt;plugins&lt;/a&gt;, but now seamlessly integrate with all credential providers. Other filesystems are stro [...]
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Flink 1.9 shipped with a refactored Web UI, with the legacy one being kept around as backup in case something wasn’t working as expected. No issues have been reported so far, so &lt;a href=&quot;http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Remove-old-WebUI-td35218.html&quot;&gt;the community voted&lt;/a&gt; to drop the legacy Web UI in Flink 1.10.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-## Release Notes
-
-Please review the [release notes]({{ site.DOCS_BASE_URL }}flink-docs-release-1.10/release-notes/flink-1.10.html) carefully for a detailed list of changes and new features if you plan to upgrade your setup to Flink 1.10. This version is API-compatible with previous 1.x releases for APIs annotated with the @Public annotation.
+&lt;h2 id=&quot;release-notes&quot;&gt;Release Notes&lt;/h2&gt;
 
+&lt;p&gt;Please review the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.10/release-notes/flink-1.10.html&quot;&gt;release notes&lt;/a&gt; carefully for a detailed list of changes and new features if you plan to upgrade your setup to Flink 1.10. This version is API-compatible with previous 1.x releases for APIs annotated with the @Public annotation.&lt;/p&gt;
 
-## List of Contributors
+&lt;h2 id=&quot;list-of-contributors&quot;&gt;List of Contributors&lt;/h2&gt;
 
-The Apache Flink community would like to thank all contributors that have made this release possible:
+&lt;p&gt;The Apache Flink community would like to thank all contributors that have made this release possible:&lt;/p&gt;
 
-Achyuth Samudrala, Aitozi, Alberto Romero, Alec.Ch, Aleksey Pak, Alexander Fedulov, Alice Yan, Aljoscha Krettek, Aloys, Andrey Zagrebin, Arvid Heise, Benchao Li, Benoit Hanotte, Benoît Paris, Bhagavan Das, Biao Liu, Chesnay Schepler, Congxian Qiu, Cyrille Chépélov, César Soto Valero, David Anderson, David Hrbacek, David Moravek, Dawid Wysakowicz, Dezhi Cai, Dian Fu, Dyana Rose, Eamon Taaffe, Fabian Hueske, Fawad Halim, Fokko Driesprong, Frey Gao, Gabor Gevay, Gao Yun, Gary Yao, GatsbyNew [...]
+&lt;p&gt;Achyuth Samudrala, Aitozi, Alberto Romero, Alec.Ch, Aleksey Pak, Alexander Fedulov, Alice Yan, Aljoscha Krettek, Aloys, Andrey Zagrebin, Arvid Heise, Benchao Li, Benoit Hanotte, Benoît Paris, Bhagavan Das, Biao Liu, Chesnay Schepler, Congxian Qiu, Cyrille Chépélov, César Soto Valero, David Anderson, David Hrbacek, David Moravek, Dawid Wysakowicz, Dezhi Cai, Dian Fu, Dyana Rose, Eamon Taaffe, Fabian Hueske, Fawad Halim, Fokko Driesprong, Frey Gao, Gabor Gevay, Gao Yun, Gary Yao,  [...]
 </description>
-<pubDate>Tue, 11 Feb 2020 02:30:00 +0000</pubDate>
+<pubDate>Tue, 11 Feb 2020 03:30:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2020/02/11/release-1.10.0.html</link>
 <guid isPermaLink="true">/news/2020/02/11/release-1.10.0.html</guid>
 </item>
 
 <item>
 <title>A Guide for Unit Testing in Apache Flink</title>
-<description>Writing unit tests is one of the essential tasks of designing a production-grade application. Without tests, a single change in code can result in cascades of failure in production. Thus unit tests should be written for all types of applications, be it a simple job cleaning data and training a model or a complex multi-tenant, real-time data processing system. In the following sections, we provide a guide for unit testing of Apache Flink applications. 
-Apache Flink provides a robust unit testing framework so that you can verify during development that your applications will behave as expected in production. You need to include the following dependencies to utilize the provided framework.
-
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-test-utils_${scala.binary.version}&lt;/artifactId&gt;
-  &lt;version&gt;${flink.version}&lt;/version&gt;
-  &lt;scope&gt;test&lt;/scope&gt;
-&lt;/dependency&gt; 
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-runtime_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.9.0&lt;/version&gt;
-  &lt;scope&gt;test&lt;/scope&gt;
-  &lt;classifier&gt;tests&lt;/classifier&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.9.0&lt;/version&gt;
-  &lt;scope&gt;test&lt;/scope&gt;
-  &lt;classifier&gt;tests&lt;/classifier&gt;
-&lt;/dependency&gt;
-```
-
-The strategy of writing unit tests differs for various operators. You can break down the strategy into the following three buckets: 
-
-* Stateless Operators
-* Stateful Operators
-* Timed Process Operators
-
-
-# Stateless Operators
-
-Writing unit tests for a stateless operator is a breeze. You need to follow the basic norm of writing a test case, i.e., create an instance of the function class and test the appropriate methods. Let’s take an example of a simple `Map` operator.
-
-```java
-public class MyStatelessMap implements MapFunction&lt;String, String&gt; {
-  @Override
-  public String map(String in) throws Exception {
-    String out = &quot;hello &quot; + in;
-    return out;
-  }
-}
-```
-
-The test case for the above operator should look like
-
-```java
-@Test
-public void testMap() throws Exception {
-  MyStatelessMap statelessMap = new MyStatelessMap();
-  String out = statelessMap.map(&quot;world&quot;);
-  Assert.assertEquals(&quot;hello world&quot;, out);
-}
-```
-
-Pretty simple, right? Let’s take a look at one for the `FlatMap` operator.
-
-```java
-public class MyStatelessFlatMap implements FlatMapFunction&lt;String, String&gt; {
-  @Override
-  public void flatMap(String in, Collector&lt;String&gt; collector) throws Exception {
-    String out = &quot;hello &quot; + in;
-    collector.collect(out);
-  }
-}
-```
-
-`FlatMap` operators require a `Collector` object along with the input. For the test case, we have two options: 
-
-1. Mock the `Collector` object using Mockito
-2. Use the `ListCollector` provided by Flink
-
-I prefer the second method as it requires fewer lines of code and is suitable for most cases.
-
-```java
-@Test
-public void testFlatMap() throws Exception {
-  MyStatelessFlatMap statelessFlatMap = new MyStatelessFlatMap();
-  List&lt;String&gt; out = new ArrayList&lt;&gt;();
-  ListCollector&lt;String&gt; listCollector = new ListCollector&lt;&gt;(out);
-  statelessFlatMap.flatMap(&quot;world&quot;, listCollector);
-  Assert.assertEquals(Lists.newArrayList(&quot;hello world&quot;), out);
-}
-```
-
-
-# Stateful Operators
-
-Writing test cases for stateful operators requires more effort. You need to check whether the operator state is updated correctly and cleaned up properly, in addition to checking the output of the operator.
-
-Let’s take an example of a stateful `FlatMap` function:
-
-```java
-public class StatefulFlatMap extends RichFlatMapFunction&lt;String, String&gt; {
-  ValueState&lt;String&gt; previousInput;
-
-  @Override
-  public void open(Configuration parameters) throws Exception {
-    previousInput = getRuntimeContext().getState(
-      new ValueStateDescriptor&lt;String&gt;(&quot;previousInput&quot;, Types.STRING));
-  }
-
-  @Override
-  public void flatMap(String in, Collector&lt;String&gt; collector) throws Exception {
-    String out = &quot;hello &quot; + in;
-    if(previousInput.value() != null){
-      out = out + &quot; &quot; + previousInput.value();
-    }
-    previousInput.update(in);
-    collector.collect(out);
-  }
-}
-```
-
-The intricate part of writing tests for the above class is to mock the configuration as well as the runtime context of the application. Flink provides TestHarness classes so that users don’t have to create the mock objects themselves. Using the `KeyedOneInputStreamOperatorTestHarness`, the test looks like:
-
-```java
-import org.apache.flink.streaming.api.operators.StreamFlatMap;
-import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
-import org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness;
-import org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness;
-
-@Test
-public void testFlatMap() throws Exception{
-  StatefulFlatMap statefulFlatMap = new StatefulFlatMap();
-
-  // OneInputStreamOperatorTestHarness takes the input and output types as type parameters     
-  OneInputStreamOperatorTestHarness&lt;String, String&gt; testHarness = 
-    // KeyedOneInputStreamOperatorTestHarness takes three arguments:
-    //   Flink operator object, key selector and key type
-    new KeyedOneInputStreamOperatorTestHarness&lt;&gt;(
-      new StreamFlatMap&lt;&gt;(statefulFlatMap), x -&gt; &quot;1&quot;, Types.STRING);
-  testHarness.open();
-
-  // test first record
-  testHarness.processElement(&quot;world&quot;, 10);
-  ValueState&lt;String&gt; previousInput = 
-    statefulFlatMap.getRuntimeContext().getState(
-      new ValueStateDescriptor&lt;&gt;(&quot;previousInput&quot;, Types.STRING));
-  String stateValue = previousInput.value();
-  Assert.assertEquals(
-    Lists.newArrayList(new StreamRecord&lt;&gt;(&quot;hello world&quot;, 10)), 
-    testHarness.extractOutputStreamRecords());
-  Assert.assertEquals(&quot;world&quot;, stateValue);
-
-  // test second record
-  testHarness.processElement(&quot;parallel&quot;, 20);
-  Assert.assertEquals(
-    Lists.newArrayList(
-      new StreamRecord&lt;&gt;(&quot;hello world&quot;, 10), 
-      new StreamRecord&lt;&gt;(&quot;hello parallel world&quot;, 20)), 
-    testHarness.extractOutputStreamRecords());
-  Assert.assertEquals(&quot;parallel&quot;, previousInput.value());
-}
-```
-
-The test harness provides many helper methods, three of which are being used here:
-
-1. `open`: calls the `open` method of the `FlatMap` function with the relevant parameters. It also initializes the context.
-2. `processElement`: allows users to pass an input element as well as the timestamp associated with the element.
-3. `extractOutputStreamRecords`: gets the output records along with their timestamps from the `Collector`.
-
-The test harness greatly simplifies unit testing of stateful functions.
-
-You might also need to check whether the state value is being set correctly. You can get the state value directly from the operator using a mechanism similar to the one used while creating the state. This is also demonstrated in the previous example.
-
-
-# Timed Process Operators
-
-Writing tests for process functions that work with time is quite similar to writing tests for stateful functions, because you can also use the test harness.
-However, you need to take care of another aspect: providing timestamps for events and controlling the current time of the application. By setting the current (processing or event) time, you can trigger registered timers, which will call the `onTimer` method of the function:
-
-```java
-public class MyProcessFunction extends KeyedProcessFunction&lt;String, String, String&gt; {
-  @Override
-  public void processElement(String in, Context context, Collector&lt;String&gt; collector) throws Exception {
-    context.timerService().registerProcessingTimeTimer(50);
-    String out = &quot;hello &quot; + in;
-    collector.collect(out);
-  }
-
-  @Override
-  public void onTimer(long timestamp, OnTimerContext ctx, Collector&lt;String&gt; out) throws Exception {
-    out.collect(String.format(&quot;Timer triggered at timestamp %d&quot;, timestamp));
-  }
-}
-```
-
-We need to test both methods of the `KeyedProcessFunction`, i.e., `processElement` as well as `onTimer`. Using a test harness, we can control the current time of the function. Thus, we can trigger the timer at will rather than waiting for a specific time.
-
-Let’s take a look at the test case:
-
-```java
-@Test
-public void testProcessElement() throws Exception{
-  MyProcessFunction myProcessFunction = new MyProcessFunction();
-  OneInputStreamOperatorTestHarness&lt;String, String&gt; testHarness = 
-    new KeyedOneInputStreamOperatorTestHarness&lt;&gt;(
-      new KeyedProcessOperator&lt;&gt;(myProcessFunction), x -&gt; &quot;1&quot;, Types.STRING);
-
-  // Function time is initialized to 0
-  testHarness.open();
-  testHarness.processElement(&quot;world&quot;, 10);
-
-  Assert.assertEquals(
-    Lists.newArrayList(new StreamRecord&lt;&gt;(&quot;hello world&quot;, 10)), 
-    testHarness.extractOutputStreamRecords());
-}
-
-@Test
-public void testOnTimer() throws Exception {
-  MyProcessFunction myProcessFunction = new MyProcessFunction();
-  OneInputStreamOperatorTestHarness&lt;String, String&gt; testHarness = 
-    new KeyedOneInputStreamOperatorTestHarness&lt;&gt;(
-      new KeyedProcessOperator&lt;&gt;(myProcessFunction), x -&gt; &quot;1&quot;, Types.STRING);
-
-  testHarness.open();
-  testHarness.processElement(&quot;world&quot;, 10);
-  Assert.assertEquals(1, testHarness.numProcessingTimeTimers());
+<description>&lt;p&gt;Writing unit tests is one of the essential tasks of designing a production-grade application. Without tests, a single change in code can result in cascades of failure in production. Thus unit tests should be written for all types of applications, be it a simple job cleaning data and training a model or a complex multi-tenant, real-time data processing system. In the following sections, we provide a guide for unit testing of Apache Flink applications. 
+Apache Flink provides a robust unit testing framework so that you can make sure, already during development, that your applications will behave as expected in production. You need to include the following dependencies to utilize the provided framework.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-test-utils_${scala.binary.version}&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;${flink.version}&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;scope&amp;gt;&lt;/span&gt;test&lt;span class=&quot;nt&quot;&gt;&amp;lt;/scope&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt; 
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-runtime_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.9.0&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;scope&amp;gt;&lt;/span&gt;test&lt;span class=&quot;nt&quot;&gt;&amp;lt;/scope&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;classifier&amp;gt;&lt;/span&gt;tests&lt;span class=&quot;nt&quot;&gt;&amp;lt;/classifier&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.9.0&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;scope&amp;gt;&lt;/span&gt;test&lt;span class=&quot;nt&quot;&gt;&amp;lt;/scope&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;classifier&amp;gt;&lt;/span&gt;tests&lt;span class=&quot;nt&quot;&gt;&amp;lt;/classifier&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The strategy of writing unit tests differs for various operators. You can break down the strategy into the following three buckets:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;Stateless Operators&lt;/li&gt;
+  &lt;li&gt;Stateful Operators&lt;/li&gt;
+  &lt;li&gt;Timed Process Operators&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h1 id=&quot;stateless-operators&quot;&gt;Stateless Operators&lt;/h1&gt;
+
+&lt;p&gt;Writing unit tests for a stateless operator is a breeze. You need to follow the basic norm of writing a test case, i.e., create an instance of the function class and test the appropriate methods. Let’s take an example of a simple &lt;code&gt;Map&lt;/code&gt; operator.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;MyStatelessMap&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;implements&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;MapFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class= [...]
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt [...]
+    &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;hello &amp;quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The test case for the above operator should look like the following:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;nd&quot;&gt;@Test&lt;/span&gt;
+&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;testMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;MyStatelessMap&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;statelessMap&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;MyStatelessMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;statelessMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;hello world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;Pretty simple, right? Let’s take a look at one for the &lt;code&gt;FlatMap&lt;/code&gt; operator.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;MyStatelessFlatMap&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;implements&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;FlatMapFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;spa [...]
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;flatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;St [...]
+    &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;hello &amp;quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;&lt;code&gt;FlatMap&lt;/code&gt; operators require a &lt;code&gt;Collector&lt;/code&gt; object along with the input. For the test case, we have two options:&lt;/p&gt;
+
+&lt;ol&gt;
+  &lt;li&gt;Mock the &lt;code&gt;Collector&lt;/code&gt; object using Mockito&lt;/li&gt;
+  &lt;li&gt;Use the &lt;code&gt;ListCollector&lt;/code&gt; provided by Flink&lt;/li&gt;
+&lt;/ol&gt;
+
+&lt;p&gt;I prefer the second method as it requires fewer lines of code and is suitable for most cases.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;nd&quot;&gt;@Test&lt;/span&gt;
+&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;testFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;MyStatelessFlatMap&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;statelessFlatMap&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;MyStatelessFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;List&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ArrayList&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;();&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;ListCollector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;listCollector&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ListCollector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt [...]
+  &lt;span class=&quot;n&quot;&gt;statelessFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;flatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;listCollector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Lists&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newArrayList&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;hello world&amp;quot;&lt;/span&gt;&lt;span clas [...]
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;h1 id=&quot;stateful-operators&quot;&gt;Stateful Operators&lt;/h1&gt;
+
+&lt;p&gt;Writing test cases for stateful operators requires more effort. You need to check whether the operator state is updated correctly and whether it is cleaned up properly, in addition to checking the output of the operator.&lt;/p&gt;
+
+&lt;p&gt;Let’s take an example of a stateful &lt;code&gt;FlatMap&lt;/code&gt; function:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;StatefulFlatMap&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;RichFlatMapFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span  [...]
+  &lt;span class=&quot;n&quot;&gt;ValueState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;previousInput&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Configuration&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parameters&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt; &lt;span class=&quot; [...]
+    &lt;span class=&quot;n&quot;&gt;previousInput&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;getRuntimeContext&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ValueStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;previousInput&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/s [...]
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;flatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;St [...]
+    &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;hello &amp;quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;previousInput&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;){&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot; &amp;quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;previousInput&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&qu [...]
+    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;previousInput&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;update&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The intricate part of writing tests for the above class is to mock the configuration as well as the runtime context of the application. Flink provides TestHarness classes so that users don’t have to create the mock objects themselves. Using the &lt;code&gt;KeyedOneInputStreamOperatorTestHarness&lt;/code&gt;, the test looks like:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.apache.flink.streaming.api.operators.StreamFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.apache.flink.streaming.runtime.streamrecord.StreamRecord&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.apache.flink.streaming.util.KeyedOneInputStreamOperatorTestHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.apache.flink.streaming.util.OneInputStreamOperatorTestHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+
+&lt;span class=&quot;nd&quot;&gt;@Test&lt;/span&gt;
+&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;testFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;StatefulFlatMap&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;statefulFlatMap&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;StatefulFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+
+  &lt;span class=&quot;c1&quot;&gt;// OneInputStreamOperatorTestHarness takes the input and output types as type parameters     &lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;OneInputStreamOperatorTestHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 
+    &lt;span class=&quot;c1&quot;&gt;// KeyedOneInputStreamOperatorTestHarness takes three arguments:&lt;/span&gt;
+    &lt;span class=&quot;c1&quot;&gt;//   Flink operator object, key selector and key type&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeyedOneInputStreamOperatorTestHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;statefulFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;1&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/spa [...]
+  &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+
+  &lt;span class=&quot;c1&quot;&gt;// test first record&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;ValueState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;previousInput&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 
+    &lt;span class=&quot;n&quot;&gt;statefulFlatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getRuntimeContext&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ValueStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;previousInput&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;));&l [...]
+  &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;stateValue&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;previousInput&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;Lists&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newArrayList&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamRecord&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;hello world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&g [...]
+    &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;extractOutputStreamRecords&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;stateValue&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+
+  &lt;span class=&quot;c1&quot;&gt;// test second record&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;parallel&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;20&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;Lists&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newArrayList&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamRecord&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;hello world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; 
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamRecord&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;hello parallel world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;20&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)),&lt;/span&gt; 
+    &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;extractOutputStreamRecords&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;parallel&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;previousInput&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class [...]
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The test harness provides many helper methods, three of which are being used here:&lt;/p&gt;
+
+&lt;ol&gt;
+  &lt;li&gt;&lt;code&gt;open&lt;/code&gt;: calls the &lt;code&gt;open&lt;/code&gt; method of the &lt;code&gt;FlatMap&lt;/code&gt; function with the relevant parameters. It also initializes the context.&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;processElement&lt;/code&gt;: allows users to pass an input element as well as the timestamp associated with the element.&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;extractOutputStreamRecords&lt;/code&gt;: gets the output records along with their timestamps from the &lt;code&gt;Collector&lt;/code&gt;.&lt;/li&gt;
+&lt;/ol&gt;
+
+&lt;p&gt;The test harness greatly simplifies unit testing of stateful functions.&lt;/p&gt;
+
+&lt;p&gt;You might also need to check whether the state value is being set correctly. You can get the state value directly from the operator using a mechanism similar to the one used while creating the state. This is also demonstrated in the previous example.&lt;/p&gt;
+
+&lt;h1 id=&quot;timed-process-operators&quot;&gt;Timed Process Operators&lt;/h1&gt;
+
+&lt;p&gt;Writing tests for process functions that work with time is quite similar to writing tests for stateful functions, because you can also use the test harness.
+However, you need to take care of another aspect: providing timestamps for events and controlling the current time of the application. By setting the current (processing or event) time, you can trigger registered timers, which will call the &lt;code&gt;onTimer&lt;/code&gt; method of the function:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;MyProcessFunction&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeyedProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;sp [...]
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;context&lt;/span&gt;&lt;span class=&quot;o&quot;& [...]
+    &lt;span class=&quot;n&quot;&gt;context&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;timerService&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;registerProcessingTimeTimer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;50&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;hello &amp;quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;onTimer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;kt&quot;&gt;long&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;timestamp&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;OnTimerContext&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot [...]
+    &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;format&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;Timer triggered at timestamp %d&amp;quot;&lt;/span&gt;&lt; [...]
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;We need to test both methods of the &lt;code&gt;KeyedProcessFunction&lt;/code&gt;, i.e., &lt;code&gt;processElement&lt;/code&gt; as well as &lt;code&gt;onTimer&lt;/code&gt;. Using a test harness, we can control the current time of the function. Thus, we can trigger the timer at will rather than waiting for a specific time.&lt;/p&gt;
+
+&lt;p&gt;Let’s take a look at the test case:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;nd&quot;&gt;@Test&lt;/span&gt;
+&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;testProcessElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;MyProcessFunction&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;myProcessFunction&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;MyProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;OneInputStreamOperatorTestHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 
+    &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeyedOneInputStreamOperatorTestHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeyedProcessOperator&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;myProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;1&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt; [...]
+
+  &lt;span class=&quot;c1&quot;&gt;// Function time is initialized to 0&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;Lists&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newArrayList&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamRecord&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;hello world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&g [...]
+    &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;extractOutputStreamRecords&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+
+&lt;span class=&quot;nd&quot;&gt;@Test&lt;/span&gt;
+&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;testOnTimer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Exception&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;MyProcessFunction&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;myProcessFunction&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;MyProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;OneInputStreamOperatorTestHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 
+    &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeyedOneInputStreamOperatorTestHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeyedProcessOperator&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;myProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;x&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;1&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt; [...]
+
+  &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;numProcessingTimeTimers&lt;/span&gt;&lt;span class=&quot;o&q [...]
       
-  // Function time is set to 50
-  testHarness.setProcessingTime(50);
-  Assert.assertEquals(
-    Lists.newArrayList(
-      new StreamRecord&lt;&gt;(&quot;hello world&quot;, 10), 
-      new StreamRecord&lt;&gt;(&quot;Timer triggered at timestamp 50&quot;)), 
-    testHarness.extractOutputStreamRecords());
-}
-```
+  &lt;span class=&quot;c1&quot;&gt;// Function time is set to 50&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProcessingTime&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;50&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;Assert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;assertEquals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;Lists&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newArrayList&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamRecord&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;hello world&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; 
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamRecord&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;Timer triggered at timestamp 50&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)),&lt;/span&gt; 
+    &lt;span class=&quot;n&quot;&gt;testHarness&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;extractOutputStreamRecords&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
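
The tests above drive a processing-time timer via `setProcessingTime`. Event time can be controlled in the same way. As a minimal sketch (not part of the original post), a hypothetical `MyEventTimeProcessFunction` that registers an event-time timer via `context.timerService().registerEventTimeTimer(50)` in `processElement` could be tested by advancing the watermark instead:

```java
@Test
public void testEventTimeTimer() throws Exception {
  // Hypothetical function: identical to MyProcessFunction above, except that
  // processElement() registers an event-time timer instead of a processing-time timer.
  MyEventTimeProcessFunction myProcessFunction = new MyEventTimeProcessFunction();
  OneInputStreamOperatorTestHarness&lt;String, String&gt; testHarness = 
    new KeyedOneInputStreamOperatorTestHarness&lt;&gt;(
      new KeyedProcessOperator&lt;&gt;(myProcessFunction), x -&gt; &quot;1&quot;, Types.STRING);

  testHarness.open();
  testHarness.processElement(&quot;world&quot;, 10);
  Assert.assertEquals(1, testHarness.numEventTimeTimers());

  // Advancing the watermark past the timer timestamp fires the timer,
  // which invokes onTimer() and produces a second output record.
  testHarness.processWatermark(50);
  Assert.assertEquals(2, testHarness.extractOutputStreamRecords().size());
}
```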
 
-The mechanism for testing multi-input stream operators, such as CoProcess functions, is similar to the one described in this article. You should use the TwoInput variant of the harness for these operators, such as `TwoInputStreamOperatorTestHarness`; a minimal sketch follows below.
+&lt;p&gt;The mechanism for testing multi-input stream operators, such as CoProcess functions, is similar to the one described in this article. You should use the TwoInput variant of the harness for these operators, such as &lt;code&gt;TwoInputStreamOperatorTestHarness&lt;/code&gt;; a minimal sketch follows below.&lt;/p&gt;
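
As an illustration (not from the original post), here is a minimal sketch of such a test. `MyCoProcessFunction` is a hypothetical CoProcessFunction that consumes two String inputs and emits Strings; the two-input harness exposes `processElement1` and `processElement2` for the two input streams.

```java
import org.apache.flink.streaming.api.operators.co.CoProcessOperator;
import org.apache.flink.streaming.runtime.streamrecord.StreamRecord;
import org.apache.flink.streaming.util.TwoInputStreamOperatorTestHarness;

@Test
public void testCoProcess() throws Exception {
  // MyCoProcessFunction is a hypothetical CoProcessFunction that simply
  // forwards the elements of both of its String inputs.
  MyCoProcessFunction myCoProcessFunction = new MyCoProcessFunction();
  TwoInputStreamOperatorTestHarness&lt;String, String, String&gt; testHarness = 
    new TwoInputStreamOperatorTestHarness&lt;&gt;(
      new CoProcessOperator&lt;&gt;(myCoProcessFunction));
  testHarness.open();

  // each input stream has its own processElement method
  testHarness.processElement1(new StreamRecord&lt;&gt;(&quot;hello&quot;, 10));
  testHarness.processElement2(new StreamRecord&lt;&gt;(&quot;world&quot;, 20));

  // assert on testHarness.extractOutputStreamRecords() just like in the
  // single-input examples above
}
```

For keyed CoProcess functions there is also a keyed variant of this two-input harness.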
 
-# Summary
+&lt;h1 id=&quot;summary&quot;&gt;Summary&lt;/h1&gt;
 
-In the previous sections we showcased how unit testing in Apache Flink works for stateless, stateful, and time-aware operators. We hope you found the steps easy to follow and execute while developing your Flink applications. If you have any questions or feedback, you can reach out to me [here](https://www.kharekartik.dev/about/) or contact the community on the [Apache Flink user mailing list](https://flink.apache.org/community.html).
+&lt;p&gt;In the previous sections we showcased how unit testing in Apache Flink works for stateless, stateful, and time-aware operators. We hope you found the steps easy to follow and execute while developing your Flink applications. If you have any questions or feedback, you can reach out to me &lt;a href=&quot;https://www.kharekartik.dev/about/&quot;&gt;here&lt;/a&gt; or contact the community on the &lt;a href=&quot;https://flink.apache.org/community.html&quot;&gt;Apache Flink user mail [...]
 </description>
-<pubDate>Fri, 07 Feb 2020 12:00:00 +0000</pubDate>
+<pubDate>Fri, 07 Feb 2020 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2020/02/07/a-guide-for-unit-testing-in-apache-flink.html</link>
 <guid isPermaLink="true">/news/2020/02/07/a-guide-for-unit-testing-in-apache-flink.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.9.2 Released</title>
-<description>The Apache Flink community released the second bugfix version of the Apache Flink 1.9 series.
+<description>&lt;p&gt;The Apache Flink community released the second bugfix version of the Apache Flink 1.9 series.&lt;/p&gt;
 
-This release includes 117 fixes and minor improvements for Flink 1.9.1. The list below includes a detailed list of all fixes and improvements.
+&lt;p&gt;This release includes 117 fixes and minor improvements for Flink 1.9.1. The list below includes a detailed list of all fixes and improvements.&lt;/p&gt;
 
-We highly recommend all users to upgrade to Flink 1.9.2.
+&lt;p&gt;We highly recommend all users to upgrade to Flink 1.9.2.&lt;/p&gt;
 
-Updated Maven dependencies:
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
 
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.9.2&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.9.2&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.9.2&lt;/version&gt;
-&lt;/dependency&gt;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.9.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.9.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.9.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
-List of resolved issues:
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
 
 &lt;h2&gt;        Sub-task
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12122&#39;&gt;FLINK-12122&lt;/a&gt;] -         Spread out tasks evenly across all available registered TaskManagers
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12122&quot;&gt;FLINK-12122&lt;/a&gt;] -         Spread out tasks evenly across all available registered TaskManagers
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13360&#39;&gt;FLINK-13360&lt;/a&gt;] -         Add documentation for HBase connector for Table API &amp;amp; SQL
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13360&quot;&gt;FLINK-13360&lt;/a&gt;] -         Add documentation for HBase connector for Table API &amp;amp; SQL
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13361&#39;&gt;FLINK-13361&lt;/a&gt;] -         Add documentation for JDBC connector for Table API &amp;amp; SQL
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13361&quot;&gt;FLINK-13361&lt;/a&gt;] -         Add documentation for JDBC connector for Table API &amp;amp; SQL
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13723&#39;&gt;FLINK-13723&lt;/a&gt;] -         Use liquid-c for faster doc generation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13723&quot;&gt;FLINK-13723&lt;/a&gt;] -         Use liquid-c for faster doc generation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13724&#39;&gt;FLINK-13724&lt;/a&gt;] -         Remove unnecessary whitespace from the docs&amp;#39; sidenav
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13724&quot;&gt;FLINK-13724&lt;/a&gt;] -         Remove unnecessary whitespace from the docs&amp;#39; sidenav
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13725&#39;&gt;FLINK-13725&lt;/a&gt;] -         Use sassc for faster doc generation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13725&quot;&gt;FLINK-13725&lt;/a&gt;] -         Use sassc for faster doc generation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13726&#39;&gt;FLINK-13726&lt;/a&gt;] -         Build docs with jekyll 4.0.0.pre.beta1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13726&quot;&gt;FLINK-13726&lt;/a&gt;] -         Build docs with jekyll 4.0.0.pre.beta1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13791&#39;&gt;FLINK-13791&lt;/a&gt;] -         Speed up sidenav by using group_by
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13791&quot;&gt;FLINK-13791&lt;/a&gt;] -         Speed up sidenav by using group_by
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13817&#39;&gt;FLINK-13817&lt;/a&gt;] -         Expose whether web submissions are enabled
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13817&quot;&gt;FLINK-13817&lt;/a&gt;] -         Expose whether web submissions are enabled
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13818&#39;&gt;FLINK-13818&lt;/a&gt;] -         Check whether web submission are enabled
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13818&quot;&gt;FLINK-13818&lt;/a&gt;] -         Check whether web submission are enabled
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14535&#39;&gt;FLINK-14535&lt;/a&gt;] -         Cast exception is thrown when count distinct on decimal fields
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14535&quot;&gt;FLINK-14535&lt;/a&gt;] -         Cast exception is thrown when count distinct on decimal fields
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14735&#39;&gt;FLINK-14735&lt;/a&gt;] -         Improve batch schedule check input consumable performance
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14735&quot;&gt;FLINK-14735&lt;/a&gt;] -         Improve batch schedule check input consumable performance
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10377&#39;&gt;FLINK-10377&lt;/a&gt;] -         Remove precondition in TwoPhaseCommitSinkFunction.notifyCheckpointComplete
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10377&quot;&gt;FLINK-10377&lt;/a&gt;] -         Remove precondition in TwoPhaseCommitSinkFunction.notifyCheckpointComplete
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10435&#39;&gt;FLINK-10435&lt;/a&gt;] -         Client sporadically hangs after Ctrl + C
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10435&quot;&gt;FLINK-10435&lt;/a&gt;] -         Client sporadically hangs after Ctrl + C
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11120&#39;&gt;FLINK-11120&lt;/a&gt;] -         TIMESTAMPADD function handles TIME incorrectly
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11120&quot;&gt;FLINK-11120&lt;/a&gt;] -         TIMESTAMPADD function handles TIME incorrectly
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11835&#39;&gt;FLINK-11835&lt;/a&gt;] -         ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange failed
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11835&quot;&gt;FLINK-11835&lt;/a&gt;] -         ZooKeeperLeaderElectionITCase.testJobExecutionOnClusterWithLeaderChange failed
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12342&#39;&gt;FLINK-12342&lt;/a&gt;] -         Yarn Resource Manager Acquires Too Many Containers
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12342&quot;&gt;FLINK-12342&lt;/a&gt;] -         Yarn Resource Manager Acquires Too Many Containers
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12399&#39;&gt;FLINK-12399&lt;/a&gt;] -         FilterableTableSource does not use filters on job run
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12399&quot;&gt;FLINK-12399&lt;/a&gt;] -         FilterableTableSource does not use filters on job run
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13184&#39;&gt;FLINK-13184&lt;/a&gt;] -         Starting a TaskExecutor blocks the YarnResourceManager&amp;#39;s main thread
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13184&quot;&gt;FLINK-13184&lt;/a&gt;] -         Starting a TaskExecutor blocks the YarnResourceManager&amp;#39;s main thread
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13589&#39;&gt;FLINK-13589&lt;/a&gt;] -         DelimitedInputFormat index error on multi-byte delimiters with whole file input splits
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13589&quot;&gt;FLINK-13589&lt;/a&gt;] -         DelimitedInputFormat index error on multi-byte delimiters with whole file input splits
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13702&#39;&gt;FLINK-13702&lt;/a&gt;] -         BaseMapSerializerTest.testDuplicate fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13702&quot;&gt;FLINK-13702&lt;/a&gt;] -         BaseMapSerializerTest.testDuplicate fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13708&#39;&gt;FLINK-13708&lt;/a&gt;] -         Transformations should be cleared because a table environment could execute multiple job
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13708&quot;&gt;FLINK-13708&lt;/a&gt;] -         Transformations should be cleared because a table environment could execute multiple job
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13740&#39;&gt;FLINK-13740&lt;/a&gt;] -         TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13740&quot;&gt;FLINK-13740&lt;/a&gt;] -         TableAggregateITCase.testNonkeyedFlatAggregate failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13749&#39;&gt;FLINK-13749&lt;/a&gt;] -         Make Flink client respect classloading policy
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13749&quot;&gt;FLINK-13749&lt;/a&gt;] -         Make Flink client respect classloading policy
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13758&#39;&gt;FLINK-13758&lt;/a&gt;] -         Failed to submit JobGraph when registered hdfs file in DistributedCache 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13758&quot;&gt;FLINK-13758&lt;/a&gt;] -         Failed to submit JobGraph when registered hdfs file in DistributedCache 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13799&#39;&gt;FLINK-13799&lt;/a&gt;] -         Web Job Submit Page displays stream of error message when web submit is disables in the config
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13799&quot;&gt;FLINK-13799&lt;/a&gt;] -         Web Job Submit Page displays stream of error message when web submit is disables in the config
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13827&#39;&gt;FLINK-13827&lt;/a&gt;] -         Shell variable should be escaped in start-scala-shell.sh
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13827&quot;&gt;FLINK-13827&lt;/a&gt;] -         Shell variable should be escaped in start-scala-shell.sh
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13862&#39;&gt;FLINK-13862&lt;/a&gt;] -         Update Execution Plan docs
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13862&quot;&gt;FLINK-13862&lt;/a&gt;] -         Update Execution Plan docs
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13945&#39;&gt;FLINK-13945&lt;/a&gt;] -         Instructions for building flink-shaded against vendor repository don&amp;#39;t work
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13945&quot;&gt;FLINK-13945&lt;/a&gt;] -         Instructions for building flink-shaded against vendor repository don&amp;#39;t work
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13969&#39;&gt;FLINK-13969&lt;/a&gt;] -         Resuming Externalized Checkpoint (rocks, incremental, scale down) end-to-end test fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13969&quot;&gt;FLINK-13969&lt;/a&gt;] -         Resuming Externalized Checkpoint (rocks, incremental, scale down) end-to-end test fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13995&#39;&gt;FLINK-13995&lt;/a&gt;] -         Fix shading of the licence information of netty
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13995&quot;&gt;FLINK-13995&lt;/a&gt;] -         Fix shading of the licence information of netty
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13999&#39;&gt;FLINK-13999&lt;/a&gt;] -         Correct the documentation of MATCH_RECOGNIZE
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13999&quot;&gt;FLINK-13999&lt;/a&gt;] -         Correct the documentation of MATCH_RECOGNIZE
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14066&#39;&gt;FLINK-14066&lt;/a&gt;] -         Pyflink building failure in master and 1.9.0 version
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14066&quot;&gt;FLINK-14066&lt;/a&gt;] -         Pyflink building failure in master and 1.9.0 version
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14074&#39;&gt;FLINK-14074&lt;/a&gt;] -         MesosResourceManager can&amp;#39;t create new taskmanagers in Session Cluster Mode.
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14074&quot;&gt;FLINK-14074&lt;/a&gt;] -         MesosResourceManager can&amp;#39;t create new taskmanagers in Session Cluster Mode.
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14175&#39;&gt;FLINK-14175&lt;/a&gt;] -         Upgrade KPL version in flink-connector-kinesis to fix application OOM
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14175&quot;&gt;FLINK-14175&lt;/a&gt;] -         Upgrade KPL version in flink-connector-kinesis to fix application OOM
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14200&#39;&gt;FLINK-14200&lt;/a&gt;] -         Temporal Table Function Joins do not work on Tables (only TableSources) on the query side
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14200&quot;&gt;FLINK-14200&lt;/a&gt;] -         Temporal Table Function Joins do not work on Tables (only TableSources) on the query side
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14235&#39;&gt;FLINK-14235&lt;/a&gt;] -         Kafka010ProducerITCase&amp;gt;KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14235&quot;&gt;FLINK-14235&lt;/a&gt;] -         Kafka010ProducerITCase&amp;gt;KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14315&#39;&gt;FLINK-14315&lt;/a&gt;] -         NPE with JobMaster.disconnectTaskManager
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14315&quot;&gt;FLINK-14315&lt;/a&gt;] -         NPE with JobMaster.disconnectTaskManager
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14337&#39;&gt;FLINK-14337&lt;/a&gt;] -         HistoryServer does not handle NPE on corruped archives properly
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14337&quot;&gt;FLINK-14337&lt;/a&gt;] -         HistoryServer does not handle NPE on corruped archives properly
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14347&#39;&gt;FLINK-14347&lt;/a&gt;] -         YARNSessionFIFOITCase.checkForProhibitedLogContents found a log with prohibited string
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14347&quot;&gt;FLINK-14347&lt;/a&gt;] -         YARNSessionFIFOITCase.checkForProhibitedLogContents found a log with prohibited string
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14355&#39;&gt;FLINK-14355&lt;/a&gt;] -         Example code in state processor API docs doesn&amp;#39;t compile
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14355&quot;&gt;FLINK-14355&lt;/a&gt;] -         Example code in state processor API docs doesn&amp;#39;t compile
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14370&#39;&gt;FLINK-14370&lt;/a&gt;] -         KafkaProducerAtLeastOnceITCase&amp;gt;KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14370&quot;&gt;FLINK-14370&lt;/a&gt;] -         KafkaProducerAtLeastOnceITCase&amp;gt;KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14382&#39;&gt;FLINK-14382&lt;/a&gt;] -         Incorrect handling of FLINK_PLUGINS_DIR on Yarn
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14382&quot;&gt;FLINK-14382&lt;/a&gt;] -         Incorrect handling of FLINK_PLUGINS_DIR on Yarn
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14398&#39;&gt;FLINK-14398&lt;/a&gt;] -         Further split input unboxing code into separate methods
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14398&quot;&gt;FLINK-14398&lt;/a&gt;] -         Further split input unboxing code into separate methods
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14413&#39;&gt;FLINK-14413&lt;/a&gt;] -         Shade-plugin ApacheNoticeResourceTransformer uses platform-dependent encoding
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14413&quot;&gt;FLINK-14413&lt;/a&gt;] -         Shade-plugin ApacheNoticeResourceTransformer uses platform-dependent encoding
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14434&#39;&gt;FLINK-14434&lt;/a&gt;] -         Dispatcher#createJobManagerRunner should not start JobManagerRunner
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14434&quot;&gt;FLINK-14434&lt;/a&gt;] -         Dispatcher#createJobManagerRunner should not start JobManagerRunner
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14445&#39;&gt;FLINK-14445&lt;/a&gt;] -         Python module build failed when making sdist
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14445&quot;&gt;FLINK-14445&lt;/a&gt;] -         Python module build failed when making sdist
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14447&#39;&gt;FLINK-14447&lt;/a&gt;] -         Network metrics doc table render confusion
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14447&quot;&gt;FLINK-14447&lt;/a&gt;] -         Network metrics doc table render confusion
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14459&#39;&gt;FLINK-14459&lt;/a&gt;] -         Python module build hangs
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14459&quot;&gt;FLINK-14459&lt;/a&gt;] -         Python module build hangs
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14524&#39;&gt;FLINK-14524&lt;/a&gt;] -         PostgreSQL JDBC sink generates invalid SQL in upsert mode
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14524&quot;&gt;FLINK-14524&lt;/a&gt;] -         PostgreSQL JDBC sink generates invalid SQL in upsert mode
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14547&#39;&gt;FLINK-14547&lt;/a&gt;] -         UDF cannot be in the join condition in blink planner
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14547&quot;&gt;FLINK-14547&lt;/a&gt;] -         UDF cannot be in the join condition in blink planner
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14561&#39;&gt;FLINK-14561&lt;/a&gt;] -         Don&amp;#39;t write FLINK_PLUGINS_DIR ENV variable to Flink configuration
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14561&quot;&gt;FLINK-14561&lt;/a&gt;] -         Don&amp;#39;t write FLINK_PLUGINS_DIR ENV variable to Flink configuration
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14562&#39;&gt;FLINK-14562&lt;/a&gt;] -         RMQSource leaves idle consumer after closing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14562&quot;&gt;FLINK-14562&lt;/a&gt;] -         RMQSource leaves idle consumer after closing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14574&#39;&gt;FLINK-14574&lt;/a&gt;] -          flink-s3-fs-hadoop doesn&amp;#39;t work with plugins mechanism
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14574&quot;&gt;FLINK-14574&lt;/a&gt;] -          flink-s3-fs-hadoop doesn&amp;#39;t work with plugins mechanism
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14589&#39;&gt;FLINK-14589&lt;/a&gt;] -         Redundant slot requests with the same AllocationID leads to inconsistent slot table
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14589&quot;&gt;FLINK-14589&lt;/a&gt;] -         Redundant slot requests with the same AllocationID leads to inconsistent slot table
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14641&#39;&gt;FLINK-14641&lt;/a&gt;] -         Fix description of metric `fullRestarts`
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14641&quot;&gt;FLINK-14641&lt;/a&gt;] -         Fix description of metric `fullRestarts`
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14673&#39;&gt;FLINK-14673&lt;/a&gt;] -         Shouldn&amp;#39;t expect HMS client to throw NoSuchObjectException for non-existing function
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14673&quot;&gt;FLINK-14673&lt;/a&gt;] -         Shouldn&amp;#39;t expect HMS client to throw NoSuchObjectException for non-existing function
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14683&#39;&gt;FLINK-14683&lt;/a&gt;] -         RemoteStreamEnvironment&amp;#39;s construction function has a wrong method
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14683&quot;&gt;FLINK-14683&lt;/a&gt;] -         RemoteStreamEnvironment&amp;#39;s construction function has a wrong method
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14701&#39;&gt;FLINK-14701&lt;/a&gt;] -         Slot leaks if SharedSlotOversubscribedException happens
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14701&quot;&gt;FLINK-14701&lt;/a&gt;] -         Slot leaks if SharedSlotOversubscribedException happens
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14784&#39;&gt;FLINK-14784&lt;/a&gt;] -         CsvTableSink miss delimiter when row start with null member
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14784&quot;&gt;FLINK-14784&lt;/a&gt;] -         CsvTableSink miss delimiter when row start with null member
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14817&#39;&gt;FLINK-14817&lt;/a&gt;] -         &amp;quot;Streaming Aggregation&amp;quot; document contains misleading code examples
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14817&quot;&gt;FLINK-14817&lt;/a&gt;] -         &amp;quot;Streaming Aggregation&amp;quot; document contains misleading code examples
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14846&#39;&gt;FLINK-14846&lt;/a&gt;] -         Correct the default writerbuffer size documentation of RocksDB
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14846&quot;&gt;FLINK-14846&lt;/a&gt;] -         Correct the default writerbuffer size documentation of RocksDB
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14910&#39;&gt;FLINK-14910&lt;/a&gt;] -         DisableAutoGeneratedUIDs fails on keyBy
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14910&quot;&gt;FLINK-14910&lt;/a&gt;] -         DisableAutoGeneratedUIDs fails on keyBy
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14930&#39;&gt;FLINK-14930&lt;/a&gt;] -         OSS Filesystem Uses Wrong Shading Prefix
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14930&quot;&gt;FLINK-14930&lt;/a&gt;] -         OSS Filesystem Uses Wrong Shading Prefix
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14949&#39;&gt;FLINK-14949&lt;/a&gt;] -         Task cancellation can be stuck against out-of-thread error
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14949&quot;&gt;FLINK-14949&lt;/a&gt;] -         Task cancellation can be stuck against out-of-thread error
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14951&#39;&gt;FLINK-14951&lt;/a&gt;] -         State TTL backend end-to-end test fail when taskManager has multiple slot
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14951&quot;&gt;FLINK-14951&lt;/a&gt;] -         State TTL backend end-to-end test fail when taskManager has multiple slot
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14953&#39;&gt;FLINK-14953&lt;/a&gt;] -         Parquet table source should use schema type to build FilterPredicate
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14953&quot;&gt;FLINK-14953&lt;/a&gt;] -         Parquet table source should use schema type to build FilterPredicate
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14960&#39;&gt;FLINK-14960&lt;/a&gt;] -         Dependency shading of table modules test fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14960&quot;&gt;FLINK-14960&lt;/a&gt;] -         Dependency shading of table modules test fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14976&#39;&gt;FLINK-14976&lt;/a&gt;] -         Cassandra Connector leaks Semaphore on Throwable; hangs on close
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14976&quot;&gt;FLINK-14976&lt;/a&gt;] -         Cassandra Connector leaks Semaphore on Throwable; hangs on close
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15001&#39;&gt;FLINK-15001&lt;/a&gt;] -         The digest of sub-plan reuse should contain retraction traits for stream physical nodes
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15001&quot;&gt;FLINK-15001&lt;/a&gt;] -         The digest of sub-plan reuse should contain retraction traits for stream physical nodes
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15013&#39;&gt;FLINK-15013&lt;/a&gt;] -         Flink (on YARN) sometimes needs too many slots
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15013&quot;&gt;FLINK-15013&lt;/a&gt;] -         Flink (on YARN) sometimes needs too many slots
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15030&#39;&gt;FLINK-15030&lt;/a&gt;] -         Potential deadlock for bounded blocking ResultPartition.
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15030&quot;&gt;FLINK-15030&lt;/a&gt;] -         Potential deadlock for bounded blocking ResultPartition.
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15036&#39;&gt;FLINK-15036&lt;/a&gt;] -         Container startup error will be handled out side of the YarnResourceManager&amp;#39;s main thread
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15036&quot;&gt;FLINK-15036&lt;/a&gt;] -         Container startup error will be handled out side of the YarnResourceManager&amp;#39;s main thread
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15063&#39;&gt;FLINK-15063&lt;/a&gt;] -         Input group and output group of the task metric are reversed
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15063&quot;&gt;FLINK-15063&lt;/a&gt;] -         Input group and output group of the task metric are reversed
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15065&#39;&gt;FLINK-15065&lt;/a&gt;] -         RocksDB configurable options doc description error
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15065&quot;&gt;FLINK-15065&lt;/a&gt;] -         RocksDB configurable options doc description error
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15076&#39;&gt;FLINK-15076&lt;/a&gt;] -         Source thread should be interrupted during the Task cancellation 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15076&quot;&gt;FLINK-15076&lt;/a&gt;] -         Source thread should be interrupted during the Task cancellation 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15234&#39;&gt;FLINK-15234&lt;/a&gt;] -         Hive table created from flink catalog table shouldn&amp;#39;t have null properties in parameters
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15234&quot;&gt;FLINK-15234&lt;/a&gt;] -         Hive table created from flink catalog table shouldn&amp;#39;t have null properties in parameters
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15240&#39;&gt;FLINK-15240&lt;/a&gt;] -         is_generic key is missing for Flink table stored in HiveCatalog
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15240&quot;&gt;FLINK-15240&lt;/a&gt;] -         is_generic key is missing for Flink table stored in HiveCatalog
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15259&#39;&gt;FLINK-15259&lt;/a&gt;] -         HiveInspector.toInspectors() should convert Flink constant to Hive constant 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15259&quot;&gt;FLINK-15259&lt;/a&gt;] -         HiveInspector.toInspectors() should convert Flink constant to Hive constant 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15266&#39;&gt;FLINK-15266&lt;/a&gt;] -         NPE in blink planner code gen
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15266&quot;&gt;FLINK-15266&lt;/a&gt;] -         NPE in blink planner code gen
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15361&#39;&gt;FLINK-15361&lt;/a&gt;] -         ParquetTableSource should pass predicate in projectFields
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15361&quot;&gt;FLINK-15361&lt;/a&gt;] -         ParquetTableSource should pass predicate in projectFields
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15412&#39;&gt;FLINK-15412&lt;/a&gt;] -         LocalExecutorITCase#testParameterizedTypes failed in travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15412&quot;&gt;FLINK-15412&lt;/a&gt;] -         LocalExecutorITCase#testParameterizedTypes failed in travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15413&#39;&gt;FLINK-15413&lt;/a&gt;] -         ScalarOperatorsTest failed in travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15413&quot;&gt;FLINK-15413&lt;/a&gt;] -         ScalarOperatorsTest failed in travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15418&#39;&gt;FLINK-15418&lt;/a&gt;] -         StreamExecMatchRule not set FlinkRelDistribution
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15418&quot;&gt;FLINK-15418&lt;/a&gt;] -         StreamExecMatchRule not set FlinkRelDistribution
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15421&#39;&gt;FLINK-15421&lt;/a&gt;] -         GroupAggsHandler throws java.time.LocalDateTime cannot be cast to java.sql.Timestamp
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15421&quot;&gt;FLINK-15421&lt;/a&gt;] -         GroupAggsHandler throws java.time.LocalDateTime cannot be cast to java.sql.Timestamp
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15435&#39;&gt;FLINK-15435&lt;/a&gt;] -         ExecutionConfigTests.test_equals_and_hash in pyFlink fails when cpu core numbers is 6
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15435&quot;&gt;FLINK-15435&lt;/a&gt;] -         ExecutionConfigTests.test_equals_and_hash in pyFlink fails when cpu core numbers is 6
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15443&#39;&gt;FLINK-15443&lt;/a&gt;] -         Use JDBC connector write FLOAT value occur ClassCastException
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15443&quot;&gt;FLINK-15443&lt;/a&gt;] -         Use JDBC connector write FLOAT value occur ClassCastException
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15478&#39;&gt;FLINK-15478&lt;/a&gt;] -         FROM_BASE64 code gen type wrong
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15478&quot;&gt;FLINK-15478&lt;/a&gt;] -         FROM_BASE64 code gen type wrong
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15489&#39;&gt;FLINK-15489&lt;/a&gt;] -         WebUI log refresh not working
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15489&quot;&gt;FLINK-15489&lt;/a&gt;] -         WebUI log refresh not working
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15522&#39;&gt;FLINK-15522&lt;/a&gt;] -         Misleading root cause exception when cancelling the job
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15522&quot;&gt;FLINK-15522&lt;/a&gt;] -         Misleading root cause exception when cancelling the job
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15523&#39;&gt;FLINK-15523&lt;/a&gt;] -         ConfigConstants generally excluded from japicmp
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15523&quot;&gt;FLINK-15523&lt;/a&gt;] -         ConfigConstants generally excluded from japicmp
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15543&#39;&gt;FLINK-15543&lt;/a&gt;] -         Apache Camel not bundled but listed in flink-dist NOTICE
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15543&quot;&gt;FLINK-15543&lt;/a&gt;] -         Apache Camel not bundled but listed in flink-dist NOTICE
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15549&#39;&gt;FLINK-15549&lt;/a&gt;] -         Integer overflow in SpillingResettableMutableObjectIterator
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15549&quot;&gt;FLINK-15549&lt;/a&gt;] -         Integer overflow in SpillingResettableMutableObjectIterator
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15577&#39;&gt;FLINK-15577&lt;/a&gt;] -         WindowAggregate RelNodes missing Window specs in digest
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15577&quot;&gt;FLINK-15577&lt;/a&gt;] -         WindowAggregate RelNodes missing Window specs in digest
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15615&#39;&gt;FLINK-15615&lt;/a&gt;] -         Docs: wrong guarantees stated for the file sink
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15615&quot;&gt;FLINK-15615&lt;/a&gt;] -         Docs: wrong guarantees stated for the file sink
 &lt;/li&gt;
 &lt;/ul&gt;
-                
+
 &lt;h2&gt;        Improvement
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11135&#39;&gt;FLINK-11135&lt;/a&gt;] -         Reorder Hadoop config loading in HadoopUtils
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11135&quot;&gt;FLINK-11135&lt;/a&gt;] -         Reorder Hadoop config loading in HadoopUtils
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12848&#39;&gt;FLINK-12848&lt;/a&gt;] -         Method equals() in RowTypeInfo should consider fieldsNames
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12848&quot;&gt;FLINK-12848&lt;/a&gt;] -         Method equals() in RowTypeInfo should consider fieldsNames
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13729&#39;&gt;FLINK-13729&lt;/a&gt;] -         Update website generation dependencies
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13729&quot;&gt;FLINK-13729&lt;/a&gt;] -         Update website generation dependencies
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14008&#39;&gt;FLINK-14008&lt;/a&gt;] -         Auto-generate binary licensing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14008&quot;&gt;FLINK-14008&lt;/a&gt;] -         Auto-generate binary licensing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14104&#39;&gt;FLINK-14104&lt;/a&gt;] -         Bump Jackson to 2.10.1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14104&quot;&gt;FLINK-14104&lt;/a&gt;] -         Bump Jackson to 2.10.1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14123&#39;&gt;FLINK-14123&lt;/a&gt;] -         Lower the default value of taskmanager.memory.fraction
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14123&quot;&gt;FLINK-14123&lt;/a&gt;] -         Lower the default value of taskmanager.memory.fraction
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14206&#39;&gt;FLINK-14206&lt;/a&gt;] -         Let fullRestart metric count fine grained restarts as well
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14206&quot;&gt;FLINK-14206&lt;/a&gt;] -         Let fullRestart metric count fine grained restarts as well
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14215&#39;&gt;FLINK-14215&lt;/a&gt;] -         Add Docs for TM and JM Environment Variable Setting
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14215&quot;&gt;FLINK-14215&lt;/a&gt;] -         Add Docs for TM and JM Environment Variable Setting
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14251&#39;&gt;FLINK-14251&lt;/a&gt;] -         Add FutureUtils#forward utility
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14251&quot;&gt;FLINK-14251&lt;/a&gt;] -         Add FutureUtils#forward utility
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14334&#39;&gt;FLINK-14334&lt;/a&gt;] -         ElasticSearch docs refer to non-existent ExceptionUtils.containsThrowable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14334&quot;&gt;FLINK-14334&lt;/a&gt;] -         ElasticSearch docs refer to non-existent ExceptionUtils.containsThrowable
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14335&#39;&gt;FLINK-14335&lt;/a&gt;] -         ExampleIntegrationTest in testing docs is incorrect
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14335&quot;&gt;FLINK-14335&lt;/a&gt;] -         ExampleIntegrationTest in testing docs is incorrect
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14408&#39;&gt;FLINK-14408&lt;/a&gt;] -         In OldPlanner, UDF open method can not be invoke when SQL is optimized
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14408&quot;&gt;FLINK-14408&lt;/a&gt;] -         In OldPlanner, UDF open method can not be invoke when SQL is optimized
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14557&#39;&gt;FLINK-14557&lt;/a&gt;] -         Clean up the package of py4j
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14557&quot;&gt;FLINK-14557&lt;/a&gt;] -         Clean up the package of py4j
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14639&#39;&gt;FLINK-14639&lt;/a&gt;] -         Metrics User Scope docs refer to wrong class
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14639&quot;&gt;FLINK-14639&lt;/a&gt;] -         Metrics User Scope docs refer to wrong class
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14646&#39;&gt;FLINK-14646&lt;/a&gt;] -         Check non-null for key in KeyGroupStreamPartitioner
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14646&quot;&gt;FLINK-14646&lt;/a&gt;] -         Check non-null for key in KeyGroupStreamPartitioner
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14825&#39;&gt;FLINK-14825&lt;/a&gt;] -         Rework state processor api documentation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14825&quot;&gt;FLINK-14825&lt;/a&gt;] -         Rework state processor api documentation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14995&#39;&gt;FLINK-14995&lt;/a&gt;] -         Kinesis NOTICE is incorrect
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14995&quot;&gt;FLINK-14995&lt;/a&gt;] -         Kinesis NOTICE is incorrect
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15113&#39;&gt;FLINK-15113&lt;/a&gt;] -         fs.azure.account.key not hidden from global configuration
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15113&quot;&gt;FLINK-15113&lt;/a&gt;] -         fs.azure.account.key not hidden from global configuration
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15554&#39;&gt;FLINK-15554&lt;/a&gt;] -         Bump jetty-util-ajax to 9.3.24
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15554&quot;&gt;FLINK-15554&lt;/a&gt;] -         Bump jetty-util-ajax to 9.3.24
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15657&#39;&gt;FLINK-15657&lt;/a&gt;] -         Fix the python table api doc link in Python API tutorial
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15657&quot;&gt;FLINK-15657&lt;/a&gt;] -         Fix the python table api doc link in Python API tutorial
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15700&#39;&gt;FLINK-15700&lt;/a&gt;] -         Improve Python API Tutorial doc
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15700&quot;&gt;FLINK-15700&lt;/a&gt;] -         Improve Python API Tutorial doc
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15726&#39;&gt;FLINK-15726&lt;/a&gt;] -         Fixing error message in StreamExecTableSourceScan
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15726&quot;&gt;FLINK-15726&lt;/a&gt;] -         Fixing error message in StreamExecTableSourceScan
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Thu, 30 Jan 2020 12:00:00 +0000</pubDate>
+<pubDate>Thu, 30 Jan 2020 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2020/01/30/release-1.9.2.html</link>
 <guid isPermaLink="true">/news/2020/01/30/release-1.9.2.html</guid>
 </item>
 
 <item>
 <title>State Unlocked: Interacting with State in Apache Flink</title>
-<description># Introduction
+<description>&lt;h1 id=&quot;introduction&quot;&gt;Introduction&lt;/h1&gt;
 
-With stateful stream-processing becoming the norm for complex event-driven applications and real-time analytics, [Apache Flink](https://flink.apache.org/) is often the backbone for running business logic and managing an organization’s most valuable asset — its data — as application state in Flink. 
+&lt;p&gt;With stateful stream-processing becoming the norm for complex event-driven applications and real-time analytics, &lt;a href=&quot;https://flink.apache.org/&quot;&gt;Apache Flink&lt;/a&gt; is often the backbone for running business logic and managing an organization’s most valuable asset — its data — as application state in Flink.&lt;/p&gt;
 
-In order to provide a state-of-the-art experience to Flink developers, the Apache Flink community makes significant efforts to provide the safety and future-proof guarantees organizations need while managing state in Flink. In particular, Flink developers should have sufficient means to access and modify their state, as well as making bootstrapping state with existing data from external systems a piece-of-cake. These efforts span multiple Flink major releases and consist of the following:
+&lt;p&gt;In order to provide a state-of-the-art experience to Flink developers, the Apache Flink community makes significant efforts to provide the safety and future-proof guarantees organizations need while managing state in Flink. In particular, Flink developers should have sufficient means to access and modify their state, as well as making bootstrapping state with existing data from external systems a piece-of-cake. These efforts span multiple Flink major releases and consist of the  [...]
 
-1. Evolvable state schema in Apache Flink
-2. Flexibility in swapping state backends, and
-3. The State processor API, an offline tool to read, write and modify state in Flink
+&lt;ol&gt;
+  &lt;li&gt;Evolvable state schema in Apache Flink&lt;/li&gt;
+  &lt;li&gt;Flexibility in swapping state backends, and&lt;/li&gt;
+  &lt;li&gt;The State processor API, an offline tool to read, write and modify state in Flink&lt;/li&gt;
+&lt;/ol&gt;
 
-This post discusses the community’s efforts related to state management in Flink, provides some practical examples of how the different features and APIs can be utilized and covers some future ideas for new and improved ways of managing state in Apache Flink.
+&lt;p&gt;This post discusses the community’s efforts related to state management in Flink, provides some practical examples of how the different features and APIs can be utilized and covers some future ideas for new and improved ways of managing state in Apache Flink.&lt;/p&gt;
 
+&lt;h1 id=&quot;stream-processing-what-is-state&quot;&gt;Stream processing: What is State?&lt;/h1&gt;
 
-# Stream processing: What is State?
+&lt;p&gt;To set the tone for the remaining of the post, let us first try to explain the very definition of state in stream processing. When it comes to stateful stream processing, state comprises of the information that an application or stream processing engine will remember across events and streams as more realtime (unbounded) and/or offline (bounded) data flow through the system. Most trivial applications are inherently stateful; even the example of a simple COUNT operation, whereby  [...]
 
-To set the tone for the remaining of the post, let us first try to explain the very definition of state in stream processing. When it comes to stateful stream processing, state comprises of the information that an application or stream processing engine will remember across events and streams as more realtime (unbounded) and/or offline (bounded) data flow through the system. Most trivial applications are inherently stateful; even the example of a simple COUNT operation, whereby when coun [...]
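As a minimal, hypothetical sketch (not part of the original post, with imports omitted as in the post’s own snippets), such a COUNT could be kept as keyed state in a function like the one below; the `Event` type is a placeholder.

```java
// Hypothetical sketch: a per-key count whose running total is the state
// that Flink remembers across events. The Event type is a placeholder.
public class CountFunction extends KeyedProcessFunction&lt;String, Event, Long&gt; {

  private transient ValueState&lt;Long&gt; count;

  @Override
  public void open(Configuration parameters) {
    count = getRuntimeContext().getState(
        new ValueStateDescriptor&lt;&gt;(&quot;count&quot;, Long.class));
  }

  @Override
  public void processElement(Event event, Context ctx, Collector&lt;Long&gt; out) throws Exception {
    Long current = count.value();                     // what has been counted so far
    long updated = (current == null ? 0L : current) + 1;
    count.update(updated);                            // remembered for the next event
    out.collect(updated);
  }
}
```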
-
-To better understand how Flink manages state, one can think of Flink as a three-layered state abstraction, as illustrated in the diagram below. 
+&lt;p&gt;To better understand how Flink manages state, one can think of Flink as a three-layered state abstraction, as illustrated in the diagram below.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-1.png&quot; width=&quot;600px&quot; alt=&quot;State in Apache Flink&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-1.png&quot; width=&quot;600px&quot; alt=&quot;State in Apache Flink&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-On the top layer, sits the Flink user code, for example, a `KeyedProcessFunction` that contains some value state. This is a simple variable whose value state annotations makes it automatically fault-tolerant, re-scalable and queryable by the runtime. These variables are backed by the configured state backend that sits either on-heap or on-disk (RocksDB State Backend) and provides data locality, proximity to the computation and speed when it comes to per-record computations. Finally, when [...]
+&lt;p&gt;On the top layer, sits the Flink user code, for example, a &lt;code&gt;KeyedProcessFunction&lt;/code&gt; that contains some value state. This is a simple variable whose value state annotations makes it automatically fault-tolerant, re-scalable and queryable by the runtime. These variables are backed by the configured state backend that sits either on-heap or on-disk (RocksDB State Backend) and provides data locality, proximity to the computation and speed when it comes to per-re [...]
 
-A savepoint is a snapshot of the distributed, global state of an application at a logical point-in-time and is stored in an external distributed file system or blob storage such as HDFS, or S3. Upon upgrading an application or implementing a code change  — such as adding a new operator or changing a field — the Flink job can restart by re-loading the application state from the savepoint into the state backend, making it local and available for the computation and continue processing as i [...]
+&lt;p&gt;A savepoint is a snapshot of the distributed, global state of an application at a logical point-in-time and is stored in an external distributed file system or blob storage such as HDFS, or S3. Upon upgrading an application or implementing a code change  — such as adding a new operator or changing a field — the Flink job can restart by re-loading the application state from the savepoint into the state backend, making it local and available for the computation and continue proces [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-2.png&quot; width=&quot;600px&quot; alt=&quot;State in Apache Flink&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-visual-2.png&quot; width=&quot;600px&quot; alt=&quot;State in Apache Flink&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
 &lt;div class=&quot;alert alert-info&quot;&gt;
  It is important to remember here that &lt;b&gt;state is one of the most valuable components of a Flink application&lt;/b&gt; carrying all the information about both where you are now and where you are going. State is among the most long-lived components in a Flink service since it can be carried across jobs, operators, configurations, new features and bug fixes.
 &lt;/div&gt;
 
-# Schema Evolution with Apache Flink
+&lt;h1 id=&quot;schema-evolution-with-apache-flink&quot;&gt;Schema Evolution with Apache Flink&lt;/h1&gt;
+
+&lt;p&gt;In the previous section, we explained how state is stored and persisted in a Flink application. Let’s now take a look at what happens when evolving state in a stateful Flink streaming application becomes necessary.&lt;/p&gt;
+
+&lt;p&gt;Imagine an Apache Flink application that implements a &lt;code&gt;KeyedProcessFunction&lt;/code&gt; and contains some &lt;code&gt;ValueState&lt;/code&gt;. As illustrated below, within the state descriptor, when registering the type, Flink users specify their &lt;code&gt;TypeInformation&lt;/code&gt; that informs Flink about how to serialize the bytes and represents Flink’s internal type system, used to serialize data when shipped across the network or stored in state backends. Fl [...]
+
+&lt;h2 id=&quot;state-registration-with-built-in-serialization-in-apache-flink&quot;&gt;State registration with built-in serialization in Apache Flink&lt;/h2&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;MyFunction&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeyedProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Key&lt;/span&gt;&lt;span class=& [...]
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;private&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;transient&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ValueState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;MyState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;valueState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Configuration&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parameters&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;ValueStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;MyState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;descriptor&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ValueStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;my-state&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;TypeInformation&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;of&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt [...]
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;valueState&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;getRuntimeContext&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;descriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;Typically, evolving the schema of an application’s state happens because of some business logic change (adding or dropping fields or changing data types). In all cases, the schema is determined by means of its serializer, and can be thought of in terms of an alter table statement when compared with a database. When a state variable is first introduced it is like running a &lt;code&gt;CREATE_TABLE&lt;/code&gt; command, there is a lot of freedom with its execution. However, having [...]
 
-In the previous section, we explained how state is stored and persisted in a Flink application. Let’s now take a look at what happens when evolving state in a stateful Flink streaming application becomes necessary. 
+&lt;p&gt;&lt;a href=&quot;https://flink.apache.org/downloads.html#apache-flink-182&quot;&gt;Flink 1.8&lt;/a&gt; comes with built-in support for &lt;a href=&quot;https://avro.apache.org/&quot;&gt;Apache Avro&lt;/a&gt; (specifically the &lt;a href=&quot;https://avro.apache.org/docs/1.7.7/spec.html&quot;&gt;1.7.7 specification&lt;/a&gt;) and evolves state schema according to Avro specifications by adding and removing types or even by swapping between generic and specific Avro record types.& [...]
 
-Imagine an Apache Flink application that implements a `KeyedProcessFunction` and contains some `ValueState`. As illustrated below, within the state descriptor, when registering the type, Flink users specify their `TypeInformation` that informs Flink about how to serialize the bytes and represents Flink’s internal type system, used to serialize data when shipped across the network or stored in state backends. Flink’s type system has built-in support for all the basic types such as longs,  [...]
+&lt;p&gt;In &lt;a href=&quot;https://flink.apache.org/downloads.html#apache-flink-191&quot;&gt;Flink 1.9&lt;/a&gt; the community added support for schema evolution for POJOs, including the ability to remove existing fields from POJO types or add new fields. The POJO schema evolution tends to be less flexible — when compared to Avro — since it is not possible to change either the declared field types or the class name of a POJO type, including its namespace.&lt;/p&gt;
 
-## State registration with built-in serialization in Apache Flink
+&lt;p&gt;With the community’s efforts related to schema evolution, Flink developers can now expect out-of-the-box support for both Avro and POJO formats, with backwards compatibility for all Flink state backends. Future work revolves around adding support for Scala Case Classes, Tuples and other formats. Make sure to subscribe to the &lt;a href=&quot;https://flink.apache.org/community.html&quot;&gt;Flink mailing list&lt;/a&gt; to contribute and stay on top of any upcoming additions in th [...]
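As a minimal, hypothetical sketch (not part of the original post), the POJO evolution described above could look like the following for the `MyState` type used earlier; the field names are placeholders.

```java
// Hypothetical sketch of POJO schema evolution for the MyState type used earlier.
// Compared with the previous application version, the field lastUpdated is new.
// When a savepoint taken with the old version is restored, Flink migrates the
// state and initializes the new field to its default value; a removed field
// would simply have its previous values dropped.
public class MyState {
  public long count;            // placeholder field, present in both versions
  public long lastUpdated;      // added in the new application version

  public MyState() { }          // POJO rules: public no-arg constructor, public fields
}
```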
 
-```java
-public class MyFunction extends KeyedProcessFunction&lt;Key, Input, Output&gt; {
-​
-  private transient ValueState&lt;MyState&gt; valueState;
-​
-  public void open(Configuration parameters) {
-    ValueStateDescriptor&lt;MyState&gt; descriptor =
-      new ValueStateDescriptor&lt;&gt;(&quot;my-state&quot;, TypeInformation.of(MyState.class));
-​
-    valueState = getRuntimeContext().getState(descriptor);
-  }
-}
-```
-
-Typically, evolving the schema of an application’s state happens because of some business logic change (adding or dropping fields or changing data types). In all cases, the schema is determined by means of its serializer, and can be thought of in terms of an alter table statement when compared with a database. When a state variable is first introduced it is like running a `CREATE_TABLE` command, there is a lot of freedom with its execution. However, having data in that table (registered  [...]
-
-[Flink 1.8](https://flink.apache.org/downloads.html#apache-flink-182) comes with built-in support for [Apache Avro](https://avro.apache.org/) (specifically the [1.7.7 specification](https://avro.apache.org/docs/1.7.7/spec.html)) and evolves state schema according to Avro specifications by adding and removing types or even by swapping between generic and specific Avro record types.
-
-In [Flink 1.9](https://flink.apache.org/downloads.html#apache-flink-191) the community added support for schema evolution for POJOs, including the ability to remove existing fields from POJO types or add new fields. The POJO schema evolution tends to be less flexible — when compared to Avro — since it is not possible to change either the declared field types or the class name of a POJO type, including its namespace. 
-
-With the community’s efforts related to schema evolution, Flink developers can now expect out-of-the-box support for both Avro and POJO formats, with backwards compatibility for all Flink state backends. Future work revolves around adding support for Scala Case Classes, Tuples and other formats. Make sure to subscribe to the [Flink mailing list](https://flink.apache.org/community.html) to contribute and stay on top of any upcoming additions in this space.
-
-## Peeking Under the Hood
-
-Now that we have explained how schema evolution in Flink works, let’s describe the challenges of performing schema serialization with Flink under the hood. Flink considers state as a core part of its API stability, in a way that developers should always be able to take a savepoint from one version of Flink and restart it on the next. With schema evolution, every migration needs to be backwards compatible and also compatible with the different state backends. While in the Flink code the s [...]
-
-For instance, the heap state backend supports lazy serialization and eager deserialization, making the per-record code path always work with Java objects, serializing on a background thread. When restoring, Flink will eagerly deserialize all the data and then start the user code. If a developer plugs in a new serializer, the deserialization happens before Flink ever receives the information. 
-
-The RocksDB state backend behaves in the exact opposite manner: it supports eager serialization — because of items being stored on disk and RocksDB only consuming byte arrays. RocksDB provides lazy deserialization simply by downloading files to the local disk, making Flink unaware of what the bytes mean until a serializer is registered.  
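As a side note, which of these backends holds the state is a configuration choice rather than a user-code change; a minimal, hypothetical sketch (not part of the original post, with placeholder checkpoint URIs) could look as follows.

```java
// Hypothetical sketch: selecting one of the state backends discussed above.
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// Heap-based backend: working state lives as Java objects on the JVM heap.
env.setStateBackend(new FsStateBackend(&quot;hdfs:///flink/checkpoints&quot;));

// RocksDB backend: state is eagerly serialized to bytes and kept on local disk.
// env.setStateBackend(new RocksDBStateBackend(&quot;hdfs:///flink/checkpoints&quot;));
```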
-
-An additional challenge stems from the fact that different versions of user code contain different classes on their classpath, making the serializer used to write into a savepoint potentially unavailable at runtime.
-
-To overcome the previously mentioned challenges, we introduced what we call `TypeSerializerSnapshot`. The `TypeSerializerSnapshot` stores the configuration of the writer serializer in the snapshot. When restoring, it will use that configuration to read back the previous state and check its compatibility with the current version. Using such an operation allows Flink to:
-
-* Read the configuration used to write out a snapshot
-* Consume the new user code 
-* Check if both items above are compatible 
-* Consume the bytes from the snapshot and move forward or alert the user otherwise
-
-```java
-public interface TypeSerializerSnapshot&lt;T&gt; {
-​
-  int getCurrentVersion();
-​
-  void writeSnapshot(DataOutputView out) throws IOException;
-​
-  void readSnapshot(
-      int readVersion,
-      DataInputView in,
-      ClassLoader userCodeClassLoader) throws IOException;
-​
-  TypeSerializer&lt;T&gt; restoreSerializer();
-​
-  TypeSerializerSchemaCompatibility&lt;T&gt; resolveSchemaCompatibility(
-      TypeSerializer&lt;T&gt; newSerializer);
-}
-```
-
-## Implementing Apache Avro Serialization in Flink
-
-Apache Avro is a data serialization format that has very well-defined schema migration semantics and supports both reader and writer schemas. During normal Flink execution the reader and writer schemas will be the same. However, when upgrading an application they may be different and with schema evolution, Flink will be able to migrate objects with their schemas.
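-
-As a small, hypothetical illustration of reader and writer schemas, the following sketch uses Avro&#39;s `SchemaBuilder` and `SchemaCompatibility` utilities (the record and field names are made up):
-
-```java
-import org.apache.avro.Schema;
-import org.apache.avro.SchemaBuilder;
-import org.apache.avro.SchemaCompatibility;
-import org.apache.avro.SchemaCompatibility.SchemaPairCompatibility;
-
-// Writer schema: the schema the state was originally serialized with.
-Schema writerSchema = SchemaBuilder.record(&quot;User&quot;).fields()
-    .requiredString(&quot;name&quot;)
-    .requiredInt(&quot;age&quot;)
-    .endRecord();
-
-// Reader schema: the schema of the upgraded application, where the age field was dropped.
-Schema readerSchema = SchemaBuilder.record(&quot;User&quot;).fields()
-    .requiredString(&quot;name&quot;)
-    .endRecord();
-
-// Avro resolves the two schemas; the extra writer field is simply ignored on read.
-SchemaPairCompatibility compatibility =
-    SchemaCompatibility.checkReaderWriterCompatibility(readerSchema, writerSchema);
-// compatibility.getType() == SchemaCompatibility.SchemaCompatibilityType.COMPATIBLE
-```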
-
-```java
-public class AvroSerializerSnapshot&lt;T&gt; implements TypeSerializerSnapshot&lt;T&gt; {
-  private Schema runtimeSchema;
-  private Schema previousSchema;
-  private Class&lt;T&gt; runtimeType; // runtime class of the records, resolved on restore
-​
-  @SuppressWarnings(&quot;WeakerAccess&quot;)
-  public AvroSerializerSnapshot() { }
-​
-  AvroSerializerSnapshot(Schema schema) {
-    this.runtimeSchema = schema;
-  }
-```
-
-This is a sketch of our Avro serializer. It uses the provided schemas and delegates to Apache Avro for all (de)-serialization. Let’s take a look at one possible implementation of a `TypeSerializerSnapshot` that supports schema migration for Avro. 
-
-
-### Writing out the snapshot
-
-When serializing out the snapshot, the snapshot configuration will write two pieces of information: the current snapshot configuration version and the serializer configuration.
-
-```java
-  @Override
-  public int getCurrentVersion() {
-    return 1;
-  }
-​
-  @Override
-  public void writeSnapshot(DataOutputView out) throws IOException {
-    out.writeUTF(runtimeSchema.toString(false));
-  }
-```
+&lt;h2 id=&quot;peeking-under-the-hood&quot;&gt;Peeking Under the Hood&lt;/h2&gt;
 
-The version is used to version the snapshot configuration object itself, while the `writeSnapshot` method writes out all the information we need to understand the current format: the runtime schema.
+&lt;p&gt;Now that we have explained how schema evolution in Flink works, let’s describe the challenges of performing schema serialization with Flink under the hood. Flink considers state as a core part of its API stability, in a way that developers should always be able to take a savepoint from one version of Flink and restart it on the next. With schema evolution, every migration needs to be backwards compatible and also compatible with the different state backends. While in the Flink c [...]
 
-```java
-  @Override
-  public void readSnapshot(
-      int readVersion,
-      DataInputView in,
-      ClassLoader userCodeClassLoader) throws IOException {
-
-    assert readVersion == 1;
-    final String previousSchemaDefinition = in.readUTF();
-    this.previousSchema = parseAvroSchema(previousSchemaDefinition);
-    this.runtimeType = findClassOrFallbackToGeneric(
-      userCodeClassLoader,
-      previousSchema.getFullName());
-​
-    this.runtimeSchema = tryExtractAvroSchema(userCodeClassLoader, runtimeType);
-  }
-```
-Now, when Flink restores, it is able to read back the writer schema that was used to serialize the data. The current runtime schema is discovered on the classpath using some Java reflection magic.
+&lt;p&gt;For instance, the heap state backend supports lazy serialization and eager deserialization, so the per-record code path always works with Java objects, with serialization happening on a background thread. When restoring, Flink will eagerly deserialize all the data and then start the user code. If a developer plugs in a new serializer, the deserialization happens before Flink ever receives the information.&lt;/p&gt;
 
-Once we have both of these we can compare them for compatibility. Perhaps nothing has changed and the schemas are compatible as is.
+&lt;p&gt;The RocksDB state backend behaves in the exact opposite manner: it supports eager serialization, because items are stored on disk and RocksDB only consumes byte arrays. RocksDB provides lazy deserialization simply by downloading files to the local disk, making Flink unaware of what the bytes mean until a serializer is registered.&lt;/p&gt;
 
-```java
-  @Override
-  public TypeSerializerSchemaCompatibility&lt;T&gt; resolveSchemaCompatibility(
-      TypeSerializer&lt;T&gt; newSerializer) {
-​
-    if (!(newSerializer instanceof AvroSerializer)) {
-      return TypeSerializerSchemaCompatibility.incompatible();
-    }
-​
-    if (Objects.equals(previousSchema, runtimeSchema)) {
-      return TypeSerializerSchemaCompatibility.compatibleAsIs();
-    }
-```
-
-Otherwise, the schemas are compared using Avro’s compatibility checks and they may either be compatible with a migration or incompatible.
-
-```java
-  final SchemaPairCompatibility compatibility = SchemaCompatibility
-    .checkReaderWriterCompatibility(previousSchema, runtimeSchema);
-​
-    return avroCompatibilityToFlinkCompatibility(compatibility);
-  }
-```
-
-If they are compatible with migration, then Flink will restore a new serializer that can read the old schema and deserialize into the new runtime type, which is in effect a migration.
-
-```java
-  @Override
-  public TypeSerializer&lt;T&gt; restoreSerializer() {
-    if (previousSchema != null) {
-      return new AvroSerializer&lt;&gt;(runtimeType, runtimeSchema, previousSchema);
-    } else {
-      return new AvroSerializer&lt;&gt;(runtimeType, runtimeSchema, runtimeSchema);
-    }
-  }
-}
-```
+&lt;p&gt;An additional challenge stems from the fact that different versions of user code contain different classes on their classpath, so the serializer that was used to write a savepoint may no longer be available at runtime.&lt;/p&gt;
 
-## The State Processor API: Reading, writing and modifying Flink state
+&lt;p&gt;To overcome the previously mentioned challenges, we introduced what we call &lt;code&gt;TypeSerializerSnapshot&lt;/code&gt;. The &lt;code&gt;TypeSerializerSnapshot&lt;/code&gt; stores the configuration of the writer serializer in the snapshot. When restoring, it will use that configuration to read back the previous state and check its compatibility with the current version. Using such an operation allows Flink to:&lt;/p&gt;
 
-The State Processor API allows reading from and writing to Flink savepoints. Some of the interesting use cases it can be used for are:
+&lt;ul&gt;
+  &lt;li&gt;Read the configuration used to write out a snapshot&lt;/li&gt;
+  &lt;li&gt;Consume the new user code&lt;/li&gt;
+  &lt;li&gt;Check if both items above are compatible&lt;/li&gt;
+  &lt;li&gt;Consume the bytes from the snapshot and move forward or alert the user otherwise&lt;/li&gt;
+&lt;/ul&gt;
 
-* Analyzing state for interesting patterns
-* Troubleshooting or auditing jobs by checking for state discrepancies
-* Bootstrapping state for new applications
-* Modifying savepoints such as:
-  * Changing the maximum parallelism of a savepoint after deploying a Flink job
-  * Introducing breaking schema updates to a Flink application 
-  * Correcting invalid state in a Flink savepoint
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;interface&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;TypeSerializerSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;T&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;getCurrentVersion&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;writeSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;DataOutputView&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;IOException&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;readSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;readVersion&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;DataInputView&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;ClassLoader&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;userCodeClassLoader&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;IOException&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;TypeSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;T&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;restoreSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;TypeSerializerSchemaCompatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;T&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;resolveSchemaCompatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;TypeSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;T&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;newSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;h2 id=&quot;implementing-apache-avro-serialization-in-flink&quot;&gt;Implementing Apache Avro Serialization in Flink&lt;/h2&gt;
+
+&lt;p&gt;Apache Avro is a data serialization format that has very well-defined schema migration semantics and supports both reader and writer schemas. During normal Flink execution the reader and writer schemas will be the same. However, when upgrading an application they may be different and with schema evolution, Flink will be able to migrate objects with their schemas.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;AvroSerializerSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;T&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;implements&lt;/span&gt; &lt;span class= [...]
+  &lt;span class=&quot;kd&quot;&gt;private&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Schema&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;runtimeSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;private&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Schema&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;previousSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;nd&quot;&gt;@SuppressWarnings&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;WeakerAccess&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;AvroSerializerSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;AvroSerializerSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Schema&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;schema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;this&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;runtimeSchema&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;schema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;This is a sketch of our Avro serializer. It uses the provided schemas and delegates to Apache Avro for all (de)-serialization. Let’s take a look at one possible implementation of a &lt;code&gt;TypeSerializerSnapshot&lt;/code&gt; that supports schema migration for Avro.&lt;/p&gt;
+
+&lt;h1 id=&quot;writing-out-the-snapshot&quot;&gt;Writing out the snapshot&lt;/h1&gt;
+
+&lt;p&gt;When serializing out the snapshot, the snapshot configuration will write two pieces of information: the current snapshot configuration version and the serializer configuration.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;getCurrentVersion&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;writeSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;DataOutputView&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;IOException&lt;/span&gt; &lt;span class=& [...]
+    &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;writeUTF&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;runtimeSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;toString&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;false&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;));&lt; [...]
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The version is used to version the snapshot configuration object itself, while the &lt;code&gt;writeSnapshot&lt;/code&gt; method writes out all the information we need to understand the current format: the runtime schema.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;readSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;kt&quot;&gt;int&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;readVersion&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;DataInputView&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;ClassLoader&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;userCodeClassLoader&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;throws&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;IOException&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+
+    &lt;span class=&quot;k&quot;&gt;assert&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;readVersion&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+    &lt;span class=&quot;kd&quot;&gt;final&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;previousSchemaDefinition&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;in&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;readUTF&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;this&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;previousSchema&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parseAvroSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;previousSchemaDefinition&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;this&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;runtimeType&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;findClassOrFallbackToGeneric&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;userCodeClassLoader&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;previousSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getFullName&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;this&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;runtimeSchema&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tryExtractAvroSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;userCodeClassLoader&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;runtimeType&lt;/span&gt;&lt;span [...]
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+&lt;p&gt;Now, when Flink restores, it is able to read back the writer schema that was used to serialize the data. The current runtime schema is discovered on the classpath using some Java reflection magic.&lt;/p&gt;
+
+&lt;p&gt;Once we have both of these we can compare them for compatibility. Perhaps nothing has changed and the schemas are compatible as is.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;TypeSerializerSchemaCompatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;T&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;resolveSchemaCompatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;TypeSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;T&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;newSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(!(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;newSerializer&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;instanceof&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AvroSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;TypeSerializerSchemaCompatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;incompatible&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Objects&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;equals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;previousSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;runtimeSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)) [...]
+      &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;TypeSerializerSchemaCompatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;compatibleAsIs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;Otherwise, the schemas are compared using Avro’s compatibility checks and they may either be compatible with a migration or incompatible.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;  &lt;span class=&quot;kd&quot;&gt;final&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;SchemaPairCompatibility&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;compatibility&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;SchemaCompatibility&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;checkReaderWriterCompatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;previousSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;runtimeSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+&lt;span class=&quot;err&quot;&gt;​&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;avroCompatibilityToFlinkCompatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;compatibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;If they are compatible with migration, then Flink will restore a new serializer that can read the old schema and deserialize into the new runtime type, which is in effect a migration.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;TypeSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;T&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;restoreSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;previousSchema&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AvroSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;runtimeType&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;runtimeSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;previousSchema&lt;/span& [...]
+    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;else&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;AvroSerializer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;runtimeType&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;runtimeSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;runtimeSchema&lt;/span&g [...]
+    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;h1 id=&quot;the-state-processor-api-reading-writing-and-modifying-flink-state&quot;&gt;The State Processor API: Reading, writing and modifying Flink state&lt;/h1&gt;
+
+&lt;p&gt;The State Processor API allows reading from and writing to Flink savepoints. Some of the interesting use cases it can be used for are:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;Analyzing state for interesting patterns&lt;/li&gt;
+  &lt;li&gt;Troubleshooting or auditing jobs by checking for state discrepancies&lt;/li&gt;
+  &lt;li&gt;Bootstrapping state for new applications&lt;/li&gt;
+  &lt;li&gt;Modifying savepoints such as:
+    &lt;ul&gt;
+      &lt;li&gt;Changing the maximum parallelism of a savepoint after deploying a Flink job&lt;/li&gt;
+      &lt;li&gt;Introducing breaking schema updates to a Flink application&lt;/li&gt;
+      &lt;li&gt;Correcting invalid state in a Flink savepoint&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-In a [previous blog post](https://flink.apache.org/feature/2019/09/13/state-processor-api.html), we discussed the State Processor API in detail, the community’s motivation behind introducing the feature in Flink 1.9, what you can use the API for and how you can use it. Essentially, the State Processor API is based around a relational model of mapping your Flink job state to a database, as illustrated in the diagram below. We encourage you to [read the previous story](https://flink.apache [...]
+&lt;p&gt;In a &lt;a href=&quot;https://flink.apache.org/feature/2019/09/13/state-processor-api.html&quot;&gt;previous blog post&lt;/a&gt;, we discussed the State Processor API in detail, the community’s motivation behind introducing the feature in Flink 1.9, what you can use the API for and how you can use it. Essentially, the State Processor API is based around a relational model of mapping your Flink job state to a database, as illustrated in the diagram below. We encourage you to &lt; [...]
 
-* Reading Keyed and Operator State with the State Processor API and 
-* Writing and Bootstrapping Keyed and Operator State with the State Processor API
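-
-As a minimal sketch of the first of these two scenarios, reading operator (list) state from an existing savepoint could look roughly like the following, assuming the `flink-state-processor-api` module is available; the savepoint path, operator uid and state name are placeholders:
-
-```java
-import org.apache.flink.api.common.typeinfo.Types;
-import org.apache.flink.api.java.DataSet;
-import org.apache.flink.api.java.ExecutionEnvironment;
-import org.apache.flink.runtime.state.memory.MemoryStateBackend;
-import org.apache.flink.state.api.ExistingSavepoint;
-import org.apache.flink.state.api.Savepoint;
-
-ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
-
-// Load an existing savepoint (the path is a placeholder).
-ExistingSavepoint savepoint = Savepoint.load(
-    env, &quot;hdfs:///savepoints/savepoint-1234&quot;, new MemoryStateBackend());
-
-// Read the operator list state registered under the given uid and state name.
-DataSet&lt;Long&gt; counts = savepoint.readListState(
-    &quot;my-operator-uid&quot;, &quot;element-count&quot;, Types.LONG);
-
-counts.print();
-```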
+&lt;ul&gt;
+  &lt;li&gt;Reading Keyed and Operator State with the State Processor API and&lt;/li&gt;
+  &lt;li&gt;Writing and Bootstrapping Keyed and Operator State with the State Processor API&lt;/li&gt;
+&lt;/ul&gt;
 
-Stay tuned for more details and guidance around this feature of Flink.
+&lt;p&gt;Stay tuned for more details and guidance around this feature of Flink.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-state-processor-api-visual-1.png&quot; width=&quot;600px&quot; alt=&quot;State Processor API in Apache Flink&quot;/&gt;
+&lt;img src=&quot;/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-state-processor-api-visual-1.png&quot; width=&quot;600px&quot; alt=&quot;State Processor API in Apache Flink&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-state-processor-api-visual-2.png&quot; width=&quot;600px&quot; alt=&quot;State Processor API in Apache Flink&quot;/&gt;
-&lt;/center&gt;
-&lt;br&gt;
-
-## Looking ahead: More ways to interact with State in Flink
-
-There is a lot of discussion happening in the community related to extending the way Flink developers interact with state in their Flink applications. Regarding the State Processor API, some thoughts revolve around further broadening the API’s scope beyond its current ability to read from and write to both keyed and operator state. In upcoming releases, the State processor API will be extended to support both reading from and writing to windows and have a first-class integration with Fli [...]
+&lt;img src=&quot;/img/blog/2020-01-29-state-unlocked-interacting-with-state-in-apache-flink/managing-state-in-flink-state-processor-api-visual-2.png&quot; width=&quot;600px&quot; alt=&quot;State Processor API in Apache Flink&quot; /&gt;
+&lt;/center&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Beyond widening the scope of the State Processor API, the Flink community is discussing a few additional ways to improve the way developers interact with state in Flink. One of them is the proposal for a Unified Savepoint Format ([FLIP-41](https://cwiki.apache.org/confluence/display/FLINK/FLIP-41%3A+Unify+Binary+format+for+Keyed+State)) for all keyed state backends. Such improvement aims at introducing a unified binary format across all savepoints in all keyed state backends, something t [...]
+&lt;h1 id=&quot;looking-ahead-more-ways-to-interact-with-state-in-flink&quot;&gt;Looking ahead: More ways to interact with State in Flink&lt;/h1&gt;
 
-The community is also discussing the ability to have upgradability dry runs in upcoming Flink releases. Having such functionality in Flink allows developers to detect incompatible updates offline without the need of starting a new Flink job from scratch. For example, Flink users will be able to uncover topology or schema incompatibilities upon upgrading a Flink job, without having to load the state back to a running Flink job in the first place. Additionally, with upgradability dry runs  [...]
+&lt;p&gt;There is a lot of discussion happening in the community related to extending the way Flink developers interact with state in their Flink applications. Regarding the State Processor API, some thoughts revolve around further broadening the API’s scope beyond its current ability to read from and write to both keyed and operator state. In upcoming releases, the State processor API will be extended to support both reading from and writing to windows and have a first-class integration [...]
 
-With all  the exciting new functionality added in Flink 1.9 as well as some solid ideas and discussions around bringing state in Flink to the next level, the community is committed to making state in Apache Flink a fundamental element of the framework, something that is ever-present across versions and upgrades of your application and a component that is a true first-class citizen in Apache Flink. We encourage you to sign up to the [mailing list](https://flink.apache.org/community.html)  [...]
+&lt;p&gt;Beyond widening the scope of the State Processor API, the Flink community is discussing a few additional ways to improve the way developers interact with state in Flink. One of them is the proposal for a Unified Savepoint Format (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-41%3A+Unify+Binary+format+for+Keyed+State&quot;&gt;FLIP-41&lt;/a&gt;) for all keyed state backends. Such improvement aims at introducing a unified binary format across all savepoint [...]
+
+&lt;p&gt;The community is also discussing the ability to have upgradability dry runs in upcoming Flink releases. Having such functionality in Flink allows developers to detect incompatible updates offline without the need of starting a new Flink job from scratch. For example, Flink users will be able to uncover topology or schema incompatibilities upon upgrading a Flink job, without having to load the state back to a running Flink job in the first place. Additionally, with upgradability  [...]
+
+&lt;p&gt;With all  the exciting new functionality added in Flink 1.9 as well as some solid ideas and discussions around bringing state in Flink to the next level, the community is committed to making state in Apache Flink a fundamental element of the framework, something that is ever-present across versions and upgrades of your application and a component that is a true first-class citizen in Apache Flink. We encourage you to sign up to the &lt;a href=&quot;https://flink.apache.org/commu [...]
 </description>
-<pubDate>Wed, 29 Jan 2020 12:00:00 +0000</pubDate>
+<pubDate>Wed, 29 Jan 2020 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2020/01/29/state-unlocked-interacting-with-state-in-apache-flink.html</link>
 <guid isPermaLink="true">/news/2020/01/29/state-unlocked-interacting-with-state-in-apache-flink.html</guid>
 </item>
 
 <item>
 <title>Advanced Flink Application Patterns Vol.1: Case Study of a Fraud Detection System</title>
-<description>In this series of blog posts you will learn about three powerful Flink patterns for building streaming applications:
+<description>&lt;p&gt;In this series of blog posts you will learn about three powerful Flink patterns for building streaming applications:&lt;/p&gt;
 
- - Dynamic updates of application logic
- - Dynamic data partitioning (shuffle), controlled at runtime
- - Low latency alerting based on custom windowing logic (without using the window API)
-
-These patterns expand the possibilities of what is achievable with statically defined data flows and provide the building blocks to fulfill complex business requirements.
+&lt;ul&gt;
+  &lt;li&gt;Dynamic updates of application logic&lt;/li&gt;
+  &lt;li&gt;Dynamic data partitioning (shuffle), controlled at runtime&lt;/li&gt;
+  &lt;li&gt;Low latency alerting based on custom windowing logic (without using the window API)&lt;/li&gt;
+&lt;/ul&gt;
 
-**Dynamic updates of application logic** allow Flink jobs to change at runtime, without downtime from stopping and resubmitting the code.  
-&lt;br&gt;
-**Dynamic data partitioning** provides the ability to change how events are distributed and grouped by Flink at runtime. Such functionality often becomes a natural requirement when building jobs with dynamically reconfigurable application logic.  
-&lt;br&gt;
-**Custom window management** demonstrates how you can utilize the low level [process function API](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.html), when the native [window API](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html) is not exactly matching your requirements. Specifically, you will learn how to implement low latency alerting on windows and how to limit state growth with timers.    
+&lt;p&gt;These patterns expand the possibilities of what is achievable with statically defined data flows and provide the building blocks to fulfill complex business requirements.&lt;/p&gt;
 
-These patterns build on top of core Flink functionality; however, they might not be immediately apparent from the framework&#39;s documentation, because explaining and presenting the motivation behind them is not always trivial without a concrete use case. That is why we will showcase these patterns with a practical example that offers a real-world usage scenario for Apache Flink: a _Fraud Detection_ engine.
-We hope that this series will place these powerful approaches into your tool belt and enable you to take on new and exciting tasks.
+&lt;p&gt;&lt;strong&gt;Dynamic updates of application logic&lt;/strong&gt; allow Flink jobs to change at runtime, without downtime from stopping and resubmitting the code.&lt;br /&gt;
+&lt;br /&gt;
+&lt;strong&gt;Dynamic data partitioning&lt;/strong&gt; provides the ability to change how events are distributed and grouped by Flink at runtime. Such functionality often becomes a natural requirement when building jobs with dynamically reconfigurable application logic.&lt;br /&gt;
+&lt;br /&gt;
+&lt;strong&gt;Custom window management&lt;/strong&gt; demonstrates how you can utilize the low level &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/process_function.html&quot;&gt;process function API&lt;/a&gt;, when the native &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/operators/windows.html&quot;&gt;window API&lt;/a&gt; is not exactly matching your requirements. Specifically, you will learn how to impl [...]
 
-In the first blog post of the series we will look at the high-level architecture of the demo application, describe its components and their interactions. We will then deep dive into the implementation details of the first pattern in the series - **dynamic data partitioning**.
+&lt;p&gt;These patterns build on top of core Flink functionality; however, they might not be immediately apparent from the framework’s documentation, because explaining and presenting the motivation behind them is not always trivial without a concrete use case. That is why we will showcase these patterns with a practical example that offers a real-world usage scenario for Apache Flink: a &lt;em&gt;Fraud Detection&lt;/em&gt; engine.
+We hope that this series will place these powerful approaches into your tool belt and enable you to take on new and exciting tasks.&lt;/p&gt;
 
+&lt;p&gt;In the first blog post of the series we will look at the high-level architecture of the demo application, describe its components and their interactions. We will then deep dive into the implementation details of the first pattern in the series - &lt;strong&gt;dynamic data partitioning&lt;/strong&gt;.&lt;/p&gt;
 
-You will be able to run the full Fraud Detection Demo application locally and look into the details of the implementation by using the accompanying GitHub repository.
+&lt;p&gt;You will be able to run the full Fraud Detection Demo application locally and look into the details of the implementation by using the accompanying GitHub repository.&lt;/p&gt;
 
-### Fraud Detection Demo
+&lt;h3 id=&quot;fraud-detection-demo&quot;&gt;Fraud Detection Demo&lt;/h3&gt;
 
-The full source code for our fraud detection demo is open source and available online. To run it locally, check out the following repository and follow the steps in the README:
+&lt;p&gt;The full source code for our fraud detection demo is open source and available online. To run it locally, check out the following repository and follow the steps in the README:&lt;/p&gt;
 
-[https://github.com/afedulov/fraud-detection-demo](https://github.com/afedulov/fraud-detection-demo)
+&lt;p&gt;&lt;a href=&quot;https://github.com/afedulov/fraud-detection-demo&quot;&gt;https://github.com/afedulov/fraud-detection-demo&lt;/a&gt;&lt;/p&gt;
 
-You will see the demo is a self-contained application - it only requires `docker` and `docker-compose` to be built from sources and includes the following components:
+&lt;p&gt;You will see the demo is a self-contained application - it only requires &lt;code&gt;docker&lt;/code&gt; and &lt;code&gt;docker-compose&lt;/code&gt; to be built from sources and includes the following components:&lt;/p&gt;
 
- - Apache Kafka (message broker) with ZooKeeper
- - Apache Flink ([application cluster](https://ci.apache.org/projects/flink/flink-docs-stable/concepts/glossary.html#flink-application-cluster))
- - Fraud Detection Web App
+&lt;ul&gt;
+  &lt;li&gt;Apache Kafka (message broker) with ZooKeeper&lt;/li&gt;
+  &lt;li&gt;Apache Flink (&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/concepts/glossary.html#flink-application-cluster&quot;&gt;application cluster&lt;/a&gt;)&lt;/li&gt;
+  &lt;li&gt;Fraud Detection Web App&lt;/li&gt;
+&lt;/ul&gt;
 
-The high-level goal of the Fraud Detection engine is to consume a stream of financial transactions and evaluate them against a set of rules. These rules are subject to frequent changes and tweaks. In a real production system, it is important to be able to add and remove them at runtime, without incurring an expensive penalty of stopping and restarting the job.
+&lt;p&gt;The high-level goal of the Fraud Detection engine is to consume a stream of financial transactions and evaluate them against a set of rules. These rules are subject to frequent changes and tweaks. In a real production system, it is important to be able to add and remove them at runtime, without incurring an expensive penalty of stopping and restarting the job.&lt;/p&gt;
 
-When you navigate to the demo URL in your browser, you will be presented with the following UI:
+&lt;p&gt;When you navigate to the demo URL in your browser, you will be presented with the following UI:&lt;/p&gt;
 
- &lt;center&gt;
- &lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-11-19-demo-fraud-detection/ui.png&quot; width=&quot;800px&quot; alt=&quot;Figure 1: Demo UI&quot;/&gt;
- &lt;br/&gt;
+&lt;center&gt;
+ &lt;img src=&quot;/img/blog/2019-11-19-demo-fraud-detection/ui.png&quot; width=&quot;800px&quot; alt=&quot;Figure 1: Demo UI&quot; /&gt;
+ &lt;br /&gt;
  &lt;i&gt;&lt;small&gt;Figure 1: Fraud Detection Demo UI&lt;/small&gt;&lt;/i&gt;
  &lt;/center&gt;
- &lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-On the left side, you can see a visual representation of financial transactions flowing through the system after you click the &quot;Start&quot; button. The slider at the top allows you to control the number of generated transactions per second. The middle section is devoted to managing the rules evaluated by Flink. From here, you can create new rules as well as issue control commands, such as clearing Flink&#39;s state.
+&lt;p&gt;On the left side, you can see a visual representation of financial transactions flowing through the system after you click the “Start” button. The slider at the top allows you to control the number of generated transactions per second. The middle section is devoted to managing the rules evaluated by Flink. From here, you can create new rules as well as issue control commands, such as clearing Flink’s state.&lt;/p&gt;
 
-The demo comes out of the box with a set of predefined sample rules. You can click the _Start_ button and, after some time, you will observe alerts displayed in the right section of the UI. These alerts are the result of Flink evaluating the generated transactions stream against the predefined rules.
+&lt;p&gt;The demo comes out of the box with a set of predefined sample rules. You can click the &lt;em&gt;Start&lt;/em&gt; button and, after some time, you will observe alerts displayed in the right section of the UI. These alerts are the result of Flink evaluating the generated transactions stream against the predefined rules.&lt;/p&gt;
 
- Our sample fraud detection system consists of three main components:
+&lt;p&gt;Our sample fraud detection system consists of three main components:&lt;/p&gt;
 
-  1. Frontend (React)  
-  1. Backend (SpringBoot)  
-  1. Fraud Detection application (Apache Flink)  
+&lt;ol&gt;
+  &lt;li&gt;Frontend (React)&lt;/li&gt;
+  &lt;li&gt;Backend (SpringBoot)&lt;/li&gt;
+  &lt;li&gt;Fraud Detection application (Apache Flink)&lt;/li&gt;
+&lt;/ol&gt;
 
-Interactions between the main elements are depicted in _Figure 2_.
+&lt;p&gt;Interactions between the main elements are depicted in &lt;em&gt;Figure 2&lt;/em&gt;.&lt;/p&gt;
 
- &lt;center&gt;
- &lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-11-19-demo-fraud-detection/architecture.png&quot; width=&quot;800px&quot; alt=&quot;Figure 2: Demo Components&quot;/&gt;
- &lt;br/&gt;
+&lt;center&gt;
+ &lt;img src=&quot;/img/blog/2019-11-19-demo-fraud-detection/architecture.png&quot; width=&quot;800px&quot; alt=&quot;Figure 2: Demo Components&quot; /&gt;
+ &lt;br /&gt;
  &lt;i&gt;&lt;small&gt;Figure 2: Fraud Detection Demo Components&lt;/small&gt;&lt;/i&gt;
  &lt;/center&gt;
- &lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
- The Backend exposes a REST API to the Frontend for creating/deleting rules as well as issuing control commands for managing the demo execution. It then relays those Frontend actions to Flink by sending them via a &quot;Control&quot; Kafka topic. The Backend additionally includes a _Transactions Generator_ component, which sends an emulated stream of money transfer events to Flink via a separate &quot;Transactions&quot; topic. Alerts generated by Flink are consumed by the Backend from &q [...]
+&lt;p&gt;The Backend exposes a REST API to the Frontend for creating/deleting rules as well as issuing control commands for managing the demo execution. It then relays those Frontend actions to Flink by sending them via a “Control” Kafka topic. The Backend additionally includes a &lt;em&gt;Transactions Generator&lt;/em&gt; component, which sends an emulated stream of money transfer events to Flink via a separate “Transactions” topic. Alerts generated by Flink are consumed by the Backend  [...]
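-
-On the Flink side, consuming such a topic is a regular Kafka source. A minimal sketch could look like the following; the bootstrap address is a placeholder and a plain string schema stands in for the demo&#39;s actual Transaction deserializer:
-
-```java
-import java.util.Properties;
-
-import org.apache.flink.api.common.serialization.SimpleStringSchema;
-import org.apache.flink.streaming.api.datastream.DataStream;
-import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
-import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
-
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-
-Properties kafkaProps = new Properties();
-kafkaProps.setProperty(&quot;bootstrap.servers&quot;, &quot;kafka:9092&quot;); // placeholder address
-
-// Consume the raw events from the &quot;Transactions&quot; topic.
-DataStream&lt;String&gt; transactions = env.addSource(
-    new FlinkKafkaConsumer&lt;&gt;(&quot;Transactions&quot;, new SimpleStringSchema(), kafkaProps));
-```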
 
-Now that you are familiar with the overall layout and the goal of our Fraud Detection engine, let&#39;s now go into the details of what is required to implement such a system.
+&lt;p&gt;Now that you are familiar with the overall layout and the goal of our Fraud Detection engine, let’s now go into the details of what is required to implement such a system.&lt;/p&gt;
 
-### Dynamic Data Partitioning
+&lt;h3 id=&quot;dynamic-data-partitioning&quot;&gt;Dynamic Data Partitioning&lt;/h3&gt;
 
-The first pattern we will look into is Dynamic Data Partitioning.
+&lt;p&gt;The first pattern we will look into is Dynamic Data Partitioning.&lt;/p&gt;
 
-If you have used Flink&#39;s DataStream API in the past, you are undoubtedly familiar with the **keyBy** method. Keying a stream shuffles all the records such that elements with the same key are assigned to the same partition. This means all records with the same key are processed by the same physical instance of the next operator.
+&lt;p&gt;If you have used Flink’s DataStream API in the past, you are undoubtedly familiar with the &lt;strong&gt;keyBy&lt;/strong&gt; method. Keying a stream shuffles all the records such that elements with the same key are assigned to the same partition. This means all records with the same key are processed by the same physical instance of the next operator.&lt;/p&gt;
 
-In a typical streaming application, the choice of key is fixed, determined by some static field within the elements. For instance, when building a simple window-based aggregation of a stream of transactions, we might always group by the transaction&#39;s account id.
+&lt;p&gt;In a typical streaming application, the choice of key is fixed, determined by some static field within the elements. For instance, when building a simple window-based aggregation of a stream of transactions, we might always group by the transaction’s account id.&lt;/p&gt;
 
-```java
-DataStream&lt;Transaction&gt; input = // [...]
-DataStream&lt;...&gt; windowed = input
-  .keyBy(Transaction::getAccountId)
-  .window(/*window specification*/);
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;input&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// [...]&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;...&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;windowed&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;input&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;keyBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nl&quot;&gt;Transaction:&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;getAccountId&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;window&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;cm&quot;&gt;/*window specification*/&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-This approach is the main building block for achieving horizontal scalability in a wide range of use cases. However, in the case of an application striving to provide flexibility in business logic at runtime, this is not enough.
-To understand why this is the case, let us start with articulating a realistic sample rule definition for our fraud detection system in the form of a functional requirement:  
+&lt;p&gt;This approach is the main building block for achieving horizontal scalability in a wide range of use cases. However, in the case of an application striving to provide flexibility in business logic at runtime, this is not enough.
+To understand why this is the case, let us start with articulating a realistic sample rule definition for our fraud detection system in the form of a functional requirement:&lt;/p&gt;
 
-*&quot;Whenever the **sum** of the accumulated **payment amount** from the same **payer** to the same **beneficiary** within the **duration of a week** is **greater** than **1 000 000 $** - fire an alert.&quot;*
+&lt;p&gt;&lt;em&gt;“Whenever the &lt;strong&gt;sum&lt;/strong&gt; of the accumulated &lt;strong&gt;payment amount&lt;/strong&gt; from the same &lt;strong&gt;payer&lt;/strong&gt; to the same &lt;strong&gt;beneficiary&lt;/strong&gt; within the &lt;strong&gt;duration of a week&lt;/strong&gt; is &lt;strong&gt;greater&lt;/strong&gt; than &lt;strong&gt;1 000 000 $&lt;/strong&gt; - fire an alert.”&lt;/em&gt;&lt;/p&gt;
 
-In this formulation we can spot a number of parameters that we would like to be able to specify in a newly-submitted rule and possibly even later modify or tweak at runtime:
+&lt;p&gt;In this formulation we can spot a number of parameters that we would like to be able to specify in a newly-submitted rule and possibly even later modify or tweak at runtime:&lt;/p&gt;
 
-1. Aggregation field (payment amount)  
-1. Grouping fields (payer + beneficiary)  
-1. Aggregation function (sum)  
-1. Window duration (1 week)  
-1. Limit (1 000 000)  
-1. Limit operator (greater)  
+&lt;ol&gt;
+  &lt;li&gt;Aggregation field (payment amount)&lt;/li&gt;
+  &lt;li&gt;Grouping fields (payer + beneficiary)&lt;/li&gt;
+  &lt;li&gt;Aggregation function (sum)&lt;/li&gt;
+  &lt;li&gt;Window duration (1 week)&lt;/li&gt;
+  &lt;li&gt;Limit (1 000 000)&lt;/li&gt;
+  &lt;li&gt;Limit operator (greater)&lt;/li&gt;
+&lt;/ol&gt;
 
-Accordingly, we will use the following simple JSON format to define the aforementioned parameters:
+&lt;p&gt;Accordingly, we will use the following simple JSON format to define the aforementioned parameters:&lt;/p&gt;
 
-```json  
-{
-  &quot;ruleId&quot;: 1,
-  &quot;ruleState&quot;: &quot;ACTIVE&quot;,
-  &quot;groupingKeyNames&quot;: [&quot;payerId&quot;, &quot;beneficiaryId&quot;],
-  &quot;aggregateFieldName&quot;: &quot;paymentAmount&quot;,
-  &quot;aggregatorFunctionType&quot;: &quot;SUM&quot;,
-  &quot;limitOperatorType&quot;: &quot;GREATER&quot;,
-  &quot;limit&quot;: 1000000,
-  &quot;windowMinutes&quot;: 10080
-}
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-json&quot;&gt;&lt;span class=&quot;p&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;quot;ruleId&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;quot;ruleState&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&amp;quot;ACTIVE&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;quot;groupingKeyNames&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;p&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;s2&quot;&gt;&amp;quot;payerId&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&amp;quot;beneficiaryId&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;],&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;quot;aggregateFieldName&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&amp;quot;paymentAmount&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;quot;aggregatorFunctionType&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&amp;quot;SUM&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;quot;limitOperatorType&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;s2&quot;&gt;&amp;quot;GREATER&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;quot;limit&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1000000&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;quot;windowMinutes&amp;quot;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;10080&lt;/span&gt;
+&lt;span class=&quot;p&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-At this point, it is important to understand that **`groupingKeyNames`** determine the actual physical grouping of events - all Transactions with the same values of specified parameters (e.g. _payer #25 -&gt; beneficiary #12_) have to be aggregated in the same physical instance of the evaluating operator. Naturally, the process of distributing data in such a way in Flink&#39;s API is realised by a `keyBy()` function.
+&lt;p&gt;At this point, it is important to understand that &lt;strong&gt;&lt;code&gt;groupingKeyNames&lt;/code&gt;&lt;/strong&gt; determine the actual physical grouping of events - all Transactions with the same values of specified parameters (e.g. &lt;em&gt;payer #25 -&amp;gt; beneficiary #12&lt;/em&gt;) have to be aggregated in the same physical instance of the evaluating operator. Naturally, the process of distributing data in such a way in Flink’s API is realised by a &lt;code&gt;key [...]
 
-Most examples in Flink&#39;s `keyBy()`[documentation](https://ci.apache.org/projects/flink/flink-docs-stable/dev/api_concepts.html#define-keys-using-field-expressions) use a hard-coded `KeySelector`, which extracts specific fixed events&#39; fields. However, to support the desired flexibility, we have to extract them in a more dynamic fashion based on the specifications of the rules. For this, we will have to use one additional operator that prepares every event for dispatching to a corr [...]
+&lt;p&gt;Most examples in Flink’s &lt;code&gt;keyBy()&lt;/code&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/api_concepts.html#define-keys-using-field-expressions&quot;&gt;documentation&lt;/a&gt; use a hard-coded &lt;code&gt;KeySelector&lt;/code&gt;, which extracts specific fixed events’ fields. However, to support the desired flexibility, we have to extract them in a more dynamic fashion based on the specifications of the rules. For this, we will have to [...]
 
-On a high level, our main processing pipeline looks like this:
+&lt;p&gt;On a high level, our main processing pipeline looks like this:&lt;/p&gt;
 
-```java
-DataStream&lt;Alert&gt; alerts =
-    transactions
-        .process(new DynamicKeyFunction())
-        .keyBy(/* some key selector */)
-        .process(/* actual calculations and alerting */);
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Alert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;alerts&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;transactions&lt;/span&gt;
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;DynamicKeyFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;keyBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;cm&quot;&gt;/* some key selector */&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;cm&quot;&gt;/* actual calculations and alerting */&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-We have previously established that each rule defines a **`groupingKeyNames`** parameter that specifies which combination of fields will be used for the incoming events&#39; grouping. Each rule might use an arbitrary combination of these fields. At the same time, every incoming event potentially needs to be evaluated against multiple rules. This implies that events might simultaneously need to be present at multiple parallel instances of evaluating operators that correspond to different  [...]
+&lt;p&gt;We have previously established that each rule defines a &lt;strong&gt;&lt;code&gt;groupingKeyNames&lt;/code&gt;&lt;/strong&gt; parameter that specifies which combination of fields will be used for the incoming events’ grouping. Each rule might use an arbitrary combination of these fields. At the same time, every incoming event potentially needs to be evaluated against multiple rules. This implies that events might simultaneously need to be present at multiple parallel instances  [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-11-19-demo-fraud-detection/shuffle_function_1.png&quot; width=&quot;800px&quot; alt=&quot;Figure 3: Forking events with Dynamic Key Function&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-11-19-demo-fraud-detection/shuffle_function_1.png&quot; width=&quot;800px&quot; alt=&quot;Figure 3: Forking events with Dynamic Key Function&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Figure 3: Forking events with Dynamic Key Function&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
- `DynamicKeyFunction` iterates over a set of defined rules and prepares every event to be processed by a `keyBy()` function by extracting the required grouping keys:
+&lt;p&gt;&lt;code&gt;DynamicKeyFunction&lt;/code&gt; iterates over a set of defined rules and prepares every event to be processed by a &lt;code&gt;keyBy()&lt;/code&gt; function by extracting the required grouping keys:&lt;/p&gt;
 
-```java
-public class DynamicKeyFunction
-    extends ProcessFunction&lt;Transaction, Keyed&lt;Transaction, String, Integer&gt;&gt; {
-   ...
-  /* Simplified */
-  List&lt;Rule&gt; rules = /* Rules that are initialized somehow.
-                        Details will be discussed in a future blog post. */;
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;DynamicKeyFunction&lt;/span&gt;
+    &lt;span class=&quot;kd&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span cla [...]
+   &lt;span class=&quot;o&quot;&gt;...&lt;/span&gt;
+  &lt;span class=&quot;cm&quot;&gt;/* Simplified */&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;List&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rules&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;cm&quot;&gt;/* Rules that are initialized somehow.&lt;/span&gt;
+&lt;span class=&quot;cm&quot;&gt;                        Details will be discussed in a future blog post. */&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
 
-  @Override
-  public void processElement(
-      Transaction event,
-      Context ctx,
-      Collector&lt;Keyed&lt;Transaction, String, Integer&gt;&gt; out) {
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;event&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+      &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Transaction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o& [...]
 
-      for (Rule rule :rules) {
-       out.collect(
-           new Keyed&lt;&gt;(
-               event,
-               KeysExtractor.getKey(rule.getGroupingKeyNames(), event),
-               rule.getRuleId()));
-      }
-  }
-  ...
-}
-```
- `KeysExtractor.getKey()` uses reflection to extract the required values of `groupingKeyNames` fields from events and combines them as a single concatenated String key, e.g `&quot;{payerId=25;beneficiaryId=12}&quot;`. Flink will calculate the hash of this key and assign the processing of this particular combination to a specific server in the cluster. This will allow tracking all transactions between _payer #25_ and _beneficiary #12_ and evaluating defined rules within the desired time window.
+      &lt;span class=&quot;k&quot;&gt;for&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Rule&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rule&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;:&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;rules&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+           &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;
+               &lt;span class=&quot;n&quot;&gt;event&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+               &lt;span class=&quot;n&quot;&gt;KeysExtractor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getGroupingKeyNames&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;event&lt;/span&gt;&lt;span class=& [...]
+               &lt;span class=&quot;n&quot;&gt;rule&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getRuleId&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()));&lt;/span&gt;
+      &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;...&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+&lt;p&gt;&lt;code&gt;KeysExtractor.getKey()&lt;/code&gt; uses reflection to extract the required values of &lt;code&gt;groupingKeyNames&lt;/code&gt; fields from events and combines them as a single concatenated String key, e.g &lt;code&gt;&quot;{payerId=25;beneficiaryId=12}&quot;&lt;/code&gt;. Flink will calculate the hash of this key and assign the processing of this particular combination to a specific server in the cluster. This will allow tracking all transactions between &lt;em&gt;p [...]
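As a rough sketch of the reflection-based extraction just described (not necessarily the exact implementation used in the demo project), `getKey()` could be written along these lines, producing keys of the form `"{payerId=25;beneficiaryId=12}"`:

```java
import java.lang.reflect.Field;
import java.util.List;

// Simplified, hypothetical sketch of the reflection-based key extraction
// described above. The real KeysExtractor in the demo project may handle
// getters, superclass fields, caching, and error reporting differently.
public class KeysExtractor {

  public static String getKey(List<String> keyNames, Object event)
      throws NoSuchFieldException, IllegalAccessException {
    StringBuilder sb = new StringBuilder("{");
    for (int i = 0; i < keyNames.size(); i++) {
      String fieldName = keyNames.get(i);
      // Look up the field by name on the event class and read its value.
      Field field = event.getClass().getDeclaredField(fieldName);
      field.setAccessible(true);
      sb.append(fieldName).append("=").append(field.get(event));
      if (i < keyNames.size() - 1) {
        sb.append(";");
      }
    }
    return sb.append("}").toString();
  }
}
```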
 
-Notice that a wrapper class `Keyed` with the following signature was introduced as the output type of `DynamicKeyFunction`:  
+&lt;p&gt;Notice that a wrapper class &lt;code&gt;Keyed&lt;/code&gt; with the following signature was introduced as the output type of &lt;code&gt;DynamicKeyFunction&lt;/code&gt;:&lt;/p&gt;
 
-```java   
-public class Keyed&lt;IN, KEY, ID&gt; {
-  private IN wrapped;
-  private KEY key;
-  private ID id;
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;IN&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KEY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;  [...]
+  &lt;span class=&quot;kd&quot;&gt;private&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;IN&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wrapped&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;private&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KEY&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;private&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ID&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
 
-  ...
-  public KEY getKey(){
-      return key;
-  }
-}
-```
+  &lt;span class=&quot;o&quot;&gt;...&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KEY&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;getKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(){&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;key&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-Fields of this POJO carry the following information: `wrapped` is the original transaction event, `key` is the result of using `KeysExtractor` and `id` is the ID of the Rule that caused the dispatch of the event (according to the rule-specific grouping logic).
+&lt;p&gt;Fields of this POJO carry the following information: &lt;code&gt;wrapped&lt;/code&gt; is the original transaction event, &lt;code&gt;key&lt;/code&gt; is the result of using &lt;code&gt;KeysExtractor&lt;/code&gt; and &lt;code&gt;id&lt;/code&gt; is the ID of the Rule that caused the dispatch of the event (according to the rule-specific grouping logic).&lt;/p&gt;
 
-Events of this type will be the input to the `keyBy()` function in the main processing pipeline and allow the use of a simple lambda-expression as a [`KeySelector`](https://ci.apache.org/projects/flink/flink-docs-stable/dev/api_concepts.html#define-keys-using-key-selector-functions) for the final step of implementing dynamic data shuffle.
+&lt;p&gt;Events of this type will be the input to the &lt;code&gt;keyBy()&lt;/code&gt; function in the main processing pipeline and allow the use of a simple lambda-expression as a &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/api_concepts.html#define-keys-using-key-selector-functions&quot;&gt;&lt;code&gt;KeySelector&lt;/code&gt;&lt;/a&gt; for the final step of implementing dynamic data shuffle.&lt;/p&gt;
 
-```java
-DataStream&lt;Alert&gt; alerts =
-    transactions
-        .process(new DynamicKeyFunction())
-        .keyBy((keyed) -&gt; keyed.getKey())
-        .process(new DynamicAlertFunction());
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Alert&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;alerts&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;transactions&lt;/span&gt;
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;DynamicKeyFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;keyBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;keyed&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getKey&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt; [...]
+        &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;DynamicAlertFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-By applying `DynamicKeyFunction` we are implicitly copying events for performing parallel per-rule evaluation within a Flink cluster. By doing so, we achieve an important property - horizontal scalability of rules&#39; processing. Our system will be capable of handling more rules by adding more servers to the cluster, i.e. increasing the parallelism. This property is achieved at the cost of data duplication, which might become an issue depending on the specific set of parameters, such as [...]
+&lt;p&gt;By applying &lt;code&gt;DynamicKeyFunction&lt;/code&gt; we are implicitly copying events for performing parallel per-rule evaluation within a Flink cluster. By doing so, we achieve an important property - horizontal scalability of rules’ processing. Our system will be capable of handling more rules by adding more servers to the cluster, i.e. increasing the parallelism. This property is achieved at the cost of data duplication, which might become an issue depending on the specifi [...]
 
-### Summary:
+&lt;h3 id=&quot;summary&quot;&gt;Summary:&lt;/h3&gt;
 
-In this blog post, we have discussed the motivation behind supporting dynamic, runtime changes to a Flink application by looking at a sample use case - a Fraud Detection engine. We have described the overall architecture and interactions between its components as well as provided references for building and running a demo Fraud Detection application in a dockerized setup. We then showed the details of implementing a  **dynamic data partitioning pattern** as the first underlying building  [...]
+&lt;p&gt;In this blog post, we have discussed the motivation behind supporting dynamic, runtime changes to a Flink application by looking at a sample use case - a Fraud Detection engine. We have described the overall architecture and interactions between its components as well as provided references for building and running a demo Fraud Detection application in a dockerized setup. We then showed the details of implementing a  &lt;strong&gt;dynamic data partitioning pattern&lt;/strong&gt; [...]
 
-To remain focused on describing the core mechanics of the pattern, we kept the complexity of the DSL and the underlying rules engine to a minimum. Going forward, it is easy to imagine adding extensions such as allowing more sophisticated rule definitions, including filtering of certain events, logical rules chaining, and other more advanced functionality.
+&lt;p&gt;To remain focused on describing the core mechanics of the pattern, we kept the complexity of the DSL and the underlying rules engine to a minimum. Going forward, it is easy to imagine adding extensions such as allowing more sophisticated rule definitions, including filtering of certain events, logical rules chaining, and other more advanced functionality.&lt;/p&gt;
 
-In the second part of this series, we will describe how the rules make their way into the running Fraud Detection engine. Additionally, we will go over the implementation details of the main processing function of the pipeline - _DynamicAlertFunction()_.
+&lt;p&gt;In the second part of this series, we will describe how the rules make their way into the running Fraud Detection engine. Additionally, we will go over the implementation details of the main processing function of the pipeline - &lt;em&gt;DynamicAlertFunction()&lt;/em&gt;.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-11-19-demo-fraud-detection/end-to-end.png&quot; width=&quot;800px&quot; alt=&quot;Figure 4: End-to-end pipeline&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-11-19-demo-fraud-detection/end-to-end.png&quot; width=&quot;800px&quot; alt=&quot;Figure 4: End-to-end pipeline&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Figure 4: End-to-end pipeline&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-In the next article, we will see how Flink&#39;s broadcast streams can be utilized to help steer the processing within the Fraud Detection engine at runtime (Dynamic Application Updates pattern).
+&lt;p&gt;In the next article, we will see how Flink’s broadcast streams can be utilized to help steer the processing within the Fraud Detection engine at runtime (Dynamic Application Updates pattern).&lt;/p&gt;
 </description>
-<pubDate>Wed, 15 Jan 2020 12:00:00 +0000</pubDate>
+<pubDate>Wed, 15 Jan 2020 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2020/01/15/demo-fraud-detection.html</link>
 <guid isPermaLink="true">/news/2020/01/15/demo-fraud-detection.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.8.3 Released</title>
-<description>The Apache Flink community released the third bugfix version of the Apache Flink 1.8 series.
+<description>&lt;p&gt;The Apache Flink community released the third bugfix version of the Apache Flink 1.8 series.&lt;/p&gt;
 
-This release includes 45 fixes and minor improvements for Flink 1.8.2. The list below gives a detailed overview of all fixes and improvements.
+&lt;p&gt;This release includes 45 fixes and minor improvements for Flink 1.8.2. The list below gives a detailed overview of all fixes and improvements.&lt;/p&gt;
 
-We highly recommend all users to upgrade to Flink 1.8.3.
+&lt;p&gt;We highly recommend all users to upgrade to Flink 1.8.3.&lt;/p&gt;
 
-Updated Maven dependencies:
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
 
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.8.3&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.8.3&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.8.3&lt;/version&gt;
-&lt;/dependency&gt;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.3&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.3&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.3&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
-List of resolved issues:
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
 
 &lt;h2&gt;        Sub-task
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13723&#39;&gt;FLINK-13723&lt;/a&gt;] -         Use liquid-c for faster doc generation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13723&quot;&gt;FLINK-13723&lt;/a&gt;] -         Use liquid-c for faster doc generation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13724&#39;&gt;FLINK-13724&lt;/a&gt;] -         Remove unnecessary whitespace from the docs&amp;#39; sidenav
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13724&quot;&gt;FLINK-13724&lt;/a&gt;] -         Remove unnecessary whitespace from the docs&amp;#39; sidenav
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13725&#39;&gt;FLINK-13725&lt;/a&gt;] -         Use sassc for faster doc generation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13725&quot;&gt;FLINK-13725&lt;/a&gt;] -         Use sassc for faster doc generation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13726&#39;&gt;FLINK-13726&lt;/a&gt;] -         Build docs with jekyll 4.0.0.pre.beta1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13726&quot;&gt;FLINK-13726&lt;/a&gt;] -         Build docs with jekyll 4.0.0.pre.beta1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13791&#39;&gt;FLINK-13791&lt;/a&gt;] -         Speed up sidenav by using group_by
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13791&quot;&gt;FLINK-13791&lt;/a&gt;] -         Speed up sidenav by using group_by
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12342&#39;&gt;FLINK-12342&lt;/a&gt;] -         Yarn Resource Manager Acquires Too Many Containers
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12342&quot;&gt;FLINK-12342&lt;/a&gt;] -         Yarn Resource Manager Acquires Too Many Containers
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13184&#39;&gt;FLINK-13184&lt;/a&gt;] -         Starting a TaskExecutor blocks the YarnResourceManager&amp;#39;s main thread
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13184&quot;&gt;FLINK-13184&lt;/a&gt;] -         Starting a TaskExecutor blocks the YarnResourceManager&amp;#39;s main thread
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13728&#39;&gt;FLINK-13728&lt;/a&gt;] -         Fix wrong closing tag order in sidenav
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13728&quot;&gt;FLINK-13728&lt;/a&gt;] -         Fix wrong closing tag order in sidenav
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13746&#39;&gt;FLINK-13746&lt;/a&gt;] -         Elasticsearch (v2.3.5) sink end-to-end test fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13746&quot;&gt;FLINK-13746&lt;/a&gt;] -         Elasticsearch (v2.3.5) sink end-to-end test fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13749&#39;&gt;FLINK-13749&lt;/a&gt;] -         Make Flink client respect classloading policy
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13749&quot;&gt;FLINK-13749&lt;/a&gt;] -         Make Flink client respect classloading policy
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13892&#39;&gt;FLINK-13892&lt;/a&gt;] -         HistoryServerTest failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13892&quot;&gt;FLINK-13892&lt;/a&gt;] -         HistoryServerTest failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13936&#39;&gt;FLINK-13936&lt;/a&gt;] -         NOTICE-binary is outdated
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13936&quot;&gt;FLINK-13936&lt;/a&gt;] -         NOTICE-binary is outdated
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13966&#39;&gt;FLINK-13966&lt;/a&gt;] -         Jar sorting in collect_license_files.sh is locale dependent
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13966&quot;&gt;FLINK-13966&lt;/a&gt;] -         Jar sorting in collect_license_files.sh is locale dependent
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13995&#39;&gt;FLINK-13995&lt;/a&gt;] -         Fix shading of the licence information of netty
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13995&quot;&gt;FLINK-13995&lt;/a&gt;] -         Fix shading of the licence information of netty
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13999&#39;&gt;FLINK-13999&lt;/a&gt;] -         Correct the documentation of MATCH_RECOGNIZE
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13999&quot;&gt;FLINK-13999&lt;/a&gt;] -         Correct the documentation of MATCH_RECOGNIZE
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14009&#39;&gt;FLINK-14009&lt;/a&gt;] -         Cron jobs broken due to verifying incorrect NOTICE-binary file
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14009&quot;&gt;FLINK-14009&lt;/a&gt;] -         Cron jobs broken due to verifying incorrect NOTICE-binary file
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14010&#39;&gt;FLINK-14010&lt;/a&gt;] -         Dispatcher &amp;amp; JobManagers don&amp;#39;t give up leadership when AM is shut down
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14010&quot;&gt;FLINK-14010&lt;/a&gt;] -         Dispatcher &amp;amp; JobManagers don&amp;#39;t give up leadership when AM is shut down
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14043&#39;&gt;FLINK-14043&lt;/a&gt;] -         SavepointMigrationTestBase is super slow
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14043&quot;&gt;FLINK-14043&lt;/a&gt;] -         SavepointMigrationTestBase is super slow
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14107&#39;&gt;FLINK-14107&lt;/a&gt;] -         Kinesis consumer record emitter deadlock under event time alignment
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14107&quot;&gt;FLINK-14107&lt;/a&gt;] -         Kinesis consumer record emitter deadlock under event time alignment
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14175&#39;&gt;FLINK-14175&lt;/a&gt;] -         Upgrade KPL version in flink-connector-kinesis to fix application OOM
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14175&quot;&gt;FLINK-14175&lt;/a&gt;] -         Upgrade KPL version in flink-connector-kinesis to fix application OOM
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14235&#39;&gt;FLINK-14235&lt;/a&gt;] -         Kafka010ProducerITCase&amp;gt;KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14235&quot;&gt;FLINK-14235&lt;/a&gt;] -         Kafka010ProducerITCase&amp;gt;KafkaProducerTestBase.testOneToOneAtLeastOnceCustomOperator fails on travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14315&#39;&gt;FLINK-14315&lt;/a&gt;] -         NPE with JobMaster.disconnectTaskManager
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14315&quot;&gt;FLINK-14315&lt;/a&gt;] -         NPE with JobMaster.disconnectTaskManager
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14337&#39;&gt;FLINK-14337&lt;/a&gt;] -         HistoryServerTest.testHistoryServerIntegration failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14337&quot;&gt;FLINK-14337&lt;/a&gt;] -         HistoryServerTest.testHistoryServerIntegration failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14347&#39;&gt;FLINK-14347&lt;/a&gt;] -         YARNSessionFIFOITCase.checkForProhibitedLogContents found a log with prohibited string
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14347&quot;&gt;FLINK-14347&lt;/a&gt;] -         YARNSessionFIFOITCase.checkForProhibitedLogContents found a log with prohibited string
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14370&#39;&gt;FLINK-14370&lt;/a&gt;] -         KafkaProducerAtLeastOnceITCase&amp;gt;KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14370&quot;&gt;FLINK-14370&lt;/a&gt;] -         KafkaProducerAtLeastOnceITCase&amp;gt;KafkaProducerTestBase.testOneToOneAtLeastOnceRegularSink fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14398&#39;&gt;FLINK-14398&lt;/a&gt;] -         Further split input unboxing code into separate methods
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14398&quot;&gt;FLINK-14398&lt;/a&gt;] -         Further split input unboxing code into separate methods
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14413&#39;&gt;FLINK-14413&lt;/a&gt;] -         shade-plugin ApacheNoticeResourceTransformer uses platform-dependent encoding
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14413&quot;&gt;FLINK-14413&lt;/a&gt;] -         shade-plugin ApacheNoticeResourceTransformer uses platform-dependent encoding
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14434&#39;&gt;FLINK-14434&lt;/a&gt;] -         Dispatcher#createJobManagerRunner should not start JobManagerRunner
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14434&quot;&gt;FLINK-14434&lt;/a&gt;] -         Dispatcher#createJobManagerRunner should not start JobManagerRunner
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14562&#39;&gt;FLINK-14562&lt;/a&gt;] -         RMQSource leaves idle consumer after closing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14562&quot;&gt;FLINK-14562&lt;/a&gt;] -         RMQSource leaves idle consumer after closing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14589&#39;&gt;FLINK-14589&lt;/a&gt;] -         Redundant slot requests with the same AllocationID leads to inconsistent slot table
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14589&quot;&gt;FLINK-14589&lt;/a&gt;] -         Redundant slot requests with the same AllocationID leads to inconsistent slot table
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-15036&#39;&gt;FLINK-15036&lt;/a&gt;] -         Container startup error will be handled out side of the YarnResourceManager&amp;#39;s main thread
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-15036&quot;&gt;FLINK-15036&lt;/a&gt;] -         Container startup error will be handled out side of the YarnResourceManager&amp;#39;s main thread
 &lt;/li&gt;
 &lt;/ul&gt;
-                
+
 &lt;h2&gt;        Improvement
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12848&#39;&gt;FLINK-12848&lt;/a&gt;] -         Method equals() in RowTypeInfo should consider fieldsNames
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12848&quot;&gt;FLINK-12848&lt;/a&gt;] -         Method equals() in RowTypeInfo should consider fieldsNames
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13729&#39;&gt;FLINK-13729&lt;/a&gt;] -         Update website generation dependencies
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13729&quot;&gt;FLINK-13729&lt;/a&gt;] -         Update website generation dependencies
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13965&#39;&gt;FLINK-13965&lt;/a&gt;] -         Keep hasDeprecatedKeys and deprecatedKeys methods in ConfigOption and mark it with @Deprecated annotation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13965&quot;&gt;FLINK-13965&lt;/a&gt;] -         Keep hasDeprecatedKeys and deprecatedKeys methods in ConfigOption and mark it with @Deprecated annotation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13967&#39;&gt;FLINK-13967&lt;/a&gt;] -         Generate full binary licensing via collect_license_files.sh
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13967&quot;&gt;FLINK-13967&lt;/a&gt;] -         Generate full binary licensing via collect_license_files.sh
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13968&#39;&gt;FLINK-13968&lt;/a&gt;] -         Add travis check for the correctness of the binary licensing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13968&quot;&gt;FLINK-13968&lt;/a&gt;] -         Add travis check for the correctness of the binary licensing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13991&#39;&gt;FLINK-13991&lt;/a&gt;] -         Add git exclusion for 1.9+ features to 1.8
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13991&quot;&gt;FLINK-13991&lt;/a&gt;] -         Add git exclusion for 1.9+ features to 1.8
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14008&#39;&gt;FLINK-14008&lt;/a&gt;] -         Auto-generate binary licensing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14008&quot;&gt;FLINK-14008&lt;/a&gt;] -         Auto-generate binary licensing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14104&#39;&gt;FLINK-14104&lt;/a&gt;] -         Bump Jackson to 2.10.1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14104&quot;&gt;FLINK-14104&lt;/a&gt;] -         Bump Jackson to 2.10.1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14123&#39;&gt;FLINK-14123&lt;/a&gt;] -         Lower the default value of taskmanager.memory.fraction
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14123&quot;&gt;FLINK-14123&lt;/a&gt;] -         Lower the default value of taskmanager.memory.fraction
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14215&#39;&gt;FLINK-14215&lt;/a&gt;] -         Add Docs for TM and JM Environment Variable Setting
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14215&quot;&gt;FLINK-14215&lt;/a&gt;] -         Add Docs for TM and JM Environment Variable Setting
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14334&#39;&gt;FLINK-14334&lt;/a&gt;] -         ElasticSearch docs refer to non-existent ExceptionUtils.containsThrowable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14334&quot;&gt;FLINK-14334&lt;/a&gt;] -         ElasticSearch docs refer to non-existent ExceptionUtils.containsThrowable
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14639&#39;&gt;FLINK-14639&lt;/a&gt;] -         Fix the document of Metrics  that has an error for `User Scope` 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14639&quot;&gt;FLINK-14639&lt;/a&gt;] -         Fix the document of Metrics  that has an error for `User Scope` 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14646&#39;&gt;FLINK-14646&lt;/a&gt;] -         Check non-null for key in KeyGroupStreamPartitioner
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14646&quot;&gt;FLINK-14646&lt;/a&gt;] -         Check non-null for key in KeyGroupStreamPartitioner
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14995&#39;&gt;FLINK-14995&lt;/a&gt;] -         Kinesis NOTICE is incorrect
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14995&quot;&gt;FLINK-14995&lt;/a&gt;] -         Kinesis NOTICE is incorrect
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Wed, 11 Dec 2019 12:00:00 +0000</pubDate>
+<pubDate>Wed, 11 Dec 2019 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2019/12/11/release-1.8.3.html</link>
 <guid isPermaLink="true">/news/2019/12/11/release-1.8.3.html</guid>
 </item>
 
 <item>
 <title>Running Apache Flink on Kubernetes with KUDO</title>
-<description>A common use case for Apache Flink is streaming data analytics together with Apache Kafka, which provides a pub/sub model and durability for data streams. To achieve elastic scalability, both are typically deployed in clustered environments, and increasingly on top of container orchestration platforms like Kubernetes. The [Operator pattern](https://kubernetes.io/docs/concepts/extend-kubernetes/operator/) provides an extension mechanism to Kubernetes that captures human opera [...]
+<description>&lt;p&gt;A common use case for Apache Flink is streaming data analytics together with Apache Kafka, which provides a pub/sub model and durability for data streams. To achieve elastic scalability, both are typically deployed in clustered environments, and increasingly on top of container orchestration platforms like Kubernetes. The &lt;a href=&quot;https://kubernetes.io/docs/concepts/extend-kubernetes/operator/&quot;&gt;Operator pattern&lt;/a&gt; provides an extension mechani [...]
 
-In this blog post we demonstrate how to orchestrate a streaming data analytics application based on Flink and Kafka with KUDO. It consists of a Flink job that checks financial transactions for fraud, and two microservices that generate and display the transactions. You can find more details about this demo in the [KUDO Operators repository](https://github.com/kudobuilder/operators/tree/master/repository/flink/docs/demo/financial-fraud), including instructions for installing the dependencies.
+&lt;p&gt;In this blog post we demonstrate how to orchestrate a streaming data analytics application based on Flink and Kafka with KUDO. It consists of a Flink job that checks financial transactions for fraud, and two microservices that generate and display the transactions. You can find more details about this demo in the &lt;a href=&quot;https://github.com/kudobuilder/operators/tree/master/repository/flink/docs/demo/financial-fraud&quot;&gt;KUDO Operators repository&lt;/a&gt;, including [...]
 
 &lt;p style=&quot;display: block; text-align: center; margin-top: 20px; margin-bottom: 20px&quot;&gt;
-	&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-11-06-flink-kubernetes-kudo/flink-kudo-architecture.png&quot; width=&quot;600px&quot; alt=&quot;Application: My App&quot;/&gt;
+	&lt;img src=&quot;/img/blog/2019-11-06-flink-kubernetes-kudo/flink-kudo-architecture.png&quot; width=&quot;600px&quot; alt=&quot;Application: My App&quot; /&gt;
 &lt;/p&gt;
 
-## Prerequisites
+&lt;h2 id=&quot;prerequisites&quot;&gt;Prerequisites&lt;/h2&gt;
 
-You can run this demo on your local machine using [minikube](https://github.com/kubernetes/minikube). The instructions below were tested with minikube v1.5.1 and Kubernetes v1.16.2 but should work on any Kubernetes version above v1.15.0. First, start a minikube cluster with enough capacity:
+&lt;p&gt;You can run this demo on your local machine using &lt;a href=&quot;https://github.com/kubernetes/minikube&quot;&gt;minikube&lt;/a&gt;. The instructions below were tested with minikube v1.5.1 and Kubernetes v1.16.2 but should work on any Kubernetes version above v1.15.0. First, start a minikube cluster with enough capacity:&lt;/p&gt;
 
-`minikube start --cpus=6 --memory=9216 --disk-size=10g`
+&lt;p&gt;&lt;code&gt;minikube start --cpus=6 --memory=9216 --disk-size=10g&lt;/code&gt;&lt;/p&gt;
 
-If you’re using a different way to provision Kubernetes, make sure you have at least 6 CPU Cores, 9 GB of RAM and 10 GB of disk space available.
+&lt;p&gt;If you’re using a different way to provision Kubernetes, make sure you have at least 6 CPU Cores, 9 GB of RAM and 10 GB of disk space available.&lt;/p&gt;
 
-Install the `kubectl` CLI tool. The KUDO CLI is a plugin for the Kubernetes CLI. The official instructions for installing and setting up kubectl are [here](https://kubernetes.io/docs/tasks/tools/install-kubectl/).
+&lt;p&gt;Install the &lt;code&gt;kubectl&lt;/code&gt; CLI tool. The KUDO CLI is a plugin for the Kubernetes CLI. The official instructions for installing and setting up kubectl are &lt;a href=&quot;https://kubernetes.io/docs/tasks/tools/install-kubectl/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 
-Next, let’s install the KUDO CLI. At the time of this writing, the latest KUDO version is v0.10.0. You can find the CLI binaries for download [here](https://github.com/kudobuilder/kudo/releases). Download the `kubectl-kudo` binary for your OS and architecture.
+&lt;p&gt;Next, let’s install the KUDO CLI. At the time of this writing, the latest KUDO version is v0.10.0. You can find the CLI binaries for download &lt;a href=&quot;https://github.com/kudobuilder/kudo/releases&quot;&gt;here&lt;/a&gt;. Download the &lt;code&gt;kubectl-kudo&lt;/code&gt; binary for your OS and architecture.&lt;/p&gt;
 
-If you’re using Homebrew on MacOS, you can install the CLI via:
+&lt;p&gt;If you’re using Homebrew on MacOS, you can install the CLI via:&lt;/p&gt;
 
-```
-$ brew tap kudobuilder/tap
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ brew tap kudobuilder/tap
 $ brew install kudo-cli
-```
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-Now, let’s initialize KUDO on our Kubernetes cluster:
+&lt;p&gt;Now, let’s initialize KUDO on our Kubernetes cluster:&lt;/p&gt;
 
-```
-$ kubectl kudo init
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ kubectl kudo init
 $KUDO_HOME has been configured at /Users/gerred/.kudo
-```
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-This will create several resources. First, it will create the [Custom Resource Definitions](https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/), [service account](https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/), and [role bindings](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) necessary for KUDO to operate. It will also create an instance of the [KUDO controller](https://kudo.dev/docs/architecture.ht [...]
+&lt;p&gt;This will create several resources. First, it will create the &lt;a href=&quot;https://kubernetes.io/docs/concepts/extend-kubernetes/api-extension/custom-resources/&quot;&gt;Custom Resource Definitions&lt;/a&gt;, &lt;a href=&quot;https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/&quot;&gt;service account&lt;/a&gt;, and &lt;a href=&quot;https://kubernetes.io/docs/reference/access-authn-authz/rbac/&quot;&gt;role bindings&lt;/a&gt; necessary for KUD [...]
 
-The KUDO CLI leverages the kubectl plugin system, which gives you all its functionality under `kubectl kudo`. This is a convenient way to install and deal with your KUDO Operators. For our demo, we use Kafka and Flink which depend on ZooKeeper. To make the ZooKeeper Operator available on the cluster, run:
+&lt;p&gt;The KUDO CLI leverages the kubectl plugin system, which gives you all its functionality under &lt;code&gt;kubectl kudo&lt;/code&gt;. This is a convenient way to install and deal with your KUDO Operators. For our demo, we use Kafka and Flink which depend on ZooKeeper. To make the ZooKeeper Operator available on the cluster, run:&lt;/p&gt;
 
-```
-$ kubectl kudo install zookeeper --version=0.3.0 --skip-instance
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ kubectl kudo install zookeeper --version=0.3.0 --skip-instance
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-The --skip-instance flag skips the creation of a ZooKeeper instance. The flink-demo Operator that we’re going to install below will create it as a dependency instead. Now let’s make the Kafka and Flink Operators available the same way:
+&lt;p&gt;The --skip-instance flag skips the creation of a ZooKeeper instance. The flink-demo Operator that we’re going to install below will create it as a dependency instead. Now let’s make the Kafka and Flink Operators available the same way:&lt;/p&gt;
 
-```
-$ kubectl kudo install kafka --version=1.2.0 --skip-instance
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ kubectl kudo install kafka --version=1.2.0 --skip-instance
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-```
-$ kubectl kudo install flink --version=0.2.1 --skip-instance
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ kubectl kudo install flink --version=0.2.1 --skip-instance
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-This installs all the Operator versions needed for our demo.
+&lt;p&gt;This installs all the Operator versions needed for our demo.&lt;/p&gt;
 
-## Financial Fraud Demo
+&lt;h2 id=&quot;financial-fraud-demo&quot;&gt;Financial Fraud Demo&lt;/h2&gt;
 
-In our financial fraud demo we have two microservices, called “generator” and “actor”. The generator produces transactions with random amounts and writes them into a Kafka topic. Occasionally, the value will be over 10,000 which is considered fraud for the purpose of this demo. The Flink job subscribes to the Kafka topic and detects fraudulent transactions. When it does, it submits them to another Kafka topic which the actor consumes. The actor simply displays each fraudulent transaction.
+&lt;p&gt;In our financial fraud demo we have two microservices, called “generator” and “actor”. The generator produces transactions with random amounts and writes them into a Kafka topic. Occasionally, the value will be over 10,000 which is considered fraud for the purpose of this demo. The Flink job subscribes to the Kafka topic and detects fraudulent transactions. When it does, it submits them to another Kafka topic which the actor consumes. The actor simply displays each fraudulent tr [...]
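The actual job code ships with the demo in the KUDO Operators repository linked above; purely as a hypothetical illustration of the rule just described (flagging transactions whose amount exceeds 10,000), the core of such a Flink job could look like the following sketch, where the `Transaction` class, the example amounts, and the missing Kafka wiring are placeholders rather than the demo's real identifiers:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

// Rough illustration of the fraud rule described above; the demo's actual
// job, Transaction type, and Kafka source/sink wiring live in the
// kudobuilder/operators repository and differ in detail.
public class FraudFilterSketch {

  /** Placeholder event type; the demo defines its own Transaction class. */
  public static class Transaction {
    public long timestamp;
    public int origin;
    public String target;
    public long amount;
  }

  private static Transaction tx(long amount) {
    Transaction t = new Transaction();
    t.amount = amount;
    return t;
  }

  public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // In the demo this stream is consumed from a Kafka topic; here we use
    // two in-memory elements (one amount taken from the log output below,
    // one invented) to keep the sketch self-contained.
    DataStream<Transaction> transactions = env.fromElements(tx(8341), tx(12500));

    // Flag every transaction whose amount exceeds the demo's 10,000 threshold;
    // the demo then publishes the flagged records to a second Kafka topic.
    DataStream<Transaction> fraud = transactions.filter(t -> t.amount > 10_000);

    fraud.print();
    env.execute("financial-fraud-sketch");
  }
}
```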
 
-The KUDO CLI by default installs Operators from the [official repository](https://github.com/kudobuilder/operators/), but it also supports installation from your local filesystem. This is useful if you want to develop your own Operator, or modify this demo for your own purposes.
+&lt;p&gt;The KUDO CLI by default installs Operators from the &lt;a href=&quot;https://github.com/kudobuilder/operators/&quot;&gt;official repository&lt;/a&gt;, but it also supports installation from your local filesystem. This is useful if you want to develop your own Operator, or modify this demo for your own purposes.&lt;/p&gt;
 
-First, clone the “kudobuilder/operators” repository via: 
+&lt;p&gt;First, clone the “kudobuilder/operators” repository via:&lt;/p&gt;
 
-```
-$ git clone https://github.com/kudobuilder/operators.git
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ git clone https://github.com/kudobuilder/operators.git
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-Next, change into the “operators” directory and install the demo-operator from your local filesystem:
+&lt;p&gt;Next, change into the “operators” directory and install the demo-operator from your local filesystem:&lt;/p&gt;
 
-```
-$ cd operators
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ cd operators
 $ kubectl kudo install repository/flink/docs/demo/financial-fraud/demo-operator --instance flink-demo
 instance.kudo.dev/v1beta1/flink-demo created
-```
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-This time we didn’t include the --skip-instance flag, so KUDO will actually deploy all the components, including Flink, Kafka, and ZooKeeper. KUDO orchestrates deployments and other lifecycle operations using [plans](https://kudo.dev/docs/concepts.html#plan) that were defined by the Operator developer. Plans are similar to [runbooks](https://en.wikipedia.org/wiki/Runbook) and encapsulate all the procedures required to operate the software. We can track the status of the deployment using  [...]
+&lt;p&gt;This time we didn’t include the –skip-instance flag, so KUDO will actually deploy all the components, including Flink, Kafka, and ZooKeeper. KUDO orchestrates deployments and other lifecycle operations using &lt;a href=&quot;https://kudo.dev/docs/concepts.html#plan&quot;&gt;plans&lt;/a&gt; that were defined by the Operator developer. Plans are similar to &lt;a href=&quot;https://en.wikipedia.org/wiki/Runbook&quot;&gt;runbooks&lt;/a&gt; and encapsulate all the procedures required [...]
 
-```
-$ kubectl kudo plan status --instance flink-demo
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ kubectl kudo plan status --instance flink-demo
 Plan(s) for &quot;flink-demo&quot; in namespace &quot;default&quot;:
 .
 └── flink-demo (Operator-Version: &quot;flink-demo-0.1.4&quot; Active-Plan: &quot;deploy&quot;)
@@ -2024,12 +2023,11 @@ Plan(s) for &quot;flink-demo&quot; in namespace &quot;default&quot;:
     	│   └── Step act (PENDING)
     	└── Phase flink-job [PENDING]
         	└── Step submit (PENDING)
-```
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-The output shows that the “deploy” plan is in progress and that it consists of 4 phases: “dependencies”, “flink-cluster”, “demo” and “flink-job”. The “dependencies” phase includes steps for “zookeeper” and “kafka”. This is where both dependencies get installed, before KUDO continues to install the Flink cluster and the demo itself. We also see that ZooKeeper installation completed, and that Kafka installation is currently in progress. We can view details about Kafka’s deployment plan via:
+&lt;p&gt;The output shows that the “deploy” plan is in progress and that it consists of 4 phases: “dependencies”, “flink-cluster”, “demo” and “flink-job”. The “dependencies” phase includes steps for “zookeeper” and “kafka”. This is where both dependencies get installed, before KUDO continues to install the Flink cluster and the demo itself. We also see that ZooKeeper installation completed, and that Kafka installation is currently in progress. We can view details about Kafka’s deployment [...]
 
-```
-$ kubectl kudo plan status --instance flink-demo-kafka
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ kubectl kudo plan status --instance flink-demo-kafka
 Plan(s) for &quot;flink-demo-kafka&quot; in namespace &quot;default&quot;:
 .
 └── flink-demo-kafka (Operator-Version: &quot;kafka-1.2.0&quot; Active-Plan: &quot;deploy&quot;)
@@ -2040,28 +2038,26 @@ Plan(s) for &quot;flink-demo-kafka&quot; in namespace &quot;default&quot;:
     	└── Phase not-allowed (serial strategy) [NOT ACTIVE]
         	└── Step not-allowed (serial strategy) [NOT ACTIVE]
             	└── not-allowed [NOT ACTIVE]
-```
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-After Kafka was successfully installed the next phase “flink-cluster” will start and bring up, you guessed it, your flink-cluster. After this is done, the demo phase creates the generator and actor pods that generate and display transactions for this demo. Lastly, we have the flink-job phase in which we submit the actual FinancialFraudJob to the Flink cluster. Once the flink job is submitted, we will be able to see fraud logs in our actor pod shortly after.
+&lt;p&gt;After Kafka was successfully installed the next phase “flink-cluster” will start and bring up, you guessed it, your flink-cluster. After this is done, the demo phase creates the generator and actor pods that generate and display transactions for this demo. Lastly, we have the flink-job phase in which we submit the actual FinancialFraudJob to the Flink cluster. Once the flink job is submitted, we will be able to see fraud logs in our actor pod shortly after.&lt;/p&gt;
 
-After a while, the state of all plans, phases and steps will change to “COMPLETE”. Now we can view the Flink dashboard to verify that our job is running. To access it from outside the Kubernetes cluster, first start the client proxy, then open the URL below in your browser:
+&lt;p&gt;After a while, the state of all plans, phases and steps will change to “COMPLETE”. Now we can view the Flink dashboard to verify that our job is running. To access it from outside the Kubernetes cluster, first start the client proxy, then open the URL below in your browser:&lt;/p&gt;
 
-```
-$ kubectl proxy
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ kubectl proxy
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-[http://127.0.0.1:8001/api/v1/namespaces/default/services/flink-demo-flink-jobmanager:ui/proxy/#/overview](http://127.0.0.1:8001/api/v1/namespaces/default/services/flink-demo-flink-jobmanager:ui/proxy/#/overview)
+&lt;p&gt;&lt;a href=&quot;http://127.0.0.1:8001/api/v1/namespaces/default/services/flink-demo-flink-jobmanager:ui/proxy/#/overview&quot;&gt;http://127.0.0.1:8001/api/v1/namespaces/default/services/flink-demo-flink-jobmanager:ui/proxy/#/overview&lt;/a&gt;&lt;/p&gt;
 
-It should look similar to this, depending on your local machine and how many cores you have available:
+&lt;p&gt;It should look similar to this, depending on your local machine and how many cores you have available:&lt;/p&gt;
 
 &lt;p style=&quot;display: block; text-align: center; margin-top: 20px; margin-bottom: 20px&quot;&gt;
-	&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-11-06-flink-kubernetes-kudo/flink-dashboard-ui.png&quot; width=&quot;600px&quot; alt=&quot;Application: My App&quot;/&gt;
+	&lt;img src=&quot;/img/blog/2019-11-06-flink-kubernetes-kudo/flink-dashboard-ui.png&quot; width=&quot;600px&quot; alt=&quot;Application: My App&quot; /&gt;
 &lt;/p&gt;
 
-The job is up and running and we should now be able to see fraudulent transaction in the logs of the actor pod:
+&lt;p&gt;The job is up and running and we should now be able to see fraudulent transactions in the logs of the actor pod:&lt;/p&gt;
 
-```
-$ kubectl logs $(kubectl get pod -l actor=flink-demo -o jsonpath=&quot;{.items[0].metadata.name}&quot;)
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;$ kubectl logs $(kubectl get pod -l actor=flink-demo -o jsonpath=&quot;{.items[0].metadata.name}&quot;)
 Broker:   flink-demo-kafka-kafka-0.flink-demo-kafka-svc:9093
 Topic:   fraud
 
@@ -2070,901 +2066,960 @@ Transaction{timestamp=1563395778000, origin=1, target=&#39;3&#39;, amount=8341}
 Transaction{timestamp=1563395813000, origin=1, target=&#39;3&#39;, amount=8592}
 Transaction{timestamp=1563395817000, origin=1, target=&#39;3&#39;, amount=2802}
 Transaction{timestamp=1563395831000, origin=1, target=&#39;3&#39;, amount=160}}
-```
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-If you add the “-f” flag to the previous command, you can follow along as more transactions are streaming in and are evaluated by our Flink job.
+&lt;p&gt;If you add the “-f” flag to the previous command, you can follow along as more transactions are streaming in and are evaluated by our Flink job.&lt;/p&gt;
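+
+&lt;p&gt;Concretely, that is the same log command as above with the flag added:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;# follow the actor logs as new transactions are evaluated by the Flink job
+$ kubectl logs -f $(kubectl get pod -l actor=flink-demo -o jsonpath=&quot;{.items[0].metadata.name}&quot;)
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;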
 
-## Conclusion
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
 
-In this blog post we demonstrated how to easily deploy an end-to-end streaming data application on Kubernetes using KUDO. We deployed a Flink job and two microservices, as well as all the required infrastructure - Flink, Kafka, and ZooKeeper using just a few kubectl commands. To find out more about KUDO, visit the [project website](https://kudo.dev) or join the community on [Slack](https://kubernetes.slack.com/messages/kudo/).
+&lt;p&gt;In this blog post we demonstrated how to easily deploy an end-to-end streaming data application on Kubernetes using KUDO. We deployed a Flink job and two microservices, as well as all the required infrastructure - Flink, Kafka, and ZooKeeper using just a few kubectl commands. To find out more about KUDO, visit the &lt;a href=&quot;https://kudo.dev&quot;&gt;project website&lt;/a&gt; or join the community on &lt;a href=&quot;https://kubernetes.slack.com/messages/kudo/&quot;&gt;Sla [...]
 </description>
-<pubDate>Mon, 09 Dec 2019 12:00:00 +0000</pubDate>
+<pubDate>Mon, 09 Dec 2019 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2019/12/09/flink-kubernetes-kudo.html</link>
 <guid isPermaLink="true">/news/2019/12/09/flink-kubernetes-kudo.html</guid>
 </item>
 
 <item>
 <title>How to query Pulsar Streams using Apache Flink</title>
-<description>In a previous [story](https://flink.apache.org/2019/05/03/pulsar-flink.html) on the  Flink blog, we explained the different ways that [Apache Flink](https://flink.apache.org/) and [Apache Pulsar](https://pulsar.apache.org/) can integrate to provide elastic data processing at large scale. This blog post discusses the new developments and integrations between the two frameworks and showcases how you can leverage Pulsar’s built-in schema to query Pulsar streams in real time usi [...]
+<description>&lt;p&gt;In a previous &lt;a href=&quot;https://flink.apache.org/2019/05/03/pulsar-flink.html&quot;&gt;story&lt;/a&gt; on the  Flink blog, we explained the different ways that &lt;a href=&quot;https://flink.apache.org/&quot;&gt;Apache Flink&lt;/a&gt; and &lt;a href=&quot;https://pulsar.apache.org/&quot;&gt;Apache Pulsar&lt;/a&gt; can integrate to provide elastic data processing at large scale. This blog post discusses the new developments and integrations between the two fra [...]
 
+&lt;h1 id=&quot;a-short-intro-to-apache-pulsar&quot;&gt;A short intro to Apache Pulsar&lt;/h1&gt;
 
-# A short intro to Apache Pulsar
+&lt;p&gt;Apache Pulsar is a flexible pub/sub messaging system, backed by durable log storage. Some of the framework’s highlights include multi-tenancy, a unified message model, structured event streams and a cloud-native architecture that make it a perfect fit for a wide set of use cases, ranging from billing, payments and trading services all the way to the unification of the different messaging architectures in an organization. If you are interested in finding out more about Pulsar, yo [...]
 
-Apache Pulsar is a flexible pub/sub messaging system, backed by durable log storage. Some of the framework’s highlights include multi-tenancy, a unified message model, structured event streams and a cloud-native architecture that make it a perfect fit for a wide set of use cases, ranging from billing, payments and trading services all the way to the unification of the different messaging architectures in an organization. If you are interested in finding out more about Pulsar, you can vis [...]
+&lt;h1 id=&quot;existing-pulsar--flink-integration-apache-flink-16&quot;&gt;Existing Pulsar &amp;amp; Flink integration (Apache Flink 1.6+)&lt;/h1&gt;
 
+&lt;p&gt;The existing integration between Pulsar and Flink exploits Pulsar as a message queue in a Flink application. Flink developers can utilize Pulsar as a streaming source and streaming sink for their Flink applications by selecting a specific Pulsar source and connecting to their desired Pulsar cluster and topic:&lt;/p&gt;
 
-# Existing Pulsar &amp; Flink integration (Apache Flink 1.6+)
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// create and configure Pulsar consumer&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;PulsarSourceBuilder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;builder&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;PulsarSourceBuilder&lt;/span&gt;  
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;builder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;SimpleStringSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt; 
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;topic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;inputTopic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;subscriptionName&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;subscription&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;SourceFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;src&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;builder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&g [...]
+&lt;span class=&quot;c1&quot;&gt;// ingest DataStream with Pulsar consumer&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;words&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;addSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt; [...]
 
-The existing integration between Pulsar and Flink exploits Pulsar as a message queue in a Flink application. Flink developers can utilize Pulsar as a streaming source and streaming sink for their Flink applications by selecting a specific Pulsar source and connecting to their desired Pulsar cluster and topic:
+&lt;p&gt;Pulsar streams can then get connected to the Flink processing logic…&lt;/p&gt;
 
-```java
-// create and configure Pulsar consumer
-PulsarSourceBuilder&lt;String&gt;builder = PulsarSourceBuilder  
-  .builder(new SimpleStringSchema()) 
-  .serviceUrl(serviceUrl)
-  .topic(inputTopic)
-  .subsciptionName(subscription);
-SourceFunction&lt;String&gt; src = builder.build();
-// ingest DataStream with Pulsar consumer
-DataStream&lt;String&gt; words = env.addSource(src);
-```
-
-Pulsar streams can then get connected to the Flink processing logic…
-
-```java
-// perform computation on DataStream (here a simple WordCount)
-DataStream&lt;WordWithCount&gt; wc = words
-  .flatmap((FlatMapFunction&lt;String, WordWithCount&gt;) (word, collector) -&gt; {
-    collector.collect(new WordWithCount(word, 1));
-  })
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// perform computation on DataStream (here a simple WordCount)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wc&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;words&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;flatmap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;FlatMapFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;)&lt;/span&gt; &lt;span class=&quot [...]
+    &lt;span class=&quot;n&quot;&gt;collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;word&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&lt;/ [...]
+  &lt;span class=&quot;o&quot;&gt;})&lt;/span&gt;
  
-  .returns(WordWithCount.class)
-  .keyBy(&quot;word&quot;)
-  .timeWindow(Time.seconds(5))
-  .reduce((ReduceFunction&lt;WordWithCount&gt;) (c1, c2) -&gt;
-    new WordWithCount(c1.word, c1.count + c2.count));
-```
-
-...and then get emitted back to Pulsar (used now as a sink), sending one’s computation results downstream, back to a Pulsar topic: 
-
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;returns&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;class&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;keyBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;word&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;timeWindow&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;seconds&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;reduce&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ReduceFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c1&lt;/span&gt;&lt;span class=&quot;o&quot [...]
+    &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;word&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;count&lt;/span&gt; [...]
 
-```java
-// emit result via Pulsar producer 
-wc.addSink(new FlinkPulsarProducer&lt;&gt;(
-  serviceUrl,
-  outputTopic,
-  new AuthentificationDisabled(),
-  wordWithCount -&gt; wordWithCount.toString().getBytes(UTF_8),
-  wordWithCount -&gt; wordWithCount.word)
-);
-```
+&lt;p&gt;…and then get emitted back to Pulsar (used now as a sink), sending one’s computation results downstream, back to a Pulsar topic:&lt;/p&gt;
 
-Although this is a great first integration step, the existing design is not leveraging the full power of Pulsar. Some shortcomings of the integration with Flink 1.6.0 relate to Pulsar neither being utilized as durable storage nor having schema integration with Flink, resulting in manual input when describing an application’s schema registry.
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// emit result via Pulsar producer &lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;wc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;addSink&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;FlinkPulsarProducer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;outputTopic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+  &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;AuthentificationDisabled&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;toString&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getBytes&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;UTF_8&lt;/span&gt;&lt;span class=&quot [...]
+  &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;word&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
+&lt;p&gt;Although this is a great first integration step, the existing design is not leveraging the full power of Pulsar. Some shortcomings of the integration with Flink 1.6.0 relate to Pulsar neither being utilized as durable storage nor having schema integration with Flink, resulting in manual input when describing an application’s schema registry.&lt;/p&gt;
 
-# Pulsar’s integration with Flink 1.9: Using Pulsar as a Flink catalog
+&lt;h1 id=&quot;pulsars-integration-with-flink-19-using-pulsar-as-a-flink-catalog&quot;&gt;Pulsar’s integration with Flink 1.9: Using Pulsar as a Flink catalog&lt;/h1&gt;
 
-The latest integration between [Flink 1.9.0](https://flink.apache.org/downloads.html#apache-flink-191) and Pulsar addresses most of the previously mentioned shortcomings. The [contribution of Alibaba’s Blink to the Flink repository](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html) adds many enhancements and new features to the processing framework that make the integration with Pulsar significantly more powerful and impactful. Flink 1.9.0 brings Pulsar schema  [...]
+&lt;p&gt;The latest integration between &lt;a href=&quot;https://flink.apache.org/downloads.html#apache-flink-191&quot;&gt;Flink 1.9.0&lt;/a&gt; and Pulsar addresses most of the previously mentioned shortcomings. The &lt;a href=&quot;https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;contribution of Alibaba’s Blink to the Flink repository&lt;/a&gt; adds many enhancements and new features to the processing framework that make the integration with Pulsar s [...]
 
+&lt;h1 id=&quot;leveraging-the-flink--pulsar-schema-integration&quot;&gt;Leveraging the Flink &amp;lt;&amp;gt; Pulsar Schema Integration&lt;/h1&gt;
 
-# Leveraging the Flink &lt;&gt; Pulsar Schema Integration
+&lt;p&gt;Before delving into the integration details and how you can use Pulsar schema with Flink, let us describe how schema in Pulsar works. Schema in Apache Pulsar already co-exists and serves as the representation of the data on the broker side of the framework, something that makes schema registry with external systems obsolete. Additionally, the data schema in Pulsar is associated with each topic so both producers and consumers send data with predefined schema information, while th [...]
 
-Before delving into the integration details and how you can use Pulsar schema with Flink, let us describe how schema in Pulsar works. Schema in Apache Pulsar already co-exists and serves as the representation of the data on the broker side of the framework, something that makes schema registry with external systems obsolete. Additionally, the data schema in Pulsar is associated with each topic so both producers and consumers send data with predefined schema information, while the broker  [...]
- 
-Below you can find an example of Pulsar’s schema on both the producer and consumer side. On the producer side, you can specify which schema you want to use and Pulsar then sends a POJO class without the need to perform any serialization/deserialization. Similarly, on the consumer end, you can also specify the data schema and upon receiving the data, Pulsar will automatically validate the schema information, fetch the schema of the given version and then deserialize the data back to a POJ [...]
+&lt;p&gt;Below you can find an example of Pulsar’s schema on both the producer and consumer side. On the producer side, you can specify which schema you want to use and Pulsar then sends a POJO class without the need to perform any serialization/deserialization. Similarly, on the consumer end, you can also specify the data schema and upon receiving the data, Pulsar will automatically validate the schema information, fetch the schema of the given version and then deserialize the data back [...]
 
-```java
-// Create producer with Struct schema and send messages
-Producer&lt;User&gt; producer = client.newProducer(Schema.AVRO(User.class)).create();
-producer.newMessage()
-  .value(User.builder()
-    .userName(“pulsar-user”)
-    .userId(1L)
-    .build())
-  .send();
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// Create producer with Struct schema and send messages&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;Producer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;User&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;producer&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;client&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newProducer&lt;/span&gt;&lt;span class=&quot;o&quot; [...]
+&lt;span class=&quot;n&quot;&gt;producer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newMessage&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;User&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;builder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;userName&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;pulsar-user&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;userId&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1L&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;send&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-```java
-// Create consumer with Struct schema and receive messages
-Consumer&lt;User&gt; consumer = client.newCOnsumer(Schema.AVRO(User.class)).create();
-consumer.receive();
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// Create consumer with Struct schema and receive messages&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;Consumer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;User&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;consumer&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;client&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newCOnsumer&lt;/span&gt;&lt;span class=&quot;o&quot; [...]
+&lt;span class=&quot;n&quot;&gt;consumer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;receive&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-Let’s assume we have an application that specifies a schema to the producer and/or consumer. Upon receiving the schema information, the producer (or consumer) — that is connected to the broker — will transfer such information so that the broker can then perform schema registration, validations and schema compatibility checks before returning or rejecting the schema as illustrated in the diagram below: 
+&lt;p&gt;Let’s assume we have an application that specifies a schema to the producer and/or consumer. Upon receiving the schema information, the producer (or consumer) — that is connected to the broker — will transfer such information so that the broker can then perform schema registration, validations and schema compatibility checks before returning or rejecting the schema as illustrated in the diagram below:&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/flink-pulsar-sql-blog-post-visual.png&quot; width=&quot;600px&quot; alt=&quot;Pulsar Schema&quot;/&gt;
+&lt;img src=&quot;/img/blog/flink-pulsar-sql-blog-post-visual.png&quot; width=&quot;600px&quot; alt=&quot;Pulsar Schema&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Not only is Pulsar able to handle and store the schema information, but is additionally able to handle any schema evolution — where necessary. Pulsar will effectively manage any schema evolution in the broker, keeping track of all different versions of your schema while performing any necessary compatibility checks. 
- 
-Moreover, when messages are published on the producer side, Pulsar will tag each message with the schema version as part of each message’s metadata. On the consumer side, when the message is received and the metadata is deserialized, Pulsar will check the schema version associated with this message and will fetch the corresponding schema information from the broker. As a result, when Pulsar integrates with a Flink application it uses the pre-existing schema information and maps individua [...]
- 
-For the cases when Flink users do not interact with schema directly or make use of primitive schema (for example, using a topic to store a string or long number), Pulsar will either convert the message payload into a Flink row, called ‘value’ or — for the cases of structured schema types, like JSON and AVRO —  Pulsar will extract the individual fields from the schema information and will map the fields to Flink’s type system. Finally, all metadata information associated with each message [...]
+&lt;p&gt;Not only is Pulsar able to handle and store the schema information, but it is additionally able to handle any schema evolution where necessary. Pulsar will effectively manage any schema evolution in the broker, keeping track of all different versions of your schema while performing any necessary compatibility checks.&lt;/p&gt;
+
+&lt;p&gt;Moreover, when messages are published on the producer side, Pulsar will tag each message with the schema version as part of each message’s metadata. On the consumer side, when the message is received and the metadata is deserialized, Pulsar will check the schema version associated with this message and will fetch the corresponding schema information from the broker. As a result, when Pulsar integrates with a Flink application it uses the pre-existing schema information and maps  [...]
 
+&lt;p&gt;For the cases when Flink users do not interact with schema directly or make use of primitive schema (for example, using a topic to store a string or long number), Pulsar will either convert the message payload into a Flink row, called ‘value’ or — for the cases of structured schema types, like JSON and AVRO —  Pulsar will extract the individual fields from the schema information and will map the fields to Flink’s type system. Finally, all metadata information associated with eac [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/flink-pulsar-sql-blog-post-visual-primitive-avro-schema.png&quot; width=&quot;600px&quot; alt=&quot;Primitive and AVRO Schema&quot;/&gt;
+&lt;img src=&quot;/img/blog/flink-pulsar-sql-blog-post-visual-primitive-avro-schema.png&quot; width=&quot;600px&quot; alt=&quot;Primitive and AVRO Schema&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Once all the schema information is mapped to Flink’s type system, you can start building a Pulsar source, sink or catalog in Flink based on the specified schema information as illustrated below:
+&lt;p&gt;Once all the schema information is mapped to Flink’s type system, you can start building a Pulsar source, sink or catalog in Flink based on the specified schema information as illustrated below:&lt;/p&gt;
 
-# Flink &amp; Pulsar: Read data from Pulsar
+&lt;h1 id=&quot;flink--pulsar-read-data-from-pulsar&quot;&gt;Flink &amp;amp; Pulsar: Read data from Pulsar&lt;/h1&gt;
 
-* Create a Pulsar source for streaming queries
+&lt;ul&gt;
+  &lt;li&gt;Create a Pulsar source for streaming queries&lt;/li&gt;
+&lt;/ul&gt;
 
-```java
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-val props = new Properties()
-props.setProperty(&quot;service.url&quot;, &quot;pulsar://...&quot;)
-props.setProperty(&quot;admin.url&quot;, &quot;http://...&quot;)
-props.setProperty(&quot;partitionDiscoveryIntervalMillis&quot;, &quot;5000&quot;)
-props.setProperty(&quot;startingOffsets&quot;, &quot;earliest&quot;)
-props.setProperty(&quot;topic&quot;, &quot;test-source-topic&quot;)
-val source = new FlinkPulsarSource(props)
-// you don&#39;t need to provide a type information to addSource since FlinkPulsarSource is ResultTypeQueryable
-val dataStream = env.addSource(source)(null)
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamExecutionEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getExecutionEnvironment&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;props&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;Properties&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;props&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;service.url&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;pulsar://...&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;props&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;admin.url&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;http://...&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;props&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;partitionDiscoveryIntervalMillis&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;5000&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;props&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;startingOffsets&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;earliest&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;props&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;topic&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;test-source-topic&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;source&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;FlinkPulsarSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;props&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;// you don&amp;#39;t need to provide a type information to addSource since FlinkPulsarSource is ResultTypeQueryable&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;dataStream&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;addSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;source&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)(&lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;null&lt;/span& [...]
 
-// chain operations on dataStream of Row and sink the output
-// end method chaining
+&lt;span class=&quot;c1&quot;&gt;// chain operations on dataStream of Row and sink the output&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;// end method chaining&lt;/span&gt;
 
-env.execute()
-```
+&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;execute&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;Register topics in Pulsar as streaming tables&lt;/li&gt;
+&lt;/ul&gt;
 
-* Register topics in Pulsar as streaming tables
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamExecutionEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getExecutionEnvironment&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamTableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;create&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
 
-```java
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-val tEnv = StreamTableEnvironment.create(env)
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;Properties&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;service.url&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;admin.url&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;adminUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;flushOnCheckpoint&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;true&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;failOnWrite&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;true&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;topic&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;test-sink-topic&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
 
-val prop = new Properties()
-prop.setProperty(&quot;service.url&quot;, serviceUrl)
-prop.setProperty(&quot;admin.url&quot;, adminUrl)
-prop.setProperty(&quot;flushOnCheckpoint&quot;, &quot;true&quot;)
-prop.setProperty(&quot;failOnWrite&quot;, &quot;true&quot;)
-props.setProperty(&quot;topic&quot;, &quot;test-sink-topic&quot;)
+&lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;connect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;Pulsar&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;properties&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;inAppendMode&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;registerTableSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;sink-table&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
 
-tEnv
-  .connect(new Pulsar().properties(props))
-  .inAppendMode()
-  .registerTableSource(&quot;sink-table&quot;)
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;sql&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;INSERT INTO sink-table .....&amp;quot;&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;sqlUpdate&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;sql&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;execute&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-val sql = &quot;INSERT INTO sink-table .....&quot;
-tEnv.sqlUpdate(sql)
-env.execute()
-```
+&lt;h1 id=&quot;flink--pulsar-write-data-to-pulsar&quot;&gt;Flink &amp;amp; Pulsar: Write data to Pulsar&lt;/h1&gt;
 
-# Flink &amp; Pulsar: Write data to Pulsar
+&lt;ul&gt;
+  &lt;li&gt;Create a Pulsar sink for streaming queries&lt;/li&gt;
+&lt;/ul&gt;
 
-* Create a Pulsar sink for streaming queries
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamExecutionEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getExecutionEnvironment&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;stream&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;.....&lt;/span&gt;
 
-```java
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-val stream = .....
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;Properties&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;service.url&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;admin.url&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;adminUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;flushOnCheckpoint&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;true&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;failOnWrite&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;true&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;topic&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;test-sink-topic&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
 
-val prop = new Properties()
-prop.setProperty(&quot;service.url&quot;, serviceUrl)
-prop.setProperty(&quot;admin.url&quot;, adminUrl)
-prop.setProperty(&quot;flushOnCheckpoint&quot;, &quot;true&quot;)
-prop.setProperty(&quot;failOnWrite&quot;, &quot;true&quot;)
-props.setProperty(&quot;topic&quot;, &quot;test-sink-topic&quot;)
+&lt;span class=&quot;n&quot;&gt;stream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;addSink&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;FlinkPulsarSink&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;DummyTopicKe [...]
+&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;execute&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-stream.addSink(new FlinkPulsarSink(prop, DummyTopicKeyExtractor))
-env.execute()
-```
+&lt;ul&gt;
+  &lt;li&gt;Write a streaming table to Pulsar&lt;/li&gt;
+&lt;/ul&gt;
 
-* Write a streaming table to Pulsar
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamExecutionEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getExecutionEnvironment&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamTableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;create&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
 
-```java
-val env = StreamExecutionEnvironment.getExecutionEnvironment
-val tEnv = StreamTableEnvironment.create(env)
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;Properties&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;service.url&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;admin.url&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;adminUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;flushOnCheckpoint&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;true&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;failOnWrite&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;true&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;topic&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;test-sink-topic&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
 
-val prop = new Properties()
-prop.setProperty(&quot;service.url&quot;, serviceUrl)
-prop.setProperty(&quot;admin.url&quot;, adminUrl)
-prop.setProperty(&quot;flushOnCheckpoint&quot;, &quot;true&quot;)
-prop.setProperty(&quot;failOnWrite&quot;, &quot;true&quot;)
-props.setProperty(&quot;topic&quot;, &quot;test-sink-topic&quot;)
+&lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;connect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;Pulsar&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;properties&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;prop&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;inAppendMode&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;registerTableSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;sink-table&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
 
-tEnv
-  .connect(new Pulsar().properties(props))
-  .inAppendMode()
-  .registerTableSource(&quot;sink-table&quot;)
+&lt;span class=&quot;n&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;sql&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;INSERT INTO sink-table .....&amp;quot;&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;sqlUpdate&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;sql&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;execute&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-val sql = &quot;INSERT INTO sink-table .....&quot;
-tEnv.sqlUpdate(sql)
-env.execute()
-```
+&lt;p&gt;In every instance, Flink developers only need to specify the properties of how Flink will connect to a Pulsar cluster without worrying about any schema registry, or serialization/deserialization actions and register the Pulsar cluster as a source, sink or streaming table in Flink. Once all three elements are put together, Pulsar can then be registered as a catalog in Flink, something that drastically simplifies how you process and query data like, for example, writing a program  [...]
 
-In every instance, Flink developers only need to specify the properties of how Flink will connect to a Pulsar cluster without worrying about any schema registry, or serialization/deserialization actions and register the Pulsar cluster as a source, sink or streaming table in Flink. Once all three elements are put together, Pulsar can then be registered as a catalog in Flink, something that drastically simplifies how you process and query data like, for example, writing a program to query  [...]
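+&lt;p&gt;As a small sketch of what this simplification looks like in practice: once a Pulsar topic has been registered as a streaming table, as in the snippets above, it can be queried with plain Flink SQL. The table and column names below are hypothetical and only stand in for whatever schema your topic carries:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// run a continuous query against a Pulsar topic registered as a table
+// (&amp;quot;words-table&amp;quot; and its columns are hypothetical names)
+val result = tEnv.sqlQuery(&amp;quot;SELECT word, wordCount FROM `words-table`&amp;quot;)
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+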
+&lt;h1 id=&quot;next-steps--future-integration&quot;&gt;Next Steps &amp;amp; Future Integration&lt;/h1&gt;
 
+&lt;p&gt;The goal of the integration between Pulsar and Flink is to simplify how developers use the two frameworks to build a unified data processing stack. As we progress from the classical Lamda architectures — where an online, speeding layer is combined with an offline, batch layer to run data computations — Flink and Pulsar present a great combination in providing a truly unified data processing stack. We see Flink as a unified computation engine, handling both online (streaming) and [...]
 
-# Next Steps &amp; Future Integration
+&lt;p&gt;There is still a lot of ongoing work and effort from both communities in getting the integration even better, such as a new source API (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface&quot;&gt;FLIP-27&lt;/a&gt;) that will allow the &lt;a href=&quot;http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Discussion-Flink-Pulsar-Connector-td22019.html&quot;&gt;contribution of the Pulsar connectors to the Flink communit [...]
 
-The goal of the integration between Pulsar and Flink is to simplify how developers use the two frameworks to build a unified data processing stack. As we progress from the classical Lamda architectures — where an online, speeding layer is combined with an offline, batch layer to run data computations — Flink and Pulsar present a great combination in providing a truly unified data processing stack. We see Flink as a unified computation engine, handling both online (streaming) and offline  [...]
- 
-There is still a lot of ongoing work and effort from both communities in getting the integration even better, such as a new source API ([FLIP-27](https://cwiki.apache.org/confluence/display/FLINK/FLIP-27%3A+Refactor+Source+Interface)) that will allow the [contribution of the Pulsar connectors to the Flink community](http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Discussion-Flink-Pulsar-Connector-td22019.html) as well as a new subscription type called `Key_Shared` subscrip [...]
- 
-You can find a more detailed overview of the integration work between the two communities in this [recording video](https://youtu.be/3sBXXfgl5vs) from Flink Forward Europe 2019 or sign up to the [Flink dev mailing list](https://flink.apache.org/community.html#mailing-lists) for the latest contribution and integration efforts between Flink and Pulsar. 
+&lt;p&gt;You can find a more detailed overview of the integration work between the two communities in this &lt;a href=&quot;https://youtu.be/3sBXXfgl5vs&quot;&gt;recorded talk&lt;/a&gt; from Flink Forward Europe 2019 or sign up to the &lt;a href=&quot;https://flink.apache.org/community.html#mailing-lists&quot;&gt;Flink dev mailing list&lt;/a&gt; for the latest contribution and integration efforts between Flink and Pulsar.&lt;/p&gt;
 </description>
-<pubDate>Mon, 25 Nov 2019 12:00:00 +0000</pubDate>
+<pubDate>Mon, 25 Nov 2019 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2019/11/25/query-pulsar-streams-using-apache-flink.html</link>
 <guid isPermaLink="true">/news/2019/11/25/query-pulsar-streams-using-apache-flink.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.9.1 Released</title>
-<description>The Apache Flink community released the first bugfix version of the Apache Flink 1.9 series.
+<description>&lt;p&gt;The Apache Flink community released the first bugfix version of the Apache Flink 1.9 series.&lt;/p&gt;
 
-This release includes 96 fixes and minor improvements for Flink 1.9.0. The list below includes a detailed list of all fixes and improvements.
+&lt;p&gt;This release includes 96 fixes and minor improvements for Flink 1.9.0. The list below details all fixes and improvements.&lt;/p&gt;
 
-We highly recommend all users to upgrade to Flink 1.9.1.
+&lt;p&gt;We highly recommend that all users upgrade to Flink 1.9.1.&lt;/p&gt;
 
-Updated Maven dependencies:
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
 
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.9.1&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.9.1&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.9.1&lt;/version&gt;
-&lt;/dependency&gt;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.9.1&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.9.1&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.9.1&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
-List of resolved issues:
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
 
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11630&#39;&gt;FLINK-11630&lt;/a&gt;] -         TaskExecutor does not wait for Task termination when terminating itself
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11630&quot;&gt;FLINK-11630&lt;/a&gt;] -         TaskExecutor does not wait for Task termination when terminating itself
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13490&#39;&gt;FLINK-13490&lt;/a&gt;] -         Fix if one column value is null when reading JDBC, the following values are all null
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13490&quot;&gt;FLINK-13490&lt;/a&gt;] -         Fix if one column value is null when reading JDBC, the following values are all null
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13941&#39;&gt;FLINK-13941&lt;/a&gt;] -         Prevent data-loss by not cleaning up small part files from S3.
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13941&quot;&gt;FLINK-13941&lt;/a&gt;] -         Prevent data-loss by not cleaning up small part files from S3.
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12501&#39;&gt;FLINK-12501&lt;/a&gt;] -         AvroTypeSerializer does not work with types generated by avrohugger
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12501&quot;&gt;FLINK-12501&lt;/a&gt;] -         AvroTypeSerializer does not work with types generated by avrohugger
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13386&#39;&gt;FLINK-13386&lt;/a&gt;] -         Fix some frictions in the new default Web UI
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13386&quot;&gt;FLINK-13386&lt;/a&gt;] -         Fix some frictions in the new default Web UI
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13526&#39;&gt;FLINK-13526&lt;/a&gt;] -         Switching to a non existing catalog or database crashes sql-client
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13526&quot;&gt;FLINK-13526&lt;/a&gt;] -         Switching to a non existing catalog or database crashes sql-client
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13568&#39;&gt;FLINK-13568&lt;/a&gt;] -         DDL create table doesn&amp;#39;t allow STRING data type
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13568&quot;&gt;FLINK-13568&lt;/a&gt;] -         DDL create table doesn&amp;#39;t allow STRING data type
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13805&#39;&gt;FLINK-13805&lt;/a&gt;] -         Bad Error Message when TaskManager is lost
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13805&quot;&gt;FLINK-13805&lt;/a&gt;] -         Bad Error Message when TaskManager is lost
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13806&#39;&gt;FLINK-13806&lt;/a&gt;] -         Metric Fetcher floods the JM log with errors when TM is lost
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13806&quot;&gt;FLINK-13806&lt;/a&gt;] -         Metric Fetcher floods the JM log with errors when TM is lost
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14010&#39;&gt;FLINK-14010&lt;/a&gt;] -         Dispatcher &amp;amp; JobManagers don&amp;#39;t give up leadership when AM is shut down
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14010&quot;&gt;FLINK-14010&lt;/a&gt;] -         Dispatcher &amp;amp; JobManagers don&amp;#39;t give up leadership when AM is shut down
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14145&#39;&gt;FLINK-14145&lt;/a&gt;] -         CompletedCheckpointStore#getLatestCheckpoint(true) returns wrong checkpoint
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14145&quot;&gt;FLINK-14145&lt;/a&gt;] -         CompletedCheckpointStore#getLatestCheckpoint(true) returns wrong checkpoint
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13059&#39;&gt;FLINK-13059&lt;/a&gt;] -         Cassandra Connector leaks Semaphore on Exception and hangs on close
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13059&quot;&gt;FLINK-13059&lt;/a&gt;] -         Cassandra Connector leaks Semaphore on Exception and hangs on close
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13534&#39;&gt;FLINK-13534&lt;/a&gt;] -         Unable to query Hive table with decimal column
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13534&quot;&gt;FLINK-13534&lt;/a&gt;] -         Unable to query Hive table with decimal column
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13562&#39;&gt;FLINK-13562&lt;/a&gt;] -         Throws exception when FlinkRelMdColumnInterval meets two stage stream group aggregate
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13562&quot;&gt;FLINK-13562&lt;/a&gt;] -         Throws exception when FlinkRelMdColumnInterval meets two stage stream group aggregate
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13563&#39;&gt;FLINK-13563&lt;/a&gt;] -         TumblingGroupWindow should implement toString method
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13563&quot;&gt;FLINK-13563&lt;/a&gt;] -         TumblingGroupWindow should implement toString method
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13564&#39;&gt;FLINK-13564&lt;/a&gt;] -         Throw exception if constant with YEAR TO MONTH resolution was used for group windows
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13564&quot;&gt;FLINK-13564&lt;/a&gt;] -         Throw exception if constant with YEAR TO MONTH resolution was used for group windows
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13588&#39;&gt;FLINK-13588&lt;/a&gt;] -         StreamTask.handleAsyncException throws away the exception cause
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13588&quot;&gt;FLINK-13588&lt;/a&gt;] -         StreamTask.handleAsyncException throws away the exception cause
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13653&#39;&gt;FLINK-13653&lt;/a&gt;] -         ResultStore should avoid using RowTypeInfo when creating a result
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13653&quot;&gt;FLINK-13653&lt;/a&gt;] -         ResultStore should avoid using RowTypeInfo when creating a result
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13711&#39;&gt;FLINK-13711&lt;/a&gt;] -         Hive array values not properly displayed in SQL CLI
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13711&quot;&gt;FLINK-13711&lt;/a&gt;] -         Hive array values not properly displayed in SQL CLI
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13737&#39;&gt;FLINK-13737&lt;/a&gt;] -         flink-dist should add provided dependency on flink-examples-table
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13737&quot;&gt;FLINK-13737&lt;/a&gt;] -         flink-dist should add provided dependency on flink-examples-table
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13738&#39;&gt;FLINK-13738&lt;/a&gt;] -         Fix NegativeArraySizeException in LongHybridHashTable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13738&quot;&gt;FLINK-13738&lt;/a&gt;] -         Fix NegativeArraySizeException in LongHybridHashTable
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13742&#39;&gt;FLINK-13742&lt;/a&gt;] -         Fix code generation when aggregation contains both distinct aggregate with and without filter
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13742&quot;&gt;FLINK-13742&lt;/a&gt;] -         Fix code generation when aggregation contains both distinct aggregate with and without filter
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13760&#39;&gt;FLINK-13760&lt;/a&gt;] -         Fix hardcode Scala version dependency in hive connector
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13760&quot;&gt;FLINK-13760&lt;/a&gt;] -         Fix hardcode Scala version dependency in hive connector
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13761&#39;&gt;FLINK-13761&lt;/a&gt;] -         `SplitStream` should be deprecated because `SplitJavaStream` is deprecated
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13761&quot;&gt;FLINK-13761&lt;/a&gt;] -         `SplitStream` should be deprecated because `SplitJavaStream` is deprecated
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13789&#39;&gt;FLINK-13789&lt;/a&gt;] -         Transactional Id Generation fails due to user code impacting formatting string
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13789&quot;&gt;FLINK-13789&lt;/a&gt;] -         Transactional Id Generation fails due to user code impacting formatting string
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13823&#39;&gt;FLINK-13823&lt;/a&gt;] -         Incorrect debug log in CompileUtils
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13823&quot;&gt;FLINK-13823&lt;/a&gt;] -         Incorrect debug log in CompileUtils
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13825&#39;&gt;FLINK-13825&lt;/a&gt;] -         The original plugins dir is not restored after e2e test run
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13825&quot;&gt;FLINK-13825&lt;/a&gt;] -         The original plugins dir is not restored after e2e test run
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13831&#39;&gt;FLINK-13831&lt;/a&gt;] -         Free Slots / All Slots display error
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13831&quot;&gt;FLINK-13831&lt;/a&gt;] -         Free Slots / All Slots display error
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13887&#39;&gt;FLINK-13887&lt;/a&gt;] -         Ensure defaultInputDependencyConstraint to be non-null when setting it in ExecutionConfig
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13887&quot;&gt;FLINK-13887&lt;/a&gt;] -         Ensure defaultInputDependencyConstraint to be non-null when setting it in ExecutionConfig
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13897&#39;&gt;FLINK-13897&lt;/a&gt;] -         OSS FS NOTICE file is placed in wrong directory
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13897&quot;&gt;FLINK-13897&lt;/a&gt;] -         OSS FS NOTICE file is placed in wrong directory
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13933&#39;&gt;FLINK-13933&lt;/a&gt;] -         Hive Generic UDTF can not be used in table API both stream and batch mode
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13933&quot;&gt;FLINK-13933&lt;/a&gt;] -         Hive Generic UDTF can not be used in table API both stream and batch mode
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13936&#39;&gt;FLINK-13936&lt;/a&gt;] -         NOTICE-binary is outdated
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13936&quot;&gt;FLINK-13936&lt;/a&gt;] -         NOTICE-binary is outdated
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13966&#39;&gt;FLINK-13966&lt;/a&gt;] -         Jar sorting in collect_license_files.sh is locale dependent
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13966&quot;&gt;FLINK-13966&lt;/a&gt;] -         Jar sorting in collect_license_files.sh is locale dependent
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14009&#39;&gt;FLINK-14009&lt;/a&gt;] -         Cron jobs broken due to verifying incorrect NOTICE-binary file
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14009&quot;&gt;FLINK-14009&lt;/a&gt;] -         Cron jobs broken due to verifying incorrect NOTICE-binary file
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14049&#39;&gt;FLINK-14049&lt;/a&gt;] -         Update error message for failed partition updates to include task name
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14049&quot;&gt;FLINK-14049&lt;/a&gt;] -         Update error message for failed partition updates to include task name
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14076&#39;&gt;FLINK-14076&lt;/a&gt;] -         &amp;#39;ClassNotFoundException: KafkaException&amp;#39; on Flink v1.9 w/ checkpointing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14076&quot;&gt;FLINK-14076&lt;/a&gt;] -         &amp;#39;ClassNotFoundException: KafkaException&amp;#39; on Flink v1.9 w/ checkpointing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14107&#39;&gt;FLINK-14107&lt;/a&gt;] -         Kinesis consumer record emitter deadlock under event time alignment
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14107&quot;&gt;FLINK-14107&lt;/a&gt;] -         Kinesis consumer record emitter deadlock under event time alignment
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14119&#39;&gt;FLINK-14119&lt;/a&gt;] -         Clean idle state for RetractableTopNFunction
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14119&quot;&gt;FLINK-14119&lt;/a&gt;] -         Clean idle state for RetractableTopNFunction
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14139&#39;&gt;FLINK-14139&lt;/a&gt;] -         Fix potential memory leak of rest server when using session/standalone cluster
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14139&quot;&gt;FLINK-14139&lt;/a&gt;] -         Fix potential memory leak of rest server when using session/standalone cluster
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14140&#39;&gt;FLINK-14140&lt;/a&gt;] -         The Flink Logo Displayed in Flink Python Shell is Broken
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14140&quot;&gt;FLINK-14140&lt;/a&gt;] -         The Flink Logo Displayed in Flink Python Shell is Broken
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14150&#39;&gt;FLINK-14150&lt;/a&gt;] -         Unnecessary __pycache__ directories appears in pyflink.zip
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14150&quot;&gt;FLINK-14150&lt;/a&gt;] -         Unnecessary __pycache__ directories appears in pyflink.zip
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14288&#39;&gt;FLINK-14288&lt;/a&gt;] -         Add Py4j NOTICE for source release
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14288&quot;&gt;FLINK-14288&lt;/a&gt;] -         Add Py4j NOTICE for source release
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13892&#39;&gt;FLINK-13892&lt;/a&gt;] -         HistoryServerTest failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13892&quot;&gt;FLINK-13892&lt;/a&gt;] -         HistoryServerTest failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14043&#39;&gt;FLINK-14043&lt;/a&gt;] -         SavepointMigrationTestBase is super slow
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14043&quot;&gt;FLINK-14043&lt;/a&gt;] -         SavepointMigrationTestBase is super slow
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12164&#39;&gt;FLINK-12164&lt;/a&gt;] -         JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout is unstable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12164&quot;&gt;FLINK-12164&lt;/a&gt;] -         JobMasterTest.testJobFailureWhenTaskExecutorHeartbeatTimeout is unstable
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9900&#39;&gt;FLINK-9900&lt;/a&gt;] -         Fix unstable test ZooKeeperHighAvailabilityITCase#testRestoreBehaviourWithFaultyStateHandles
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9900&quot;&gt;FLINK-9900&lt;/a&gt;] -         Fix unstable test ZooKeeperHighAvailabilityITCase#testRestoreBehaviourWithFaultyStateHandles
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13484&#39;&gt;FLINK-13484&lt;/a&gt;] -         ConnectedComponents end-to-end test instable with NoResourceAvailableException
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13484&quot;&gt;FLINK-13484&lt;/a&gt;] -         ConnectedComponents end-to-end test instable with NoResourceAvailableException
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13489&#39;&gt;FLINK-13489&lt;/a&gt;] -         Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13489&quot;&gt;FLINK-13489&lt;/a&gt;] -         Heavy deployment end-to-end test fails on Travis with TM heartbeat timeout
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13514&#39;&gt;FLINK-13514&lt;/a&gt;] -         StreamTaskTest.testAsyncCheckpointingConcurrentCloseAfterAcknowledge unstable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13514&quot;&gt;FLINK-13514&lt;/a&gt;] -         StreamTaskTest.testAsyncCheckpointingConcurrentCloseAfterAcknowledge unstable
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13530&#39;&gt;FLINK-13530&lt;/a&gt;] -         AbstractServerTest failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13530&quot;&gt;FLINK-13530&lt;/a&gt;] -         AbstractServerTest failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13585&#39;&gt;FLINK-13585&lt;/a&gt;] -         Fix sporadical deallock in TaskAsyncCallTest#testSetsUserCodeClassLoader()
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13585&quot;&gt;FLINK-13585&lt;/a&gt;] -         Fix sporadical deallock in TaskAsyncCallTest#testSetsUserCodeClassLoader()
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13599&#39;&gt;FLINK-13599&lt;/a&gt;] -         Kinesis end-to-end test failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13599&quot;&gt;FLINK-13599&lt;/a&gt;] -         Kinesis end-to-end test failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13663&#39;&gt;FLINK-13663&lt;/a&gt;] -         SQL Client end-to-end test for modern Kafka failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13663&quot;&gt;FLINK-13663&lt;/a&gt;] -         SQL Client end-to-end test for modern Kafka failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13688&#39;&gt;FLINK-13688&lt;/a&gt;] -         HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13688&quot;&gt;FLINK-13688&lt;/a&gt;] -         HiveCatalogUseBlinkITCase.testBlinkUdf constantly failed
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13739&#39;&gt;FLINK-13739&lt;/a&gt;] -         BinaryRowTest.testWriteString() fails in some environments
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13739&quot;&gt;FLINK-13739&lt;/a&gt;] -         BinaryRowTest.testWriteString() fails in some environments
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13746&#39;&gt;FLINK-13746&lt;/a&gt;] -         Elasticsearch (v2.3.5) sink end-to-end test fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13746&quot;&gt;FLINK-13746&lt;/a&gt;] -         Elasticsearch (v2.3.5) sink end-to-end test fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13769&#39;&gt;FLINK-13769&lt;/a&gt;] -         BatchFineGrainedRecoveryITCase.testProgram failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13769&quot;&gt;FLINK-13769&lt;/a&gt;] -         BatchFineGrainedRecoveryITCase.testProgram failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13807&#39;&gt;FLINK-13807&lt;/a&gt;] -         Flink-avro unit tests fails if the character encoding in the environment is not default to UTF-8
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13807&quot;&gt;FLINK-13807&lt;/a&gt;] -         Flink-avro unit tests fails if the character encoding in the environment is not default to UTF-8
 &lt;/li&gt;
 &lt;/ul&gt;
 
-
 &lt;h2&gt;        Improvement
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13965&#39;&gt;FLINK-13965&lt;/a&gt;] -         Keep hasDeprecatedKeys and deprecatedKeys methods in ConfigOption and mark it with @Deprecated annotation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13965&quot;&gt;FLINK-13965&lt;/a&gt;] -         Keep hasDeprecatedKeys and deprecatedKeys methods in ConfigOption and mark it with @Deprecated annotation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9941&#39;&gt;FLINK-9941&lt;/a&gt;] -         Flush in ScalaCsvOutputFormat before close method
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9941&quot;&gt;FLINK-9941&lt;/a&gt;] -         Flush in ScalaCsvOutputFormat before close method
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13336&#39;&gt;FLINK-13336&lt;/a&gt;] -         Remove the legacy batch fault tolerance page and redirect it to the new task failure recovery page
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13336&quot;&gt;FLINK-13336&lt;/a&gt;] -         Remove the legacy batch fault tolerance page and redirect it to the new task failure recovery page
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13380&#39;&gt;FLINK-13380&lt;/a&gt;] -         Improve the usability of Flink session cluster on Kubernetes
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13380&quot;&gt;FLINK-13380&lt;/a&gt;] -         Improve the usability of Flink session cluster on Kubernetes
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13819&#39;&gt;FLINK-13819&lt;/a&gt;] -         Introduce RpcEndpoint State
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13819&quot;&gt;FLINK-13819&lt;/a&gt;] -         Introduce RpcEndpoint State
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13845&#39;&gt;FLINK-13845&lt;/a&gt;] -         Drop all the content of removed &amp;quot;Checkpointed&amp;quot; interface
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13845&quot;&gt;FLINK-13845&lt;/a&gt;] -         Drop all the content of removed &amp;quot;Checkpointed&amp;quot; interface
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13957&#39;&gt;FLINK-13957&lt;/a&gt;] -         Log dynamic properties on job submission
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13957&quot;&gt;FLINK-13957&lt;/a&gt;] -         Log dynamic properties on job submission
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13967&#39;&gt;FLINK-13967&lt;/a&gt;] -         Generate full binary licensing via collect_license_files.sh
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13967&quot;&gt;FLINK-13967&lt;/a&gt;] -         Generate full binary licensing via collect_license_files.sh
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13968&#39;&gt;FLINK-13968&lt;/a&gt;] -         Add travis check for the correctness of the binary licensing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13968&quot;&gt;FLINK-13968&lt;/a&gt;] -         Add travis check for the correctness of the binary licensing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13449&#39;&gt;FLINK-13449&lt;/a&gt;] -         Add ARM architecture to MemoryArchitecture
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13449&quot;&gt;FLINK-13449&lt;/a&gt;] -         Add ARM architecture to MemoryArchitecture
 &lt;/li&gt;
 &lt;/ul&gt;
 
-
 &lt;h2&gt;        Documentation
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13105&#39;&gt;FLINK-13105&lt;/a&gt;] -         Add documentation for blink planner&amp;#39;s built-in functions
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13105&quot;&gt;FLINK-13105&lt;/a&gt;] -         Add documentation for blink planner&amp;#39;s built-in functions
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13277&#39;&gt;FLINK-13277&lt;/a&gt;] -         add documentation of Hive source/sink
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13277&quot;&gt;FLINK-13277&lt;/a&gt;] -         add documentation of Hive source/sink
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13354&#39;&gt;FLINK-13354&lt;/a&gt;] -         Add documentation for how to use blink planner
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13354&quot;&gt;FLINK-13354&lt;/a&gt;] -         Add documentation for how to use blink planner
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13355&#39;&gt;FLINK-13355&lt;/a&gt;] -         Add documentation for Temporal Table Join in blink planner
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13355&quot;&gt;FLINK-13355&lt;/a&gt;] -         Add documentation for Temporal Table Join in blink planner
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13356&#39;&gt;FLINK-13356&lt;/a&gt;] -         Add documentation for TopN and Deduplication in blink planner
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13356&quot;&gt;FLINK-13356&lt;/a&gt;] -         Add documentation for TopN and Deduplication in blink planner
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13359&#39;&gt;FLINK-13359&lt;/a&gt;] -         Add documentation for DDL introduction
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13359&quot;&gt;FLINK-13359&lt;/a&gt;] -         Add documentation for DDL introduction
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13362&#39;&gt;FLINK-13362&lt;/a&gt;] -         Add documentation for Kafka &amp;amp; ES &amp;amp; FileSystem DDL
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13362&quot;&gt;FLINK-13362&lt;/a&gt;] -         Add documentation for Kafka &amp;amp; ES &amp;amp; FileSystem DDL
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13363&#39;&gt;FLINK-13363&lt;/a&gt;] -         Add documentation for streaming aggregate performance tunning.
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13363&quot;&gt;FLINK-13363&lt;/a&gt;] -         Add documentation for streaming aggregate performance tunning.
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13706&#39;&gt;FLINK-13706&lt;/a&gt;] -         add documentation of how to use Hive functions in Flink
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13706&quot;&gt;FLINK-13706&lt;/a&gt;] -         add documentation of how to use Hive functions in Flink
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13942&#39;&gt;FLINK-13942&lt;/a&gt;] -         Add Overview page for Getting Started section
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13942&quot;&gt;FLINK-13942&lt;/a&gt;] -         Add Overview page for Getting Started section
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13863&#39;&gt;FLINK-13863&lt;/a&gt;] -         Update Operations Playground to Flink 1.9.0
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13863&quot;&gt;FLINK-13863&lt;/a&gt;] -         Update Operations Playground to Flink 1.9.0
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13937&#39;&gt;FLINK-13937&lt;/a&gt;] -         Fix wrong hive dependency version in documentation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13937&quot;&gt;FLINK-13937&lt;/a&gt;] -         Fix wrong hive dependency version in documentation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13830&#39;&gt;FLINK-13830&lt;/a&gt;] -         The Document about Cluster on yarn have some problems
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13830&quot;&gt;FLINK-13830&lt;/a&gt;] -         The Document about Cluster on yarn have some problems
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-14160&#39;&gt;FLINK-14160&lt;/a&gt;] -         Extend Operations Playground with --backpressure option
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-14160&quot;&gt;FLINK-14160&lt;/a&gt;] -         Extend Operations Playground with --backpressure option
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13388&#39;&gt;FLINK-13388&lt;/a&gt;] -         Update UI screenshots in the documentation to the new default Web Frontend
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13388&quot;&gt;FLINK-13388&lt;/a&gt;] -         Update UI screenshots in the documentation to the new default Web Frontend
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13415&#39;&gt;FLINK-13415&lt;/a&gt;] -         Document how to use hive connector in scala shell
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13415&quot;&gt;FLINK-13415&lt;/a&gt;] -         Document how to use hive connector in scala shell
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13517&#39;&gt;FLINK-13517&lt;/a&gt;] -         Restructure Hive Catalog documentation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13517&quot;&gt;FLINK-13517&lt;/a&gt;] -         Restructure Hive Catalog documentation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13643&#39;&gt;FLINK-13643&lt;/a&gt;] -         Document the workaround for users with a different minor Hive version
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13643&quot;&gt;FLINK-13643&lt;/a&gt;] -         Document the workaround for users with a different minor Hive version
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13757&#39;&gt;FLINK-13757&lt;/a&gt;] -         Fix wrong description of &quot;IS NOT TRUE&quot; function documentation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13757&quot;&gt;FLINK-13757&lt;/a&gt;] -         Fix wrong description of &quot;IS NOT TRUE&quot; function documentation
 &lt;/li&gt;
 &lt;/ul&gt;
 
 </description>
-<pubDate>Fri, 18 Oct 2019 12:00:00 +0000</pubDate>
+<pubDate>Fri, 18 Oct 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/news/2019/10/18/release-1.9.1.html</link>
 <guid isPermaLink="true">/news/2019/10/18/release-1.9.1.html</guid>
 </item>
 
 <item>
 <title>The State Processor API: How to Read, write and modify the state of Flink applications</title>
-<description>Whether you are running Apache Flink&lt;sup&gt;Ⓡ&lt;/sup&gt; in production or evaluated Flink as a computation framework in the past, you&#39;ve probably found yourself asking the question: How can I access, write or update state in a Flink savepoint? Ask no more! [Apache Flink 1.9.0](https://flink.apache.org/news/2019/08/22/release-1.9.0.html) introduces the [State Processor API](https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.html), [...]
- 
-In this post, we explain why this feature is a big step for Flink, what you can use it for, and how to use it. Finally, we will discuss the future of the State Processor API and how it aligns with our plans to evolve Flink into a system for [unified batch and stream processing](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html).
+<description>&lt;p&gt;Whether you are running Apache Flink&lt;sup&gt;Ⓡ&lt;/sup&gt; in production or evaluated Flink as a computation framework in the past, you’ve probably found yourself asking the question: How can I access, write or update state in a Flink savepoint? Ask no more! &lt;a href=&quot;https://flink.apache.org/news/2019/08/22/release-1.9.0.html&quot;&gt;Apache Flink 1.9.0&lt;/a&gt; introduces the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/de [...]
 
-## Stateful Stream Processing with Apache Flink until Flink 1.9
+&lt;p&gt;In this post, we explain why this feature is a big step for Flink, what you can use it for, and how to use it. Finally, we will discuss the future of the State Processor API and how it aligns with our plans to evolve Flink into a system for &lt;a href=&quot;https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;unified batch and stream processing&lt;/a&gt;.&lt;/p&gt;
 
-All non-trivial stream processing applications are stateful and most of them are designed to run for months or years. Over time, many of them accumulate a lot of valuable state that can be very expensive or even impossible to rebuild if it gets lost due to a failure. In order to guarantee the consistency and durability of application state, Flink featured a sophisticated checkpointing and recovery mechanism from very early on. With every release, the Flink community has added more and mo [...]
+&lt;h2 id=&quot;stateful-stream-processing-with-apache-flink-until-flink-19&quot;&gt;Stateful Stream Processing with Apache Flink until Flink 1.9&lt;/h2&gt;
 
-However, a feature that was commonly requested by Flink users was the ability to access the state of an application “from the outside”. This request was motivated by the need to validate or debug the state of an application, to migrate the state of an application to another application, to evolve an application from the Heap State Backend to the RocksDB State Backend, or to import the initial state of an application from an external system like a relational database.
+&lt;p&gt;All non-trivial stream processing applications are stateful and most of them are designed to run for months or years. Over time, many of them accumulate a lot of valuable state that can be very expensive or even impossible to rebuild if it gets lost due to a failure. In order to guarantee the consistency and durability of application state, Flink featured a sophisticated checkpointing and recovery mechanism from very early on. With every release, the Flink community has added mo [...]
 
-Despite all those convincing reasons to expose application state externally, your access options have been fairly limited until now. Flink&#39;s Queryable State feature only supports key-lookups (point queries) and does not guarantee the consistency of returned values (the value of a key might be different before and after an application recovered from a failure). Moreover, queryable state cannot be used to add or modify the state of an application. Also, savepoints, which are consistent [...]
+&lt;p&gt;However, a feature that was commonly requested by Flink users was the ability to access the state of an application “from the outside”. This request was motivated by the need to validate or debug the state of an application, to migrate the state of an application to another application, to evolve an application from the Heap State Backend to the RocksDB State Backend, or to import the initial state of an application from an external system like a relational database.&lt;/p&gt;
 
-## Reading and Writing Application State with the State Processor API
+&lt;p&gt;Despite all those convincing reasons to expose application state externally, your access options have been fairly limited until now. Flink’s Queryable State feature only supports key-lookups (point queries) and does not guarantee the consistency of returned values (the value of a key might be different before and after an application recovered from a failure). Moreover, queryable state cannot be used to add or modify the state of an application. Also, savepoints, which are consi [...]
 
-The State Processor API that comes with Flink 1.9 is a true game-changer in how you can work with application state! In a nutshell, it extends the DataSet API with Input and OutputFormats to read and write savepoint or checkpoint data. Due to the [interoperability of DataSet and Table API](https://ci.apache.org/projects/flink/flink-docs-master/dev/table/common.html#integration-with-datastream-and-dataset-api), you can even use relational Table API or SQL queries to analyze and process st [...]
+&lt;h2 id=&quot;reading-and-writing-application-state-with-the-state-processor-api&quot;&gt;Reading and Writing Application State with the State Processor API&lt;/h2&gt;
 
-For example, you can take a savepoint of a running stream processing application and analyze it with a DataSet batch program to verify that the application behaves correctly. Or you can read a batch of data from any store, preprocess it, and write the result to a savepoint that you use to bootstrap the state of a streaming application. It&#39;s also possible to fix inconsistent state entries now. Finally, the State Processor API opens up many ways to evolve a stateful application that we [...]
+&lt;p&gt;The State Processor API that comes with Flink 1.9 is a true game-changer in how you can work with application state! In a nutshell, it extends the DataSet API with Input and OutputFormats to read and write savepoint or checkpoint data. Due to the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/dev/table/common.html#integration-with-datastream-and-dataset-api&quot;&gt;interoperability of DataSet and Table API&lt;/a&gt;, you can even use relational Table AP [...]
 
-## Mapping Application State to DataSets
+&lt;p&gt;For example, you can take a savepoint of a running stream processing application and analyze it with a DataSet batch program to verify that the application behaves correctly. Or you can read a batch of data from any store, preprocess it, and write the result to a savepoint that you use to bootstrap the state of a streaming application. It’s also possible to fix inconsistent state entries now. Finally, the State Processor API opens up many ways to evolve a stateful application th [...]
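
To make the bootstrapping use case above a bit more concrete, below is a minimal, hypothetical sketch (not part of the original post) of how a new savepoint could be written from a batch DataSet with the Flink 1.9 State Processor API. The `Account` POJO, the operator UID `account-uid`, the chosen state backend, and the target path are illustrative placeholders, not values from the post.

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.BootstrapTransformation;
import org.apache.flink.state.api.OperatorTransformation;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateBootstrapFunction;

public class BootstrapSavepoint {

    /** Hypothetical POJO holding the precomputed initial state for one key. */
    public static class Account {
        public Integer id;
        public Double balance;
    }

    /** Writes one ValueState entry per account key into the new savepoint. */
    public static class AccountBootstrapper extends KeyedStateBootstrapFunction<Integer, Account> {
        private transient ValueState<Double> balance;

        @Override
        public void open(Configuration parameters) {
            balance = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("balance", Types.DOUBLE));
        }

        @Override
        public void processElement(Account value, Context ctx) throws Exception {
            balance.update(value.balance);
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // In practice this DataSet would be read from an external store and preprocessed.
        Account account = new Account();
        account.id = 1;
        account.balance = 100.0;
        DataSet<Account> accounts = env.fromElements(account);

        // Turn the DataSet into keyed state for the operator that should be bootstrapped.
        BootstrapTransformation<Account> transformation = OperatorTransformation
                .bootstrapWith(accounts)
                .keyBy(acc -> acc.id)
                .transform(new AccountBootstrapper());

        Savepoint
                .create(new MemoryStateBackend(), 128)        // state backend + max parallelism
                .withOperator("account-uid", transformation)  // UID of the operator to bootstrap
                .write("hdfs:///savepoints/bootstrap");       // target path (placeholder)

        // The write() call registers a sink; run the batch job to materialize the savepoint.
        env.execute("bootstrap savepoint");
    }
}
```

A streaming application whose operator carries the UID `account-uid` could then be started from the written savepoint like from any regular savepoint.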
 
-The State Processor API maps the state of a streaming application to one or more data sets that can be separately processed. In order to be able to use the API, you need to understand how this mapping works.
- 
-But let&#39;s first have a look at what a stateful Flink job looks like. A Flink job is composed of operators, typically one or more source operators, a few operators for the actual processing, and one or more sink operators. Each operator runs in parallel in one or more tasks and can work with different types of state. An operator can have zero, one, or more *“operator states”* which are organized as lists that are scoped to the operator&#39;s tasks. If the operator is applied on a keye [...]
- 
-The following figure shows the application “MyApp” which consists of three operators called “Src”, “Proc”, and “Snk”. Src has one operator state (os1), Proc has one operator state (os2) and two keyed states (ks1, ks2) and Snk is stateless.
+&lt;h2 id=&quot;mapping-application-state-to-datasets&quot;&gt;Mapping Application State to DataSets&lt;/h2&gt;
+
+&lt;p&gt;The State Processor API maps the state of a streaming application to one or more data sets that can be separately processed. In order to be able to use the API, you need to understand how this mapping works.&lt;/p&gt;
+
+&lt;p&gt;But let’s first have a look at what a stateful Flink job looks like. A Flink job is composed of operators, typically one or more source operators, a few operators for the actual processing, and one or more sink operators. Each operator runs in parallel in one or more tasks and can work with different types of state. An operator can have zero, one, or more &lt;em&gt;“operator states”&lt;/em&gt; which are organized as lists that are scoped to the operator’s tasks. If the operator  [...]
+
+&lt;p&gt;The following figure shows the application “MyApp” which consists of three operators called “Src”, “Proc”, and “Snk”. Src has one operator state (os1), Proc has one operator state (os2) and two keyed states (ks1, ks2) and Snk is stateless.&lt;/p&gt;
 
 &lt;p style=&quot;display: block; text-align: center; margin-top: 20px; margin-bottom: 20px&quot;&gt;
-	&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-09-13-state-processor-api-blog/application-my-app-state-processor-api.png&quot; width=&quot;600px&quot; alt=&quot;Application: My App&quot;/&gt;
+	&lt;img src=&quot;/img/blog/2019-09-13-state-processor-api-blog/application-my-app-state-processor-api.png&quot; width=&quot;600px&quot; alt=&quot;Application: My App&quot; /&gt;
 &lt;/p&gt;
 
-A savepoint or checkpoint of MyApp consists of the data of all states, organized in a way that the states of each task can be restored. When processing the data of a savepoint (or checkpoint) with a batch job, we need a mental model that maps the data of the individual tasks&#39; states into data sets or tables. In fact, we can think of a savepoint as a database. Every operator (identified by its UID) represents a namespace. Each operator state of an operator is mapped to a dedicated tab [...]
+&lt;p&gt;A savepoint or checkpoint of MyApp consists of the data of all states, organized in a way that the states of each task can be restored. When processing the data of a savepoint (or checkpoint) with a batch job, we need a mental model that maps the data of the individual tasks’ states into data sets or tables. In fact, we can think of a savepoint as a database. Every operator (identified by its UID) represents a namespace. Each operator state of an operator is mapped to a dedicate [...]
 
 &lt;p style=&quot;display: block; text-align: center; margin-top: 20px; margin-bottom: 20px&quot;&gt;
-	&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-09-13-state-processor-api-blog/database-my-app-state-processor-api.png&quot; width=&quot;600px&quot; alt=&quot;Database: My App&quot;/&gt;
+	&lt;img src=&quot;/img/blog/2019-09-13-state-processor-api-blog/database-my-app-state-processor-api.png&quot; width=&quot;600px&quot; alt=&quot;Database: My App&quot; /&gt;
 &lt;/p&gt;
 
-The figure shows how the values of Src&#39;s operator state are mapped to a table with one column and five rows, one row for all list entries across all parallel tasks of Src. Operator state os2 of the operator “Proc” is similarly mapped to an individual table. The keyed states ks1 and ks2 are combined to a single table with three columns, one for the key, one for ks1 and one for ks2. The keyed table holds one row for each distinct key of both keyed states. Since the operator “Snk” does  [...]
+&lt;p&gt;The figure shows how the values of Src’s operator state are mapped to a table with one column and five rows, one row for all list entries across all parallel tasks of Src. Operator state os2 of the operator “Proc” is similarly mapped to an individual table. The keyed states ks1 and ks2 are combined to a single table with three columns, one for the key, one for ks1 and one for ks2. The keyed table holds one row for each distinct key of both keyed states. Since the operator “Snk”  [...]
 
-The State Processor API now offers methods to create, load, and write a savepoint. You can read a DataSet from a loaded savepoint or convert a DataSet into a state and add it to a savepoint. DataSets can be processed with the full feature set of the DataSet API. With these building blocks, all of the before-mentioned use cases (and more) can be addressed. Please have a look at the [documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_processor_api.htm [...]
+&lt;p&gt;The State Processor API now offers methods to create, load, and write a savepoint. You can read a DataSet from a loaded savepoint or convert a DataSet into a state and add it to a savepoint. DataSets can be processed with the full feature set of the DataSet API. With these building blocks, all of the before-mentioned use cases (and more) can be addressed. Please have a look at the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/libs/state_process [...]
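
For illustration only (again, not part of the original post), here is a minimal sketch of how loading a savepoint of “MyApp” and reading its state back into DataSets could look. The savepoint path, the operator UIDs `src-uid` and `proc-uid`, and the state names and types are hypothetical placeholders that merely follow the figures above.

```java
import org.apache.flink.api.common.state.ValueState;
import org.apache.flink.api.common.state.ValueStateDescriptor;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.DataSet;
import org.apache.flink.api.java.ExecutionEnvironment;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.runtime.state.memory.MemoryStateBackend;
import org.apache.flink.state.api.ExistingSavepoint;
import org.apache.flink.state.api.Savepoint;
import org.apache.flink.state.api.functions.KeyedStateReaderFunction;
import org.apache.flink.util.Collector;

public class ReadMyAppSavepoint {

    /** Reads the keyed state ks1 of operator "Proc" into (key -> value) strings. */
    public static class Ks1Reader extends KeyedStateReaderFunction<Integer, String> {
        private transient ValueState<Double> ks1;

        @Override
        public void open(Configuration parameters) {
            ks1 = getRuntimeContext().getState(
                    new ValueStateDescriptor<>("ks1", Types.DOUBLE));
        }

        @Override
        public void readKey(Integer key, Context ctx, Collector<String> out) throws Exception {
            out.collect(key + " -> " + ks1.value());
        }
    }

    public static void main(String[] args) throws Exception {
        ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

        // Load the savepoint of "MyApp"; the path is a placeholder.
        ExistingSavepoint savepoint = Savepoint.load(
                env, "hdfs:///savepoints/my-app/savepoint-123", new MemoryStateBackend());

        // Operator state os1 of "Src" becomes a DataSet with one row per list entry.
        DataSet<String> os1 = savepoint.readListState("src-uid", "os1", Types.STRING);

        // Keyed state ks1 of "Proc" becomes a DataSet with one row per distinct key.
        DataSet<String> ks1 = savepoint.readKeyedState("proc-uid", new Ks1Reader());

        os1.print();
        ks1.print();
    }
}
```

The resulting DataSets are ordinary DataSet API data sets, so they can be filtered, joined, aggregated, or written out with the full feature set described above; `print()` simply triggers an eager execution for inspection.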
 
-## Why DataSet API?
+&lt;h2 id=&quot;why-dataset-api&quot;&gt;Why DataSet API?&lt;/h2&gt;
 
-In case you are familiar with [Flink&#39;s roadmap](https://flink.apache.org/roadmap.html), you might be surprised that the State Processor API is based on the DataSet API. The Flink community plans to extend the DataStream API with the concept of *BoundedStreams* and deprecate the DataSet API. When designing this feature, we also evaluated the DataStream API or Table API but neither could provide the right feature set yet. Since we didn&#39;t want to block this feature on the progress o [...]
+&lt;p&gt;In case you are familiar with &lt;a href=&quot;https://flink.apache.org/roadmap.html&quot;&gt;Flink’s roadmap&lt;/a&gt;, you might be surprised that the State Processor API is based on the DataSet API. The Flink community plans to extend the DataStream API with the concept of &lt;em&gt;BoundedStreams&lt;/em&gt; and deprecate the DataSet API. When designing this feature, we also evaluated the DataStream API or Table API but neither could provide the right feature set yet. Since w [...]
 
-## Summary
+&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;
 
-Flink users have requested a feature to access and modify the state of streaming applications from the outside for a long time. With the State Processor API, Flink 1.9.0 finally exposes application state as a data format that can be manipulated. This feature opens up many new possibilities for how users can maintain and manage Flink streaming applications, including arbitrary evolution of stream applications and exporting and bootstrapping of application state. To put it concisely, the S [...]
+&lt;p&gt;Flink users have requested a feature to access and modify the state of streaming applications from the outside for a long time. With the State Processor API, Flink 1.9.0 finally exposes application state as a data format that can be manipulated. This feature opens up many new possibilities for how users can maintain and manage Flink streaming applications, including arbitrary evolution of stream applications and exporting and bootstrapping of application state. To put it concise [...]
 </description>
-<pubDate>Fri, 13 Sep 2019 12:00:00 +0000</pubDate>
+<pubDate>Fri, 13 Sep 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/feature/2019/09/13/state-processor-api.html</link>
 <guid isPermaLink="true">/feature/2019/09/13/state-processor-api.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.8.2 Released</title>
-<description>The Apache Flink community released the second bugfix version of the Apache Flink 1.8 series.
+<description>&lt;p&gt;The Apache Flink community released the second bugfix version of the Apache Flink 1.8 series.&lt;/p&gt;
 
-This release includes 23 fixes and minor improvements for Flink 1.8.1. The list below includes a detailed list of all fixes and improvements.
+&lt;p&gt;This release includes 23 fixes and minor improvements for Flink 1.8.1. The list below includes a detailed list of all fixes and improvements.&lt;/p&gt;
 
-We highly recommend all users to upgrade to Flink 1.8.2.
+&lt;p&gt;We highly recommend all users to upgrade to Flink 1.8.2.&lt;/p&gt;
 
-Updated Maven dependencies:
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
 
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.8.2&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.8.2&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.8.2&lt;/version&gt;
-&lt;/dependency&gt;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
-List of resolved issues:
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
 
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13941&#39;&gt;FLINK-13941&lt;/a&gt;] -         Prevent data-loss by not cleaning up small part files from S3.
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13941&quot;&gt;FLINK-13941&lt;/a&gt;] -         Prevent data-loss by not cleaning up small part files from S3.
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9526&#39;&gt;FLINK-9526&lt;/a&gt;] -         BucketingSink end-to-end test failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9526&quot;&gt;FLINK-9526&lt;/a&gt;] -         BucketingSink end-to-end test failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10368&#39;&gt;FLINK-10368&lt;/a&gt;] -         &amp;#39;Kerberized YARN on Docker test&amp;#39; unstable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10368&quot;&gt;FLINK-10368&lt;/a&gt;] -         &amp;#39;Kerberized YARN on Docker test&amp;#39; unstable
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12319&#39;&gt;FLINK-12319&lt;/a&gt;] -         StackOverFlowError in cep.nfa.sharedbuffer.SharedBuffer
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12319&quot;&gt;FLINK-12319&lt;/a&gt;] -         StackOverFlowError in cep.nfa.sharedbuffer.SharedBuffer
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12736&#39;&gt;FLINK-12736&lt;/a&gt;] -         ResourceManager may release TM with allocated slots
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12736&quot;&gt;FLINK-12736&lt;/a&gt;] -         ResourceManager may release TM with allocated slots
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12889&#39;&gt;FLINK-12889&lt;/a&gt;] -         Job keeps in FAILING state
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12889&quot;&gt;FLINK-12889&lt;/a&gt;] -         Job keeps in FAILING state
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13059&#39;&gt;FLINK-13059&lt;/a&gt;] -         Cassandra Connector leaks Semaphore on Exception; hangs on close
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13059&quot;&gt;FLINK-13059&lt;/a&gt;] -         Cassandra Connector leaks Semaphore on Exception; hangs on close
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13159&#39;&gt;FLINK-13159&lt;/a&gt;] -         java.lang.ClassNotFoundException when restore job
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13159&quot;&gt;FLINK-13159&lt;/a&gt;] -         java.lang.ClassNotFoundException when restore job
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13367&#39;&gt;FLINK-13367&lt;/a&gt;] -         Make ClosureCleaner detect writeReplace serialization override
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13367&quot;&gt;FLINK-13367&lt;/a&gt;] -         Make ClosureCleaner detect writeReplace serialization override
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13369&#39;&gt;FLINK-13369&lt;/a&gt;] -         Recursive closure cleaner ends up with stackOverflow in case of circular dependency
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13369&quot;&gt;FLINK-13369&lt;/a&gt;] -         Recursive closure cleaner ends up with stackOverflow in case of circular dependency
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13394&#39;&gt;FLINK-13394&lt;/a&gt;] -         Use fallback unsafe secure MapR in nightly.sh
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13394&quot;&gt;FLINK-13394&lt;/a&gt;] -         Use fallback unsafe secure MapR in nightly.sh
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13484&#39;&gt;FLINK-13484&lt;/a&gt;] -         ConnectedComponents end-to-end test instable with NoResourceAvailableException
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13484&quot;&gt;FLINK-13484&lt;/a&gt;] -         ConnectedComponents end-to-end test instable with NoResourceAvailableException
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13499&#39;&gt;FLINK-13499&lt;/a&gt;] -         Remove dependency on MapR artifact repository
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13499&quot;&gt;FLINK-13499&lt;/a&gt;] -         Remove dependency on MapR artifact repository
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13508&#39;&gt;FLINK-13508&lt;/a&gt;] -         CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13508&quot;&gt;FLINK-13508&lt;/a&gt;] -         CommonTestUtils#waitUntilCondition() may attempt to sleep with negative time
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13586&#39;&gt;FLINK-13586&lt;/a&gt;] -         Method ClosureCleaner.clean broke backward compatibility between 1.8.0 and 1.8.1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13586&quot;&gt;FLINK-13586&lt;/a&gt;] -         Method ClosureCleaner.clean broke backward compatibility between 1.8.0 and 1.8.1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13761&#39;&gt;FLINK-13761&lt;/a&gt;] -         `SplitStream` should be deprecated because `SplitJavaStream` is deprecated
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13761&quot;&gt;FLINK-13761&lt;/a&gt;] -         `SplitStream` should be deprecated because `SplitJavaStream` is deprecated
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13789&#39;&gt;FLINK-13789&lt;/a&gt;] -         Transactional Id Generation fails due to user code impacting formatting string
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13789&quot;&gt;FLINK-13789&lt;/a&gt;] -         Transactional Id Generation fails due to user code impacting formatting string
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13806&#39;&gt;FLINK-13806&lt;/a&gt;] -         Metric Fetcher floods the JM log with errors when TM is lost
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13806&quot;&gt;FLINK-13806&lt;/a&gt;] -         Metric Fetcher floods the JM log with errors when TM is lost
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13807&#39;&gt;FLINK-13807&lt;/a&gt;] -         Flink-avro unit tests fails if the character encoding in the environment is not default to UTF-8
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13807&quot;&gt;FLINK-13807&lt;/a&gt;] -         Flink-avro unit tests fails if the character encoding in the environment is not default to UTF-8
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-13897&#39;&gt;FLINK-13897&lt;/a&gt;] -         OSS FS NOTICE file is placed in wrong directory
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-13897&quot;&gt;FLINK-13897&lt;/a&gt;] -         OSS FS NOTICE file is placed in wrong directory
 &lt;/li&gt;
 &lt;/ul&gt;
 
 &lt;h2&gt;        Improvement
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12578&#39;&gt;FLINK-12578&lt;/a&gt;] -         Use secure URLs for Maven repositories
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12578&quot;&gt;FLINK-12578&lt;/a&gt;] -         Use secure URLs for Maven repositories
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12741&#39;&gt;FLINK-12741&lt;/a&gt;] -         Update docs about Kafka producer fault tolerance guarantees
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12741&quot;&gt;FLINK-12741&lt;/a&gt;] -         Update docs about Kafka producer fault tolerance guarantees
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12749&#39;&gt;FLINK-12749&lt;/a&gt;] -         Add Flink Operations Playground documentation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12749&quot;&gt;FLINK-12749&lt;/a&gt;] -         Add Flink Operations Playground documentation
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Wed, 11 Sep 2019 12:00:00 +0000</pubDate>
+<pubDate>Wed, 11 Sep 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/news/2019/09/11/release-1.8.2.html</link>
 <guid isPermaLink="true">/news/2019/09/11/release-1.8.2.html</guid>
 </item>
 
 <item>
 <title>Flink Community Update - September&#39;19</title>
-<description>This has been an exciting, fast-paced year for the Apache Flink community. But with over 10k messages across the mailing lists, 3k Jira tickets and 2k pull requests, it is not easy to keep up with the latest state of the project. Plus everything happening around it. With that in mind, we want to bring back regular community updates to the Flink blog.
-
-The first post in the series takes you on an little detour across the year, to freshen up and make sure you&#39;re all up to date.
+<description>&lt;p&gt;This has been an exciting, fast-paced year for the Apache Flink community. But with over 10k messages across the mailing lists, 3k Jira tickets and 2k pull requests, it is not easy to keep up with the latest state of the project. Plus everything happening around it. With that in mind, we want to bring back regular community updates to the Flink blog.&lt;/p&gt;
+
+&lt;p&gt;The first post in the series takes you on a little detour across the year, to freshen up and make sure you’re all up to date.&lt;/p&gt;
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#the-year-so-far-in-flink&quot; id=&quot;markdown-toc-the-year-so-far-in-flink&quot;&gt;The Year (so far) in Flink&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#integration-of-the-chinese-speaking-community&quot; id=&quot;markdown-toc-integration-of-the-chinese-speaking-community&quot;&gt;Integration of the Chinese-speaking community&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#improving-flinks-documentation&quot; id=&quot;markdown-toc-improving-flinks-documentation&quot;&gt;Improving Flink’s Documentation&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#adjusting-the-contribution-process-and-experience&quot; id=&quot;markdown-toc-adjusting-the-contribution-process-and-experience&quot;&gt;Adjusting the Contribution Process and Experience&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#new-committers-and-pmc-members&quot; id=&quot;markdown-toc-new-committers-and-pmc-members&quot;&gt;New Committers and PMC Members&lt;/a&gt;        &lt;ul&gt;
+          &lt;li&gt;&lt;a href=&quot;#new-pmc-members&quot; id=&quot;markdown-toc-new-pmc-members&quot;&gt;New PMC Members&lt;/a&gt;&lt;/li&gt;
+          &lt;li&gt;&lt;a href=&quot;#new-committers&quot; id=&quot;markdown-toc-new-committers&quot;&gt;New Committers&lt;/a&gt;&lt;/li&gt;
+        &lt;/ul&gt;
+      &lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#the-bigger-picture&quot; id=&quot;markdown-toc-the-bigger-picture&quot;&gt;The Bigger Picture&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#upcoming-events&quot; id=&quot;markdown-toc-upcoming-events&quot;&gt;Upcoming Events&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#north-america&quot; id=&quot;markdown-toc-north-america&quot;&gt;North America&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#europe&quot; id=&quot;markdown-toc-europe&quot;&gt;Europe&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#asia&quot; id=&quot;markdown-toc-asia&quot;&gt;Asia&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-{% toc %}
+&lt;/div&gt;
 
-# The Year (so far) in Flink
+&lt;h1 id=&quot;the-year-so-far-in-flink&quot;&gt;The Year (so far) in Flink&lt;/h1&gt;
 
-Two major versions were released this year: [Flink 1.8](https://flink.apache.org/news/2019/04/09/release-1.8.0.html) and [Flink 1.9](https://flink.apache.org/news/2019/08/22/release-1.9.0.html); paving the way for the goal of making Flink the first framework to seamlessly support stream and batch processing with a single, unified runtime. The [contribution of Blink](https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html) to Apache Flink was key in accelerating the pa [...]
+&lt;p&gt;Two major versions were released this year: &lt;a href=&quot;https://flink.apache.org/news/2019/04/09/release-1.8.0.html&quot;&gt;Flink 1.8&lt;/a&gt; and &lt;a href=&quot;https://flink.apache.org/news/2019/08/22/release-1.9.0.html&quot;&gt;Flink 1.9&lt;/a&gt;; paving the way for the goal of making Flink the first framework to seamlessly support stream and batch processing with a single, unified runtime. The &lt;a href=&quot;https://flink.apache.org/news/2019/02/13/unified-batch- [...]
 
-The 1.9 release was the result of the **biggest community effort the project has experienced so far**, with the number of contributors soaring to 190 (see [The Bigger Picture](#the-bigger-picture)). For a quick overview of the upcoming work for Flink 1.10 (and beyond), have a look at the updated [roadmap](https://flink.apache.org/roadmap.html)!
+&lt;p&gt;The 1.9 release was the result of the &lt;strong&gt;biggest community effort the project has experienced so far&lt;/strong&gt;, with the number of contributors soaring to 190 (see &lt;a href=&quot;#the-bigger-picture&quot;&gt;The Bigger Picture&lt;/a&gt;). For a quick overview of the upcoming work for Flink 1.10 (and beyond), have a look at the updated &lt;a href=&quot;https://flink.apache.org/roadmap.html&quot;&gt;roadmap&lt;/a&gt;!&lt;/p&gt;
 
-## Integration of the Chinese-speaking community
+&lt;h2 id=&quot;integration-of-the-chinese-speaking-community&quot;&gt;Integration of the Chinese-speaking community&lt;/h2&gt;
 
-As the number of Chinese-speaking Flink users rapidly grows, the community is working on translating resources and creating dedicated spaces for discussion to invite and include these users in the wider Flink community. Part of the ongoing work is described in [FLIP-35](https://cwiki.apache.org/confluence/display/FLINK/FLIP-35%3A+Support+Chinese+Documents+and+Website) and has resulted in:
+&lt;p&gt;As the number of Chinese-speaking Flink users rapidly grows, the community is working on translating resources and creating dedicated spaces for discussion to invite and include these users in the wider Flink community. Part of the ongoing work is described in &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-35%3A+Support+Chinese+Documents+and+Website&quot;&gt;FLIP-35&lt;/a&gt; and has resulted in:&lt;/p&gt;
 
-* A new user mailing list (user-zh@f.a.o) dedicated to Chinese-speakers.
+&lt;ul&gt;
+  &lt;li&gt;A new user mailing list (user-zh@f.a.o) dedicated to Chinese-speakers.&lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* A Chinese translation of the Apache Flink [website](https://flink.apache.org/zh/) and [documentation](https://ci.apache.org/projects/flink/flink-docs-master/zh/).
+&lt;ul&gt;
+  &lt;li&gt;A Chinese translation of the Apache Flink &lt;a href=&quot;https://flink.apache.org/zh/&quot;&gt;website&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/zh/&quot;&gt;documentation&lt;/a&gt;.&lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* Multiple meetups organized all over China, with the biggest one reaching a whopping number of 500+ participants. Some of these meetups were also organized in collaboration with communities from other projects, like Apache Pulsar and Apache Kafka.
+&lt;ul&gt;
+  &lt;li&gt;Multiple meetups organized all over China, with the biggest one reaching a whopping number of 500+ participants. Some of these meetups were also organized in collaboration with communities from other projects, like Apache Pulsar and Apache Kafka.&lt;/li&gt;
+&lt;/ul&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_3.png&quot; width=&quot;800px&quot; alt=&quot;China Meetup&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_3.png&quot; width=&quot;800px&quot; alt=&quot;China Meetup&quot; /&gt;
 &lt;/center&gt;
 
-In case you&#39;re interested in knowing more about this work in progress, Robert Metzger and Fabian Hueske will be diving into &quot;Inviting Apache Flink&#39;s Chinese User Community&quot; at the upcoming ApacheCon Europe 2019 (see [Upcoming Flink Community Events](#upcoming-flink-community-events)).
+&lt;p&gt;In case you’re interested in knowing more about this work in progress, Robert Metzger and Fabian Hueske will be diving into “Inviting Apache Flink’s Chinese User Community” at the upcoming ApacheCon Europe 2019 (see &lt;a href=&quot;#upcoming-events&quot;&gt;Upcoming Events&lt;/a&gt;).&lt;/p&gt;
 
-## Improving Flink&#39;s Documentation
+&lt;h2 id=&quot;improving-flinks-documentation&quot;&gt;Improving Flink’s Documentation&lt;/h2&gt;
 
-Besides the translation effort, the community has also been working quite hard on a **Flink docs overhaul**. The main goals are to:
+&lt;p&gt;Besides the translation effort, the community has also been working quite hard on a &lt;strong&gt;Flink docs overhaul&lt;/strong&gt;. The main goals are to:&lt;/p&gt;
 
- * Organize and clean-up the structure of the docs;
- &lt;p&gt;&lt;/p&gt;
- * Align the content with the overall direction of the project;
- &lt;p&gt;&lt;/p&gt;
- * Improve the _getting-started_ material and make the content more accessible to different levels of Flink experience. 
+&lt;ul&gt;
+  &lt;li&gt;Organize and clean-up the structure of the docs;&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;&lt;/p&gt;
+&lt;ul&gt;
+  &lt;li&gt;Align the content with the overall direction of the project;&lt;/li&gt;
+&lt;/ul&gt;
+&lt;p&gt;&lt;/p&gt;
+&lt;ul&gt;
+  &lt;li&gt;Improve the &lt;em&gt;getting-started&lt;/em&gt; material and make the content more accessible to different levels of Flink experience.&lt;/li&gt;
+&lt;/ul&gt;
 
-Given that there has been some confusion in the past regarding unclear definition of core Flink concepts, one of the first completed efforts was to introduce a [Glossary](https://ci.apache.org/projects/flink/flink-docs-release-1.9/concepts/glossary.html#glossary) in the docs. To get up to speed with the roadmap for the remainder efforts, you can refer to [FLIP-42](https://cwiki.apache.org/confluence/display/FLINK/FLIP-42%3A+Rework+Flink+Documentation) and the corresponding [umbrella Jira [...]
+&lt;p&gt;Given that there has been some confusion in the past regarding unclear definition of core Flink concepts, one of the first completed efforts was to introduce a &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/concepts/glossary.html#glossary&quot;&gt;Glossary&lt;/a&gt; in the docs. To get up to speed with the roadmap for the remainder efforts, you can refer to &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-42%3A+Rework+Flink+Do [...]
 
-## Adjusting the Contribution Process and Experience
+&lt;h2 id=&quot;adjusting-the-contribution-process-and-experience&quot;&gt;Adjusting the Contribution Process and Experience&lt;/h2&gt;
 
-The [guidelines](https://flink.apache.org/contributing/how-to-contribute.html) to contribute to Apache Flink have been reworked on the website, in an effort to lower the entry barrier for new contributors and reduce the overall friction in the contribution process. In addition, the Flink community discussed and adopted [bylaws](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120731026) to help the community collaborate and coordinate more smoothly.
+&lt;p&gt;The &lt;a href=&quot;https://flink.apache.org/contributing/how-to-contribute.html&quot;&gt;guidelines&lt;/a&gt; to contribute to Apache Flink have been reworked on the website, in an effort to lower the entry barrier for new contributors and reduce the overall friction in the contribution process. In addition, the Flink community discussed and adopted &lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=120731026&quot;&gt;bylaws&lt;/a&gt; to help the [...]
 
-For code contributors, a [Code Style and Quality Guide](https://flink.apache.org/contributing/code-style-and-quality-preamble.html) that captures the expected standards for contributions was also added to the &quot;Contributing&quot; section of the Flink website.
+&lt;p&gt;For code contributors, a &lt;a href=&quot;https://flink.apache.org/contributing/code-style-and-quality-preamble.html&quot;&gt;Code Style and Quality Guide&lt;/a&gt; that captures the expected standards for contributions was also added to the “Contributing” section of the Flink website.&lt;/p&gt;
 
-It&#39;s important to stress that **contributions are not restricted to code**. Non-code contributions such as mailing list support, documentation work or organization of community events are equally as important to the development of the project and highly encouraged.
+&lt;p&gt;It’s important to stress that &lt;strong&gt;contributions are not restricted to code&lt;/strong&gt;. Non-code contributions such as mailing list support, documentation work or organization of community events are equally as important to the development of the project and highly encouraged.&lt;/p&gt;
 
-## New Committers and PMC Members
+&lt;h2 id=&quot;new-committers-and-pmc-members&quot;&gt;New Committers and PMC Members&lt;/h2&gt;
 
-The Apache Flink community has welcomed **5 new Committers** and **4 PMC (Project Management Committee) Members** in 2019, so far:
+&lt;p&gt;The Apache Flink community has welcomed &lt;strong&gt;5 new Committers&lt;/strong&gt; and &lt;strong&gt;4 PMC (Project Management Committee) Members&lt;/strong&gt; in 2019, so far:&lt;/p&gt;
 
-### New PMC Members
-	Jincheng Sun, Kete (Kurt) Young, Kostas Kloudas, Thomas Weise
+&lt;h3 id=&quot;new-pmc-members&quot;&gt;New PMC Members&lt;/h3&gt;
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;Jincheng Sun, Kete (Kurt) Young, Kostas Kloudas, Thomas Weise
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-### New Committers
-	Andrey Zagrebin, Hequn, Jiangjie (Becket) Qin, Rong Rong, Zhijiang Wang
+&lt;h3 id=&quot;new-committers&quot;&gt;New Committers&lt;/h3&gt;
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;Andrey Zagrebin, Hequn, Jiangjie (Becket) Qin, Rong Rong, Zhijiang Wang
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-Congratulations and thank you for your hardworking commitment to Flink!
+&lt;p&gt;Congratulations and thank you for your hard work and commitment to Flink!&lt;/p&gt;
 
-# The Bigger Picture
+&lt;h1 id=&quot;the-bigger-picture&quot;&gt;The Bigger Picture&lt;/h1&gt;
 
-Flink continues to push the boundaries of (stream) data processing, and the community is proud to see an ever-increasingly diverse set of contributors, users and technologies join the ecosystem. 
+&lt;p&gt;Flink continues to push the boundaries of (stream) data processing, and the community is proud to see an ever more diverse set of contributors, users and technologies join the ecosystem.&lt;/p&gt;
 
-In the timeframe of three releases, the project jumped from **112 to 190 contributors**, also doubling down on the number of requested changes and improvements. To top it off, the Flink GitHub repository recently reached the milestone of **10k stars**, all the way up from the incubation days in 2014.
+&lt;p&gt;In the timeframe of three releases, the project jumped from &lt;strong&gt;112 to 190 contributors&lt;/strong&gt;, also doubling down on the number of requested changes and improvements. To top it off, the Flink GitHub repository recently reached the milestone of &lt;strong&gt;10k stars&lt;/strong&gt;, all the way up from the incubation days in 2014.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_1.png&quot; width=&quot;1000px&quot; alt=&quot;GitHub&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_1.png&quot; width=&quot;1000px&quot; alt=&quot;GitHub&quot; /&gt;
 &lt;/center&gt;
 
-The activity across the user@ and dev@&lt;sup&gt;1&lt;/sup&gt; mailing lists shows a healthy heartbeat, and the gradual ramp up of user-zh@ suggests that this was a well-received community effort. Looking at the numbers for the same period in 2018, the dev@ mailing list has seen the biggest surge in activity, with an average growth of **2.5x in the number of messages and distinct users** — a great reflection of the hyperactive pace of development of the Flink codebase.
+&lt;p&gt;The activity across the user@ and dev@&lt;sup&gt;1&lt;/sup&gt; mailing lists shows a healthy heartbeat, and the gradual ramp up of user-zh@ suggests that this was a well-received community effort. Looking at the numbers for the same period in 2018, the dev@ mailing list has seen the biggest surge in activity, with an average growth of &lt;strong&gt;2.5x in the number of messages and distinct users&lt;/strong&gt; — a great reflection of the hyperactive pace of development of the  [...]
 
-&lt;img style=&quot;float: right;&quot; src=&quot;{{ site.baseurl }}/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_2.png&quot; width=&quot;420px&quot; alt=&quot;Mailing Lists&quot;/&gt;
+&lt;p&gt;&lt;img style=&quot;float: right;&quot; src=&quot;/img/blog/2019-09-05-flink-community-update/2019-09-05-flink-community-update_2.png&quot; width=&quot;420px&quot; alt=&quot;Mailing Lists&quot; /&gt;&lt;/p&gt;
 
-In support of these observations, the report for the financial year of 2019 from the Apache Software Foundation (ASF) features Flink as one of the most thriving open source projects, with mentions for: 
+&lt;p&gt;In support of these observations, the report for the financial year of 2019 from the Apache Software Foundation (ASF) features Flink as one of the most thriving open source projects, with mentions for:&lt;/p&gt;
 
-* Most Active Visits and Downloads
+&lt;ul&gt;
+  &lt;li&gt;Most Active Visits and Downloads&lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* Most Active Sources: Visits
+&lt;ul&gt;
+  &lt;li&gt;Most Active Sources: Visits&lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* Most Active Sources: Clones
+&lt;ul&gt;
+  &lt;li&gt;Most Active Sources: Clones&lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* Top Repositories by Number of Commits
+&lt;ul&gt;
+  &lt;li&gt;Top Repositories by Number of Commits&lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* Top Most Active Apache Mailing Lists (user@ and dev@)
+&lt;ul&gt;
+  &lt;li&gt;Top Most Active Apache Mailing Lists (user@ and dev@)&lt;/li&gt;
+&lt;/ul&gt;
 
-Hats off to our fellows at Apache Beam for an astounding year, too! For more detailed insights, check the [full report](https://s3.amazonaws.com/files-dist/AnnualReports/FY2018%20Annual%20Report.pdf).
+&lt;p&gt;Hats off to our fellows at Apache Beam for an astounding year, too! For more detailed insights, check the &lt;a href=&quot;https://s3.amazonaws.com/files-dist/AnnualReports/FY2018%20Annual%20Report.pdf&quot;&gt;full report&lt;/a&gt;.&lt;/p&gt;
 &lt;p&gt;&lt;/p&gt;
-&lt;sup&gt;1. Excluding messages from &quot;jira@apache.org&quot;.&lt;/sup&gt;
+&lt;p&gt;&lt;sup&gt;1. Excluding messages from “jira@apache.org”.&lt;/sup&gt;&lt;/p&gt;
 
-# Upcoming Events
+&lt;h1 id=&quot;upcoming-events&quot;&gt;Upcoming Events&lt;/h1&gt;
 
-As the conference and meetup season ramps up again, here are some events to keep an eye out for talks about Flink and opportunities to mingle with the wider stream processing community.
+&lt;p&gt;As the conference and meetup season ramps up again, here are some events to keep an eye out for talks about Flink and opportunities to mingle with the wider stream processing community.&lt;/p&gt;
 
-### North America
+&lt;h3 id=&quot;north-america&quot;&gt;North America&lt;/h3&gt;
 
-* [Conference] **[Strata Data Conference 2019](https://conferences.oreilly.com/strata/strata-ny)**, September 23-26, New York, USA
-  &lt;p&gt;&lt;/p&gt;
-* [Meetup] **[Apache Flink Bay Area Meetup](https://www.meetup.com/Bay-Area-Apache-Flink-Meetup/events/262680261/)**, September 24, San Francisco, USA
-  &lt;p&gt;&lt;/p&gt;
-* [Conference] **[Scale By The Bay 2019](https://www.meetup.com/Bay-Area-Apache-Flink-Meetup/events/262680261/)**, November 13-15, San Francisco, USA
-
-### Europe
+&lt;ul&gt;
+  &lt;li&gt;[Conference] &lt;strong&gt;&lt;a href=&quot;https://conferences.oreilly.com/strata/strata-ny&quot;&gt;Strata Data Conference 2019&lt;/a&gt;&lt;/strong&gt;, September 23-26, New York, USA
+    &lt;p&gt;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;[Meetup] &lt;strong&gt;&lt;a href=&quot;https://www.meetup.com/Bay-Area-Apache-Flink-Meetup/events/262680261/&quot;&gt;Apache Flink Bay Area Meetup&lt;/a&gt;&lt;/strong&gt;, September 24, San Francisco, USA
+    &lt;p&gt;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;[Conference] &lt;strong&gt;&lt;a href=&quot;https://www.meetup.com/Bay-Area-Apache-Flink-Meetup/events/262680261/&quot;&gt;Scale By The Bay 2019&lt;/a&gt;&lt;/strong&gt;, November 13-15, San Francisco, USA&lt;/li&gt;
+&lt;/ul&gt;
 
-* [Meetup] **[Apache Flink London Meetup](https://www.meetup.com/Apache-Flink-London-Meetup/events/264123672)**, September 23, London, UK 
-	&lt;p&gt;&lt;/p&gt;
-* [Conference] **[Flink Forward Europe 2019](https://europe-2019.flink-forward.org)**, October 7-9, Berlin, Germany 
-	&lt;p&gt;&lt;/p&gt;
-	* The next edition of Flink Forward Europe is around the corner and the [program](https://europe-2019.flink-forward.org/conference-program) has been announced, featuring 70+ talks as well as panel discussions and interactive &quot;Ask Me Anything&quot; sessions with core Flink committers. If you&#39;re looking to learn more about Flink and share your experience with other community members, there really is [no better place]((https://vimeo.com/296403091)) than Flink Forward!
+&lt;h3 id=&quot;europe&quot;&gt;Europe&lt;/h3&gt;
 
-	* **Note:** if you are a **committer for any Apache project**, you can **get a free ticket** by registering with your Apache email address and using the discount code: *FFEU19-ApacheCommitter*.
+&lt;ul&gt;
+  &lt;li&gt;[Meetup] &lt;strong&gt;&lt;a href=&quot;https://www.meetup.com/Apache-Flink-London-Meetup/events/264123672&quot;&gt;Apache Flink London Meetup&lt;/a&gt;&lt;/strong&gt;, September 23, London, UK
+    &lt;p&gt;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;[Conference] &lt;strong&gt;&lt;a href=&quot;https://europe-2019.flink-forward.org&quot;&gt;Flink Forward Europe 2019&lt;/a&gt;&lt;/strong&gt;, October 7-9, Berlin, Germany
+    &lt;p&gt;&lt;/p&gt;
+    &lt;ul&gt;
+      &lt;li&gt;
+        &lt;p&gt;The next edition of Flink Forward Europe is around the corner and the &lt;a href=&quot;https://europe-2019.flink-forward.org/conference-program&quot;&gt;program&lt;/a&gt; has been announced, featuring 70+ talks as well as panel discussions and interactive “Ask Me Anything” sessions with core Flink committers. If you’re looking to learn more about Flink and share your experience with other community members, there really is &lt;a href=&quot;(https://vimeo.com/296403091)&q [...]
+      &lt;/li&gt;
+      &lt;li&gt;
+        &lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; if you are a &lt;strong&gt;committer for any Apache project&lt;/strong&gt;, you can &lt;strong&gt;get a free ticket&lt;/strong&gt; by registering with your Apache email address and using the discount code: &lt;em&gt;FFEU19-ApacheCommitter&lt;/em&gt;.&lt;/p&gt;
+      &lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* [Conference] **[ApacheCon Berlin 2019](https://aceu19.apachecon.com/)**, October 22-24, Berlin, Germany
+&lt;ul&gt;
+  &lt;li&gt;[Conference] &lt;strong&gt;&lt;a href=&quot;https://aceu19.apachecon.com/&quot;&gt;ApacheCon Berlin 2019&lt;/a&gt;&lt;/strong&gt;, October 22-24, Berlin, Germany&lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* [Conference] **[Data2Day 2019](https://www.data2day.de/)**, October 22-24, Ludwigshafen, Germany
+&lt;ul&gt;
+  &lt;li&gt;[Conference] &lt;strong&gt;&lt;a href=&quot;https://www.data2day.de/&quot;&gt;Data2Day 2019&lt;/a&gt;&lt;/strong&gt;, October 22-24, Ludwigshafen, Germany&lt;/li&gt;
+&lt;/ul&gt;
 &lt;p&gt;&lt;/p&gt;
-* [Conference] **[Big Data Tech Warsaw 2020](https://bigdatatechwarsaw.eu)**, February 7, Warsaw, Poland
-	&lt;p&gt;&lt;/p&gt;
-	* The Call For Presentations (CFP) is now [open](https://bigdatatechwarsaw.eu/cfp/).
+&lt;ul&gt;
+  &lt;li&gt;[Conference] &lt;strong&gt;&lt;a href=&quot;https://bigdatatechwarsaw.eu&quot;&gt;Big Data Tech Warsaw 2020&lt;/a&gt;&lt;/strong&gt;, February 7, Warsaw, Poland
+    &lt;p&gt;&lt;/p&gt;
+    &lt;ul&gt;
+      &lt;li&gt;The Call For Presentations (CFP) is now &lt;a href=&quot;https://bigdatatechwarsaw.eu/cfp/&quot;&gt;open&lt;/a&gt;.&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-### Asia
+&lt;h3 id=&quot;asia&quot;&gt;Asia&lt;/h3&gt;
 
-* [Conference] **[Flink Forward Asia 2019](https://m.aliyun.com/markets/aliyun/developer/ffa2019)**, November 28-30, Beijing, China
-	&lt;p&gt;&lt;/p&gt;
-	* The second edition of Flink Forward Asia is also happening later this year, in Beijing, and the CFP is [open](https://developer.aliyun.com/special/ffa2019) until September 20.
+&lt;ul&gt;
+  &lt;li&gt;[Conference] &lt;strong&gt;&lt;a href=&quot;https://m.aliyun.com/markets/aliyun/developer/ffa2019&quot;&gt;Flink Forward Asia 2019&lt;/a&gt;&lt;/strong&gt;, November 28-30, Beijing, China
+    &lt;p&gt;&lt;/p&gt;
+    &lt;ul&gt;
+      &lt;li&gt;The second edition of Flink Forward Asia is also happening later this year, in Beijing, and the CFP is &lt;a href=&quot;https://developer.aliyun.com/special/ffa2019&quot;&gt;open&lt;/a&gt; until September 20.&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-If you&#39;d like to keep a closer eye on what’s happening in the community, subscribe to the [community mailing list](https://flink.apache.org/community.html#mailing-lists) to get fine-grained weekly updates, upcoming event announcements and more. Also, please reach out if you&#39;re interested in organizing or being part of Flink events in your area!</description>
-<pubDate>Tue, 10 Sep 2019 12:00:00 +0000</pubDate>
+&lt;p&gt;If you’d like to keep a closer eye on what’s happening in the community, subscribe to the &lt;a href=&quot;https://flink.apache.org/community.html#mailing-lists&quot;&gt;community mailing list&lt;/a&gt; to get fine-grained weekly updates, upcoming event announcements and more. Also, please reach out if you’re interested in organizing or being part of Flink events in your area!&lt;/p&gt;
+</description>
+<pubDate>Tue, 10 Sep 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/news/2019/09/10/community-update.html</link>
 <guid isPermaLink="true">/news/2019/09/10/community-update.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.9.0 Release Announcement</title>
-<description>The Apache Flink community is proud to announce the release of Apache Flink
-1.9.0.
+<description>&lt;p&gt;The Apache Flink community is proud to announce the release of Apache Flink
+1.9.0.&lt;/p&gt;
 
-The Apache Flink project&#39;s goal is to develop a stream processing system to
+&lt;p&gt;The Apache Flink project’s goal is to develop a stream processing system to
 unify and power many forms of real-time and offline data processing
 applications as well as event-driven applications. In this release, we have
 made a huge step forward in that effort, by integrating Flink’s stream and
-batch processing capabilities under a single, unified runtime.
+batch processing capabilities under a single, unified runtime.&lt;/p&gt;
 
-Significant features on this path are batch-style recovery for batch jobs and
+&lt;p&gt;Significant features on this path are batch-style recovery for batch jobs and
 a preview of the new Blink-based query engine for Table API and SQL queries.
 We are also excited to announce the availability of the State Processor API,
 which is one of the most frequently requested features and enables users to
 read and write savepoints with Flink DataSet jobs. Finally, Flink 1.9 includes
 a reworked WebUI and previews of Flink’s new Python Table API and its
-integration with the Apache Hive ecosystem.
+integration with the Apache Hive ecosystem.&lt;/p&gt;
 
-This blog post describes all major new features and improvements, important
+&lt;p&gt;This blog post describes all major new features and improvements, important
 changes to be aware of and what to expect moving forward. For more details,
-check the [complete release
-changelog](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;version=12344601).
+check the &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;amp;version=12344601&quot;&gt;complete release
+changelog&lt;/a&gt;.&lt;/p&gt;
 
-The binary distribution and source artifacts for this release are now
-available via the [Downloads](https://flink.apache.org/downloads.html) page of
+&lt;p&gt;The binary distribution and source artifacts for this release are now
+available via the &lt;a href=&quot;https://flink.apache.org/downloads.html&quot;&gt;Downloads&lt;/a&gt; page of
 the Flink project, along with the updated
-[documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.9/).
+&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/&quot;&gt;documentation&lt;/a&gt;.
 Flink 1.9 is API-compatible with previous 1.x releases for APIs annotated with
-the `@Public` annotation.
-
-Please feel encouraged to download the release and share your thoughts with
-the community through the Flink [mailing
-lists](https://flink.apache.org/community.html#mailing-lists) or
-[JIRA](https://issues.apache.org/jira/projects/FLINK/summary). As always,
-feedback is very much appreciated!
-
-
-{% toc %}
-
+the &lt;code&gt;@Public&lt;/code&gt; annotation.&lt;/p&gt;
+
+&lt;p&gt;Please feel encouraged to download the release and share your thoughts with
+the community through the Flink &lt;a href=&quot;https://flink.apache.org/community.html#mailing-lists&quot;&gt;mailing
+lists&lt;/a&gt; or
+&lt;a href=&quot;https://issues.apache.org/jira/projects/FLINK/summary&quot;&gt;JIRA&lt;/a&gt;. As always,
+feedback is very much appreciated!&lt;/p&gt;
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#new-features-and-improvements&quot; id=&quot;markdown-toc-new-features-and-improvements&quot;&gt;New Features and Improvements&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#fine-grained-batch-recovery-flip-1&quot; id=&quot;markdown-toc-fine-grained-batch-recovery-flip-1&quot;&gt;Fine-grained Batch Recovery (FLIP-1)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#state-processor-api-flip-43&quot; id=&quot;markdown-toc-state-processor-api-flip-43&quot;&gt;State Processor API (FLIP-43)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#stop-with-savepoint-flip-34&quot; id=&quot;markdown-toc-stop-with-savepoint-flip-34&quot;&gt;Stop-with-Savepoint (FLIP-34)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#flink-webui-rework&quot; id=&quot;markdown-toc-flink-webui-rework&quot;&gt;Flink WebUI Rework&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#preview-of-the-new-blink-sql-query-processor&quot; id=&quot;markdown-toc-preview-of-the-new-blink-sql-query-processor&quot;&gt;Preview of the new Blink SQL Query Processor&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#preview-of-full-hive-integration-flink-10556&quot; id=&quot;markdown-toc-preview-of-full-hive-integration-flink-10556&quot;&gt;Preview of Full Hive Integration (FLINK-10556)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#preview-of-the-new-python-table-api-flip-38&quot; id=&quot;markdown-toc-preview-of-the-new-python-table-api-flip-38&quot;&gt;Preview of the new Python Table API (FLIP-38)&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#important-changes&quot; id=&quot;markdown-toc-important-changes&quot;&gt;Important Changes&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#release-notes&quot; id=&quot;markdown-toc-release-notes&quot;&gt;Release Notes&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#list-of-contributors&quot; id=&quot;markdown-toc-list-of-contributors&quot;&gt;List of Contributors&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
-## New Features and Improvements
+&lt;/div&gt;
 
+&lt;h2 id=&quot;new-features-and-improvements&quot;&gt;New Features and Improvements&lt;/h2&gt;
 
-### Fine-grained Batch Recovery (FLIP-1)
+&lt;h3 id=&quot;fine-grained-batch-recovery-flip-1&quot;&gt;Fine-grained Batch Recovery (FLIP-1)&lt;/h3&gt;
 
-The time to recover a batch (DataSet, Table API and SQL) job from a task
+&lt;p&gt;The time to recover a batch (DataSet, Table API and SQL) job from a task
 failure was significantly reduced. Until Flink 1.9, task failures in batch
 jobs were recovered by canceling all tasks and restarting the whole job, i.e.,
 the job was started from scratch and all progress was voided. With this
 release, Flink can be configured to limit the recovery to only those tasks
-that are in the same **failover region**. A failover region is the set of
+that are in the same &lt;strong&gt;failover region&lt;/strong&gt;. A failover region is the set of
 tasks that are connected via pipelined data exchanges. Hence, the
 batch-shuffle connections of a job define the boundaries of its failover
 regions. More details are available in
-[FLIP-1](https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures).
-![alt_text]({{site.baseurl}}/img/blog/release-19-flip1.png &quot;Fine-grained Batch
-Recovery&quot;) 
+&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-1+%3A+Fine+Grained+Recovery+from+Task+Failures&quot;&gt;FLIP-1&lt;/a&gt;.
+&lt;img src=&quot;/img/blog/release-19-flip1.png&quot; alt=&quot;alt_text&quot; title=&quot;Fine-grained Batch
+Recovery&quot; /&gt;&lt;/p&gt;
 
-To use this new failover strategy, you need to do the following
-settings:
+&lt;p&gt;To use this new failover strategy, you need to apply the following
+settings:&lt;/p&gt;
 
- * Make sure you have the entry `jobmanager.execution.failover-strategy:
-   region` in your `flink-conf.yaml`.
+&lt;ul&gt;
+  &lt;li&gt;Make sure you have the entry &lt;code&gt;jobmanager.execution.failover-strategy:
+region&lt;/code&gt; in your &lt;code&gt;flink-conf.yaml&lt;/code&gt;.&lt;/li&gt;
+&lt;/ul&gt;
 
-**Note:** The configuration of the 1.9 distribution has that entry by default,
+&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The configuration of the 1.9 distribution has that entry by default,
   but when reusing a configuration file from previous setups, you have to add
-  it manually.
+  it manually.&lt;/p&gt;
 
-Moreover, you need to set the `ExecutionMode` of batch jobs in the
-`ExecutionConfig` to `BATCH` to configure that data shuffles are not pipelined
-and jobs have more than one failover region.
+&lt;p&gt;Moreover, you need to set the &lt;code&gt;ExecutionMode&lt;/code&gt; of batch jobs in the
+&lt;code&gt;ExecutionConfig&lt;/code&gt; to &lt;code&gt;BATCH&lt;/code&gt; so that data shuffles are not pipelined
+and jobs have more than one failover region.&lt;/p&gt;
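+
+&lt;p&gt;As a rough, illustrative sketch, the second setting comes down to a single call on the &lt;code&gt;ExecutionConfig&lt;/code&gt; of a DataSet program (the surrounding program setup below is made up for illustration):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;import org.apache.flink.api.common.ExecutionMode;
+import org.apache.flink.api.java.ExecutionEnvironment;
+
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+
+// request batch (blocking) data exchanges so that the job is split into several failover regions
+env.getConfig().setExecutionMode(ExecutionMode.BATCH);
+
+// ... define and execute the DataSet program as usual ...
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;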
 
-The &quot;Region&quot; failover strategy also improves the recovery of “embarrassingly
+&lt;p&gt;The “Region” failover strategy also improves the recovery of “embarrassingly
 parallel” streaming jobs, i.e., jobs without any shuffle like keyBy() or
 rebalance. When such a job is recovered, only the tasks of the affected
 pipeline (failover region) are restarted. For all other streaming jobs, the
-recovery behavior is the same as in prior Flink versions.
+recovery behavior is the same as in prior Flink versions.&lt;/p&gt;
 
+&lt;h3 id=&quot;state-processor-api-flip-43&quot;&gt;State Processor API (FLIP-43)&lt;/h3&gt;
 
-### State Processor API (FLIP-43)
-
-Up to Flink 1.9, accessing the state of a job from the outside was limited to
-the (still) experimental [Queryable
-State](https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/queryable_state.html).
+&lt;p&gt;Up to Flink 1.9, accessing the state of a job from the outside was limited to
+the (still) experimental &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/queryable_state.html&quot;&gt;Queryable
+State&lt;/a&gt;.
 This release introduces a new, powerful library to read, write and modify
-state snapshots using the batch DataSet API. In practice, this means:
-
- * Flink job state can be bootstrapped by reading data from external systems,
-   such as external databases, and converting it into a savepoint.
- * State in savepoints can be queried using any of Flink’s batch APIs
-   (DataSet, Table, SQL), for example to analyze relevant state patterns or
-   check for discrepancies in state that can support application auditing or
-   troubleshooting.
- * The schema of state in savepoints can be migrated offline, compared to the
-   previous approach requiring online migration on schema access.
- * Invalid data in savepoints can be identified and corrected.
-
-The new State Processor API covers all variations of snapshots: savepoints,
-full checkpoints and incremental checkpoints. More details are available in
-[FLIP-43](https://cwiki.apache.org/confluence/display/FLINK/FLIP-43%3A+State+Processor+API)
+state snapshots using the batch DataSet API. In practice, this means:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;Flink job state can be bootstrapped by reading data from external systems,
+such as external databases, and converting it into a savepoint.&lt;/li&gt;
+  &lt;li&gt;State in savepoints can be queried using any of Flink’s batch APIs
+(DataSet, Table, SQL), for example to analyze relevant state patterns or
+check for discrepancies in state that can support application auditing or
+troubleshooting.&lt;/li&gt;
+  &lt;li&gt;The schema of state in savepoints can be migrated offline, compared to the
+previous approach requiring online migration on schema access.&lt;/li&gt;
+  &lt;li&gt;Invalid data in savepoints can be identified and corrected.&lt;/li&gt;
+&lt;/ul&gt;
 
+&lt;p&gt;The new State Processor API covers all variations of snapshots: savepoints,
+full checkpoints and incremental checkpoints. More details are available in
+&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-43%3A+State+Processor+API&quot;&gt;FLIP-43&lt;/a&gt;.&lt;/p&gt;
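+
+&lt;p&gt;As a rough, illustrative sketch of what reading state with the new library can look like (the savepoint path, operator uid and state name below are made-up placeholders):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.DataSet;
+import org.apache.flink.api.java.ExecutionEnvironment;
+import org.apache.flink.runtime.state.memory.MemoryStateBackend;
+import org.apache.flink.state.api.ExistingSavepoint;
+import org.apache.flink.state.api.Savepoint;
+
+ExecutionEnvironment bEnv = ExecutionEnvironment.getExecutionEnvironment();
+
+// load an existing savepoint (the path is a placeholder)
+ExistingSavepoint savepoint =
+    Savepoint.load(bEnv, "hdfs:///flink/savepoints/savepoint-123", new MemoryStateBackend());
+
+// read an operator's list state as a regular DataSet (uid and state name are placeholders)
+DataSet&amp;lt;String&amp;gt; state =
+    savepoint.readListState("my-operator-uid", "my-state-name", Types.STRING);
+
+state.print();
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;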
 
-### Stop-with-Savepoint (FLIP-34)
+&lt;h3 id=&quot;stop-with-savepoint-flip-34&quot;&gt;Stop-with-Savepoint (FLIP-34)&lt;/h3&gt;
 
-[Cancelling with a
-savepoint](https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html#operations)
+&lt;p&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/ops/state/savepoints.html#operations&quot;&gt;Cancelling with a
+savepoint&lt;/a&gt;
 is a common operation for stopping/restarting, forking or updating Flink jobs.
 However, the existing implementation did not guarantee output persistence to
 external storage systems for exactly-once sinks. To improve the end-to-end
-semantics when stopping a job, Flink 1.9 introduces a new `SUSPEND` mode to
+semantics when stopping a job, Flink 1.9 introduces a new &lt;code&gt;SUSPEND&lt;/code&gt; mode to
 stop a job with a savepoint that is consistent with the emitted data.
-You can suspend a job with Flink’s CLI client as follows:
-
-```
-bin/flink stop -p [:targetDirectory] :jobId
-```
-
-The final job state is set to `FINISHED` on success, allowing
-users to detect failures of the requested operation. 
+You can suspend a job with Flink’s CLI client as follows:&lt;/p&gt;
 
-More details are available in
-[FLIP-34](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103090212)
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;bin/flink stop -p [:targetDirectory] :jobId
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
+&lt;p&gt;The final job state is set to &lt;code&gt;FINISHED&lt;/code&gt; on success, allowing
+users to detect failures of the requested operation.&lt;/p&gt;
 
+&lt;p&gt;More details are available in
+&lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=103090212&quot;&gt;FLIP-34&lt;/a&gt;.&lt;/p&gt;
 
-### Flink WebUI Rework
+&lt;h3 id=&quot;flink-webui-rework&quot;&gt;Flink WebUI Rework&lt;/h3&gt;
 
-After a
-[discussion](http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Change-underlying-Frontend-Architecture-for-Flink-Web-Dashboard-td24902.html)
+&lt;p&gt;After a
+&lt;a href=&quot;http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/DISCUSS-Change-underlying-Frontend-Architecture-for-Flink-Web-Dashboard-td24902.html&quot;&gt;discussion&lt;/a&gt;
 about modernizing the internals of Flink’s WebUI, this component was
 reconstructed using the latest stable version of Angular — basically, a bump
 from Angular 1.x to 7.x. The redesigned version is the default in 1.9.0,
-however there is a link to switch to the old WebUI.
+however there is a link to switch to the old WebUI.&lt;/p&gt;
 
-&lt;div class=&quot;row&quot;&gt; &lt;div class=&quot;col-sm-6&quot;&gt; &lt;span&gt;&lt;img class=&quot;thumbnail&quot;
-    src=&quot;{{site.baseurl}}/img/blog/release-19-web1.png&quot; /&gt;&lt;/span&gt; &lt;/div&gt; &lt;div
-    class=&quot;col-sm-6&quot;&gt; &lt;span&gt;&lt;img class=&quot;thumbnail&quot;
-    src=&quot;{{site.baseurl}}/img/blog/release-19-web2.png&quot; /&gt;&lt;/span&gt; &lt;/div&gt;
+&lt;div class=&quot;row&quot;&gt; &lt;div class=&quot;col-sm-6&quot;&gt; &lt;span&gt;&lt;img class=&quot;thumbnail&quot; src=&quot;/img/blog/release-19-web1.png&quot; /&gt;&lt;/span&gt; &lt;/div&gt; &lt;div class=&quot;col-sm-6&quot;&gt; &lt;span&gt;&lt;img class=&quot;thumbnail&quot; src=&quot;/img/blog/release-19-web2.png&quot; /&gt;&lt;/span&gt; &lt;/div&gt;
     &lt;/div&gt;
 
-**Note:** Moving forward, feature parity for the old version of the WebUI 
-will not be guaranteed.
+&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; Moving forward, feature parity for the old version of the WebUI 
+will not be guaranteed.&lt;/p&gt;
 
+&lt;h3 id=&quot;preview-of-the-new-blink-sql-query-processor&quot;&gt;Preview of the new Blink SQL Query Processor&lt;/h3&gt;
 
-### Preview of the new Blink SQL Query Processor
-
-Following the [donation of
-Blink]({{site.baseurl}}/news/2019/02/13/unified-batch-streaming-blink.html) to
+&lt;p&gt;Following the &lt;a href=&quot;/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;donation of
+Blink&lt;/a&gt; to
 Apache Flink, the community worked on integrating Blink’s query optimizer and
 runtime for the Table API and SQL. As a first step, we refactored the
-monolithic `flink-table` module into smaller modules
-([FLIP-32](https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions)).
+monolithic &lt;code&gt;flink-table&lt;/code&gt; module into smaller modules
+(&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions&quot;&gt;FLIP-32&lt;/a&gt;).
 This resulted in a clear separation of and well-defined interfaces between the
-Java and Scala API modules and the optimizer and runtime modules.
+Java and Scala API modules and the optimizer and runtime modules.&lt;/p&gt;
 
-&lt;span&gt;&lt;img style=&quot;width:50%&quot;
-src=&quot;{{site.baseurl}}/img/blog/release-19-stack.png&quot; /&gt;&lt;/span&gt;
+&lt;p&gt;&lt;span&gt;&lt;img style=&quot;width:50%&quot; src=&quot;/img/blog/release-19-stack.png&quot; /&gt;&lt;/span&gt;&lt;/p&gt;
 
-Next, we extended Blink’s planner to implement the new optimizer interface
+&lt;p&gt;Next, we extended Blink’s planner to implement the new optimizer interface
 such that there are now two pluggable query processors to execute Table API
 and SQL statements: the pre-1.9 Flink processor and the new Blink-based query
 processor. The Blink-based query processor offers better SQL coverage (full TPC-H
@@ -2975,161 +3030,165 @@ code-generation, and tuned operator implementations.
 The Blink-based query processor also provides a more powerful streaming runner,
 with some new features (e.g. dimension table join, TopN, deduplication) and 
 optimizations to solve data-skew in aggregation and more useful built-in
-functions.
+functions.&lt;/p&gt;
 
-**Note:** The semantics and set of supported operations of the query
-processors are mostly, but not fully aligned.
+&lt;p&gt;&lt;strong&gt;Note:&lt;/strong&gt; The semantics and set of supported operations of the query
+processors are mostly, but not fully aligned.&lt;/p&gt;
 
-However, the integration of Blink’s query processor is not fully completed
+&lt;p&gt;However, the integration of Blink’s query processor is not fully completed
 yet. Therefore, the pre-1.9 Flink processor is still the default processor in
 Flink 1.9 and recommended for production settings. You can enable the Blink
-processor by configuring it via the `EnvironmentSettings` when creating a
-`TableEnvironment`. The selected processor must be on the classpath of the
+processor by configuring it via the &lt;code&gt;EnvironmentSettings&lt;/code&gt; when creating a
+&lt;code&gt;TableEnvironment&lt;/code&gt;. The selected processor must be on the classpath of the
 executing Java process. For cluster setups, both query processors are
 automatically loaded with the default configuration. When running a query from
-your IDE you need to explicitly [add a planner
-dependency](https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/#table-program-dependencies)
-to your project.
-
-
-#### **Other Improvements to the Table API and SQL**
-
-Besides the exciting progress around the Blink planner, the community worked
-on a whole set of other improvements to these interfaces, including:
-
- * **Scala-free Table API and SQL for Java users
-   ([FLIP-32](https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions))**
-
-   As part of the refactoring and splitting of the flink-table module, two
-   separate API modules for Java and Scala were created. For Scala users,
-   nothing really changes, but Java users can use the Table API and/or SQL now
-   without pulling in a Scala dependency.
-
- * **Rework of the Table API Type System**
-   **([FLIP-37](https://cwiki.apache.org/confluence/display/FLINK/FLIP-37%3A+Rework+of+the+Table+API+Type+System))**
-
-   The community implemented a [new data type
-   system](https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/types.html#data-types)
-   to detach the Table API from Flink’s
-   [TypeInformation](https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/types_serialization.html#flinks-typeinformation-class)
-   class and improve its compliance with the SQL standard. This is still a
-   work in progress and expected to be completed in the next release. In
-   Flink 1.9, UDFs are―among other things―not ported to the new type system
-   yet.
-
- * **Multi-column and Multi-row Transformations for Table API**
-   **([FLIP-29](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739))**
-
-   The functionality of the Table API was extended with a set of
-   transformations that support multi-row and/or multi-column inputs and
-   outputs. These transformations significantly ease the implementation of
-   processing logic that would be cumbersome to implement with relational
-   operators.
-
- * **New, Unified Catalog APIs**
-   **([FLIP-30](https://cwiki.apache.org/confluence/display/FLINK/FLIP-30%3A+Unified+Catalog+APIs))**
-
-   We reworked the catalog APIs to store metadata and unified the handling of
-   internal and external catalogs. This effort was mainly initiated as a
-   prerequisite for the Hive integration (see below), but improves the overall
-   convenience of managing catalog metadata in Flink. Besides improving the
-   catalog interfaces, we also extended their functionality. Previously table
-   definitions for Table API or SQL queries were volatile. With Flink 1.9, the
-   metadata of tables which are registered with a SQL DDL statement can be
-   persisted in a catalog. This means you can add a table that is backed by a
-   Kafka topic to a Metastore catalog and from then on query this table
-   whenever your catalog is connected to Metastore.
-
- * **DDL Support in the SQL API
-   ([FLINK-10232](https://issues.apache.org/jira/browse/FLINK-10232))**
-
-   Up to this point, Flink SQL only supported DML statements (e.g. `SELECT`,
-   `INSERT`). External tables (table sources and sinks) had to be registered
-   via Java/Scala code or configuration files. For 1.9, we added support for
-   SQL DDL statements to register and remove tables and views (`CREATE TABLE,
-   DROP TABLE)`. However, we did not add
-   stream-specific syntax extensions to define timestamp extraction and
-   watermark generation, yet. Full support for streaming use cases is planned
-   for the next release.
-
-
-### Preview of Full Hive Integration (FLINK-10556)
-
-Apache Hive is widely used in Hadoop’s ecosystem to store and query large
+your IDE you need to explicitly &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/#table-program-dependencies&quot;&gt;add a planner
+dependency&lt;/a&gt;
+to your project.&lt;/p&gt;
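+
+&lt;p&gt;A minimal, illustrative sketch of opting into the Blink-based processor could look as follows (shown here for streaming mode; without an explicit choice the pre-1.9 processor is used):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+// opt into the Blink-based query processor (streaming mode in this example)
+EnvironmentSettings settings = EnvironmentSettings.newInstance()
+    .useBlinkPlanner()
+    .inStreamingMode()
+    .build();
+
+TableEnvironment tableEnv = TableEnvironment.create(settings);
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;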
+
+&lt;h4 id=&quot;other-improvements-to-the-table-api-and-sql&quot;&gt;&lt;strong&gt;Other Improvements to the Table API and SQL&lt;/strong&gt;&lt;/h4&gt;
+
+&lt;p&gt;Besides the exciting progress around the Blink planner, the community worked
+on a whole set of other improvements to these interfaces, including:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Scala-free Table API and SQL for Java users
+(&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions&quot;&gt;FLIP-32&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+    &lt;p&gt;As part of the refactoring and splitting of the flink-table module, two
+separate API modules for Java and Scala were created. For Scala users,
+nothing really changes, but Java users can use the Table API and/or SQL now
+without pulling in a Scala dependency.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Rework of the Table API Type System&lt;/strong&gt;
+&lt;strong&gt;(&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-37%3A+Rework+of+the+Table+API+Type+System&quot;&gt;FLIP-37&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+    &lt;p&gt;The community implemented a &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/table/types.html#data-types&quot;&gt;new data type
+system&lt;/a&gt;
+to detach the Table API from Flink’s
+&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/dev/types_serialization.html#flinks-typeinformation-class&quot;&gt;TypeInformation&lt;/a&gt;
+class and improve its compliance with the SQL standard. This is still a
+work in progress and expected to be completed in the next release. In
+Flink 1.9, UDFs are―among other things―not ported to the new type system
+yet.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Multi-column and Multi-row Transformations for Table API&lt;/strong&gt;
+&lt;strong&gt;(&lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=97552739&quot;&gt;FLIP-29&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+    &lt;p&gt;The functionality of the Table API was extended with a set of
+transformations that support multi-row and/or multi-column inputs and
+outputs. These transformations significantly ease the implementation of
+processing logic that would be cumbersome to implement with relational
+operators.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;New, Unified Catalog APIs&lt;/strong&gt;
+&lt;strong&gt;(&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-30%3A+Unified+Catalog+APIs&quot;&gt;FLIP-30&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+    &lt;p&gt;We reworked the catalog APIs to store metadata and unified the handling of
+internal and external catalogs. This effort was mainly initiated as a
+prerequisite for the Hive integration (see below), but improves the overall
+convenience of managing catalog metadata in Flink. Besides improving the
+catalog interfaces, we also extended their functionality. Previously table
+definitions for Table API or SQL queries were volatile. With Flink 1.9, the
+metadata of tables which are registered with a SQL DDL statement can be
+persisted in a catalog. This means you can add a table that is backed by a
+Kafka topic to a Metastore catalog and from then on query this table
+whenever your catalog is connected to Metastore.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;DDL Support in the SQL API
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10232&quot;&gt;FLINK-10232&lt;/a&gt;)&lt;/strong&gt;&lt;/p&gt;
+
+    &lt;p&gt;Up to this point, Flink SQL only supported DML statements (e.g. &lt;code&gt;SELECT&lt;/code&gt;,
+&lt;code&gt;INSERT&lt;/code&gt;). External tables (table sources and sinks) had to be registered
+via Java/Scala code or configuration files. For 1.9, we added support for
+SQL DDL statements to register and remove tables and views (&lt;code&gt;CREATE TABLE&lt;/code&gt;,
+&lt;code&gt;DROP TABLE&lt;/code&gt;; see the short sketch after this list). However, we did not add
+stream-specific syntax extensions to define timestamp extraction and
+watermark generation, yet. Full support for streaming use cases is planned
+for the next release.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
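+
+&lt;p&gt;As a rough idea of how the new DDL support can be used from a Java program, the
+following is a minimal sketch. The table name, schema, and connector properties are
+made-up placeholders for this example, not a tested configuration.&lt;/p&gt;
+
+&lt;pre&gt;&lt;code&gt;import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.TableEnvironment;
+
+public class DdlSketch {
+    public static void main(String[] args) {
+        EnvironmentSettings settings = EnvironmentSettings.newInstance()
+                .useBlinkPlanner()
+                .inStreamingMode()
+                .build();
+        TableEnvironment tableEnv = TableEnvironment.create(settings);
+
+        // Register a table via a SQL DDL statement; the WITH properties below
+        // are illustrative placeholders.
+        tableEnv.sqlUpdate(
+                &quot;CREATE TABLE clicks (user_name VARCHAR, url VARCHAR) &quot; +
+                &quot;WITH ('connector.type' = 'filesystem', &quot; +
+                &quot;'connector.path' = '/tmp/clicks', 'format.type' = 'csv')&quot;);
+
+        // ... define and run queries against the registered table ...
+
+        // Remove the table definition again.
+        tableEnv.sqlUpdate(&quot;DROP TABLE clicks&quot;);
+    }
+}
+&lt;/code&gt;&lt;/pre&gt;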
+
+&lt;h3 id=&quot;preview-of-full-hive-integration-flink-10556&quot;&gt;Preview of Full Hive Integration (FLINK-10556)&lt;/h3&gt;
+
+&lt;p&gt;Apache Hive is widely used in Hadoop’s ecosystem to store and query large
 amounts of structured data. Besides being a query processor, Hive features a
 catalog called Metastore to manage and organize large datasets. A common
 integration point for query processors is to integrate with Hive’s Metastore
-in order to be able to tap into the data managed by Hive.
+in order to be able to tap into the data managed by Hive.&lt;/p&gt;
 
-Recently, the community started implementing an external catalog for Flink’s
+&lt;p&gt;Recently, the community started implementing an external catalog for Flink’s
 Table API and SQL that connects to Hive’s Metastore. In Flink 1.9, users will
 be able to query and process all data that is stored in Hive. As described
 earlier, you will also be able to persist metadata of Flink tables in Metastore.
 Moreover, the Hive integration includes support to use Hive’s UDFs in Flink
 Table API or SQL queries. More details are available in
-[FLINK-10556](https://issues.apache.org/jira/browse/FLINK-10556).
+&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10556&quot;&gt;FLINK-10556&lt;/a&gt;.&lt;/p&gt;
 
-While, previously, table definitions for Table API or SQL queries were always
+&lt;p&gt;While, previously, table definitions for Table API or SQL queries were always
 volatile, the new catalog connector additionally allows persisting a table in
 Metastore that is created with a SQL DDL statement (see above). This means
 that you connect to Metastore and register a table that is, for example,
 backed by a Kafka topic. From now on, you can query that table whenever your
-catalog is connected to Metastore.
+catalog is connected to Metastore.&lt;/p&gt;
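+
+&lt;p&gt;As a rough sketch of what this can look like from a Java program: the catalog
+name, default database, Hive configuration directory, and Hive version below are
+placeholders for this example and need to match your own Metastore setup.&lt;/p&gt;
+
+&lt;pre&gt;&lt;code&gt;import org.apache.flink.table.api.EnvironmentSettings;
+import org.apache.flink.table.api.Table;
+import org.apache.flink.table.api.TableEnvironment;
+import org.apache.flink.table.catalog.hive.HiveCatalog;
+
+public class HiveCatalogSketch {
+    public static void main(String[] args) {
+        EnvironmentSettings settings = EnvironmentSettings.newInstance()
+                .useBlinkPlanner()
+                .inBatchMode()
+                .build();
+        TableEnvironment tableEnv = TableEnvironment.create(settings);
+
+        // Register a catalog backed by Hive's Metastore (all arguments are placeholders).
+        HiveCatalog hive = new HiveCatalog(&quot;myhive&quot;, &quot;default&quot;, &quot;/opt/hive-conf&quot;, &quot;2.3.4&quot;);
+        tableEnv.registerCatalog(&quot;myhive&quot;, hive);
+        tableEnv.useCatalog(&quot;myhive&quot;);
+
+        // Tables registered in this catalog (for example via a SQL DDL statement)
+        // are persisted in the Metastore and can be queried again later.
+        Table result = tableEnv.sqlQuery(&quot;SELECT * FROM some_table&quot;);
+    }
+}
+&lt;/code&gt;&lt;/pre&gt;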
 
-Please note that the Hive support in Flink 1.9 is experimental. We are
+&lt;p&gt;Please note that the Hive support in Flink 1.9 is experimental. We are
 planning to stabilize these features for the next release and are looking
-forward to your feedback.
+forward to your feedback.&lt;/p&gt;
 
+&lt;h3 id=&quot;preview-of-the-new-python-table-api-flip-38&quot;&gt;Preview of the new Python Table API (FLIP-38)&lt;/h3&gt;
 
-### Preview of the new Python Table API (FLIP-38)
-
-This release also introduces a first version of a Python Table API
-([FLIP-38](https://cwiki.apache.org/confluence/display/FLINK/FLIP-38%3A+Python+Table+API)).
+&lt;p&gt;This release also introduces a first version of a Python Table API
+(&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-38%3A+Python+Table+API&quot;&gt;FLIP-38&lt;/a&gt;).
 This marks the start towards our goal of bringing
 full-fledged Python support to Flink. The feature was designed as a slim
 Python API wrapper around the Table API, basically translating Python Table
 API method calls into Java Table API calls. In the initial version that ships
 with Flink 1.9, the Python Table API does not support UDFs yet, but just
 standard relational operations. Support for UDFs implemented in Python is on
-the roadmap for future releases.
-
-If you’d like to try the new Python API, you have to manually [install
-PyFlink](https://ci.apache.org/projects/flink/flink-docs-release-1.9/flinkDev/building.html#build-pyflink).
-From there, you can have a look at [this
-walkthrough](https://ci.apache.org/projects/flink/flink-docs-release-1.9/tutorials/python_table_api.html)
-or explore it on your own. The [community is currently
-working](http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Publish-the-PyFlink-into-PyPI-td31201.html)
-on preparing a `pyflink` Python package that will be made available for
-installation via `pip`.
+the roadmap for future releases.&lt;/p&gt;
 
+&lt;p&gt;If you’d like to try the new Python API, you have to manually &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/flinkDev/building.html#build-pyflink&quot;&gt;install
+PyFlink&lt;/a&gt;.
+From there, you can have a look at &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/tutorials/python_table_api.html&quot;&gt;this
+walkthrough&lt;/a&gt;
+or explore it on your own. The &lt;a href=&quot;http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/VOTE-Publish-the-PyFlink-into-PyPI-td31201.html&quot;&gt;community is currently
+working&lt;/a&gt;
+on preparing a &lt;code&gt;pyflink&lt;/code&gt; Python package that will be made available for
+installation via &lt;code&gt;pip&lt;/code&gt;.&lt;/p&gt;
 
-## Important Changes
-
- * The Table API and SQL are now part of the default configuration of the
-   Flink distribution. Before, the Table API and SQL had to be enabled by
-   moving the corresponding JAR file from ./opt to ./lib.
- * The machine learning library (flink-ml) has been removed in preparation for
-   [FLIP-39](https://docs.google.com/document/d/1StObo1DLp8iiy0rbukx8kwAJb0BwDZrQrMWub3DzsEo/edit).
- * The old DataSet and DataStream Python APIs have been removed in favor of
-   [FLIP-38](https://cwiki.apache.org/confluence/display/FLINK/FLIP-38%3A+Python+Table+API).
- * Flink can be compiled and run on Java 9. Note that certain components
-   interacting with external systems (connectors, filesystems, reporters) may
-   not work since the respective projects may have skipped Java 9 support.
+&lt;h2 id=&quot;important-changes&quot;&gt;Important Changes&lt;/h2&gt;
 
+&lt;ul&gt;
+  &lt;li&gt;The Table API and SQL are now part of the default configuration of the
+Flink distribution. Before, the Table API and SQL had to be enabled by
+moving the corresponding JAR file from &lt;code&gt;./opt&lt;/code&gt; to &lt;code&gt;./lib&lt;/code&gt;.&lt;/li&gt;
+  &lt;li&gt;The machine learning library (flink-ml) has been removed in preparation for
+&lt;a href=&quot;https://docs.google.com/document/d/1StObo1DLp8iiy0rbukx8kwAJb0BwDZrQrMWub3DzsEo/edit&quot;&gt;FLIP-39&lt;/a&gt;.&lt;/li&gt;
+  &lt;li&gt;The old DataSet and DataStream Python APIs have been removed in favor of
+&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-38%3A+Python+Table+API&quot;&gt;FLIP-38&lt;/a&gt;.&lt;/li&gt;
+  &lt;li&gt;Flink can be compiled and run on Java 9. Note that certain components
+interacting with external systems (connectors, filesystems, reporters) may
+not work since the respective projects may have skipped Java 9 support.&lt;/li&gt;
+&lt;/ul&gt;
 
-## Release Notes
+&lt;h2 id=&quot;release-notes&quot;&gt;Release Notes&lt;/h2&gt;
 
-Please review the [release
-notes](https://ci.apache.org/projects/flink/flink-docs-release-1.9/release-notes/flink-1.9.html)
+&lt;p&gt;Please review the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.9/release-notes/flink-1.9.html&quot;&gt;release
+notes&lt;/a&gt;
 for a more detailed list of changes and new features if you plan to upgrade
-your Flink setup to Flink 1.9.0.
-
+your Flink setup to Flink 1.9.0.&lt;/p&gt;
 
-## List of Contributors
+&lt;h2 id=&quot;list-of-contributors&quot;&gt;List of Contributors&lt;/h2&gt;
 
-We would like to thank all contributors who have made this release possible:
+&lt;p&gt;We would like to thank all contributors who have made this release possible:&lt;/p&gt;
 
-Abdul Qadeer (abqadeer), Aitozi, Alberto Romero, Aleksey Pak, Alexander
+&lt;p&gt;Abdul Qadeer (abqadeer), Aitozi, Alberto Romero, Aleksey Pak, Alexander
 Fedulov, Alice Yan, Aljoscha Krettek, Aloys, Andrew Duffy, Andrey Zagrebin,
 Ankur, Artsem Semianenka, Benchao Li, Biao Liu, Bo WANG, Bowen L, Chesnay
 Schepler, Clark Yang, Congxian Qiu, Cristian, Danny Chan, David Moravek, Dawid
@@ -3160,9 +3219,9 @@ shenlang.sl, shuai-xu, sunhaibotb, tianchen, tianchen92,
 tison, tom_gong, vinoyang, vthinkxie, wanggeng3, wenhuitang, winifredtamg,
 xl38154, xuyang1706, yangfei5, yanghua, yuzhao.cyz,
 zhangxin516, zhangxinxing, zhaofaxian, zhijiang, zjuwangg, 林小铂,
-黄培松, 时无两丶.
+黄培松, 时无两丶.&lt;/p&gt;
 </description>
-<pubDate>Thu, 22 Aug 2019 02:30:00 +0000</pubDate>
+<pubDate>Thu, 22 Aug 2019 04:30:00 +0200</pubDate>
 <link>https://flink.apache.org/news/2019/08/22/release-1.9.0.html</link>
 <guid isPermaLink="true">/news/2019/08/22/release-1.9.0.html</guid>
 </item>
@@ -3179,109 +3238,147 @@ zhangxin516, zhangxinxing, zhaofaxian, zhijiang, zjuwangg, 林小铂,
 .tg .tg-center{text-align:center;vertical-align:center}
 &lt;/style&gt;
 
-In a [previous blog post]({{ site.baseurl }}/2019/06/05/flink-network-stack.html), we presented how Flink’s network stack works from the high-level abstractions to the low-level details. This second blog post in the series of network stack posts extends on this knowledge and discusses monitoring network-related metrics to identify effects such as backpressure or bottlenecks in throughput and latency. Although this post briefly covers what to do with backpressure, the topic of tuning the  [...]
-
-{% toc %}
-
-## Monitoring
-
-Probably the most important part of network monitoring is [monitoring backpressure]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring/back_pressure.html), a situation where a system is receiving data at a higher rate than it can process¹. Such behaviour will result in the sender being backpressured and may be caused by two things:
+&lt;p&gt;In a &lt;a href=&quot;/2019/06/05/flink-network-stack.html&quot;&gt;previous blog post&lt;/a&gt;, we presented how Flink’s network stack works from the high-level abstractions to the low-level details. This second blog post in the series of network stack posts extends on this knowledge and discusses monitoring network-related metrics to identify effects such as backpressure or bottlenecks in throughput and latency. Although this post briefly covers what to do with backpressure,  [...]
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#monitoring&quot; id=&quot;markdown-toc-monitoring&quot;&gt;Monitoring&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#backpressure-monitor&quot; id=&quot;markdown-toc-backpressure-monitor&quot;&gt;Backpressure Monitor&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#network-metrics&quot; id=&quot;markdown-toc-network-metrics&quot;&gt;Network Metrics&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#backpressure&quot; id=&quot;markdown-toc-backpressure&quot;&gt;Backpressure&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#resource-usage--throughput&quot; id=&quot;markdown-toc-resource-usage--throughput&quot;&gt;Resource Usage / Throughput&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#latency-tracking&quot; id=&quot;markdown-toc-latency-tracking&quot;&gt;Latency Tracking&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
-* The receiver is slow.&lt;br&gt;
-  This can happen because the receiver is backpressured itself, is unable to keep processing at the same rate as the sender, or is temporarily blocked by garbage collection, lack of system resources, or I/O.
+&lt;/div&gt;
 
- * The network channel is slow.&lt;br&gt;
-  Even though in such case the receiver is not (directly) involved, we call the sender backpressured due to a potential oversubscription on network bandwidth shared by all subtasks running on the same machine. Beware that, in addition to Flink’s network stack, there may be more network users, such as sources and sinks, distributed file systems (checkpointing, network-attached storage), logging, and metrics. A previous [capacity planning blog post](https://www.ververica.com/blog/how-to-si [...]
+&lt;h2 id=&quot;monitoring&quot;&gt;Monitoring&lt;/h2&gt;
 
-&lt;sup&gt;1&lt;/sup&gt; In case you are unfamiliar with backpressure and how it interacts with Flink, we recommend reading through [this blog post on backpressure](https://www.ververica.com/blog/how-flink-handles-backpressure) from 2015.
+&lt;p&gt;Probably the most important part of network monitoring is &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/back_pressure.html&quot;&gt;monitoring backpressure&lt;/a&gt;, a situation where a system is receiving data at a higher rate than it can process¹. Such behaviour will result in the sender being backpressured and may be caused by two things:&lt;/p&gt;
 
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;The receiver is slow.&lt;br /&gt;
+This can happen because the receiver is backpressured itself, is unable to keep processing at the same rate as the sender, or is temporarily blocked by garbage collection, lack of system resources, or I/O.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;The network channel is slow.&lt;br /&gt;
+  Even though in such a case the receiver is not (directly) involved, we call the sender backpressured due to a potential oversubscription on network bandwidth shared by all subtasks running on the same machine. Beware that, in addition to Flink’s network stack, there may be more network users, such as sources and sinks, distributed file systems (checkpointing, network-attached storage), logging, and metrics. A previous &lt;a href=&quot;https://www.ververica.com/blog/how-to-size-your-apach [...]
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-&lt;br&gt;
-If backpressure occurs, it will bubble upstream and eventually reach your sources and slow them down. This is not a bad thing per-se and merely states that you lack resources for the current load. However, you may want to improve your job so that it can cope with higher loads without using more resources. In order to do so, you need to find (1) where (at which task/operator) the bottleneck is and (2) what is causing it. Flink offers two mechanisms for identifying where the bottleneck is:
+&lt;p&gt;&lt;sup&gt;1&lt;/sup&gt; In case you are unfamiliar with backpressure and how it interacts with Flink, we recommend reading through &lt;a href=&quot;https://www.ververica.com/blog/how-flink-handles-backpressure&quot;&gt;this blog post on backpressure&lt;/a&gt; from 2015.&lt;/p&gt;
 
- * directly via Flink’s web UI and its backpressure monitor, or
- * indirectly through some of the network metrics.
+&lt;p&gt;&lt;br /&gt;
+If backpressure occurs, it will bubble upstream and eventually reach your sources and slow them down. This is not a bad thing per se and merely states that you lack resources for the current load. However, you may want to improve your job so that it can cope with higher loads without using more resources. In order to do so, you need to find (1) where (at which task/operator) the bottleneck is and (2) what is causing it. Flink offers two mechanisms for identifying where the bottleneck is: [...]
 
-Flink’s web UI is likely the first entry point for a quick troubleshooting but has some disadvantages that we will explain below. On the other hand, Flink’s network metrics are better suited for continuous monitoring and reasoning about the exact nature of the bottleneck causing backpressure. We will cover both in the sections below. In both cases, you need to identify the origin of backpressure from the sources to the sinks. Your starting point for the current and future investigations  [...]
+&lt;ul&gt;
+  &lt;li&gt;directly via Flink’s web UI and its backpressure monitor, or&lt;/li&gt;
+  &lt;li&gt;indirectly through some of the network metrics.&lt;/li&gt;
+&lt;/ul&gt;
 
+&lt;p&gt;Flink’s web UI is likely the first entry point for a quick troubleshooting but has some disadvantages that we will explain below. On the other hand, Flink’s network metrics are better suited for continuous monitoring and reasoning about the exact nature of the bottleneck causing backpressure. We will cover both in the sections below. In both cases, you need to identify the origin of backpressure from the sources to the sinks. Your starting point for the current and future invest [...]
 
-### Backpressure Monitor
+&lt;h3 id=&quot;backpressure-monitor&quot;&gt;Backpressure Monitor&lt;/h3&gt;
 
-The [backpressure monitor]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring/back_pressure.html) is only exposed via Flink’s web UI². Since it&#39;s an active component that is only triggered on request, it is currently not available via metrics. The backpressure monitor samples the running tasks&#39; threads on all TaskManagers via `Thread.getStackTrace()` and computes the number of samples where tasks were blocked on a buffer request. These tasks were either unable to send netw [...]
+&lt;p&gt;The &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/back_pressure.html&quot;&gt;backpressure monitor&lt;/a&gt; is only exposed via Flink’s web UI². Since it’s an active component that is only triggered on request, it is currently not available via metrics. The backpressure monitor samples the running tasks’ threads on all TaskManagers via &lt;code&gt;Thread.getStackTrace()&lt;/code&gt; and computes the number of samples where tasks were bl [...]
 
-* &lt;span style=&quot;color:green&quot;&gt;OK&lt;/span&gt; for `ratio ≤ 0.10`,
-* &lt;span style=&quot;color:orange&quot;&gt;LOW&lt;/span&gt; for `0.10 &lt; Ratio ≤ 0.5`, and
-* &lt;span style=&quot;color:red&quot;&gt;HIGH&lt;/span&gt; for `0.5 &lt; Ratio ≤ 1`.
+&lt;ul&gt;
+  &lt;li&gt;&lt;span style=&quot;color:green&quot;&gt;OK&lt;/span&gt; for &lt;code&gt;ratio ≤ 0.10&lt;/code&gt;,&lt;/li&gt;
+  &lt;li&gt;&lt;span style=&quot;color:orange&quot;&gt;LOW&lt;/span&gt; for &lt;code&gt;0.10 &amp;lt; Ratio ≤ 0.5&lt;/code&gt;, and&lt;/li&gt;
+  &lt;li&gt;&lt;span style=&quot;color:red&quot;&gt;HIGH&lt;/span&gt; for &lt;code&gt;0.5 &amp;lt; Ratio ≤ 1&lt;/code&gt;.&lt;/li&gt;
+&lt;/ul&gt;
 
-Although you can tune things like the refresh-interval, the number of samples, or the delay between samples, normally, you would not need to touch these since the defaults already give good-enough results.
+&lt;p&gt;Although you can tune things like the refresh-interval, the number of samples, or the delay between samples, normally, you would not need to touch these since the defaults already give good-enough results.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-07-23-network-stack-2/back_pressure_sampling_high.png&quot; width=&quot;600px&quot; alt=&quot;Backpressure sampling:high&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-07-23-network-stack-2/back_pressure_sampling_high.png&quot; width=&quot;600px&quot; alt=&quot;Backpressure sampling:high&quot; /&gt;
 &lt;/center&gt;
 
-&lt;sup&gt;2&lt;/sup&gt; You may also access the backpressure monitor via the REST API: `/jobs/:jobid/vertices/:vertexid/backpressure`
-
-
-&lt;br&gt;
-The backpressure monitor can help you find where (at which task/operator) backpressure originates from. However, it does not support you in further reasoning about the causes of it. Additionally, for larger jobs or higher parallelism, the backpressure monitor becomes too crowded to use and may also take some time to gather all information from all TaskManagers. Please also note that sampling may affect your running job’s performance.
-
-## Network Metrics
+&lt;p&gt;&lt;sup&gt;2&lt;/sup&gt; You may also access the backpressure monitor via the REST API: &lt;code&gt;/jobs/:jobid/vertices/:vertexid/backpressure&lt;/code&gt;&lt;/p&gt;
 
-[Network]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring/metrics.html#network) and [task I/O]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring/metrics.html#io) metrics are more lightweight than the backpressure monitor and are continuously published for each running job. We can leverage those and get even more insights, not only for backpressure monitoring. The most relevant metrics for users are:
+&lt;p&gt;&lt;br /&gt;
+The backpressure monitor can help you find where (at which task/operator) backpressure originates from. However, it does not support you in further reasoning about the causes of it. Additionally, for larger jobs or higher parallelism, the backpressure monitor becomes too crowded to use and may also take some time to gather all information from all TaskManagers. Please also note that sampling may affect your running job’s performance.&lt;/p&gt;
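+
+&lt;p&gt;For continuous checks outside of the web UI, the REST endpoint mentioned above
+can also be polled from a small program. The following is a minimal sketch using
+only the JDK; the address assumes the default REST port 8081 (adjust it to your
+setup), and the job and vertex IDs are passed in as program arguments:&lt;/p&gt;
+
+&lt;pre&gt;&lt;code&gt;import java.io.BufferedReader;
+import java.io.InputStreamReader;
+import java.net.HttpURLConnection;
+import java.net.URL;
+
+public class BackpressureCheck {
+    public static void main(String[] args) throws Exception {
+        // args[0] = job id, args[1] = vertex id; adjust host/port to your cluster.
+        String url = &quot;http://localhost:8081/jobs/&quot; + args[0]
+                + &quot;/vertices/&quot; + args[1] + &quot;/backpressure&quot;;
+        HttpURLConnection conn = (HttpURLConnection) new URL(url).openConnection();
+        try (BufferedReader reader =
+                new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
+            // Print the JSON response, which contains the sampled ratio and the
+            // OK/LOW/HIGH status per subtask of the given vertex.
+            String line;
+            while ((line = reader.readLine()) != null) {
+                System.out.println(line);
+            }
+        }
+    }
+}
+&lt;/code&gt;&lt;/pre&gt;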
 
+&lt;h2 id=&quot;network-metrics&quot;&gt;Network Metrics&lt;/h2&gt;
 
-* **&lt;span style=&quot;color:orange&quot;&gt;up to Flink 1.8:&lt;/span&gt;** `outPoolUsage`, `inPoolUsage`&lt;br&gt;
-  An estimate on the ratio of buffers used vs. buffers available in the respective local buffer pools.
-  While interpreting `inPoolUsage` in Flink 1.5 - 1.8 with credit-based flow control, please note that this only relates to floating buffers (exclusive buffers are not part of the pool).
+&lt;p&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/metrics.html#network&quot;&gt;Network&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/metrics.html#io&quot;&gt;task I/O&lt;/a&gt; metrics are more lightweight than the backpressure monitor and are continuously published for each running job. We can leverage those and get even more insights, not only for backpressure monitoring. The most re [...]
 
-* **&lt;span style=&quot;color:green&quot;&gt;Flink 1.9 and above:&lt;/span&gt;** `outPoolUsage`, `inPoolUsage`, `floatingBuffersUsage`, `exclusiveBuffersUsage`&lt;br&gt;
-  An estimate on the ratio of buffers used vs. buffers available in the respective local buffer pools.
-  Starting with Flink 1.9, `inPoolUsage` is the sum of `floatingBuffersUsage` and `exclusiveBuffersUsage`.
-
-* `numRecordsOut`, `numRecordsIn`&lt;br&gt;
-  Each metric comes with two scopes: one scoped to the operator and one scoped to the subtask. For network monitoring, the subtask-scoped metric is relevant and shows the total number of records it has sent/received. You may need to further look into these figures to extract the number of records within a certain time span or use the equivalent `…PerSecond` metrics.
-
-* `numBytesOut`, `numBytesInLocal`, `numBytesInRemote`&lt;br&gt;
-  The total number of bytes this subtask has emitted or read from a local/remote source. These are also available as meters via `…PerSecond` metrics.
-
-* `numBuffersOut`, `numBuffersInLocal`, `numBuffersInRemote`&lt;br&gt;
-  Similar to `numBytes…` but counting the number of network buffers.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;&lt;span style=&quot;color:orange&quot;&gt;up to Flink 1.8:&lt;/span&gt;&lt;/strong&gt; &lt;code&gt;outPoolUsage&lt;/code&gt;, &lt;code&gt;inPoolUsage&lt;/code&gt;&lt;br /&gt;
+An estimate on the ratio of buffers used vs. buffers available in the respective local buffer pools.
+While interpreting &lt;code&gt;inPoolUsage&lt;/code&gt; in Flink 1.5 - 1.8 with credit-based flow control, please note that this only relates to floating buffers (exclusive buffers are not part of the pool).&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;&lt;span style=&quot;color:green&quot;&gt;Flink 1.9 and above:&lt;/span&gt;&lt;/strong&gt; &lt;code&gt;outPoolUsage&lt;/code&gt;, &lt;code&gt;inPoolUsage&lt;/code&gt;, &lt;code&gt;floatingBuffersUsage&lt;/code&gt;, &lt;code&gt;exclusiveBuffersUsage&lt;/code&gt;&lt;br /&gt;
+An estimate on the ratio of buffers used vs. buffers available in the respective local buffer pools.
+Starting with Flink 1.9, &lt;code&gt;inPoolUsage&lt;/code&gt; is the sum of &lt;code&gt;floatingBuffersUsage&lt;/code&gt; and &lt;code&gt;exclusiveBuffersUsage&lt;/code&gt;.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;code&gt;numRecordsOut&lt;/code&gt;, &lt;code&gt;numRecordsIn&lt;/code&gt;&lt;br /&gt;
+Each metric comes with two scopes: one scoped to the operator and one scoped to the subtask. For network monitoring, the subtask-scoped metric is relevant and shows the total number of records it has sent/received. You may need to further look into these figures to extract the number of records within a certain time span or use the equivalent &lt;code&gt;…PerSecond&lt;/code&gt; metrics.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;code&gt;numBytesOut&lt;/code&gt;, &lt;code&gt;numBytesInLocal&lt;/code&gt;, &lt;code&gt;numBytesInRemote&lt;/code&gt;&lt;br /&gt;
+The total number of bytes this subtask has emitted or read from a local/remote source. These are also available as meters via &lt;code&gt;…PerSecond&lt;/code&gt; metrics.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;code&gt;numBuffersOut&lt;/code&gt;, &lt;code&gt;numBuffersInLocal&lt;/code&gt;, &lt;code&gt;numBuffersInRemote&lt;/code&gt;&lt;br /&gt;
+Similar to &lt;code&gt;numBytes…&lt;/code&gt; but counting the number of network buffers.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-&lt;div class=&quot;alert alert-warning&quot; markdown=&quot;1&quot;&gt;
-&lt;span class=&quot;label label-warning&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Warning&lt;/span&gt;
-For the sake of completeness and since they have been used in the past, we will briefly look at the `outputQueueLength` and `inputQueueLength` metrics. These are somewhat similar to the `[out,in]PoolUsage` metrics but show the number of buffers sitting in a sender subtask’s output queues and in a receiver subtask’s input queues, respectively. Reasoning about absolute numbers of buffers, however, is difficult and there is also a special subtlety with local channels: since a local input ch [...]
+&lt;div class=&quot;alert alert-warning&quot;&gt;
+  &lt;p&gt;&lt;span class=&quot;label label-warning&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Warning&lt;/span&gt;
+For the sake of completeness and since they have been used in the past, we will briefly look at the &lt;code&gt;outputQueueLength&lt;/code&gt; and &lt;code&gt;inputQueueLength&lt;/code&gt; metrics. These are somewhat similar to the &lt;code&gt;[out,in]PoolUsage&lt;/code&gt; metrics but show the number of buffers sitting in a sender subtask’s output queues and in a receiver subtask’s input queues, respectively. Reasoning about absolute numbers of buffers, however, is difficult and there i [...]
 
-Overall, **we discourage the use of** `outputQueueLength` **and** `inputQueueLength` because their interpretation highly depends on the current parallelism of the operator and the configured numbers of exclusive and floating buffers. Instead, we recommend using the various `*PoolUsage` metrics which even reveal more detailed insight.
+  &lt;p&gt;Overall, &lt;strong&gt;we discourage the use of&lt;/strong&gt; &lt;code&gt;outputQueueLength&lt;/code&gt; &lt;strong&gt;and&lt;/strong&gt; &lt;code&gt;inputQueueLength&lt;/code&gt; because their interpretation highly depends on the current parallelism of the operator and the configured numbers of exclusive and floating buffers. Instead, we recommend using the various &lt;code&gt;*PoolUsage&lt;/code&gt; metrics, which reveal even more detailed insight.&lt;/p&gt;
 &lt;/div&gt;
 
+&lt;div class=&quot;alert alert-info&quot;&gt;
+  &lt;p&gt;&lt;span class=&quot;label label-info&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
+ If you reason about buffer usage, please keep the following in mind:&lt;/p&gt;
 
-&lt;div class=&quot;alert alert-info&quot; markdown=&quot;1&quot;&gt;
-&lt;span class=&quot;label label-info&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
- If you reason about buffer usage, please keep the following in mind:
-
-* Any outgoing channel which has been used at least once will always occupy one buffer (since Flink 1.5).
-  * **&lt;span style=&quot;color:orange&quot;&gt;up to Flink 1.8:&lt;/span&gt;** This buffer (even if empty!) was always counted as a backlog of 1 and thus receivers tried to reserve a floating buffer for it.
-  * **&lt;span style=&quot;color:green&quot;&gt;Flink 1.9 and above:&lt;/span&gt;** A buffer is only counted in the backlog if it is ready for consumption, i.e. it is full or was flushed (see FLINK-11082)
-* The receiver will only release a received buffer after deserialising the last record in it.
+  &lt;ul&gt;
+    &lt;li&gt;Any outgoing channel which has been used at least once will always occupy one buffer (since Flink 1.5).
+      &lt;ul&gt;
+        &lt;li&gt;&lt;strong&gt;&lt;span style=&quot;color:orange&quot;&gt;up to Flink 1.8:&lt;/span&gt;&lt;/strong&gt; This buffer (even if empty!) was always counted as a backlog of 1 and thus receivers tried to reserve a floating buffer for it.&lt;/li&gt;
+        &lt;li&gt;&lt;strong&gt;&lt;span style=&quot;color:green&quot;&gt;Flink 1.9 and above:&lt;/span&gt;&lt;/strong&gt; A buffer is only counted in the backlog if it is ready for consumption, i.e. it is full or was flushed (see FLINK-11082)&lt;/li&gt;
+      &lt;/ul&gt;
+    &lt;/li&gt;
+    &lt;li&gt;The receiver will only release a received buffer after deserialising the last record in it.&lt;/li&gt;
+  &lt;/ul&gt;
 &lt;/div&gt;
 
-The following sections make use of and combine these metrics to reason about backpressure and resource usage / efficiency with respect to throughput. A separate section will detail latency related metrics.
-
-
-### Backpressure
-
-Backpressure may be indicated by two different sets of metrics: (local) buffer pool usages as well as input/output queue lengths. They provide a different level of granularity but, unfortunately, none of these are exhaustive and there is room for interpretation. Because of the inherent problems with interpreting these queue lengths we will focus on the usage of input and output pools below which also provides more detail.
+&lt;p&gt;The following sections make use of and combine these metrics to reason about backpressure and resource usage / efficiency with respect to throughput. A separate section will detail latency related metrics.&lt;/p&gt;
 
-* **If a subtask’s** `outPoolUsage` **is 100%**, it is backpressured. Whether the subtask is already blocking or still writing records into network buffers depends on how full the buffers are, that the `RecordWriters` are currently writing into.&lt;br&gt;
-&lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;&quot;&gt;&lt;/span&gt; This is different to what the backpressure monitor is showing!
+&lt;h3 id=&quot;backpressure&quot;&gt;Backpressure&lt;/h3&gt;
 
-* An `inPoolUsage` of 100% means that all floating buffers are assigned to channels and eventually backpressure will be exercised upstream. These floating buffers are in either of the following conditions: they are reserved for future use on a channel due to an exclusive buffer being utilised (remote input channels always try to maintain `#exclusive buffers` credits), they are reserved for a sender’s backlog and wait for data, they may contain data and are enqueued in an input channel, o [...]
+&lt;p&gt;Backpressure may be indicated by two different sets of metrics: (local) buffer pool usages as well as input/output queue lengths. They provide a different level of granularity but, unfortunately, none of these are exhaustive and there is room for interpretation. Because of the inherent problems with interpreting these queue lengths, we will focus on the usage of input and output pools below, which also provides more detail.&lt;/p&gt;
 
-* **&lt;span style=&quot;color:orange&quot;&gt;up to Flink 1.8:&lt;/span&gt;** Due to [FLINK-11082](https://issues.apache.org/jira/browse/FLINK-11082), an `inPoolUsage` of 100% is quite common even in normal situations.
-
-* **&lt;span style=&quot;color:green&quot;&gt;Flink 1.9 and above:&lt;/span&gt;** If `inPoolUsage` is constantly around 100%, this is a strong indicator for exercising backpressure upstream.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;If a subtask’s&lt;/strong&gt; &lt;code&gt;outPoolUsage&lt;/code&gt; &lt;strong&gt;is 100%&lt;/strong&gt;, it is backpressured. Whether the subtask is already blocking or still writing records into network buffers depends on how full the buffers are, that the &lt;code&gt;RecordWriters&lt;/code&gt; are currently writing into.&lt;br /&gt;
+&lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;&quot;&gt;&lt;/span&gt; This is different to what the backpressure monitor is showing!&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;An &lt;code&gt;inPoolUsage&lt;/code&gt; of 100% means that all floating buffers are assigned to channels and eventually backpressure will be exercised upstream. These floating buffers are in either of the following conditions: they are reserved for future use on a channel due to an exclusive buffer being utilised (remote input channels always try to maintain &lt;code&gt;#exclusive buffers&lt;/code&gt; credits), they are reserved for a sender’s backlog and wait for data, they [...]
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;&lt;span style=&quot;color:orange&quot;&gt;up to Flink 1.8:&lt;/span&gt;&lt;/strong&gt; Due to &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11082&quot;&gt;FLINK-11082&lt;/a&gt;, an &lt;code&gt;inPoolUsage&lt;/code&gt; of 100% is quite common even in normal situations.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;&lt;span style=&quot;color:green&quot;&gt;Flink 1.9 and above:&lt;/span&gt;&lt;/strong&gt; If &lt;code&gt;inPoolUsage&lt;/code&gt; is constantly around 100%, this is a strong indicator for exercising backpressure upstream.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-The following table summarises all combinations and their interpretation. Bear in mind, though, that backpressure may be minor or temporary (no need to look into it), on particular channels only, or caused by other JVM processes on a particular TaskManager, such as GC, synchronisation, I/O, resource shortage, instead of a specific subtask.
+&lt;p&gt;The following table summarises all combinations and their interpretation. Bear in mind, though, that backpressure may be minor or temporary (no need to look into it), on particular channels only, or caused by other JVM processes on a particular TaskManager, such as GC, synchronisation, I/O, resource shortage, instead of a specific subtask.&lt;/p&gt;
 
 &lt;center&gt;
 &lt;table class=&quot;tg&quot;&gt;
@@ -3295,46 +3392,52 @@ The following table summarises all combinations and their interpretation. Bear i
     &lt;td class=&quot;tg-topcenter&quot;&gt;
       &lt;span class=&quot;glyphicon glyphicon-ok-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:green;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;/td&gt;
     &lt;td class=&quot;tg-topcenter&quot;&gt;
-      &lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+      &lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (backpressured, temporary situation: upstream is not backpressured yet or not anymore)&lt;/td&gt;
   &lt;/tr&gt;
   &lt;tr&gt;
     &lt;th class=&quot;tg-top&quot; rowspan=&quot;2&quot;&gt;
-      &lt;code&gt;inPoolUsage&lt;/code&gt; high&lt;br&gt;
+      &lt;code&gt;inPoolUsage&lt;/code&gt; high&lt;br /&gt;
       (&lt;strong&gt;&lt;span style=&quot;color:green&quot;&gt;Flink 1.9+&lt;/span&gt;&lt;/strong&gt;)&lt;/th&gt;
     &lt;td class=&quot;tg-topcenter&quot;&gt;
-      if all upstream tasks’&lt;code&gt;outPoolUsage&lt;/code&gt; are low: &lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+      if all upstream tasks’ &lt;code&gt;outPoolUsage&lt;/code&gt; are low: &lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (may eventually cause backpressure)&lt;/td&gt;
     &lt;td class=&quot;tg-topcenter&quot; rowspan=&quot;2&quot;&gt;
-      &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+      &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (backpressured by downstream task(s) or network, probably forwarding backpressure upstream)&lt;/td&gt;
   &lt;/tr&gt;
   &lt;tr&gt;
-    &lt;td class=&quot;tg-topcenter&quot;&gt;if any upstream task’s&lt;code&gt;outPoolUsage&lt;/code&gt; is high: &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+    &lt;td class=&quot;tg-topcenter&quot;&gt;if any upstream task’s &lt;code&gt;outPoolUsage&lt;/code&gt; is high: &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (may exercise backpressure upstream and may be the source of backpressure)&lt;/td&gt;
   &lt;/tr&gt;
 &lt;/table&gt;
 &lt;/center&gt;
 
-&lt;br&gt;
-We may even reason more about the cause of backpressure by looking at the network metrics of the subtasks of two consecutive tasks:
+&lt;p&gt;&lt;br /&gt;
+We may even reason more about the cause of backpressure by looking at the network metrics of the subtasks of two consecutive tasks:&lt;/p&gt;
 
-* If all subtasks of the receiver task have low `inPoolUsage` values and any upstream subtask’s `outPoolUsage` is high, then there may be a network bottleneck causing backpressure.
-Since network is a shared resource among all subtasks of a TaskManager, this may not directly originate from this subtask, but rather from various concurrent operations, e.g. checkpoints, other streams, external connections, or other TaskManagers/processes on the same machine.
+&lt;ul&gt;
+  &lt;li&gt;If all subtasks of the receiver task have low &lt;code&gt;inPoolUsage&lt;/code&gt; values and any upstream subtask’s &lt;code&gt;outPoolUsage&lt;/code&gt; is high, then there may be a network bottleneck causing backpressure.
+Since network is a shared resource among all subtasks of a TaskManager, this may not directly originate from this subtask, but rather from various concurrent operations, e.g. checkpoints, other streams, external connections, or other TaskManagers/processes on the same machine.&lt;/li&gt;
+&lt;/ul&gt;
 
-Backpressure can also be caused by all parallel instances of a task or by a single task instance. The first usually happens because the task is performing some time consuming operation that applies to all input partitions. The latter is usually the result of some kind of skew, either data skew or resource availability/allocation skew. In either case, you can find some hints on how to handle such situations in the [What to do with backpressure?](#span-classlabel-label-info-styledisplay-in [...]
+&lt;p&gt;Backpressure can also be caused by all parallel instances of a task or by a single task instance. The first usually happens because the task is performing some time consuming operation that applies to all input partitions. The latter is usually the result of some kind of skew, either data skew or resource availability/allocation skew. In either case, you can find some hints on how to handle such situations in the &lt;a href=&quot;#span-classlabel-label-info-styledisplay-inline-b [...]
 
-&lt;div class=&quot;alert alert-info&quot; markdown=&quot;1&quot;&gt;
-### &lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Flink 1.9 and above
-{:.no_toc}
+&lt;div class=&quot;alert alert-info&quot;&gt;
+  &lt;h3 class=&quot;no_toc&quot; id=&quot;span-classglyphicon-glyphicon-info-sign-aria-hiddentruespan-flink-19-and-above&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Flink 1.9 and above&lt;/h3&gt;
 
-* If `floatingBuffersUsage` is not 100%, it is unlikely that there is backpressure. If it is 100% and any upstream task is backpressured, it suggests that this input is exercising backpressure on either a single, some or all input channels. To differentiate between those three situations you can use `exclusiveBuffersUsage`:
-  * Assuming that `floatingBuffersUsage` is around 100%, the higher the `exclusiveBuffersUsage` the more input channels are backpressured. In an extreme case of `exclusiveBuffersUsage` being close to 100%, it means that all channels are backpressured.
+  &lt;ul&gt;
+    &lt;li&gt;If &lt;code&gt;floatingBuffersUsage&lt;/code&gt; is not 100%, it is unlikely that there is backpressure. If it is 100% and any upstream task is backpressured, it suggests that this input is exercising backpressure on either a single, some or all input channels. To differentiate between those three situations you can use &lt;code&gt;exclusiveBuffersUsage&lt;/code&gt;:
+      &lt;ul&gt;
+        &lt;li&gt;Assuming that &lt;code&gt;floatingBuffersUsage&lt;/code&gt; is around 100%, the higher the &lt;code&gt;exclusiveBuffersUsage&lt;/code&gt; the more input channels are backpressured. In an extreme case of &lt;code&gt;exclusiveBuffersUsage&lt;/code&gt; being close to 100%, it means that all channels are backpressured.&lt;/li&gt;
+      &lt;/ul&gt;
+    &lt;/li&gt;
+  &lt;/ul&gt;
 
-&lt;br&gt;
-The relation between `exclusiveBuffersUsage`, `floatingBuffersUsage`, and the upstream tasks&#39; `outPoolUsage` is summarised in the following table and extends on the table above with `inPoolUsage = floatingBuffersUsage + exclusiveBuffersUsage`:
+  &lt;p&gt;&lt;br /&gt;
+The relation between &lt;code&gt;exclusiveBuffersUsage&lt;/code&gt;, &lt;code&gt;floatingBuffersUsage&lt;/code&gt;, and the upstream tasks’ &lt;code&gt;outPoolUsage&lt;/code&gt; is summarised in the following table and extends on the table above with &lt;code&gt;inPoolUsage = floatingBuffersUsage + exclusiveBuffersUsage&lt;/code&gt;:&lt;/p&gt;
 
-&lt;center&gt;
+  &lt;center&gt;
 &lt;table class=&quot;tg&quot;&gt;
   &lt;tr&gt;
     &lt;th&gt;&lt;/th&gt;
@@ -3343,490 +3446,493 @@ The relation between `exclusiveBuffersUsage`, `floatingBuffersUsage`, and the up
   &lt;/tr&gt;
   &lt;tr&gt;
     &lt;th class=&quot;tg-top&quot; style=&quot;min-width:33%;&quot;&gt;
-      &lt;code&gt;floatingBuffersUsage&lt;/code&gt; low +&lt;br&gt;
+      &lt;code&gt;floatingBuffersUsage&lt;/code&gt; low +&lt;br /&gt;
       &lt;em&gt;all&lt;/em&gt; upstream &lt;code&gt;outPoolUsage&lt;/code&gt; low&lt;/th&gt;
     &lt;td class=&quot;tg-center&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-ok-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:green;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;/td&gt;
     &lt;td class=&quot;tg-center&quot;&gt;-&lt;sup&gt;3&lt;/sup&gt;&lt;/td&gt;
   &lt;/tr&gt;
   &lt;tr&gt;
     &lt;th class=&quot;tg-top&quot; style=&quot;min-width:33%;&quot;&gt;
-      &lt;code&gt;floatingBuffersUsage&lt;/code&gt; low +&lt;br&gt;
+      &lt;code&gt;floatingBuffersUsage&lt;/code&gt; low +&lt;br /&gt;
       &lt;em&gt;any&lt;/em&gt; upstream &lt;code&gt;outPoolUsage&lt;/code&gt; high&lt;/th&gt;
     &lt;td class=&quot;tg-center&quot;&gt;
-      &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+      &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (potential network bottleneck)&lt;/td&gt;
     &lt;td class=&quot;tg-center&quot;&gt;-&lt;sup&gt;3&lt;/sup&gt;&lt;/td&gt;
   &lt;/tr&gt;
   &lt;tr&gt;
     &lt;th class=&quot;tg-top&quot; style=&quot;min-width:33%;&quot;&gt;
-      &lt;code&gt;floatingBuffersUsage&lt;/code&gt; high +&lt;br&gt;
+      &lt;code&gt;floatingBuffersUsage&lt;/code&gt; high +&lt;br /&gt;
       &lt;em&gt;all&lt;/em&gt; upstream &lt;code&gt;outPoolUsage&lt;/code&gt; low&lt;/th&gt;
     &lt;td class=&quot;tg-center&quot;&gt;
-      &lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+      &lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (backpressure eventually appears on only some of the input channels)&lt;/td&gt;
     &lt;td class=&quot;tg-center&quot;&gt;
-      &lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+      &lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:orange;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (backpressure eventually appears on most or all of the input channels)&lt;/td&gt;
   &lt;/tr&gt;
   &lt;tr&gt;
     &lt;th class=&quot;tg-top&quot; style=&quot;min-width:33%;&quot;&gt;
-      &lt;code&gt;floatingBuffersUsage&lt;/code&gt; high +&lt;br&gt;
+      &lt;code&gt;floatingBuffersUsage&lt;/code&gt; high +&lt;br /&gt;
       any upstream &lt;code&gt;outPoolUsage&lt;/code&gt; high&lt;/th&gt;
     &lt;td class=&quot;tg-center&quot;&gt;
-      &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+      &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (backpressure on only some of the input channels)&lt;/td&gt;
     &lt;td class=&quot;tg-center&quot;&gt;
-      &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br&gt;
+      &lt;span class=&quot;glyphicon glyphicon-remove-sign&quot; aria-hidden=&quot;true&quot; style=&quot;color:red;font-size:1.5em;&quot;&gt;&lt;/span&gt;&lt;br /&gt;
       (backpressure on most or all of the input channels)&lt;/td&gt;
   &lt;/tr&gt;
 &lt;/table&gt;
 &lt;/center&gt;
 
-&lt;sup&gt;3&lt;/sup&gt; this should not happen
+  &lt;p&gt;&lt;sup&gt;3&lt;/sup&gt; this should not happen&lt;/p&gt;
 
 &lt;/div&gt;
 
+&lt;h3 id=&quot;resource-usage--throughput&quot;&gt;Resource Usage / Throughput&lt;/h3&gt;
 
-### Resource Usage / Throughput
+&lt;p&gt;Besides the obvious use of each individual metric mentioned above, there are also a few combinations providing useful insight into what is happening in the network stack:&lt;/p&gt;
 
-Besides the obvious use of each individual metric mentioned above, there are also a few combinations providing useful insight into what is happening in the network stack:
-
-* Low throughput with frequent `outPoolUsage` values around 100% but low `inPoolUsage` on all receivers is an indicator that the round-trip-time of our credit-notification (depends on your network’s latency) is too high for the default number of exclusive buffers to make use of your bandwidth. Consider increasing the [buffers-per-channel]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-buffers-per-channel) parameter or try disabling credit-based  [...]
-
-* Combining `numRecordsOut` and `numBytesOut` helps identifying average serialised record sizes which supports you in capacity planning for peak scenarios.
-
-* If you want to reason about buffer fill rates and the influence of the output flusher, you may combine `numBytesInRemote` with `numBuffersInRemote`. When tuning for throughput (and not latency!), low buffer fill rates may indicate reduced network efficiency. In such cases, consider increasing the buffer timeout.
-Please note that, as of Flink 1.8 and 1.9, `numBuffersOut` only increases for buffers getting full or for an event cutting off a buffer (e.g. a checkpoint barrier) and may lag behind. Please also note that reasoning about buffer fill rates on local channels is unnecessary since buffering is an optimisation technique for remote channels with limited effect on local channels.
-
-* You may also separate local from remote traffic using numBytesInLocal and numBytesInRemote but in most cases this is unnecessary.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Low throughput with frequent &lt;code&gt;outPoolUsage&lt;/code&gt; values around 100% but low &lt;code&gt;inPoolUsage&lt;/code&gt; on all receivers is an indicator that the round-trip-time of our credit-notification (depends on your network’s latency) is too high for the default number of exclusive buffers to make use of your bandwidth. Consider increasing the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#taskmanager-network-mem [...]
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Combining &lt;code&gt;numRecordsOut&lt;/code&gt; and &lt;code&gt;numBytesOut&lt;/code&gt; helps identify average serialised record sizes, which supports capacity planning for peak scenarios.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;If you want to reason about buffer fill rates and the influence of the output flusher, you may combine &lt;code&gt;numBytesInRemote&lt;/code&gt; with &lt;code&gt;numBuffersInRemote&lt;/code&gt;. When tuning for throughput (and not latency!), low buffer fill rates may indicate reduced network efficiency. In such cases, consider increasing the buffer timeout.
+Please note that, as of Flink 1.8 and 1.9, &lt;code&gt;numBuffersOut&lt;/code&gt; only increases for buffers getting full or for an event cutting off a buffer (e.g. a checkpoint barrier) and may lag behind. Please also note that reasoning about buffer fill rates on local channels is unnecessary since buffering is an optimisation technique for remote channels with limited effect on local channels.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;You may also separate local from remote traffic using &lt;code&gt;numBytesInLocal&lt;/code&gt; and &lt;code&gt;numBytesInRemote&lt;/code&gt;, but in most cases this is unnecessary.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-&lt;div class=&quot;alert alert-info&quot; markdown=&quot;1&quot;&gt;
-### &lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; What to do with Backpressure?
-{:.no_toc}
+&lt;div class=&quot;alert alert-info&quot;&gt;
+  &lt;h3 class=&quot;no_toc&quot; id=&quot;span-classglyphicon-glyphicon-info-sign-aria-hiddentruespan-what-to-do-with-backpressure&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; What to do with Backpressure?&lt;/h3&gt;
 
-Assuming that you identified where the source of backpressure — a bottleneck — is located, the next step is to analyse why this is happening. Below, we list some potential causes of backpressure from the more basic to the more complex ones. We recommend to check the basic causes first, before diving deeper on the more complex ones and potentially drawing false conclusions.
+  &lt;p&gt;Assuming that you identified where the source of backpressure — a bottleneck — is located, the next step is to analyse why this is happening. Below, we list some potential causes of backpressure from the more basic to the more complex ones. We recommend checking the basic causes first, before diving deeper into the more complex ones and potentially drawing false conclusions.&lt;/p&gt;
 
-Please also recall that backpressure might be temporary and the result of a load spike, checkpointing, or a job restart with a data backlog waiting to be processed. In that case, you can often just ignore it. Alternatively, keep in mind that the process of analysing and solving the issue can be affected by the intermittent nature of your bottleneck. Having said that, here are a couple of things to check.
+  &lt;p&gt;Please also recall that backpressure might be temporary and the result of a load spike, checkpointing, or a job restart with a data backlog waiting to be processed. In that case, you can often just ignore it. Alternatively, keep in mind that the process of analysing and solving the issue can be affected by the intermittent nature of your bottleneck. Having said that, here are a couple of things to check.&lt;/p&gt;
 
-#### System Resources
+  &lt;h4 id=&quot;system-resources&quot;&gt;System Resources&lt;/h4&gt;
 
-Firstly, you should check the incriminated machines’ basic resource usage like CPU, network, or disk I/O. If some resource is fully or heavily utilised you can do one of the following:
+  &lt;p&gt;Firstly, you should check the incriminated machines’ basic resource usage like CPU, network, or disk I/O. If some resource is fully or heavily utilised you can do one of the following:&lt;/p&gt;
 
-1. Try to optimise your code. Code profilers are helpful in this case.
-2. Tune Flink for that specific resource.
-3. Scale out by increasing the parallelism and/or increasing the number of machines in the cluster.
+  &lt;ol&gt;
+    &lt;li&gt;Try to optimise your code. Code profilers are helpful in this case.&lt;/li&gt;
+    &lt;li&gt;Tune Flink for that specific resource.&lt;/li&gt;
+    &lt;li&gt;Scale out by increasing the parallelism and/or increasing the number of machines in the cluster (see the sketch after this list).&lt;/li&gt;
+  &lt;/ol&gt;
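+
+  &lt;p&gt;As a minimal sketch of the third option, the parallelism can be raised for the whole job or for an individual bottlenecked operator; the class name and the parallelism values are illustrative assumptions only:&lt;/p&gt;
+
+  &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+public class ParallelismExample {
+  public static void main(String[] args) throws Exception {
+    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+    env.setParallelism(8);       // default parallelism for all operators of this job (illustrative)
+
+    env.fromElements(&amp;quot;a&amp;quot;, &amp;quot;b&amp;quot;, &amp;quot;c&amp;quot;)
+       .map(String::toUpperCase)
+       .setParallelism(16)       // give the bottlenecked operator more subtasks
+       .print();
+
+    env.execute(&amp;quot;parallelism-example&amp;quot;);
+  }
+}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;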
 
-#### Garbage Collection
+  &lt;h4 id=&quot;garbage-collection&quot;&gt;Garbage Collection&lt;/h4&gt;
 
-Oftentimes, performance issues arise from long GC pauses. You can verify whether you are in such a situation by either printing debug GC logs (via -`XX:+PrintGCDetails`) or by using some memory/GC profilers. Since dealing with GC issues is highly application-dependent and independent of Flink, we will not go into details here ([Oracle&#39;s Garbage Collection Tuning Guide](https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/index.html) or [Plumbr’s Java Garbage Collection  [...]
+  &lt;p&gt;Oftentimes, performance issues arise from long GC pauses. You can verify whether you are in such a situation either by printing debug GC logs (via &lt;code&gt;-XX:+PrintGCDetails&lt;/code&gt;) or by using some memory/GC profilers. Since dealing with GC issues is highly application-dependent and independent of Flink, we will not go into details here (&lt;a href=&quot;https://docs.oracle.com/javase/8/docs/technotes/guides/vm/gctuning/index.html&quot;&gt;Oracle’s Garbage Collecti [...]
 
-#### CPU/Thread Bottleneck
+  &lt;h4 id=&quot;cputhread-bottleneck&quot;&gt;CPU/Thread Bottleneck&lt;/h4&gt;
 
-Sometimes a CPU bottleneck might not be visible at first glance if one or a couple of threads are causing the CPU bottleneck while the CPU usage of the overall machine remains relatively low. For instance, a single CPU-bottlenecked thread on a 48-core machine would result in only 2% CPU use. Consider using code profilers for this as they can identify hot threads by showing each threads&#39; CPU usage, for example.
+  &lt;p&gt;Sometimes a CPU bottleneck might not be visible at first glance if only one or a few threads are causing it while the overall CPU usage of the machine remains relatively low. For instance, a single CPU-bottlenecked thread on a 48-core machine would result in only 2% CPU use. Consider using code profilers for this, as they can identify hot threads by showing each thread’s CPU usage, for example.&lt;/p&gt;
 
-#### Thread Contention
+  &lt;h4 id=&quot;thread-contention&quot;&gt;Thread Contention&lt;/h4&gt;
 
-Similarly to the CPU/thread bottleneck issue above, a subtask may be bottlenecked due to high thread contention on shared resources. Again, CPU profilers are your best friend here! Consider looking for synchronisation overhead / lock contention in user code — although adding synchronisation in user code should be avoided and may even be dangerous! Also consider investigating shared system resources. The default JVM’s SSL implementation, for example, can become contented around the shared [...]
+  &lt;p&gt;Similar to the CPU/thread bottleneck issue above, a subtask may be bottlenecked due to high thread contention on shared resources. Again, CPU profilers are your best friend here! Consider looking for synchronisation overhead / lock contention in user code — although adding synchronisation in user code should be avoided and may even be dangerous! Also consider investigating shared system resources. The default JVM’s SSL implementation, for example, can become contended around [...]
 
-#### Load Imbalance
+  &lt;h4 id=&quot;load-imbalance&quot;&gt;Load Imbalance&lt;/h4&gt;
 
-If your bottleneck is caused by data skew, you can try to remove it or mitigate its impact by changing the data partitioning to separate heavy keys or by implementing local/pre-aggregation.
+  &lt;p&gt;If your bottleneck is caused by data skew, you can try to remove it or mitigate its impact by changing the data partitioning to separate heavy keys or by implementing local/pre-aggregation.&lt;/p&gt;
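+
+  &lt;p&gt;One possible mitigation is sketched below: a two-stage aggregation that first “salts” the key to spread a heavy key over several subtasks and pre-aggregates per salted key, and then combines the partial results per original key. The class name, key format, salt range, window size, and demo source are illustrative assumptions only, not a fixed recipe:&lt;/p&gt;
+
+  &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import java.util.concurrent.ThreadLocalRandom;
+
+import org.apache.flink.api.common.typeinfo.Types;
+import org.apache.flink.api.java.tuple.Tuple2;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.windowing.assigners.TumblingProcessingTimeWindows;
+import org.apache.flink.streaming.api.windowing.time.Time;
+
+public class SkewMitigationSketch {
+  public static void main(String[] args) throws Exception {
+    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+    // Stage 1: &amp;quot;salt&amp;quot; the (possibly skewed) key so that records of a heavy key are
+    // spread over several subtasks, then pre-aggregate per salted key.
+    // Stage 2: strip the salt again and combine the partial counts per original key.
+    // A bounded demo source is used here; its processing-time windows may not fire
+    // before the job finishes, whereas a real job would read from an unbounded source.
+    env.fromElements(&amp;quot;hot&amp;quot;, &amp;quot;hot&amp;quot;, &amp;quot;hot&amp;quot;, &amp;quot;cold&amp;quot;)
+       .map(key -&amp;gt; Tuple2.of(key + &amp;quot;#&amp;quot; + ThreadLocalRandom.current().nextInt(4), 1L))
+       .returns(Types.TUPLE(Types.STRING, Types.LONG))
+       .keyBy(0)
+       .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
+       .sum(1)
+       .map(t -&amp;gt; Tuple2.of(t.f0.substring(0, t.f0.indexOf('#')), t.f1))
+       .returns(Types.TUPLE(Types.STRING, Types.LONG))
+       .keyBy(0)
+       .window(TumblingProcessingTimeWindows.of(Time.seconds(5)))
+       .sum(1)
+       .print();
+
+    env.execute(&amp;quot;skew-mitigation-sketch&amp;quot;);
+  }
+}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+  &lt;p&gt;The second stage then only has to combine a handful of partial results per original key and window, so a single heavy key no longer overloads one subtask of the first aggregation.&lt;/p&gt;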
 
-&lt;br&gt;
-This list is far from exhaustive. Generally, in order to reduce a bottleneck and thus backpressure, first analyse where it is happening and then find out why. The best place to start reasoning about the “why” is by checking what resources are fully utilised.
+  &lt;p&gt;&lt;br /&gt;
+This list is far from exhaustive. Generally, in order to reduce a bottleneck and thus backpressure, first analyse where it is happening and then find out why. The best place to start reasoning about the “why” is by checking what resources are fully utilised.&lt;/p&gt;
 &lt;/div&gt;
 
-### Latency Tracking
+&lt;h3 id=&quot;latency-tracking&quot;&gt;Latency Tracking&lt;/h3&gt;
 
-Tracking latencies at the various locations they may occur is a topic of its own. In this section, we will focus on the time records wait inside Flink’s network stack — including the system’s network connections. In low throughput scenarios, these latencies are influenced directly by the output flusher via the buffer timeout parameter or indirectly by any application code latencies. When processing a record takes longer than expected or when (multiple) timers fire at the same time — and  [...]
+&lt;p&gt;Tracking latencies at the various locations where they may occur is a topic of its own. In this section, we will focus on the time that records wait inside Flink’s network stack — including the system’s network connections. In low throughput scenarios, these latencies are influenced directly by the output flusher via the buffer timeout parameter or indirectly by any application code latencies. When processing a record takes longer than expected or when (multiple) timers fire at the same ti [...]
 
-Flink offers some support for [tracking the latency]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring/metrics.html#latency-tracking) of records passing through the system (outside of user code). However, this is disabled by default (see below why!) and must be enabled by setting a latency tracking interval either in Flink’s [configuration via `metrics.latency.interval`]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#metrics-latency-interval) or via [ExecutionConf [...]
+&lt;p&gt;Flink offers some support for &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring/metrics.html#latency-tracking&quot;&gt;tracking the latency&lt;/a&gt; of records passing through the system (outside of user code). However, this is disabled by default (see below why!) and must be enabled by setting a latency tracking interval either in Flink’s &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#metrics-l [...]
 
-* `single`: one histogram for each operator subtask
-* `operator` (default): one histogram for each combination of source task and operator subtask
-* `subtask`: one histogram for each combination of source subtask and operator subtask (quadratic in the parallelism!)
+&lt;ul&gt;
+  &lt;li&gt;&lt;code&gt;single&lt;/code&gt;: one histogram for each operator subtask&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;operator&lt;/code&gt; (default): one histogram for each combination of source task and operator subtask&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;subtask&lt;/code&gt;: one histogram for each combination of source subtask and operator subtask (quadratic in the parallelism!)&lt;/li&gt;
+&lt;/ul&gt;
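+
+&lt;p&gt;For reference, a minimal sketch of enabling latency tracking programmatically is shown below; the class name and the 1000 ms interval are illustrative assumptions only, and the performance warning further down still applies:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+public class LatencyTrackingExample {
+  public static void main(String[] args) throws Exception {
+    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+    // Emit a latency marker from each source subtask every 1000 ms (illustrative value).
+    // This has the same effect as setting metrics.latency.interval in the configuration.
+    env.getConfig().setLatencyTrackingInterval(1000L);
+
+    env.fromElements(1, 2, 3).map(i -&amp;gt; i + 1).print();
+    env.execute(&amp;quot;latency-tracking-example&amp;quot;);
+  }
+}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;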
 
-These metrics are collected through special “latency markers”: each source subtask will periodically emit a special record containing the timestamp of its creation. The latency markers then flow alongside normal records while not overtaking them on the wire or inside a buffer queue. However, _a latency marker does not enter application logic_ and is overtaking records there. Latency markers therefore only measure the waiting time between the user code and not a full “end-to-end” latency. [...]
+&lt;p&gt;These metrics are collected through special “latency markers”: each source subtask will periodically emit a special record containing the timestamp of its creation. The latency markers then flow alongside normal records while not overtaking them on the wire or inside a buffer queue. However, &lt;em&gt;a latency marker does not enter application logic&lt;/em&gt; and may overtake records there. Latency markers therefore only measure the waiting time between the user code and not [...]
 
-Since `LatencyMarkers` sit in network buffers just like normal records, they will also wait for the buffer to be full or flushed due to buffer timeouts. When a channel is on high load, there is no added latency by the network buffering data. However, as soon as one channel is under low load, records and latency markers will experience an expected average delay of at most `buffer_timeout / 2`. This delay will add to each network connection towards a subtask and should be taken into accoun [...]
+&lt;p&gt;Since &lt;code&gt;LatencyMarkers&lt;/code&gt; sit in network buffers just like normal records, they will also wait for the buffer to be full or flushed due to buffer timeouts. When a channel is under high load, buffering data in the network adds no latency. However, as soon as one channel is under low load, records and latency markers will experience an expected average delay of at most &lt;code&gt;buffer_timeout / 2&lt;/code&gt; (for example, about 50 ms with the default buffer timeout of 100 ms). This delay will add to each network conne [...]
 
-By looking at the exposed latency tracking metrics for each subtask, for example at the 95th percentile, you should nevertheless be able to identify subtasks which are adding substantially to the overall source-to-sink latency and continue with optimising there.
+&lt;p&gt;By looking at the exposed latency tracking metrics for each subtask, for example at the 95th percentile, you should nevertheless be able to identify subtasks which are adding substantially to the overall source-to-sink latency and continue with optimising there.&lt;/p&gt;
 
-&lt;div class=&quot;alert alert-info&quot; markdown=&quot;1&quot;&gt;
-&lt;span class=&quot;label label-info&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
-Flink&#39;s latency markers assume that the clocks on all machines in the cluster are in sync. We recommend setting up an automated clock synchronisation service (like NTP) to avoid false latency results.
+&lt;div class=&quot;alert alert-info&quot;&gt;
+  &lt;p&gt;&lt;span class=&quot;label label-info&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
+Flink’s latency markers assume that the clocks on all machines in the cluster are in sync. We recommend setting up an automated clock synchronisation service (like NTP) to avoid false latency results.&lt;/p&gt;
 &lt;/div&gt;
 
-&lt;div class=&quot;alert alert-warning&quot; markdown=&quot;1&quot;&gt;
-&lt;span class=&quot;label label-warning&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Warning&lt;/span&gt;
-Enabling latency metrics can significantly impact the performance of the cluster (in particular for `subtask` granularity) due to the sheer amount of metrics being added as well as the use of histograms which are quite expensive to maintain. It is highly recommended to only use them for debugging purposes.
+&lt;div class=&quot;alert alert-warning&quot;&gt;
+  &lt;p&gt;&lt;span class=&quot;label label-warning&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-warning-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Warning&lt;/span&gt;
+Enabling latency metrics can significantly impact the performance of the cluster (in particular for &lt;code&gt;subtask&lt;/code&gt; granularity) due to the sheer amount of metrics being added as well as the use of histograms which are quite expensive to maintain. It is highly recommended to only use them for debugging purposes.&lt;/p&gt;
 &lt;/div&gt;
 
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
 
-## Conclusion
-
-In the previous sections we discussed how to monitor Flink&#39;s network stack which primarily involves identifying backpressure: where it occurs, where it originates from, and (potentially) why it occurs. This can be executed in two ways: for simple cases and debugging sessions by using the backpressure monitor; for continuous monitoring, more in-depth analysis, and less runtime overhead by using Flink’s task and network stack metrics. Backpressure can be caused by the network layer its [...]
-
-Stay tuned for the third blog post in the series of network stack posts that will focus on tuning techniques and anti-patterns to avoid.
+&lt;p&gt;In the previous sections, we discussed how to monitor Flink’s network stack, which primarily involves identifying backpressure: where it occurs, where it originates from, and (potentially) why it occurs. This can be done in two ways: for simple cases and debugging sessions, by using the backpressure monitor; for continuous monitoring, more in-depth analysis, and less runtime overhead, by using Flink’s task and network stack metrics. Backpressure can be caused by the network laye [...]
 
+&lt;p&gt;Stay tuned for the third blog post in the series of network stack posts that will focus on tuning techniques and anti-patterns to avoid.&lt;/p&gt;
 
 </description>
-<pubDate>Tue, 23 Jul 2019 15:30:00 +0000</pubDate>
+<pubDate>Tue, 23 Jul 2019 17:30:00 +0200</pubDate>
 <link>https://flink.apache.org/2019/07/23/flink-network-stack-2.html</link>
 <guid isPermaLink="true">/2019/07/23/flink-network-stack-2.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.8.1 Released</title>
-<description>The Apache Flink community released the first bugfix version of the Apache Flink 1.8 series.
-
-This release includes more than 40 fixes and minor improvements for Flink 1.8.1. The list below includes a detailed list of all improvements, sub-tasks and bug fixes.
-
-We highly recommend all users to upgrade to Flink 1.8.1.
-
-Updated Maven dependencies:
-
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.8.1&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.8.1&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.8.1&lt;/version&gt;
-&lt;/dependency&gt;
-```
-
-You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
-
-List of resolved issues:
-    
+<description>&lt;p&gt;The Apache Flink community released the first bugfix version of the Apache Flink 1.8 series.&lt;/p&gt;
+
+&lt;p&gt;This release includes more than 40 fixes and minor improvements for Flink 1.8.1. The list below details all improvements, sub-tasks and bug fixes.&lt;/p&gt;
+
+&lt;p&gt;We highly recommend that all users upgrade to Flink 1.8.1.&lt;/p&gt;
+
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.1&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.1&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.8.1&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
+
 &lt;h2&gt;        Sub-task
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10921&#39;&gt;FLINK-10921&lt;/a&gt;] -         Prioritize shard consumers in Kinesis Consumer by event time 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10921&quot;&gt;FLINK-10921&lt;/a&gt;] -         Prioritize shard consumers in Kinesis Consumer by event time 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12617&#39;&gt;FLINK-12617&lt;/a&gt;] -         StandaloneJobClusterEntrypoint should default to random JobID for non-HA setups 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12617&quot;&gt;FLINK-12617&lt;/a&gt;] -         StandaloneJobClusterEntrypoint should default to random JobID for non-HA setups 
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9445&#39;&gt;FLINK-9445&lt;/a&gt;] -         scala-shell uses plain java command
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9445&quot;&gt;FLINK-9445&lt;/a&gt;] -         scala-shell uses plain java command
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10455&#39;&gt;FLINK-10455&lt;/a&gt;] -         Potential Kafka producer leak in case of failures
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10455&quot;&gt;FLINK-10455&lt;/a&gt;] -         Potential Kafka producer leak in case of failures
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10941&#39;&gt;FLINK-10941&lt;/a&gt;] -         Slots prematurely released which still contain unconsumed data 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10941&quot;&gt;FLINK-10941&lt;/a&gt;] -         Slots prematurely released which still contain unconsumed data 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11059&#39;&gt;FLINK-11059&lt;/a&gt;] -         JobMaster may continue using an invalid slot if releasing idle slot meet a timeout
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11059&quot;&gt;FLINK-11059&lt;/a&gt;] -         JobMaster may continue using an invalid slot if releasing idle slot meet a timeout
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11107&#39;&gt;FLINK-11107&lt;/a&gt;] -         Avoid memory stateBackend to create arbitrary folders under HA path when no checkpoint path configured
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11107&quot;&gt;FLINK-11107&lt;/a&gt;] -         Avoid memory stateBackend to create arbitrary folders under HA path when no checkpoint path configured
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11897&#39;&gt;FLINK-11897&lt;/a&gt;] -         ExecutionGraphSuspendTest does not wait for all tasks to be submitted
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11897&quot;&gt;FLINK-11897&lt;/a&gt;] -         ExecutionGraphSuspendTest does not wait for all tasks to be submitted
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11915&#39;&gt;FLINK-11915&lt;/a&gt;] -         DataInputViewStream skip returns wrong value
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11915&quot;&gt;FLINK-11915&lt;/a&gt;] -         DataInputViewStream skip returns wrong value
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11987&#39;&gt;FLINK-11987&lt;/a&gt;] -         Kafka producer occasionally throws NullpointerException
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11987&quot;&gt;FLINK-11987&lt;/a&gt;] -         Kafka producer occasionally throws NullpointerException
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12009&#39;&gt;FLINK-12009&lt;/a&gt;] -         Wrong check message about heartbeat interval for HeartbeatServices
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12009&quot;&gt;FLINK-12009&lt;/a&gt;] -         Wrong check message about heartbeat interval for HeartbeatServices
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12042&#39;&gt;FLINK-12042&lt;/a&gt;] -         RocksDBStateBackend mistakenly uses default filesystem
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12042&quot;&gt;FLINK-12042&lt;/a&gt;] -         RocksDBStateBackend mistakenly uses default filesystem
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12112&#39;&gt;FLINK-12112&lt;/a&gt;] -         AbstractTaskManagerProcessFailureRecoveryTest process output logging does not work properly
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12112&quot;&gt;FLINK-12112&lt;/a&gt;] -         AbstractTaskManagerProcessFailureRecoveryTest process output logging does not work properly
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12132&#39;&gt;FLINK-12132&lt;/a&gt;] -         The example in /docs/ops/deployment/yarn_setup.md should be updated due to the change FLINK-2021
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12132&quot;&gt;FLINK-12132&lt;/a&gt;] -         The example in /docs/ops/deployment/yarn_setup.md should be updated due to the change FLINK-2021
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12184&#39;&gt;FLINK-12184&lt;/a&gt;] -         HistoryServerArchiveFetcher isn&amp;#39;t compatible with old version
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12184&quot;&gt;FLINK-12184&lt;/a&gt;] -         HistoryServerArchiveFetcher isn&amp;#39;t compatible with old version
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12219&#39;&gt;FLINK-12219&lt;/a&gt;] -         Yarn application can&amp;#39;t stop when flink job failed in per-job yarn cluster mode
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12219&quot;&gt;FLINK-12219&lt;/a&gt;] -         Yarn application can&amp;#39;t stop when flink job failed in per-job yarn cluster mode
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12247&#39;&gt;FLINK-12247&lt;/a&gt;] -         fix NPE when writing an archive file to a FileSystem
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12247&quot;&gt;FLINK-12247&lt;/a&gt;] -         fix NPE when writing an archive file to a FileSystem
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12260&#39;&gt;FLINK-12260&lt;/a&gt;] -         Slot allocation failure by taskmanager registration timeout and race
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12260&quot;&gt;FLINK-12260&lt;/a&gt;] -         Slot allocation failure by taskmanager registration timeout and race
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12296&#39;&gt;FLINK-12296&lt;/a&gt;] -         Data loss silently in RocksDBStateBackend when more than one operator(has states) chained in a single task 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12296&quot;&gt;FLINK-12296&lt;/a&gt;] -         Data loss silently in RocksDBStateBackend when more than one operator(has states) chained in a single task 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12297&#39;&gt;FLINK-12297&lt;/a&gt;] -         Make ClosureCleaner recursive
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12297&quot;&gt;FLINK-12297&lt;/a&gt;] -         Make ClosureCleaner recursive
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12301&#39;&gt;FLINK-12301&lt;/a&gt;] -         Scala value classes inside case classes cannot be serialized anymore in Flink 1.8.0
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12301&quot;&gt;FLINK-12301&lt;/a&gt;] -         Scala value classes inside case classes cannot be serialized anymore in Flink 1.8.0
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12342&#39;&gt;FLINK-12342&lt;/a&gt;] -         Yarn Resource Manager Acquires Too Many Containers
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12342&quot;&gt;FLINK-12342&lt;/a&gt;] -         Yarn Resource Manager Acquires Too Many Containers
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12375&#39;&gt;FLINK-12375&lt;/a&gt;] -         flink-container job jar does not have read permissions
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12375&quot;&gt;FLINK-12375&lt;/a&gt;] -         flink-container job jar does not have read permissions
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12416&#39;&gt;FLINK-12416&lt;/a&gt;] -         Docker build script fails on symlink creation ln -s
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12416&quot;&gt;FLINK-12416&lt;/a&gt;] -         Docker build script fails on symlink creation ln -s
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12544&#39;&gt;FLINK-12544&lt;/a&gt;] -         Deadlock while releasing memory and requesting segment concurrent in SpillableSubpartition
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12544&quot;&gt;FLINK-12544&lt;/a&gt;] -         Deadlock while releasing memory and requesting segment concurrent in SpillableSubpartition
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12547&#39;&gt;FLINK-12547&lt;/a&gt;] -         Deadlock when the task thread downloads jars using BlobClient
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12547&quot;&gt;FLINK-12547&lt;/a&gt;] -         Deadlock when the task thread downloads jars using BlobClient
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12646&#39;&gt;FLINK-12646&lt;/a&gt;] -         Use reserved IP as unrouteable IP in RestClientTest
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12646&quot;&gt;FLINK-12646&lt;/a&gt;] -         Use reserved IP as unrouteable IP in RestClientTest
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12688&#39;&gt;FLINK-12688&lt;/a&gt;] -         Make serializer lazy initialization thread safe in StateDescriptor
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12688&quot;&gt;FLINK-12688&lt;/a&gt;] -         Make serializer lazy initialization thread safe in StateDescriptor
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12740&#39;&gt;FLINK-12740&lt;/a&gt;] -         SpillableSubpartitionTest deadlocks on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12740&quot;&gt;FLINK-12740&lt;/a&gt;] -         SpillableSubpartitionTest deadlocks on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12835&#39;&gt;FLINK-12835&lt;/a&gt;] -         Time conversion is wrong in ManualClock
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12835&quot;&gt;FLINK-12835&lt;/a&gt;] -         Time conversion is wrong in ManualClock
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12863&#39;&gt;FLINK-12863&lt;/a&gt;] -         Race condition between slot offerings and AllocatedSlotReport
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12863&quot;&gt;FLINK-12863&lt;/a&gt;] -         Race condition between slot offerings and AllocatedSlotReport
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12865&#39;&gt;FLINK-12865&lt;/a&gt;] -         State inconsistency between RM and TM on the slot status
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12865&quot;&gt;FLINK-12865&lt;/a&gt;] -         State inconsistency between RM and TM on the slot status
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12871&#39;&gt;FLINK-12871&lt;/a&gt;] -         Wrong SSL setup examples in docs
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12871&quot;&gt;FLINK-12871&lt;/a&gt;] -         Wrong SSL setup examples in docs
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12895&#39;&gt;FLINK-12895&lt;/a&gt;] -         TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure failed on travis 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12895&quot;&gt;FLINK-12895&lt;/a&gt;] -         TaskManagerProcessFailureBatchRecoveryITCase.testTaskManagerProcessFailure failed on travis 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12896&#39;&gt;FLINK-12896&lt;/a&gt;] -         TaskCheckpointStatisticDetailsHandler uses wrong value for JobID when archiving
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12896&quot;&gt;FLINK-12896&lt;/a&gt;] -         TaskCheckpointStatisticDetailsHandler uses wrong value for JobID when archiving
 &lt;/li&gt;
 &lt;/ul&gt;
-                
+
 &lt;h2&gt;        Improvement
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11126&#39;&gt;FLINK-11126&lt;/a&gt;] -         Filter out AMRMToken in the TaskManager credentials
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11126&quot;&gt;FLINK-11126&lt;/a&gt;] -         Filter out AMRMToken in the TaskManager credentials
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12137&#39;&gt;FLINK-12137&lt;/a&gt;] -         Add more proper explanation on flink streaming connectors 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12137&quot;&gt;FLINK-12137&lt;/a&gt;] -         Add more proper explanation on flink streaming connectors 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12169&#39;&gt;FLINK-12169&lt;/a&gt;] -         Improve Javadoc of MessageAcknowledgingSourceBase
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12169&quot;&gt;FLINK-12169&lt;/a&gt;] -         Improve Javadoc of MessageAcknowledgingSourceBase
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12378&#39;&gt;FLINK-12378&lt;/a&gt;] -         Consolidate FileSystem Documentation
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12378&quot;&gt;FLINK-12378&lt;/a&gt;] -         Consolidate FileSystem Documentation
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12391&#39;&gt;FLINK-12391&lt;/a&gt;] -         Add timeout to transfer.sh
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12391&quot;&gt;FLINK-12391&lt;/a&gt;] -         Add timeout to transfer.sh
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12539&#39;&gt;FLINK-12539&lt;/a&gt;] -         StreamingFileSink: Make the class extendable to customize for different usecases
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12539&quot;&gt;FLINK-12539&lt;/a&gt;] -         StreamingFileSink: Make the class extendable to customize for different usecases
 &lt;/li&gt;
 &lt;/ul&gt;
-    
+
 &lt;h2&gt;        Test
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12350&#39;&gt;FLINK-12350&lt;/a&gt;] -         RocksDBStateBackendTest doesn&amp;#39;t cover the incremental checkpoint code path
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12350&quot;&gt;FLINK-12350&lt;/a&gt;] -         RocksDBStateBackendTest doesn&amp;#39;t cover the incremental checkpoint code path
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        Task
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-12460&#39;&gt;FLINK-12460&lt;/a&gt;] -         Change taskmanager.tmp.dirs to io.tmp.dirs in configuration docs
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-12460&quot;&gt;FLINK-12460&lt;/a&gt;] -         Change taskmanager.tmp.dirs to io.tmp.dirs in configuration docs
 &lt;/li&gt;
 &lt;/ul&gt;
-                                                                                                                                        </description>
-<pubDate>Tue, 02 Jul 2019 12:00:00 +0000</pubDate>
+
+</description>
+<pubDate>Tue, 02 Jul 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/news/2019/07/02/release-1.8.1.html</link>
 <guid isPermaLink="true">/news/2019/07/02/release-1.8.1.html</guid>
 </item>
 
 <item>
 <title>A Practical Guide to Broadcast State in Apache Flink</title>
-<description>Since version 1.5.0, Apache Flink features a new type of state which is called Broadcast State. In this post, we explain what Broadcast State is, and show an example of how it can be applied to an application that evaluates dynamic patterns on an event stream. We walk you through the processing steps and the source code to implement this application in practice.
+<description>&lt;p&gt;Since version 1.5.0, Apache Flink features a new type of state which is called Broadcast State. In this post, we explain what Broadcast State is, and show an example of how it can be applied to an application that evaluates dynamic patterns on an event stream. We walk you through the processing steps and the source code to implement this application in practice.&lt;/p&gt;
 
-## What is Broadcast State?
+&lt;h2 id=&quot;what-is-broadcast-state&quot;&gt;What is Broadcast State?&lt;/h2&gt;
 
-The Broadcast State can be used to combine and jointly process two streams of events in a specific way. The events of the first stream are broadcasted to all parallel instances of an operator, which maintains them as state. The events of the other stream are not broadcasted but sent to individual instances of the same operator and processed together with the events of the broadcasted stream. 
-The new broadcast state is a natural fit for applications that need to join a low-throughput and a high-throughput stream or need to dynamically update their processing logic. We will use a concrete example of the latter use case to explain the broadcast state and show its API in more detail in the remainder of this post.
+&lt;p&gt;The Broadcast State can be used to combine and jointly process two streams of events in a specific way. The events of the first stream are broadcasted to all parallel instances of an operator, which maintain them as state. The events of the other stream are not broadcasted but sent to individual instances of the same operator and processed together with the events of the broadcasted stream. 
+The new broadcast state is a natural fit for applications that need to join a low-throughput and a high-throughput stream or need to dynamically update their processing logic. We will use a concrete example of the latter use case to explain the broadcast state and show its API in more detail in the remainder of this post.&lt;/p&gt;
 
-## Dynamic Pattern Evaluation with Broadcast State
+&lt;h2 id=&quot;dynamic-pattern-evaluation-with-broadcast-state&quot;&gt;Dynamic Pattern Evaluation with Broadcast State&lt;/h2&gt;
 
-Imagine an e-commerce website that captures the interactions of all users as a stream of user actions. The company that operates the website is interested in analyzing the interactions to increase revenue, improve the user experience, and detect and prevent malicious behavior. 
-The website implements a streaming application that detects a pattern on the stream of user events. However, the company wants to avoid modifying and redeploying the application every time the pattern changes. Instead, the application ingests a second stream of patterns and updates its active pattern when it receives a new pattern from the pattern stream. In the following, we discuss this application step-by-step and show how it leverages the broadcast state feature in Apache Flink.
+&lt;p&gt;Imagine an e-commerce website that captures the interactions of all users as a stream of user actions. The company that operates the website is interested in analyzing the interactions to increase revenue, improve the user experience, and detect and prevent malicious behavior. 
+The website implements a streaming application that detects a pattern on the stream of user events. However, the company wants to avoid modifying and redeploying the application every time the pattern changes. Instead, the application ingests a second stream of patterns and updates its active pattern when it receives a new pattern from the pattern stream. In the following, we discuss this application step-by-step and show how it leverages the broadcast state feature in Apache Flink.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/broadcastState/fig1.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot;/&gt;
+&lt;img src=&quot;/img/blog/broadcastState/fig1.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
-
-Our example application ingests two data streams. The first stream provides user actions on the website and is illustrated on the top left side of the above figure. A user interaction event consists of the type of the action (user login, user logout, add to cart, or complete payment) and the id of the user, which is encoded by color. The user action event stream in our illustration contains a logout action of User 1001 followed by a payment-complete event for User 1003, and an “add-to-ca [...]
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-The second stream provides action patterns that the application will evaluate. A pattern consists of two consecutive actions. In the figure above, the pattern stream contains the following two:
+&lt;p&gt;Our example application ingests two data streams. The first stream provides user actions on the website and is illustrated on the top left side of the above figure. A user interaction event consists of the type of the action (user login, user logout, add to cart, or complete payment) and the id of the user, which is encoded by color. The user action event stream in our illustration contains a logout action of User 1001 followed by a payment-complete event for User 1003, and an “ [...]
 
-* Pattern #1: A user logs in and immediately logs out without browsing additional pages on the e-commerce website. 
-* Pattern #2: A user adds an item to the shopping cart and logs out without completing the purchase.
+&lt;p&gt;The second stream provides action patterns that the application will evaluate. A pattern consists of two consecutive actions. In the figure above, the pattern stream contains the following two:&lt;/p&gt;
 
+&lt;ul&gt;
+  &lt;li&gt;Pattern #1: A user logs in and immediately logs out without browsing additional pages on the e-commerce website.&lt;/li&gt;
+  &lt;li&gt;Pattern #2: A user adds an item to the shopping cart and logs out without completing the purchase.&lt;/li&gt;
+&lt;/ul&gt;
 
-Such patterns help a business in better analyzing user behavior, detecting malicious actions, and improving the website experience. For example, in the case of items being added to a shopping cart with no follow up purchase, the website team can take appropriate actions to understand better the reasons why users don’t complete a purchase and initiate specific programs to improve the website conversion (such as providing discount codes, limited free shipping offers etc.)
+&lt;p&gt;Such patterns help a business better analyze user behavior, detect malicious actions, and improve the website experience. For example, in the case of items being added to a shopping cart with no follow-up purchase, the website team can take appropriate actions to better understand why users don’t complete a purchase and initiate specific programs to improve the website conversion (such as providing discount codes, limited free shipping offers, etc.).&lt;/p&gt;
 
-On the right-hand side, the figure shows three parallel tasks of an operator that ingest the pattern and user action streams, evaluate the patterns on the action stream, and emit pattern matches downstream. For the sake of simplicity, the operator in our example only evaluates a single pattern with exactly two subsequent actions. The currently active pattern is replaced when a new pattern is received from the pattern stream. In principle, the operator could also be implemented to evaluat [...]
+&lt;p&gt;On the right-hand side, the figure shows three parallel tasks of an operator that ingest the pattern and user action streams, evaluate the patterns on the action stream, and emit pattern matches downstream. For the sake of simplicity, the operator in our example only evaluates a single pattern with exactly two subsequent actions. The currently active pattern is replaced when a new pattern is received from the pattern stream. In principle, the operator could also be implemented t [...]
 
-We will describe how the pattern matching application processes the user action and pattern streams.
+&lt;p&gt;We will describe how the pattern matching application processes the user action and pattern streams.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/broadcastState/fig2.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot;/&gt;
+&lt;img src=&quot;/img/blog/broadcastState/fig2.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-First a pattern is sent to the operator. The pattern is broadcasted to all three parallel tasks of the operator. The tasks store the pattern in their broadcast state. Since the broadcast state should only be updated using broadcasted data, the state of all tasks is always expected to be the same.
+&lt;p&gt;First a pattern is sent to the operator. The pattern is broadcasted to all three parallel tasks of the operator. The tasks store the pattern in their broadcast state. Since the broadcast state should only be updated using broadcasted data, the state of all tasks is always expected to be the same.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/broadcastState/fig3.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot;/&gt;
+&lt;img src=&quot;/img/blog/broadcastState/fig3.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Next, the first user actions are partitioned on the user id and shipped to the operator tasks. The partitioning ensures that all actions of the same user are processed by the same task. The figure above shows the state of the application after the first pattern and the first three action events were consumed by the operator tasks.
+&lt;p&gt;Next, the first user actions are partitioned on the user id and shipped to the operator tasks. The partitioning ensures that all actions of the same user are processed by the same task. The figure above shows the state of the application after the first pattern and the first three action events were consumed by the operator tasks.&lt;/p&gt;
 
-When a task receives a new user action, it evaluates the currently active pattern by looking at the user’s latest and previous actions. For each user, the operator stores the previous action in the keyed state. Since the tasks in the figure above only received a single action for each user so far (we just started the application), the pattern does not need to be evaluated. Finally, the previous action in the user’s keyed state is updated to the latest action, to be able to look it up whe [...]
+&lt;p&gt;When a task receives a new user action, it evaluates the currently active pattern by looking at the user’s latest and previous actions. For each user, the operator stores the previous action in the keyed state. Since the tasks in the figure above only received a single action for each user so far (we just started the application), the pattern does not need to be evaluated. Finally, the previous action in the user’s keyed state is updated to the latest action, to be able to look  [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/broadcastState/fig4.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot;/&gt;
+&lt;img src=&quot;/img/blog/broadcastState/fig4.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-After the first three actions are processed, the next event, the logout action of User 1001, is shipped to the task that processes the events of User 1001. When the task receives the actions, it looks up the current pattern from the broadcast state and the previous action of User 1001. Since the pattern matches both actions, the task emits a pattern match event. Finally, the task updates its keyed state by overriding the previous event with the latest action.
+&lt;p&gt;After the first three actions are processed, the next event, the logout action of User 1001, is shipped to the task that processes the events of User 1001. When the task receives the action, it looks up the current pattern from the broadcast state and the previous action of User 1001. Since the pattern matches both actions, the task emits a pattern match event. Finally, the task updates its keyed state by overriding the previous event with the latest action.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/broadcastState/fig5.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot;/&gt;
+&lt;img src=&quot;/img/blog/broadcastState/fig5.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-When a new pattern arrives in the pattern stream, it is broadcasted to all tasks and each task updates its broadcast state by replacing the current pattern with the new one.
+&lt;p&gt;When a new pattern arrives in the pattern stream, it is broadcasted to all tasks and each task updates its broadcast state by replacing the current pattern with the new one.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/broadcastState/fig6.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot;/&gt;
+&lt;img src=&quot;/img/blog/broadcastState/fig6.png&quot; width=&quot;600px&quot; alt=&quot;Broadcast State in Apache Flink.&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
-
-Once the broadcast state is updated with a new pattern, the matching logic continues as before, i.e., user action events are partitioned by key and evaluated by the responsible task.
-
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-## How to Implement an Application with Broadcast State?
+&lt;p&gt;Once the broadcast state is updated with a new pattern, the matching logic continues as before, i.e., user action events are partitioned by key and evaluated by the responsible task.&lt;/p&gt;
 
-Until now, we conceptually discussed the application and explained how it uses broadcast state to evaluate dynamic patterns over event streams. Next, we’ll show how to implement the example application with Flink’s DataStream API and the broadcast state feature.
+&lt;h2 id=&quot;how-to-implement-an-application-with-broadcast-state&quot;&gt;How to Implement an Application with Broadcast State?&lt;/h2&gt;
 
-Let’s start with the input data of the application. We have two data streams, actions, and patterns. At this point, we don’t really care where the streams come from. The streams could be ingested from Apache Kafka or Kinesis or any other system. Action and Pattern are Pojos with two fields each:
+&lt;p&gt;Until now, we conceptually discussed the application and explained how it uses broadcast state to evaluate dynamic patterns over event streams. Next, we’ll show how to implement the example application with Flink’s DataStream API and the broadcast state feature.&lt;/p&gt;
 
-```java
-DataStream&lt;Action&gt; actions = ???
-DataStream&lt;Pattern&gt; patterns = ???
-```
+&lt;p&gt;Let’s start with the input data of the application. We have two data streams: actions and patterns. At this point, we don’t really care where the streams come from. The streams could be ingested from Apache Kafka or Kinesis or any other system:&lt;/p&gt;
 
-`Action` and `Pattern` are Pojos with two fields each:
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;actions&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;???&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;patterns&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;???&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-- `Action: Long userId, String action`
+&lt;p&gt;&lt;code&gt;Action&lt;/code&gt; and &lt;code&gt;Pattern&lt;/code&gt; are POJOs with two fields each (a minimal sketch of both classes follows the field list below):&lt;/p&gt;
 
-- `Pattern: String firstAction, String secondAction`
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;code&gt;Action: Long userId, String action&lt;/code&gt;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;code&gt;Pattern: String firstAction, String secondAction&lt;/code&gt;&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
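+
+&lt;p&gt;Assuming public fields and a no-argument constructor so that Flink recognizes both classes as POJO types, a minimal sketch of the two classes could look as follows (any further details of the original classes are omitted):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Action.java
+public class Action {
+  public Long userId;
+  public String action;
+
+  public Action() {}
+}
+
+// Pattern.java
+public class Pattern {
+  public String firstAction;
+  public String secondAction;
+
+  public Pattern() {}
+}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;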
 
-As a first step, we key the action stream on the `userId` attribute.
+&lt;p&gt;As a first step, we key the action stream on the &lt;code&gt;userId&lt;/code&gt; attribute.&lt;/p&gt;
 
-```java
-KeyedStream&lt;Action, Long&gt; actionsByUser = actions
-  .keyBy((KeySelector&lt;Action, Long&gt;) action -&gt; action.userId);
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;KeyedStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;actionsByUser&lt;/span&gt; &lt;span class=&quot;o&quot;& [...]
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;keyBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;KeySelector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;act [...]
 
-Next, we prepare the broadcast state. Broadcast state is always represented as `MapState`, the most versatile state primitive that Flink provides.
+&lt;p&gt;Next, we prepare the broadcast state. Broadcast state is always represented as &lt;code&gt;MapState&lt;/code&gt;, the most versatile state primitive that Flink provides.&lt;/p&gt;
 
-```java
-MapStateDescriptor&lt;Void, Pattern&gt; bcStateDescriptor = 
-  new MapStateDescriptor&lt;&gt;(&quot;patterns&quot;, Types.VOID, Types.POJO(Pattern.class));
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;MapStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Void&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;bcStateDescriptor&lt;/span&gt; &lt;span class=&q [...]
+  &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;MapStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;patterns&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;VOID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt [...]
 
-Since our application only evaluates and stores a single `Pattern` at a time, we configure the broadcast state as a `MapState` with key type `Void` and value type `Pattern`. The `Pattern` is always stored in the `MapState` with `null` as key.
+&lt;p&gt;Since our application only evaluates and stores a single &lt;code&gt;Pattern&lt;/code&gt; at a time, we configure the broadcast state as a &lt;code&gt;MapState&lt;/code&gt; with key type &lt;code&gt;Void&lt;/code&gt; and value type &lt;code&gt;Pattern&lt;/code&gt;. The &lt;code&gt;Pattern&lt;/code&gt; is always stored in the &lt;code&gt;MapState&lt;/code&gt; with &lt;code&gt;null&lt;/code&gt; as key.&lt;/p&gt;
 
-```java
-BroadcastStream&lt;Pattern&gt; bcedPatterns = patterns.broadcast(bcStateDescriptor);
-```
-Using the `MapStateDescriptor` for the broadcast state, we apply the `broadcast()` transformation on the patterns stream and receive a `BroadcastStream bcedPatterns`.
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;BroadcastStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;bcedPatterns&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;patterns&lt;/span&gt;&lt;span class=&quot; [...]
+&lt;p&gt;Using the &lt;code&gt;MapStateDescriptor&lt;/code&gt; for the broadcast state, we apply the &lt;code&gt;broadcast()&lt;/code&gt; transformation on the patterns stream and receive a &lt;code&gt;BroadcastStream bcedPatterns&lt;/code&gt;.&lt;/p&gt;
 
-```java
-DataStream&lt;Tuple2&lt;Long, Pattern&gt;&gt; matches = actionsByUser
- .connect(bcedPatterns)
- .process(new PatternEvaluator());
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;g [...]
+ &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;connect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;bcedPatterns&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+ &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;process&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;PatternEvaluator&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-After we obtained the keyed `actionsByUser` stream and the broadcasted `bcedPatterns` stream, we `connect()` both streams and apply a `PatternEvaluator` on the connected streams. `PatternEvaluator` is a custom function that implements the `KeyedBroadcastProcessFunction` interface. It applies the pattern matching logic that we discussed before and emits `Tuple2&lt;Long, Pattern&gt;` records which contain the user id and the matched pattern.
+&lt;p&gt;After we obtained the keyed &lt;code&gt;actionsByUser&lt;/code&gt; stream and the broadcasted &lt;code&gt;bcedPatterns&lt;/code&gt; stream, we &lt;code&gt;connect()&lt;/code&gt; both streams and apply a &lt;code&gt;PatternEvaluator&lt;/code&gt; on the connected streams. &lt;code&gt;PatternEvaluator&lt;/code&gt; is a custom function that implements the &lt;code&gt;KeyedBroadcastProcessFunction&lt;/code&gt; interface. It applies the pattern matching logic that we discussed before  [...]
 
-```java
-public static class PatternEvaluator
-    extends KeyedBroadcastProcessFunction&lt;Long, Action, Pattern, Tuple2&lt;Long, Pattern&gt;&gt; {
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;static&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;PatternEvaluator&lt;/span&gt;
+    &lt;span class=&quot;kd&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;KeyedBroadcastProcessFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class [...]
  
-  // handle for keyed state (per user)
-  ValueState&lt;String&gt; prevActionState;
-  // broadcast state descriptor
-  MapStateDescriptor&lt;Void, Pattern&gt; patternDesc;
+  &lt;span class=&quot;c1&quot;&gt;// handle for keyed state (per user)&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;ValueState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prevActionState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;c1&quot;&gt;// broadcast state descriptor&lt;/span&gt;
+  &lt;span class=&quot;n&quot;&gt;MapStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Void&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;patternDesc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
  
-  @Override
-  public void open(Configuration conf) {
-    // initialize keyed state
-    prevActionState = getRuntimeContext().getState(
-      new ValueStateDescriptor&lt;&gt;(&quot;lastAction&quot;, Types.STRING));
-    patternDesc = 
-      new MapStateDescriptor&lt;&gt;(&quot;patterns&quot;, Types.VOID, Types.POJO(Pattern.class));
-  }
-
-  /**
-   * Called for each user action.
-   * Evaluates the current pattern against the previous and
-   * current action of the user.
-   */
-  @Override
-  public void processElement(
-     Action action, 
-     ReadOnlyContext ctx, 
-     Collector&lt;Tuple2&lt;Long, Pattern&gt;&gt; out) throws Exception {
-   // get current pattern from broadcast state
-   Pattern pattern = ctx
-     .getBroadcastState(this.patternDesc)
-     // access MapState with null as VOID default value
-     .get(null);
-   // get previous action of current user from keyed state
-   String prevAction = prevActionState.value();
-   if (pattern != null &amp;&amp; prevAction != null) {
-     // user had an action before, check if pattern matches
-     if (pattern.firstAction.equals(prevAction) &amp;&amp; 
-         pattern.secondAction.equals(action.action)) {
-       // MATCH
-       out.collect(new Tuple2&lt;&gt;(ctx.getCurrentKey(), pattern));
-     }
-   }
-   // update keyed state and remember action for next pattern evaluation
-   prevActionState.update(action.action);
- }
-
- /**
-  * Called for each new pattern.
-  * Overwrites the current pattern with the new pattern.
-  */
- @Override
- public void processBroadcastElement(
-     Pattern pattern, 
-     Context ctx, 
-     Collector&lt;Tuple2&lt;Long, Pattern&gt;&gt; out) throws Exception {
-   // store the new pattern by updating the broadcast state
-   BroadcastState&lt;Void, Pattern&gt; bcState = ctx.getBroadcastState(patternDesc);
-   // storing in MapState with null as VOID default value
-   bcState.put(null, pattern);
- }
-}
-```
-
-The `KeyedBroadcastProcessFunction` interface provides three methods to process records and emit results.
-
-- `processBroadcastElement()` is called for each record of the broadcasted stream. In our `PatternEvaluator` function, we simply put the received `Pattern` record in to the broadcast state using the `null` key (remember, we only store a single pattern in the `MapState`).
-- `processElement()` is called for each record of the keyed stream. It provides read-only access to the broadcast state to prevent modification that result in different broadcast states across the parallel instances of the function. The `processElement()` method of the `PatternEvaluator` retrieves the current pattern from the broadcast state and the previous action of the user from the keyed state. If both are present, it checks whether the previous and current action match with the patt [...]
-- `onTimer()` is called when a previously registered timer fires. Timers can be registered in the `processElement` method and are used to perform computations or to clean up state in the future. We did not implement this method in our example to keep the code concise. However, it could be used to remove the last action of a user when the user was not active for a certain period of time to avoid growing state due to inactive users.
-
-You might have noticed the context objects of the `KeyedBroadcastProcessFunction`’s processing method. The context objects give access to additional functionality such as:
-
-- The broadcast state (read-write or read-only, depending on the method), 
-- A `TimerService`, which gives access to the record’s timestamp, the current watermark, and which can register timers,
-- The current key (only available in `processElement()`), and
-- A method to apply a function the keyed state of each registered key (only available in `processBroadcastElement()`)
-
-The `KeyedBroadcastProcessFunction` has full access to Flink state and time features just like any other ProcessFunction and hence can be used to implement sophisticated application logic. Broadcast state was designed to be a versatile feature that adapts to different scenarios and use cases. Although we only discussed a fairly simple and restricted application, you can use broadcast state in many ways to implement the requirements of your application. 
-
-## Conclusion
-
-In this blog post, we walked you through an example application to explain what Apache Flink’s broadcast state is and how it can be used to evaluate dynamic patterns on event streams. We’ve also discussed the API and showed the source code of our example application. 
-
-We invite you to check the [documentation]({{ site.DOCS_BASE_URL }}flink-docs-stable/dev/stream/state/broadcast_state.html) of this feature and provide feedback or suggestions for further improvements through our [mailing list](http://mail-archives.apache.org/mod_mbox/flink-community/).
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Configuration&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;conf&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;c1&quot;&gt;// initialize keyed state&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;prevActionState&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;getRuntimeContext&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ValueStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;lastAction&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;));&lt;/ [...]
+    &lt;span class=&quot;n&quot;&gt;patternDesc&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;MapStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;patterns&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;VOID&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; [...]
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+
+  &lt;span class=&quot;cm&quot;&gt;/**&lt;/span&gt;
+&lt;span class=&quot;cm&quot;&gt;   * Called for each user action.&lt;/span&gt;
+&lt;span class=&quot;cm&quot;&gt;   * Evaluates the current pattern against the previous and&lt;/span&gt;
+&lt;span class=&quot;cm&quot;&gt;   * current action of the user.&lt;/span&gt;
+&lt;span class=&quot;cm&quot;&gt;   */&lt;/span&gt;
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;processElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+     &lt;span class=&quot;n&quot;&gt;Action&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; 
+     &lt;span class=&quot;n&quot;&gt;ReadOnlyContext&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; 
+     &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&qu [...]
+   &lt;span class=&quot;c1&quot;&gt;// get current pattern from broadcast state&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pattern&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;
+     &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getBroadcastState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;this&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;patternDesc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+     &lt;span class=&quot;c1&quot;&gt;// access MapState with null as VOID default value&lt;/span&gt;
+     &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;get&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+   &lt;span class=&quot;c1&quot;&gt;// get previous action of current user from keyed state&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prevAction&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prevActionState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+   &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;pattern&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prevAction&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;!=&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;o&qu [...]
+     &lt;span class=&quot;c1&quot;&gt;// user had an action before, check if pattern matches&lt;/span&gt;
+     &lt;span class=&quot;k&quot;&gt;if&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;firstAction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;equals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;prevAction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/s [...]
+         &lt;span class=&quot;n&quot;&gt;pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;secondAction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;equals&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)) [...]
+       &lt;span class=&quot;c1&quot;&gt;// MATCH&lt;/span&gt;
+       &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;get [...]
+     &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+   &lt;span class=&quot;c1&quot;&gt;// update keyed state and remember action for next pattern evaluation&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;prevActionState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;update&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;action&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+ &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+
+ &lt;span class=&quot;cm&quot;&gt;/**&lt;/span&gt;
+&lt;span class=&quot;cm&quot;&gt;  * Called for each new pattern.&lt;/span&gt;
+&lt;span class=&quot;cm&quot;&gt;  * Overwrites the current pattern with the new pattern.&lt;/span&gt;
+&lt;span class=&quot;cm&quot;&gt;  */&lt;/span&gt;
+ &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+ &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;processBroadcastElement&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+     &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; 
+     &lt;span class=&quot;n&quot;&gt;Context&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; 
+     &lt;span class=&quot;n&quot;&gt;Collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;out&lt;/span&gt;&lt;span class=&qu [...]
+   &lt;span class=&quot;c1&quot;&gt;// store the new pattern by updating the broadcast state&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;BroadcastState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Void&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;bcState&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ctx&lt;/span&gt;&lt;span class=&quot;o&quot [...]
+   &lt;span class=&quot;c1&quot;&gt;// storing in MapState with null as VOID default value&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;bcState&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;put&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;kc&quot;&gt;null&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pattern&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+ &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The &lt;code&gt;KeyedBroadcastProcessFunction&lt;/code&gt; interface provides three methods to process records and emit results.&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;&lt;code&gt;processBroadcastElement()&lt;/code&gt; is called for each record of the broadcasted stream. In our &lt;code&gt;PatternEvaluator&lt;/code&gt; function, we simply put the received &lt;code&gt;Pattern&lt;/code&gt; record in to the broadcast state using the &lt;code&gt;null&lt;/code&gt; key (remember, we only store a single pattern in the &lt;code&gt;MapState&lt;/code&gt;).&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;processElement()&lt;/code&gt; is called for each record of the keyed stream. It provides read-only access to the broadcast state to prevent modifications that result in different broadcast states across the parallel instances of the function. The &lt;code&gt;processElement()&lt;/code&gt; method of the &lt;code&gt;PatternEvaluator&lt;/code&gt; retrieves the current pattern from the broadcast state and the previous action of the user from the keyed state. If both are [...]
+  &lt;li&gt;&lt;code&gt;onTimer()&lt;/code&gt; is called when a previously registered timer fires. Timers can be registered in the &lt;code&gt;processElement&lt;/code&gt; method and are used to perform computations or to clean up state in the future. We did not implement this method in our example to keep the code concise. However, it could be used to remove the last action of a user when the user was not active for a certain period of time to avoid growing state due to inactive users.&l [...]
+&lt;/ul&gt;
+
+&lt;p&gt;You might have noticed the context objects of the &lt;code&gt;KeyedBroadcastProcessFunction&lt;/code&gt;’s processing method. The context objects give access to additional functionality such as:&lt;/p&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;The broadcast state (read-write or read-only, depending on the method),&lt;/li&gt;
+  &lt;li&gt;A &lt;code&gt;TimerService&lt;/code&gt;, which gives access to the record’s timestamp, the current watermark, and which can register timers,&lt;/li&gt;
+  &lt;li&gt;The current key (only available in &lt;code&gt;processElement()&lt;/code&gt;), and&lt;/li&gt;
+  &lt;li&gt;A method to apply a function to the keyed state of each registered key (only available in &lt;code&gt;processBroadcastElement()&lt;/code&gt;)&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;p&gt;The &lt;code&gt;KeyedBroadcastProcessFunction&lt;/code&gt; has full access to Flink state and time features just like any other ProcessFunction and hence can be used to implement sophisticated application logic. Broadcast state was designed to be a versatile feature that adapts to different scenarios and use cases. Although we only discussed a fairly simple and restricted application, you can use broadcast state in many ways to implement the requirements of your application.&lt;/p&gt;
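+
+&lt;p&gt;To illustrate the last point with a concrete (if simplified) sketch that is not part of the original example: the &lt;code&gt;PatternEvaluator&lt;/code&gt; could register a timer in &lt;code&gt;processElement()&lt;/code&gt; and clear the keyed state of inactive users in &lt;code&gt;onTimer()&lt;/code&gt;. The one-hour interval below is an arbitrary choice for illustration.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;// Hypothetical additions to the PatternEvaluator above (simplified sketch).
+// In processElement(), (re-)register a cleanup timer one hour after the current event time:
+ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60 * 60 * 1000);
+
+// ... and clear the keyed state for this user once the timer fires:
+@Override
+public void onTimer(
+    long timestamp,
+    OnTimerContext ctx,
+    Collector&amp;lt;Tuple2&amp;lt;Long, Pattern&amp;gt;&amp;gt; out) throws Exception {
+  // the user produced no further action since the timer was registered
+  prevActionState.clear();
+}
+// Note: exact inactivity semantics would also require deleting previously
+// registered timers; this sketch omits that for brevity.
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;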
+
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
+
+&lt;p&gt;In this blog post, we walked you through an example application to explain what Apache Flink’s broadcast state is and how it can be used to evaluate dynamic patterns on event streams. We’ve also discussed the API and showed the source code of our example application.&lt;/p&gt;
+
+&lt;p&gt;We invite you to check the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/broadcast_state.html&quot;&gt;documentation&lt;/a&gt; of this feature and provide feedback or suggestions for further improvements through our &lt;a href=&quot;http://mail-archives.apache.org/mod_mbox/flink-community/&quot;&gt;mailing list&lt;/a&gt;.&lt;/p&gt;
 </description>
-<pubDate>Wed, 26 Jun 2019 12:00:00 +0000</pubDate>
+<pubDate>Wed, 26 Jun 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/2019/06/26/broadcast-state.html</link>
 <guid isPermaLink="true">/2019/06/26/broadcast-state.html</guid>
 </item>
@@ -3842,51 +3948,81 @@ We invite you to check the [documentation]({{ site.DOCS_BASE_URL }}flink-docs-st
 .tg .tg-center{text-align:center;vertical-align:center}
 &lt;/style&gt;
 
-Flink’s network stack is one of the core components that make up the `flink-runtime` module and sit at the heart of every Flink job. It connects individual work units (subtasks) from all TaskManagers. This is where your streamed-in data flows through and it is therefore crucial to the performance of your Flink job for both the throughput as well as latency you observe. In contrast to the coordination channels between TaskManagers and JobManagers which are using RPCs via Akka, the network [...]
-
-This blog post is the first in a series of posts about the network stack. In the sections below, we will first have a high-level look at what abstractions are exposed to the stream operators and then go into detail on the physical implementation and various optimisations Flink did. We will briefly present the result of these optimisations and Flink’s trade-off between throughput and latency. Future blog posts in this series will elaborate more on monitoring and metrics, tuning parameters [...]
+&lt;p&gt;Flink’s network stack is one of the core components that make up the &lt;code&gt;flink-runtime&lt;/code&gt; module and sit at the heart of every Flink job. It connects individual work units (subtasks) from all TaskManagers. This is where your streamed-in data flows through and it is therefore crucial to the performance of your Flink job for both the throughput as well as latency you observe. In contrast to the coordination channels between TaskManagers and JobManagers which are  [...]
+
+&lt;p&gt;This blog post is the first in a series of posts about the network stack. In the sections below, we will first have a high-level look at what abstractions are exposed to the stream operators and then go into detail on the physical implementation and various optimisations Flink did. We will briefly present the result of these optimisations and Flink’s trade-off between throughput and latency. Future blog posts in this series will elaborate more on monitoring and metrics, tuning p [...]
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#logical-view&quot; id=&quot;markdown-toc-logical-view&quot;&gt;Logical View&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#physical-transport&quot; id=&quot;markdown-toc-physical-transport&quot;&gt;Physical Transport&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#inflicting-backpressure-1&quot; id=&quot;markdown-toc-inflicting-backpressure-1&quot;&gt;Inflicting Backpressure (1)&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#credit-based-flow-control&quot; id=&quot;markdown-toc-credit-based-flow-control&quot;&gt;Credit-based Flow Control&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#inflicting-backpressure-2&quot; id=&quot;markdown-toc-inflicting-backpressure-2&quot;&gt;Inflicting Backpressure (2)&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#what-do-we-gain-where-is-the-catch&quot; id=&quot;markdown-toc-what-do-we-gain-where-is-the-catch&quot;&gt;What do we Gain? Where is the Catch?&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#writing-records-into-network-buffers-and-reading-them-again&quot; id=&quot;markdown-toc-writing-records-into-network-buffers-and-reading-them-again&quot;&gt;Writing Records into Network Buffers and Reading them again&lt;/a&gt;    &lt;ul&gt;
+      &lt;li&gt;&lt;a href=&quot;#flushing-buffers-to-netty&quot; id=&quot;markdown-toc-flushing-buffers-to-netty&quot;&gt;Flushing Buffers to Netty&lt;/a&gt;&lt;/li&gt;
+      &lt;li&gt;&lt;a href=&quot;#buffer-builder--buffer-consumer&quot; id=&quot;markdown-toc-buffer-builder--buffer-consumer&quot;&gt;Buffer Builder &amp;amp; Buffer Consumer&lt;/a&gt;&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#latency-vs-throughput&quot; id=&quot;markdown-toc-latency-vs-throughput&quot;&gt;Latency vs. Throughput&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#conclusion&quot; id=&quot;markdown-toc-conclusion&quot;&gt;Conclusion&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
-{% toc %}
+&lt;/div&gt;
 
-## Logical View
+&lt;h2 id=&quot;logical-view&quot;&gt;Logical View&lt;/h2&gt;
 
-Flink’s network stack provides the following logical view to the subtasks when communicating with each other, for example during a network shuffle as required by a `keyBy()`.
+&lt;p&gt;Flink’s network stack provides the following logical view to the subtasks when communicating with each other, for example during a network shuffle as required by a &lt;code&gt;keyBy()&lt;/code&gt;.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack1.png&quot; width=&quot;400px&quot; alt=&quot;Logical View on Flink&#39;s Network Stack&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack1.png&quot; width=&quot;400px&quot; alt=&quot;Logical View on Flink&#39;s Network Stack&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
-
-It abstracts over the different settings of the following three concepts:
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-* Subtask output type (`ResultPartitionType`):
-    * **pipelined (bounded or unbounded):**
-    Sending data downstream as soon as it is produced, potentially one-by-one, either as a bounded or unbounded stream of records.
-    * **blocking:**
-    Sending data downstream only when the full result was produced.
+&lt;p&gt;It abstracts over the different settings of the following three concepts:&lt;/p&gt;
 
-* Scheduling type:
-    * **all at once (eager):**
-    Deploy all subtasks of the job at the same time (for streaming applications).
-    * **next stage on first output (lazy):**
-    Deploy downstream tasks as soon as any of their producers generated output.
-    * **next stage on complete output:**
-    Deploy downstream tasks when any or all of their producers have generated their full output set.
-
-* Transport:
-    * **high throughput:**
-    Instead of sending each record one-by-one, Flink buffers a bunch of records into its network buffers and sends them altogether. This reduces the costs per record and leads to higher throughput.
-    * **low latency via buffer timeout:**
-    By reducing the timeout of sending an incompletely filled buffer, you may sacrifice throughput for latency.
+&lt;ul&gt;
+  &lt;li&gt;Subtask output type (&lt;code&gt;ResultPartitionType&lt;/code&gt;):
+    &lt;ul&gt;
+      &lt;li&gt;&lt;strong&gt;pipelined (bounded or unbounded):&lt;/strong&gt;
+  Sending data downstream as soon as it is produced, potentially one-by-one, either as a bounded or unbounded stream of records.&lt;/li&gt;
+      &lt;li&gt;&lt;strong&gt;blocking:&lt;/strong&gt;
+  Sending data downstream only when the full result was produced.&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;Scheduling type:
+    &lt;ul&gt;
+      &lt;li&gt;&lt;strong&gt;all at once (eager):&lt;/strong&gt;
+  Deploy all subtasks of the job at the same time (for streaming applications).&lt;/li&gt;
+      &lt;li&gt;&lt;strong&gt;next stage on first output (lazy):&lt;/strong&gt;
+  Deploy downstream tasks as soon as any of their producers generated output.&lt;/li&gt;
+      &lt;li&gt;&lt;strong&gt;next stage on complete output:&lt;/strong&gt;
+  Deploy downstream tasks when any or all of their producers have generated their full output set.&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+  &lt;li&gt;Transport:
+    &lt;ul&gt;
+      &lt;li&gt;&lt;strong&gt;high throughput:&lt;/strong&gt;
+  Instead of sending each record one-by-one, Flink buffers a bunch of records into its network buffers and sends them altogether. This reduces the costs per record and leads to higher throughput.&lt;/li&gt;
+      &lt;li&gt;&lt;strong&gt;low latency via buffer timeout:&lt;/strong&gt;
+  By reducing the timeout of sending an incompletely filled buffer, you may sacrifice throughput for latency.&lt;/li&gt;
+    &lt;/ul&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
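+
+&lt;p&gt;As a small illustration of the latency/throughput trade-off listed above (this snippet is not part of the original post), the buffer timeout can be tuned per job on the &lt;code&gt;StreamExecutionEnvironment&lt;/code&gt;:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;// Sketch: trading throughput for latency via the buffer timeout.
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+env.setBufferTimeout(10);   // flush incompletely filled buffers every 10 ms (default: 100 ms)
+// env.setBufferTimeout(-1); // flush only full buffers, maximising throughput
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;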
 
-We will have a look at the throughput and low latency optimisations in the sections below which look at the physical layers of the network stack. For this part, let us elaborate a bit more on the output and scheduling types. First of all, it is important to know that the subtask output type and the scheduling type are closely intertwined making only specific combinations of the two valid.
+&lt;p&gt;We will have a look at the throughput and low latency optimisations in the sections below which look at the physical layers of the network stack. For this part, let us elaborate a bit more on the output and scheduling types. First of all, it is important to know that the subtask output type and the scheduling type are closely intertwined making only specific combinations of the two valid.&lt;/p&gt;
 
-Pipelined result partitions are streaming-style outputs which need a live target subtask to send data to. The target can be scheduled before results are produced or at first output. Batch jobs produce bounded result partitions while streaming jobs produce unbounded results.
+&lt;p&gt;Pipelined result partitions are streaming-style outputs which need a live target subtask to send data to. The target can be scheduled before results are produced or at first output. Batch jobs produce bounded result partitions while streaming jobs produce unbounded results.&lt;/p&gt;
 
-Batch jobs may also produce results in a blocking fashion, depending on the operator and connection pattern that is used. In that case, the complete result must be produced first before the receiving task can be scheduled. This allows batch jobs to work more efficiently and with lower resource usage.
+&lt;p&gt;Batch jobs may also produce results in a blocking fashion, depending on the operator and connection pattern that is used. In that case, the complete result must be produced first before the receiving task can be scheduled. This allows batch jobs to work more efficiently and with lower resource usage.&lt;/p&gt;
 
-The following table summarises the valid combinations:
-&lt;br&gt;
+&lt;p&gt;The following table summarises the valid combinations:
+&lt;br /&gt;&lt;/p&gt;
 &lt;center&gt;
 &lt;table class=&quot;tg&quot;&gt;
   &lt;tr&gt;
@@ -3919,24 +4055,22 @@ The following table summarises the valid combinations:
   &lt;/tr&gt;
 &lt;/table&gt;
 &lt;/center&gt;
-&lt;br&gt;
-
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-&lt;sup&gt;1&lt;/sup&gt; Currently not used by Flink. &lt;br&gt;
-&lt;sup&gt;2&lt;/sup&gt; This may become applicable to streaming jobs once the [Batch/Streaming unification]({{ site.baseurl }}/roadmap.html#batch-and-streaming-unification) is done.
+&lt;p&gt;&lt;sup&gt;1&lt;/sup&gt; Currently not used by Flink. &lt;br /&gt;
+&lt;sup&gt;2&lt;/sup&gt; This may become applicable to streaming jobs once the &lt;a href=&quot;/roadmap.html#batch-and-streaming-unification&quot;&gt;Batch/Streaming unification&lt;/a&gt; is done.&lt;/p&gt;
 
+&lt;p&gt;&lt;br /&gt;
+Additionally, for subtasks with more than one input, scheduling starts in one of two ways: after &lt;em&gt;all&lt;/em&gt; or after &lt;em&gt;any&lt;/em&gt; of the input producers have produced a record/their complete dataset. For tuning the output types and scheduling decisions in batch jobs, please have a look at &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/api/common/ExecutionConfig.html#setExecutionMode-org.apache.flink.api.common.Executio [...]
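+
+&lt;p&gt;As a minimal sketch (not taken from the original post), the execution mode of a DataSet program can be switched as follows:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;// Sketch: selecting the data exchange mode for a DataSet (batch) program.
+ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();
+env.getConfig().setExecutionMode(ExecutionMode.BATCH); // default: ExecutionMode.PIPELINED
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;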
 
-&lt;br&gt;
-Additionally, for subtasks with more than one input, scheduling start in two ways: after *all* or after *any* input producers to have produced a record/their complete dataset. For tuning the output types and scheduling decisions in batch jobs, please have a look at [ExecutionConfig#setExecutionMode()]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/api/common/ExecutionConfig.html#setExecutionMode-org.apache.flink.api.common.ExecutionMode-) - and [ExecutionMode]({ [...]
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-&lt;br&gt;
+&lt;h2 id=&quot;physical-transport&quot;&gt;Physical Transport&lt;/h2&gt;
 
-## Physical Transport
+&lt;p&gt;In order to understand the physical data connections, please recall that, in Flink, different tasks may share the same slot via &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/operators/#task-chaining-and-resource-groups&quot;&gt;slot sharing groups&lt;/a&gt;. TaskManagers may also provide more than one slot to allow multiple subtasks of the same task to be scheduled onto the same TaskManager.&lt;/p&gt;
 
-In order to understand the physical data connections, please recall that, in Flink, different tasks may share the same slot via [slot sharing groups]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/stream/operators/#task-chaining-and-resource-groups). TaskManagers may also provide more than one slot to allow multiple subtasks of the same task to be scheduled onto the same TaskManager.
-
-For the example pictured below, we will assume a parallelism of 4 and a deployment with two task managers offering 2 slots each. TaskManager 1 executes subtasks A.1, A.2, B.1, and B.2 and TaskManager 2 executes subtasks A.3, A.4, B.3, and B.4. In a shuffle-type connection between task A and task B, for example from a `keyBy()`, there are 2x4 logical connections to handle on each TaskManager, some of which are local, some remote:
-&lt;br&gt;
+&lt;p&gt;For the example pictured below, we will assume a parallelism of 4 and a deployment with two task managers offering 2 slots each. TaskManager 1 executes subtasks A.1, A.2, B.1, and B.2 and TaskManager 2 executes subtasks A.3, A.4, B.3, and B.4. In a shuffle-type connection between task A and task B, for example from a &lt;code&gt;keyBy()&lt;/code&gt;, there are 2x4 logical connections to handle on each TaskManager, some of which are local, some remote:
+&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
 &lt;table class=&quot;tg&quot;&gt;
@@ -3966,69 +4100,70 @@ For the example pictured below, we will assume a parallelism of 4 and a deployme
 &lt;/table&gt;
 &lt;/center&gt;
 
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Each (remote) network connection between different tasks will get its own TCP channel in Flink’s network stack. However, if different subtasks of the same task are scheduled onto the same TaskManager, their network connections towards the same TaskManagers will be multiplexed and share a single TCP channel for reduced resource usage. In our example, this would apply to A.1 → B.3, A.1 → B.4, as well as A.2 → B.3, and A.2 → B.4 as pictured below:
-&lt;br&gt;
+&lt;p&gt;Each (remote) network connection between different tasks will get its own TCP channel in Flink’s network stack. However, if different subtasks of the same task are scheduled onto the same TaskManager, their network connections towards the same TaskManagers will be multiplexed and share a single TCP channel for reduced resource usage. In our example, this would apply to A.1 → B.3, A.1 → B.4, as well as A.2 → B.3, and A.2 → B.4 as pictured below:
+&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack2.png&quot; width=&quot;700px&quot; alt=&quot;Physical-transport-Flink&#39;s Network Stack&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack2.png&quot; width=&quot;700px&quot; alt=&quot;Physical-transport-Flink&#39;s Network Stack&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-The results of each subtask are called [ResultPartition]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/partition/ResultPartition.html), each split into separate [ResultSubpartitions]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/partition/ResultSubpartition.html) — one for each logical channel. At this point in the stack, Flink is not dealing with individual records anymore but instead with a grou [...]
+&lt;p&gt;The results of each subtask are called &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/partition/ResultPartition.html&quot;&gt;ResultPartition&lt;/a&gt;, each split into separate &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/partition/ResultSubpartition.html&quot;&gt;ResultSubpartitions&lt;/a&gt; — one for each logical channel. At  [...]
 
-    #channels * buffers-per-channel + floating-buffers-per-gate
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;#channels * buffers-per-channel + floating-buffers-per-gate
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-The total number of buffers on a single TaskManager usually does not need configuration. See the [Configuring the Network Buffers]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#configuring-the-network-buffers) documentation for details on how to do so if needed.
+&lt;p&gt;The total number of buffers on a single TaskManager usually does not need configuration. See the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#configuring-the-network-buffers&quot;&gt;Configuring the Network Buffers&lt;/a&gt; documentation for details on how to do so if needed.&lt;/p&gt;
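+
+&lt;p&gt;As a rough worked example, assuming the default values of 2 exclusive buffers per channel and 8 floating buffers per gate, a receiving subtask of the 4-way shuffle pictured above needs at most:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;4 channels * 2 buffers-per-channel + 8 floating-buffers-per-gate = 16 buffers per input gate
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;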
 
-### Inflicting Backpressure (1)
+&lt;h3 id=&quot;inflicting-backpressure-1&quot;&gt;Inflicting Backpressure (1)&lt;/h3&gt;
 
-Whenever a subtask’s sending buffer pool is exhausted — buffers reside in either a result subpartition&#39;s buffer queue or inside the lower, Netty-backed network stack — the producer is blocked, cannot continue, and experiences backpressure. The receiver works in a similar fashion: any incoming Netty buffer in the lower network stack needs to be made available to Flink via a network buffer. If there is no network buffer available in the appropriate subtask&#39;s buffer pool, Flink will [...]
+&lt;p&gt;Whenever a subtask’s sending buffer pool is exhausted — buffers reside in either a result subpartition’s buffer queue or inside the lower, Netty-backed network stack — the producer is blocked, cannot continue, and experiences backpressure. The receiver works in a similar fashion: any incoming Netty buffer in the lower network stack needs to be made available to Flink via a network buffer. If there is no network buffer available in the appropriate subtask’s buffer pool, Flink wil [...]
 
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack3.png&quot; width=&quot;700px&quot; alt=&quot;Physical-transport-backpressure-Flink&#39;s Network Stack&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack3.png&quot; width=&quot;700px&quot; alt=&quot;Physical-transport-backpressure-Flink&#39;s Network Stack&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-To prevent this situation from even happening, Flink 1.5 introduced its own flow control mechanism.
+&lt;p&gt;To prevent this situation from even happening, Flink 1.5 introduced its own flow control mechanism.&lt;/p&gt;
 
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-## Credit-based Flow Control
+&lt;h2 id=&quot;credit-based-flow-control&quot;&gt;Credit-based Flow Control&lt;/h2&gt;
 
-Credit-based flow control makes sure that whatever is “on the wire” will have capacity at the receiver to handle. It is based on the availability of network buffers as a natural extension of the mechanisms Flink had before. Instead of only having a shared local buffer pool, each remote input channel now has its own set of **exclusive buffers**. Conversely, buffers in the local buffer pool are called **floating buffers** as they will float around and are available to every input channel.
+&lt;p&gt;Credit-based flow control makes sure that the receiver has capacity to handle whatever is “on the wire”. It is based on the availability of network buffers as a natural extension of the mechanisms Flink had before. Instead of only having a shared local buffer pool, each remote input channel now has its own set of &lt;strong&gt;exclusive buffers&lt;/strong&gt;. Conversely, buffers in the local buffer pool are called &lt;strong&gt;floating buffers&lt;/strong&gt; as they w [...]
 
-Receivers will announce the availability of buffers as **credits** to the sender (1 buffer = 1 credit). Each result subpartition will keep track of its **channel credits**. Buffers are only forwarded to the lower network stack if credit is available and each sent buffer reduces the credit score by one. In addition to the buffers, we also send information about the current **backlog** size which specifies how many buffers are waiting in this subpartition’s queue. The receiver will use thi [...]
-&lt;br&gt;
+&lt;p&gt;Receivers will announce the availability of buffers as &lt;strong&gt;credits&lt;/strong&gt; to the sender (1 buffer = 1 credit). Each result subpartition will keep track of its &lt;strong&gt;channel credits&lt;/strong&gt;. Buffers are only forwarded to the lower network stack if credit is available and each sent buffer reduces the credit score by one. In addition to the buffers, we also send information about the current &lt;strong&gt;backlog&lt;/strong&gt; size which specifies  [...]
+&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack4.png&quot; width=&quot;700px&quot; alt=&quot;Physical-transport-credit-flow-Flink&#39;s Network Stack&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack4.png&quot; width=&quot;700px&quot; alt=&quot;Physical-transport-credit-flow-Flink&#39;s Network Stack&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Credit-based flow control will use [buffers-per-channel]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-buffers-per-channel) to specify how many buffers are exclusive (mandatory) and [floating-buffers-per-gate]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-floating-buffers-per-gate) for the local buffer pool (optional&lt;sup&gt;3&lt;/sup&gt;) thus achieving the same buffer limit as without flow control [...]
-&lt;br&gt;
+&lt;p&gt;Credit-based flow control will use &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-buffers-per-channel&quot;&gt;buffers-per-channel&lt;/a&gt; to specify how many buffers are exclusive (mandatory) and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#taskmanager-network-memory-floating-buffers-per-gate&quot;&gt;floating-buffers-per-gate&lt;/a&gt; for the local buffer [...]
+&lt;br /&gt;&lt;/p&gt;
 
-&lt;sup&gt;3&lt;/sup&gt;If there are not enough buffers available, each buffer pool will get the same share of the globally available ones (± 1).
+&lt;p&gt;&lt;sup&gt;3&lt;/sup&gt;If there are not enough buffers available, each buffer pool will get the same share of the globally available ones (± 1).&lt;/p&gt;
 
-### Inflicting Backpressure (2)
+&lt;h3 id=&quot;inflicting-backpressure-2&quot;&gt;Inflicting Backpressure (2)&lt;/h3&gt;
 
-As opposed to the receiver&#39;s backpressure mechanisms without flow control, credits provide a more direct control: If a receiver cannot keep up, its available credits will eventually hit 0 and stop the sender from forwarding buffers to the lower network stack. There is backpressure on this logical channel only and there is no need to block reading from a multiplexed TCP channel. Other receivers are therefore not affected in processing available buffers.
+&lt;p&gt;As opposed to the receiver’s backpressure mechanisms without flow control, credits provide a more direct control: If a receiver cannot keep up, its available credits will eventually hit 0 and stop the sender from forwarding buffers to the lower network stack. There is backpressure on this logical channel only and there is no need to block reading from a multiplexed TCP channel. Other receivers are therefore not affected in processing available buffers.&lt;/p&gt;
 
-### What do we Gain? Where is the Catch?
+&lt;h3 id=&quot;what-do-we-gain-where-is-the-catch&quot;&gt;What do we Gain? Where is the Catch?&lt;/h3&gt;
 
-&lt;img align=&quot;right&quot; src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack5.png&quot; width=&quot;300&quot; height=&quot;200&quot; alt=&quot;Physical-transport-credit-flow-checkpoints-Flink&#39;s Network Stack&quot;/&gt;
+&lt;p&gt;&lt;img align=&quot;right&quot; src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack5.png&quot; width=&quot;300&quot; height=&quot;200&quot; alt=&quot;Physical-transport-credit-flow-checkpoints-Flink&#39;s Network Stack&quot; /&gt;&lt;/p&gt;
 
-Since, with flow control, a channel in a multiplex cannot block another of its logical channels, the overall resource utilisation should increase. In addition, by having full control over how much data is “on the wire”, we are also able to improve [checkpoint alignments]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/internals/stream_checkpointing.html#checkpointing): without flow control, it would take a while for the channel to fill the network stack’s internal buffers and propagate th [...]
+&lt;p&gt;Since, with flow control, a channel in a multiplex cannot block another of its logical channels, the overall resource utilisation should increase. In addition, by having full control over how much data is “on the wire”, we are also able to improve &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/internals/stream_checkpointing.html#checkpointing&quot;&gt;checkpoint alignments&lt;/a&gt;: without flow control, it would take a while for the channel to fil [...]
 
-However, the additional announce messages from the receiver may come at some additional costs, especially in setup using SSL-encrypted channels. Also, a single input channel cannot make use of all buffers in the buffer pool because exclusive buffers are not shared. It can also not start right away with sending as much data as is available so that during ramp-up (if you are producing data faster than announcing credits in return) it may take longer to send data through. While this may aff [...]
+&lt;p&gt;However, the additional announce messages from the receiver may come at some additional costs, especially in setups using SSL-encrypted channels. Also, a single input channel cannot make use of all buffers in the buffer pool because exclusive buffers are not shared. It also cannot start right away with sending as much data as is available so that during ramp-up (if you are producing data faster than announcing credits in return) it may take longer to send data through. While thi [...]
 
-There is one more thing you may notice when using credit-based flow control: since we buffer less data between the sender and receiver, you may experience backpressure earlier. This is, however, desired and you do not really get any advantage by buffering more data. If you want to buffer more but keep flow control, you could consider increasing the number of floating buffers via [floating-buffers-per-gate]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/config.html#taskmanager-network [...]
+&lt;p&gt;There is one more thing you may notice when using credit-based flow control: since we buffer less data between the sender and receiver, you may experience backpressure earlier. This is, however, desired and you do not really get any advantage by buffering more data. If you want to buffer more but keep flow control, you could consider increasing the number of floating buffers via &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/config.html#taskmana [...]
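+
+&lt;p&gt;A minimal sketch of such a change in &lt;code&gt;flink-conf.yaml&lt;/code&gt; could look as follows (the value of 16 is only an example; the default is 8):&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;# allow more in-flight data per input gate while keeping flow control
+taskmanager.network.memory.floating-buffers-per-gate: 16
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;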
 
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
 &lt;table class=&quot;tg&quot;&gt;
@@ -4038,12 +4173,12 @@ There is one more thing you may notice when using credit-based flow control: sin
   &lt;/tr&gt;
   &lt;tr&gt;
     &lt;td class=&quot;tg-top&quot;&gt;
-    • better resource utilisation with data skew in multiplexed connections &lt;br&gt;&lt;br&gt;
-    • improved checkpoint alignment&lt;br&gt;&lt;br&gt;
+    • better resource utilisation with data skew in multiplexed connections &lt;br /&gt;&lt;br /&gt;
+    • improved checkpoint alignment&lt;br /&gt;&lt;br /&gt;
     • reduced memory use (less data in lower network layers)&lt;/td&gt;
     &lt;td class=&quot;tg-top&quot;&gt;
-    • additional credit-announce messages&lt;br&gt;&lt;br&gt;
-    • additional backlog-announce messages (piggy-backed with buffer messages, almost no overhead)&lt;br&gt;&lt;br&gt;
+    • additional credit-announce messages&lt;br /&gt;&lt;br /&gt;
+    • additional backlog-announce messages (piggy-backed with buffer messages, almost no overhead)&lt;br /&gt;&lt;br /&gt;
     • potential round-trip latency&lt;/td&gt;
   &lt;/tr&gt;
   &lt;tr&gt;
@@ -4051,967 +4186,1032 @@ There is one more thing you may notice when using credit-based flow control: sin
   &lt;/tr&gt;
 &lt;/table&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-&lt;div class=&quot;alert alert-info&quot; markdown=&quot;1&quot;&gt;
-&lt;span class=&quot;label label-info&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
-If you need to turn off credit-based flow control, you can add this to your `flink-conf.yaml`:
+&lt;div class=&quot;alert alert-info&quot;&gt;
+  &lt;p&gt;&lt;span class=&quot;label label-info&quot; style=&quot;display: inline-block&quot;&gt;&lt;span class=&quot;glyphicon glyphicon-info-sign&quot; aria-hidden=&quot;true&quot;&gt;&lt;/span&gt; Note&lt;/span&gt;
+If you need to turn off credit-based flow control, you can add this to your &lt;code&gt;flink-conf.yaml&lt;/code&gt;:&lt;/p&gt;
 
-`taskmanager.network.credit-model: false`
+  &lt;p&gt;&lt;code&gt;taskmanager.network.credit-model: false&lt;/code&gt;&lt;/p&gt;
 
-This parameter, however, is deprecated and will eventually be removed along with the non-credit-based flow control code.
+  &lt;p&gt;This parameter, however, is deprecated and will eventually be removed along with the non-credit-based flow control code.&lt;/p&gt;
 &lt;/div&gt;
 
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-## Writing Records into Network Buffers and Reading them again
+&lt;h2 id=&quot;writing-records-into-network-buffers-and-reading-them-again&quot;&gt;Writing Records into Network Buffers and Reading them again&lt;/h2&gt;
 
-The following picture extends the slightly more high-level view from above with further details of the network stack and its surrounding components, from the collection of a record in your sending operator to the receiving operator getting it:
-&lt;br&gt;
+&lt;p&gt;The following picture extends the slightly more high-level view from above with further details of the network stack and its surrounding components, from the collection of a record in your sending operator to the receiving operator getting it:
+&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack6.png&quot; width=&quot;700px&quot; alt=&quot;Physical-transport-complete-Flink&#39;s Network Stack&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack6.png&quot; width=&quot;700px&quot; alt=&quot;Physical-transport-complete-Flink&#39;s Network Stack&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-After creating a record and passing it along, for example via `Collector#collect()`, it is given to the [RecordWriter]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.html) which serialises the record from a Java object into a sequence of bytes which eventually ends up in a network buffer that is handed along as described above. The RecordWriter first serialises the record to a flexible on-heap byte array using the [Span [...]
+&lt;p&gt;After creating a record and passing it along, for example via &lt;code&gt;Collector#collect()&lt;/code&gt;, it is given to the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/api/writer/RecordWriter.html&quot;&gt;RecordWriter&lt;/a&gt; which serialises the record from a Java object into a sequence of bytes which eventually ends up in a network buffer that is handed along as described above. The RecordWrite [...]
 
-On the receiver’s side, the lower network stack (netty) is writing received buffers into the appropriate input channels. The (stream) tasks’s thread eventually reads from these queues and tries to deserialise the accumulated bytes into Java objects with the help of the [RecordReader]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/api/reader/RecordReader.html) and going through the [SpillingAdaptiveSpanningRecordDeserializer]({{ site.DOCS_BASE_ [...]
-&lt;br&gt;
+&lt;p&gt;On the receiver’s side, the lower network stack (netty) is writing received buffers into the appropriate input channels. The (stream) task’s thread eventually reads from these queues and tries to deserialise the accumulated bytes into Java objects with the help of the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/api/reader/RecordReader.html&quot;&gt;RecordReader&lt;/a&gt; and going through the &lt;a hr [...]
+&lt;br /&gt;&lt;/p&gt;
 
-### Flushing Buffers to Netty
+&lt;h3 id=&quot;flushing-buffers-to-netty&quot;&gt;Flushing Buffers to Netty&lt;/h3&gt;
 
-In the picture above, the credit-based flow control mechanics actually sit inside the “Netty Server” (and “Netty Client”) components and the buffer the RecordWriter is writing to is always added to the result subpartition in an empty state and then gradually filled with (serialised) records. But when does Netty actually get the buffer? Obviously, it cannot take bytes whenever they become available since that would not only add substantial costs due to cross-thread communication and synch [...]
+&lt;p&gt;In the picture above, the credit-based flow control mechanics actually sit inside the “Netty Server” (and “Netty Client”) components and the buffer the RecordWriter is writing to is always added to the result subpartition in an empty state and then gradually filled with (serialised) records. But when does Netty actually get the buffer? Obviously, it cannot take bytes whenever they become available since that would not only add substantial costs due to cross-thread communication  [...]
 
-In Flink, there are three situations that make a buffer available for consumption by the Netty server:
+&lt;p&gt;In Flink, there are three situations that make a buffer available for consumption by the Netty server:&lt;/p&gt;
 
-* a buffer becomes full when writing a record to it, or&lt;br&gt;
-* the buffer timeout hits, or&lt;br&gt;
-* a special event such as a checkpoint barrier is sent.&lt;br&gt;
-&lt;br&gt;
+&lt;ul&gt;
+  &lt;li&gt;a buffer becomes full when writing a record to it, or&lt;br /&gt;&lt;/li&gt;
+  &lt;li&gt;the buffer timeout hits, or&lt;br /&gt;&lt;/li&gt;
+  &lt;li&gt;a special event such as a checkpoint barrier is sent.&lt;br /&gt;
+&lt;br /&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
-#### Flush after Buffer Full
+&lt;h4 id=&quot;flush-after-buffer-full&quot;&gt;Flush after Buffer Full&lt;/h4&gt;
 
-The RecordWriter works with a local serialisation buffer for the current record and will gradually write these bytes to one or more network buffers sitting at the appropriate result subpartition queue. Although a RecordWriter can work on multiple subpartitions, each subpartition has only one RecordWriter writing data to it. The Netty server, on the other hand, is reading from multiple result subpartitions and multiplexing the appropriate ones into a single channel as described above. Thi [...]
-&lt;br&gt;
+&lt;p&gt;The RecordWriter works with a local serialisation buffer for the current record and will gradually write these bytes to one or more network buffers sitting at the appropriate result subpartition queue. Although a RecordWriter can work on multiple subpartitions, each subpartition has only one RecordWriter writing data to it. The Netty server, on the other hand, is reading from multiple result subpartitions and multiplexing the appropriate ones into a single channel as described a [...]
+&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack7.png&quot; width=&quot;500px&quot; alt=&quot;Record-writer-to-network-Flink&#39;s Network Stack&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack7.png&quot; width=&quot;500px&quot; alt=&quot;Record-writer-to-network-Flink&#39;s Network Stack&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-&lt;sup&gt;4&lt;/sup&gt;We can assume it already got the notification if there are more finished buffers in the queue.
-&lt;br&gt;
+&lt;p&gt;&lt;sup&gt;4&lt;/sup&gt;We can assume it already got the notification if there are more finished buffers in the queue.
+&lt;br /&gt;&lt;/p&gt;
 
-#### Flush after Buffer Timeout
+&lt;h4 id=&quot;flush-after-buffer-timeout&quot;&gt;Flush after Buffer Timeout&lt;/h4&gt;
 
-In order to support low-latency use cases, we cannot only rely on buffers being full in order to send data downstream. There may be cases where a certain communication channel does not have too many records flowing through and unnecessarily increase the latency of the few records you actually have. Therefore, a periodic process will flush whatever data is available down the stack: the output flusher. The periodic interval can be configured via [StreamExecutionEnvironment#setBufferTimeout [...]
-&lt;br&gt;
+&lt;p&gt;In order to support low-latency use cases, we cannot rely only on buffers being full in order to send data downstream. There may be cases where a certain communication channel does not have many records flowing through, which would unnecessarily increase the latency of the few records you actually have. Therefore, a periodic process will flush whatever data is available down the stack: the output flusher. The periodic interval can be configured via &lt;a href=&quot;https://ci.apache. [...]
+&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack8.png&quot; width=&quot;500px&quot; alt=&quot;Record-writer-to-network-with-flusher-Flink&#39;s Network Stack&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack8.png&quot; width=&quot;500px&quot; alt=&quot;Record-writer-to-network-with-flusher-Flink&#39;s Network Stack&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-&lt;sup&gt;5&lt;/sup&gt;Strictly speaking, the output flusher does not give any guarantees - it only sends a notification to Netty which can pick it up at will / capacity. This also means that the output flusher has no effect if the channel is backpressured.
-&lt;br&gt;
+&lt;p&gt;&lt;sup&gt;5&lt;/sup&gt;Strictly speaking, the output flusher does not give any guarantees - it only sends a notification to Netty which can pick it up at will / capacity. This also means that the output flusher has no effect if the channel is backpressured.
+&lt;br /&gt;&lt;/p&gt;
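+
+&lt;p&gt;A minimal sketch of tuning this interval from the application side could look as follows; it only assumes the &lt;code&gt;StreamExecutionEnvironment#setBufferTimeout()&lt;/code&gt; method mentioned above, and the class name and the 5 ms value are purely illustrative:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Sketch only: configure how often incomplete network buffers are flushed.
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+
+public class BufferTimeoutExample {
+    public static void main(String[] args) throws Exception {
+        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+        // Flush incomplete buffers at most every 5 ms: 0 flushes after every record
+        // (lowest latency), -1 flushes only when a buffer is full (highest throughput),
+        // and the default is 100 ms.
+        env.setBufferTimeout(5);
+
+        env.fromElements(1, 2, 3).print();
+        env.execute();
+    }
+}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;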
 
-#### Flush after special event
+&lt;h4 id=&quot;flush-after-special-event&quot;&gt;Flush after special event&lt;/h4&gt;
 
-Some special events also trigger immediate flushes if being sent through the RecordWriter. The most important ones are checkpoint barriers or end-of-partition events which obviously should go quickly and not wait for the output flusher to kick in.
-&lt;br&gt;
+&lt;p&gt;Some special events also trigger immediate flushes if being sent through the RecordWriter. The most important ones are checkpoint barriers or end-of-partition events which obviously should go quickly and not wait for the output flusher to kick in.
+&lt;br /&gt;&lt;/p&gt;
 
-#### Further remarks
+&lt;h4 id=&quot;further-remarks&quot;&gt;Further remarks&lt;/h4&gt;
 
-In contrast to Flink &lt; 1.5, please note that (a) network buffers are now placed in the subpartition queues directly and (b) we are not closing the buffer on each flush. This gives us a few advantages:
+&lt;p&gt;In contrast to Flink &amp;lt; 1.5, please note that (a) network buffers are now placed in the subpartition queues directly and (b) we are not closing the buffer on each flush. This gives us a few advantages:&lt;/p&gt;
 
-* less synchronisation overhead (output flusher and RecordWriter are independent)
-* in high-load scenarios where Netty is the bottleneck (either through backpressure or directly), we can still accumulate data in incomplete buffers
-* significant reduction of Netty notifications
+&lt;ul&gt;
+  &lt;li&gt;less synchronisation overhead (output flusher and RecordWriter are independent)&lt;/li&gt;
+  &lt;li&gt;in high-load scenarios where Netty is the bottleneck (either through backpressure or directly), we can still accumulate data in incomplete buffers&lt;/li&gt;
+  &lt;li&gt;significant reduction of Netty notifications&lt;/li&gt;
+&lt;/ul&gt;
 
-However, you may notice an increased CPU use and TCP packet rate during low load scenarios. This is because, with the changes, Flink will use any *available* CPU cycles to try to maintain the desired latency. Once the load increases, this will self-adjust by buffers filling up more. High load scenarios are not affected and even get a better throughput because of the reduced synchronisation overhead.
-&lt;br&gt;
+&lt;p&gt;However, you may notice an increased CPU use and TCP packet rate during low load scenarios. This is because, with the changes, Flink will use any &lt;em&gt;available&lt;/em&gt; CPU cycles to try to maintain the desired latency. Once the load increases, this will self-adjust by buffers filling up more. High load scenarios are not affected and even get a better throughput because of the reduced synchronisation overhead.
+&lt;br /&gt;&lt;/p&gt;
 
-### Buffer Builder &amp; Buffer Consumer
+&lt;h3 id=&quot;buffer-builder--buffer-consumer&quot;&gt;Buffer Builder &amp;amp; Buffer Consumer&lt;/h3&gt;
 
-If you want to dig deeper into how the producer-consumer mechanics are implemented in Flink, please take a closer look at the [BufferBuilder]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.html) and [BufferConsumer]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.html) classes which have been introduced in Flink 1.5. While reading is potentially only *per bu [...]
+&lt;p&gt;If you want to dig deeper into how the producer-consumer mechanics are implemented in Flink, please take a closer look at the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/buffer/BufferBuilder.html&quot;&gt;BufferBuilder&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/api/java/org/apache/flink/runtime/io/network/buffer/BufferConsumer.html&quot;&gt;BufferConsumer [...]
 
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-## Latency vs. Throughput
+&lt;h2 id=&quot;latency-vs-throughput&quot;&gt;Latency vs. Throughput&lt;/h2&gt;
 
-Network buffers were introduced to get higher resource utilisation and higher throughput at the cost of having some records wait in buffers a little longer. Although an upper limit to this wait time can be given via the buffer timeout, you may be curious to find out more about the trade-off between these two dimensions: latency and throughput, as, obviously, you cannot get both. The following plot shows various values for the buffer timeout starting at 0 (flush with every record) to 100m [...]
-&lt;br&gt;
+&lt;p&gt;Network buffers were introduced to get higher resource utilisation and higher throughput at the cost of having some records wait in buffers a little longer. Although an upper limit to this wait time can be given via the buffer timeout, you may be curious to find out more about the trade-off between these two dimensions: latency and throughput, as, obviously, you cannot get both. The following plot shows various values for the buffer timeout starting at 0 (flush with every record [...]
+&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-06-05-network-stack/flink-network-stack9.png&quot; width=&quot;650px&quot; alt=&quot;Network-buffertimeout-Flink&#39;s Network Stack&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-06-05-network-stack/flink-network-stack9.png&quot; width=&quot;650px&quot; alt=&quot;Network-buffertimeout-Flink&#39;s Network Stack&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
-
-As you can see, with Flink 1.5+, even very low buffer timeouts such as 1ms (for low-latency scenarios) provide a maximum throughput as high as 75% of the default timeout where more data is buffered before being sent over the wire.
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-&lt;br&gt;
+&lt;p&gt;As you can see, with Flink 1.5+, even very low buffer timeouts such as 1ms (for low-latency scenarios) provide a maximum throughput as high as 75% of the throughput achieved with the default timeout, where more data is buffered before being sent over the wire.&lt;/p&gt;
 
-## Conclusion
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Now you know about result partitions, the different network connections and scheduling types for both batch and streaming. You also know about credit-based flow control and how the network stack works internally, in order to reason about network-related tuning parameters and about certain job behaviours. Future blog posts in this series will build upon this knowledge and go into more operational details including relevant metrics to look at, further network stack tuning, and common antip [...]
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
 
+&lt;p&gt;Now you know about result partitions, the different network connections and scheduling types for both batch and streaming. You also know about credit-based flow control and how the network stack works internally, in order to reason about network-related tuning parameters and about certain job behaviours. Future blog posts in this series will build upon this knowledge and go into more operational details including relevant metrics to look at, further network stack tuning, and com [...]
 
 </description>
-<pubDate>Wed, 05 Jun 2019 08:45:00 +0000</pubDate>
+<pubDate>Wed, 05 Jun 2019 10:45:00 +0200</pubDate>
 <link>https://flink.apache.org/2019/06/05/flink-network-stack.html</link>
 <guid isPermaLink="true">/2019/06/05/flink-network-stack.html</guid>
 </item>
 
 <item>
 <title>State TTL in Flink 1.8.0: How to Automatically Cleanup Application State in Apache Flink</title>
-<description>A common requirement for many stateful streaming applications is to automatically cleanup application state for effective management of your state size, or to control how long the application state can be accessed (e.g. due to legal regulations like the GDPR). The state time-to-live (TTL) feature was initiated in Flink 1.6.0 and enabled application state cleanup and efficient state size management in Apache Flink. 
+<description>&lt;p&gt;A common requirement for many stateful streaming applications is to automatically clean up application state for effective management of your state size, or to control how long the application state can be accessed (e.g. due to legal regulations like the GDPR). The state time-to-live (TTL) feature was initiated in Flink 1.6.0 and enabled application state cleanup and efficient state size management in Apache Flink.&lt;/p&gt;
 
-In this post, we motivate the State TTL feature and discuss its use cases. Moreover, we show how to use and configure it. We explain how Flink internally manages state with TTL and present some exciting additions to the feature in Flink 1.8.0. The blog post concludes with an outlook on future improvements and extensions.
+&lt;p&gt;In this post, we motivate the State TTL feature and discuss its use cases. Moreover, we show how to use and configure it. We explain how Flink internally manages state with TTL and present some exciting additions to the feature in Flink 1.8.0. The blog post concludes with an outlook on future improvements and extensions.&lt;/p&gt;
 
-# The Transient Nature of State
+&lt;h1 id=&quot;the-transient-nature-of-state&quot;&gt;The Transient Nature of State&lt;/h1&gt;
 
-There are two major reasons why state should be maintained only for a limited time. For example, let’s imagine a Flink application that ingests a stream of user login events and stores for each user the time of the last login to improve the experience of frequent visitors.
+&lt;p&gt;There are two major reasons why state should be maintained only for a limited time. For example, let’s imagine a Flink application that ingests a stream of user login events and stores for each user the time of the last login to improve the experience of frequent visitors.&lt;/p&gt;
 
-* **Controlling the size of state.**
-Being able to efficiently manage an ever-growing state size is a primary use case for state TTL. Oftentimes, data needs to be persisted temporarily while there is some user activity around it, e.g. web sessions. When the activity ends there is no longer interest in that data while it still occupies storage. Flink 1.8.0 introduces background cleanup of old state based on TTL that makes the eviction of no-longer-necessary data frictionless. Previously, the application developer had to take [...]
-
-* **Complying with data protection and sensitive data requirements.**
-Recent developments around data privacy regulations, such as the General Data Protection Regulation (GDPR) introduced by the European Union, make compliance with such data requirements or treating sensitive data a top priority for many use cases and applications. An example of such use cases includes applications that require keeping data for a specific timeframe and preventing access to it thereafter. This is a common challenge for companies providing short-term services to their custom [...]
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Controlling the size of state.&lt;/strong&gt;
+Being able to efficiently manage an ever-growing state size is a primary use case for state TTL. Oftentimes, data needs to be persisted temporarily while there is some user activity around it, e.g. web sessions. When the activity ends there is no longer interest in that data while it still occupies storage. Flink 1.8.0 introduces background cleanup of old state based on TTL that makes the eviction of no-longer-necessary data frictionless. Previously, the application developer had to take [...]
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Complying with data protection and sensitive data requirements.&lt;/strong&gt;
+Recent developments around data privacy regulations, such as the General Data Protection Regulation (GDPR) introduced by the European Union, make compliance with such data requirements or treating sensitive data a top priority for many use cases and applications. An example of such use cases includes applications that require keeping data for a specific timeframe and preventing access to it thereafter. This is a common challenge for companies providing short-term services to their custom [...]
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-Both requirements can be addressed by a feature that periodically, yet continuously, removes the state for a key once it becomes unnecessary or unimportant and there is no requirement to keep it in storage any more.
+&lt;p&gt;Both requirements can be addressed by a feature that periodically, yet continuously, removes the state for a key once it becomes unnecessary or unimportant and there is no requirement to keep it in storage any more.&lt;/p&gt;
 
-# State TTL for continuous cleanup of application state
+&lt;h1 id=&quot;state-ttl-for-continuous-cleanup-of-application-state&quot;&gt;State TTL for continuous cleanup of application state&lt;/h1&gt;
 
-The 1.6.0 release of Apache Flink introduced the State TTL feature. It enabled developers of stream processing applications to configure the state of operators to expire and be cleaned up after a defined timeout (time-to-live). In Flink 1.8.0 the feature was extended, including continuous cleanup of old entries for both the RocksDB and the heap state backends (FSStateBackend and MemoryStateBackend), enabling a continuous cleanup process of old entries (according to the TTL setting).
+&lt;p&gt;The 1.6.0 release of Apache Flink introduced the State TTL feature. It enabled developers of stream processing applications to configure the state of operators to expire and be cleaned up after a defined timeout (time-to-live). In Flink 1.8.0 the feature was extended, including continuous cleanup of old entries for both the RocksDB and the heap state backends (FSStateBackend and MemoryStateBackend), enabling a continuous cleanup process of old entries (according to the TTL setti [...]
 
-In Flink’s DataStream API, application state is defined by a [state descriptor]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/stream/state/state.html#using-managed-keyed-state). State TTL is configured by passing a `StateTtlConfiguration` object to a state descriptor. The following Java example shows how to create a state TTL configuration and provide it to the state descriptor that holds the last login time of a user as a `Long` value:
+&lt;p&gt;In Flink’s DataStream API, application state is defined by a &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/stream/state/state.html#using-managed-keyed-state&quot;&gt;state descriptor&lt;/a&gt;. State TTL is configured by passing a &lt;code&gt;StateTtlConfig&lt;/code&gt; object to a state descriptor. The following Java example shows how to create a state TTL configuration and provide it to the state descriptor that holds the last login ti [...]
 
-```java
-import org.apache.flink.api.common.state.StateTtlConfig;
-import org.apache.flink.api.common.time.Time;
-import org.apache.flink.api.common.state.ValueStateDescriptor;
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.apache.flink.api.common.state.StateTtlConfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.apache.flink.api.common.time.Time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.apache.flink.api.common.state.ValueStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
 
-StateTtlConfig ttlConfig = StateTtlConfig
-    .newBuilder(Time.days(7))
-    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
-    .setStateVisibility(StateTtlConfig.StateVisibility.NeverReturnExpired)
-    .build();
+&lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ttlConfig&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newBuilder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;days&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;7&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setUpdateType&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;UpdateType&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;OnCreateAndWrite&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;setStateVisibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;StateVisibility&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;NeverReturnExpired&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
     
-ValueStateDescriptor&lt;Long&gt; lastUserLogin = 
-    new ValueStateDescriptor&lt;&gt;(&quot;lastUserLogin&quot;, Long.class);
+&lt;span class=&quot;n&quot;&gt;ValueStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;lastUserLogin&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; 
+    &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ValueStateDescriptor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;lastUserLogin&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;class&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
 
-lastUserLogin.enableTimeToLive(ttlConfig);
-```
+&lt;span class=&quot;n&quot;&gt;lastUserLogin&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;enableTimeToLive&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ttlConfig&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-Flink provides multiple options to configure the behavior of the state TTL functionality.
+&lt;p&gt;Flink provides multiple options to configure the behavior of the state TTL functionality.&lt;/p&gt;
 
-* **When is the Time-to-Live reset?** 
-By default, the expiration time of a state entry is updated when the state is modified. Optionally, it can also be updated on read access at the cost of an additional write operation to update the timestamp.
-
-* **Can the expired state be accessed one last time?** 
-State TTL employs a lazy strategy to clean up expired state. This can lead to the situation that an application attempts to read state which is expired but hasn’t been removed yet. You can configure whether such a read request returns the expired state or not. In either case, the expired state is immediately removed afterwards. While the option of returning expired state favors data availability, not returning expired state can be required for data protection regulations.
-
-* **Which time semantics are used for the Time-to-Live timers?** 
-With Flink 1.8.0, users can only define a state TTL in terms of processing time. The support for event time is planned for future Apache Flink releases.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;When is the Time-to-Live reset?&lt;/strong&gt; 
+By default, the expiration time of a state entry is updated when the state is modified. Optionally, it can also be updated on read access at the cost of an additional write operation to update the timestamp.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Can the expired state be accessed one last time?&lt;/strong&gt; 
+State TTL employs a lazy strategy to clean up expired state. This can lead to the situation that an application attempts to read state which is expired but hasn’t been removed yet. You can configure whether such a read request returns the expired state or not. In either case, the expired state is immediately removed afterwards. While the option of returning expired state favors data availability, not returning expired state can be required for data protection regulations.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Which time semantics are used for the Time-to-Live timers?&lt;/strong&gt; 
+With Flink 1.8.0, users can only define a state TTL in terms of processing time. The support for event time is planned for future Apache Flink releases.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
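+
+&lt;p&gt;As a minimal sketch (assuming the same imports as the example above), the alternative settings from this list would be combined like this, refreshing the TTL on read access and still returning entries that are expired but not yet removed:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;// Sketch only: counterpart of the earlier example with the alternative settings.
+StateTtlConfig readFriendlyTtlConfig = StateTtlConfig
+    .newBuilder(Time.days(7))
+    // also refresh the expiration timestamp when the state is read
+    .setUpdateType(StateTtlConfig.UpdateType.OnReadAndWrite)
+    // reads may still return an expired entry that has not been cleaned up yet
+    .setStateVisibility(StateTtlConfig.StateVisibility.ReturnExpiredIfNotCleanedUp)
+    .build();&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;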
 
-You can read more about how to use state TTL in the [Apache Flink documentation]({{ site.DOCS_BASE_URL }}flink-docs-stable/dev/stream/state/state.html#state-time-to-live-ttl).
+&lt;p&gt;You can read more about how to use state TTL in the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-stable/dev/stream/state/state.html#state-time-to-live-ttl&quot;&gt;Apache Flink documentation&lt;/a&gt;.&lt;/p&gt;
 
-Internally, the State TTL feature is implemented by storing an additional timestamp of the last relevant state access, along with the actual state value. While this approach adds some storage overhead, it allows Flink to check for the expired state during state access, checkpointing, recovery, or dedicated storage cleanup procedures.
+&lt;p&gt;Internally, the State TTL feature is implemented by storing an additional timestamp of the last relevant state access, along with the actual state value. While this approach adds some storage overhead, it allows Flink to check for the expired state during state access, checkpointing, recovery, or dedicated storage cleanup procedures.&lt;/p&gt;
 
-# “Taking out the Garbage”
+&lt;h1 id=&quot;taking-out-the-garbage&quot;&gt;“Taking out the Garbage”&lt;/h1&gt;
 
-When a state object is accessed in a read operation, Flink will check its timestamp and clear the state if it is expired (depending on the configured state visibility, the expired state is returned or not). Due to this lazy removal, expired state that is never accessed again will forever occupy storage space unless it is garbage collected.
+&lt;p&gt;When a state object is accessed in a read operation, Flink will check its timestamp and clear the state if it is expired (depending on the configured state visibility, the expired state is returned or not). Due to this lazy removal, expired state that is never accessed again will forever occupy storage space unless it is garbage collected.&lt;/p&gt;
 
-So how can the expired state be removed without the application logic explicitly taking care of it? In general, there are different possible strategies to remove it in the background.
+&lt;p&gt;So how can the expired state be removed without the application logic explicitly taking care of it? In general, there are different possible strategies to remove it in the background.&lt;/p&gt;
 
-## Keep full state snapshots clean
+&lt;h2 id=&quot;keep-full-state-snapshots-clean&quot;&gt;Keep full state snapshots clean&lt;/h2&gt;
 
-Flink 1.6.0 already supported automatic eviction of the expired state when a full snapshot for a checkpoint or savepoint is taken. Note that state eviction is not applied for incremental checkpoints. State eviction on full snapshots must be explicitly enabled as shown in the following example:
+&lt;p&gt;Flink 1.6.0 already supported automatic eviction of the expired state when a full snapshot for a checkpoint or savepoint is taken. Note that state eviction is not applied for incremental checkpoints. State eviction on full snapshots must be explicitly enabled as shown in the following example:&lt;/p&gt;
 
-```java
-StateTtlConfig ttlConfig = StateTtlConfig
-    .newBuilder(Time.days(7))
-    .cleanupFullSnapshot()
-    .build();
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ttlConfig&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newBuilder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;days&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;7&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;cleanupFullSnapshot&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-The local storage stays untouched but the size of the stored snapshot is reduced. The local state of an operator will only be cleaned up when the operator reloads its state from a snapshot, i.e. in case of recovery or when starting from a savepoint. 
+&lt;p&gt;The local storage stays untouched but the size of the stored snapshot is reduced. The local state of an operator will only be cleaned up when the operator reloads its state from a snapshot, i.e. in case of recovery or when starting from a savepoint.&lt;/p&gt;
 
-Due to these limitations, applications still need to actively remove state after it expired in Flink 1.6.0. To improve the user experience, Flink 1.8.0 introduces two more autonomous cleanup strategies, one for each of Flink’s two state backend types. We describe them below.
+&lt;p&gt;Due to these limitations, applications still need to actively remove state after it expired in Flink 1.6.0. To improve the user experience, Flink 1.8.0 introduces two more autonomous cleanup strategies, one for each of Flink’s two state backend types. We describe them below.&lt;/p&gt;
 
-## Incremental cleanup in Heap state backends
+&lt;h2 id=&quot;incremental-cleanup-in-heap-state-backends&quot;&gt;Incremental cleanup in Heap state backends&lt;/h2&gt;
 
-This approach is specific to the Heap state backends (FSStateBackend and MemoryStateBackend). The idea is that the storage backend keeps a lazy global iterator over all state entries. Certain events, for instance state access, trigger an incremental cleanup. Every time an incremental cleanup is triggered, the iterator is advanced. The traversed state entries are checked and expired once are removed. The following code example shows how to enable incremental cleanup:
+&lt;p&gt;This approach is specific to the Heap state backends (FSStateBackend and MemoryStateBackend). The idea is that the storage backend keeps a lazy global iterator over all state entries. Certain events, for instance state access, trigger an incremental cleanup. Every time an incremental cleanup is triggered, the iterator is advanced. The traversed state entries are checked and expired ones are removed. The following code example shows how to enable incremental cleanup:&lt;/p&gt;
 
-```java
-StateTtlConfig ttlConfig = StateTtlConfig
-    .newBuilder(Time.days(7))
-    // check 10 keys for every state access
-    .cleanupIncrementally(10, false)
-    .build();
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ttlConfig&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newBuilder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;days&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;7&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+    &lt;span class=&quot;c1&quot;&gt;// check 10 keys for every state access&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;cleanupIncrementally&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;10&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;kc&quot;&gt;false&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-If enabled, every state access triggers a cleanup step. For every clean up step, a certain number of state entries are checked for expiration. There are two tuning parameters. The first defines the number of state entries to check for each cleanup step. The second parameter is a flag to trigger a cleanup step after each processed record, additionally to each state access.
+&lt;p&gt;If enabled, every state access triggers a cleanup step. For every cleanup step, a certain number of state entries are checked for expiration. There are two tuning parameters. The first defines the number of state entries to check for each cleanup step. The second parameter is a flag to trigger a cleanup step after each processed record, in addition to each state access.&lt;/p&gt;
 
-There are two important caveats about this approach: 
-* The first one is that the time spent for the incremental cleanup increases the record processing latency.
-* The second one should be practically negligible but still worth mentioning: if no state is accessed or no records are processed, expired state won’t be removed.
+&lt;p&gt;There are two important caveats about this approach:&lt;/p&gt;
+&lt;ul&gt;
+  &lt;li&gt;The first one is that the time spent for the incremental cleanup increases the record processing latency.&lt;/li&gt;
+  &lt;li&gt;The second one should be practically negligible but still worth mentioning: if no state is accessed or no records are processed, expired state won’t be removed.&lt;/li&gt;
+&lt;/ul&gt;
 
-## RocksDB background compaction to filter out expired state
+&lt;h2 id=&quot;rocksdb-background-compaction-to-filter-out-expired-state&quot;&gt;RocksDB background compaction to filter out expired state&lt;/h2&gt;
 
-If your application uses the RocksDB state backend, you can enable another cleanup strategy which is based on a Flink specific compaction filter. RocksDB periodically runs asynchronous compactions to merge state updates and reduce storage. The Flink compaction filter checks the expiration timestamp of state entries with TTL and discards all expired values.
+&lt;p&gt;If your application uses the RocksDB state backend, you can enable another cleanup strategy which is based on a Flink specific compaction filter. RocksDB periodically runs asynchronous compactions to merge state updates and reduce storage. The Flink compaction filter checks the expiration timestamp of state entries with TTL and discards all expired values.&lt;/p&gt;
 
-The first step to activate this feature is to configure the RocksDB state backend by setting the following Flink configuration option: `state.backend.rocksdb.ttl.compaction.filter.enabled`. Once the RocksDB state backend is configured, the compaction cleanup strategy is enabled for a state as shown in the following code example:
+&lt;p&gt;The first step to activate this feature is to configure the RocksDB state backend by setting the following Flink configuration option: &lt;code&gt;state.backend.rocksdb.ttl.compaction.filter.enabled&lt;/code&gt;. Once the RocksDB state backend is configured, the compaction cleanup strategy is enabled for a state as shown in the following code example:&lt;/p&gt;
 
-```java
-StateTtlConfig ttlConfig = StateTtlConfig
-    .newBuilder(Time.days(7))
-    .cleanupInRocksdbCompactFilter()
-    .build();
-```
-Keep in mind that calling the Flink TTL filter slows down the RocksDB compaction.
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ttlConfig&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StateTtlConfig&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;newBuilder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;days&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;7&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;cleanupInRocksdbCompactFilter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+&lt;p&gt;Keep in mind that calling the Flink TTL filter slows down the RocksDB compaction.&lt;/p&gt;
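+
+&lt;p&gt;For reference, and assuming the option name given above, the corresponding entry in your &lt;code&gt;flink-conf.yaml&lt;/code&gt; would be a single line:&lt;/p&gt;
+
+&lt;p&gt;&lt;code&gt;state.backend.rocksdb.ttl.compaction.filter.enabled: true&lt;/code&gt;&lt;/p&gt;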
 
-## Eager State Cleanup with Timers
+&lt;h2 id=&quot;eager-state-cleanup-with-timers&quot;&gt;Eager State Cleanup with Timers&lt;/h2&gt;
 
-Another way to manually cleanup state is based on Flink timers. This is an idea that the community is currently evaluating for future releases. With this approach, a cleanup timer is registered for every state access. This approach is more predictable because state is eagerly removed as soon as it expires. However, it is more expensive because the timers consume storage along with the original state. 
+&lt;p&gt;Another way to manually clean up state is based on Flink timers. This is an idea that the community is currently evaluating for future releases. With this approach, a cleanup timer is registered for every state access. This approach is more predictable because state is eagerly removed as soon as it expires. However, it is more expensive because the timers consume storage along with the original state.&lt;/p&gt;
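+
+&lt;p&gt;While the built-in, timer-based strategy is still being evaluated, the general pattern can already be written by hand. The following sketch is purely illustrative (the class and state names are made up): it registers a processing-time timer per state access in a &lt;code&gt;KeyedProcessFunction&lt;/code&gt; and clears the state once the most recent deadline has passed:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;import org.apache.flink.api.common.state.ValueState;
+import org.apache.flink.api.common.state.ValueStateDescriptor;
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.streaming.api.functions.KeyedProcessFunction;
+import org.apache.flink.util.Collector;
+
+// Sketch only: keeps the last login timestamp per user and clears it 7 days after the last access.
+public class LastLoginWithTimerCleanup extends KeyedProcessFunction&amp;lt;String, Long, Long&amp;gt; {
+
+    private static final long TTL_MILLIS = 7 * 24 * 60 * 60 * 1000L;
+
+    private transient ValueState&amp;lt;Long&amp;gt; lastLogin;    // the value we actually care about
+    private transient ValueState&amp;lt;Long&amp;gt; cleanupTime;  // processing time at which it may be removed
+
+    @Override
+    public void open(Configuration parameters) {
+        lastLogin = getRuntimeContext().getState(
+            new ValueStateDescriptor&amp;lt;&amp;gt;(&amp;quot;lastUserLogin&amp;quot;, Long.class));
+        cleanupTime = getRuntimeContext().getState(
+            new ValueStateDescriptor&amp;lt;&amp;gt;(&amp;quot;lastUserLoginCleanupTime&amp;quot;, Long.class));
+    }
+
+    @Override
+    public void processElement(Long loginTime, Context ctx, Collector&amp;lt;Long&amp;gt; out) throws Exception {
+        lastLogin.update(loginTime);
+        out.collect(loginTime);
+        // Schedule (or push back) the removal of the state for this key; note that this
+        // costs one timer and one extra state entry per key, as described above.
+        long deadline = ctx.timerService().currentProcessingTime() + TTL_MILLIS;
+        cleanupTime.update(deadline);
+        ctx.timerService().registerProcessingTimeTimer(deadline);
+    }
+
+    @Override
+    public void onTimer(long timestamp, OnTimerContext ctx, Collector&amp;lt;Long&amp;gt; out) throws Exception {
+        // Only the most recently scheduled timer may clear the state; earlier timers
+        // for the same key find a newer deadline and do nothing.
+        Long deadline = cleanupTime.value();
+        if (deadline != null) {
+            if (timestamp &amp;gt;= deadline) {
+                lastLogin.clear();
+                cleanupTime.clear();
+            }
+        }
+    }
+}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;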
 
-# Future work
+&lt;h1 id=&quot;future-work&quot;&gt;Future work&lt;/h1&gt;
 
-Apart from including the timer-based cleanup strategy, mentioned above, the Flink community has plans to further improve the state TTL feature. The possible improvements include adding support of TTL for event time scale (only processing time is supported at the moment) and enabling State TTL for queryable state.
+&lt;p&gt;Apart from the timer-based cleanup strategy mentioned above, the Flink community has plans to further improve the state TTL feature. The possible improvements include adding support for TTL based on event time (only processing time is supported at the moment) and enabling State TTL for queryable state.&lt;/p&gt;
 
-We encourage you to join the conversation and share your thoughts and ideas in the [Apache Flink JIRA board](https://issues.apache.org/jira/projects/FLINK/summary) or by subscribing to the Apache Flink dev mailing list. Feedback or suggestions are always appreciated and we look forward to hearing your thoughts on the Flink mailing lists.
+&lt;p&gt;We encourage you to join the conversation and share your thoughts and ideas in the &lt;a href=&quot;https://issues.apache.org/jira/projects/FLINK/summary&quot;&gt;Apache Flink JIRA board&lt;/a&gt; or by subscribing to the Apache Flink dev mailing list. Feedback or suggestions are always appreciated and we look forward to hearing your thoughts on the Flink mailing lists.&lt;/p&gt;
 
-# Summary
+&lt;h1 id=&quot;summary&quot;&gt;Summary&lt;/h1&gt;
 
-Time-based state access restrictions and controlling the size of application state are common challenges in the world of stateful stream processing. Flink’s 1.8.0 release significantly improves the State TTL feature by adding support for continuous background cleanup of expired state objects. The new clean up mechanisms relieve you from manually implementing state cleanup. They are also more efficient due to their lazy nature. State TTL gives you control over the size of your application [...]
+&lt;p&gt;Time-based state access restrictions and controlling the size of application state are common challenges in the world of stateful stream processing. Flink’s 1.8.0 release significantly improves the State TTL feature by adding support for continuous background cleanup of expired state objects. The new clean up mechanisms relieve you from manually implementing state cleanup. They are also more efficient due to their lazy nature. State TTL gives you control over the size of your ap [...]
 </description>
-<pubDate>Sun, 19 May 2019 12:00:00 +0000</pubDate>
+<pubDate>Sun, 19 May 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/2019/05/19/state-ttl.html</link>
 <guid isPermaLink="true">/2019/05/19/state-ttl.html</guid>
 </item>
 
 <item>
 <title>Flux capacitor, huh? Temporal Tables and Joins in Streaming SQL</title>
-<description>Figuring out how to manage and model temporal data for effective point-in-time analysis was a longstanding battle, dating as far back as the early 80’s, that culminated with the introduction of temporal tables in the SQL standard in 2011. Up to that point, users were doomed to implement this as part of the application logic, often hurting the length of the development lifecycle as well as the maintainability of the code. And, although there isn’t a single, commonly accepted  [...]
+<description>&lt;p&gt;Figuring out how to manage and model temporal data for effective point-in-time analysis was a longstanding battle, dating as far back as the early 80’s, that culminated with the introduction of temporal tables in the SQL standard in 2011. Up to that point, users were doomed to implement this as part of the application logic, often hurting the length of the development lifecycle as well as the maintainability of the code. And, although there isn’t a single, commonly  [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-05-13-temporal-tables/TemporalTables1.png&quot; width=&quot;500px&quot; alt=&quot;Taxi Fares and Conversion Rates&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-05-13-temporal-tables/TemporalTables1.png&quot; width=&quot;500px&quot; alt=&quot;Taxi Fares and Conversion Rates&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**For example:** given a stream with Taxi Fare events tied to the local currency of the ride location, we might want to convert the fare price to a common currency for further processing. As conversion rates excel at fluctuating over time, each Taxi Fare event would need to be matched to the rate that was valid at the time the event occurred in order to produce a reliable result.
+&lt;p&gt;&lt;strong&gt;For example:&lt;/strong&gt; given a stream with Taxi Fare events tied to the local currency of the ride location, we might want to convert the fare price to a common currency for further processing. As conversion rates excel at fluctuating over time, each Taxi Fare event would need to be matched to the rate that was valid at the time the event occurred in order to produce a reliable result.&lt;/p&gt;
 
-## Modelling Temporal Data with Flink
+&lt;h2 id=&quot;modelling-temporal-data-with-flink&quot;&gt;Modelling Temporal Data with Flink&lt;/h2&gt;
 
-In the 1.7 release, Flink has introduced the concept of **temporal tables** into its streaming SQL and Table API: parameterized views on append-only tables — or, any table that only allows records to be inserted, never updated or deleted — that are interpreted as a changelog and keep data closely tied to time context, so that it can be interpreted as valid only within a specific period of time. Transforming a stream into a temporal table requires: 
+&lt;p&gt;In the 1.7 release, Flink has introduced the concept of &lt;strong&gt;temporal tables&lt;/strong&gt; into its streaming SQL and Table API: parameterized views on append-only tables — or, any table that only allows records to be inserted, never updated or deleted — that are interpreted as a changelog and keep data closely tied to time context, so that it can be interpreted as valid only within a specific period of time. Transforming a stream into a temporal table requires:&lt;/p&gt;
 
-* Defining a **primary key** and a **versioning field** that can be used to keep track of the changes that happen over time;
-
-* Exposing the stream as a **temporal table function** that maps each point in time to a static relation.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Defining a &lt;strong&gt;primary key&lt;/strong&gt; and a &lt;strong&gt;versioning field&lt;/strong&gt; that can be used to keep track of the changes that happen over time;&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Exposing the stream as a &lt;strong&gt;temporal table function&lt;/strong&gt; that maps each point in time to a static relation.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-Going back to our example use case, a temporal table is just what we need to model the conversion rate data such as to make it useful for point-in-time querying. Temporal table functions are implemented as an extension of Flink’s generic [table function]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/udfs.html#table-functions) class and can be defined in the same straightforward way to be used with the Table API or SQL parser.
+&lt;p&gt;Going back to our example use case, a temporal table is just what we need to model the conversion rate data such as to make it useful for point-in-time querying. Temporal table functions are implemented as an extension of Flink’s generic &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/udfs.html#table-functions&quot;&gt;table function&lt;/a&gt; class and can be defined in the same straightforward way to be used with the Table API or SQL pars [...]
 
-```java
-import org.apache.flink.table.functions.TemporalTableFunction;
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kn&quot;&gt;import&lt;/span&gt; &lt;span class=&quot;nn&quot;&gt;org.apache.flink.table.functions.TemporalTableFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
  
-(...)
+&lt;span class=&quot;o&quot;&gt;(...)&lt;/span&gt;
  
-// Get the stream and table environments.
-StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
-StreamTableEnvironment tEnv = StreamTableEnvironment.getTableEnvironment(env);
+&lt;span class=&quot;c1&quot;&gt;// Get the stream and table environments.&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;StreamExecutionEnvironment&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamExecutionEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getExecutionEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;StreamTableEnvironment&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;StreamTableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getTableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
  
-// Provide a sample static data set of the rates history table.
-List &lt;Tuple2&lt;String, Long&gt;&gt;ratesHistoryData =new ArrayList&lt;&gt;();
+&lt;span class=&quot;c1&quot;&gt;// Provide a sample static data set of the rates history table.&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;List&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ratesHistoryData&lt;/span&gt; &lt;span class= [...]
  
-ratesHistoryData.add(Tuple2.of(&quot;USD&quot;, 102L)); 
-ratesHistoryData.add(Tuple2.of(&quot;EUR&quot;, 114L)); 
-ratesHistoryData.add(Tuple2.of(&quot;YEN&quot;, 1L)); 
-ratesHistoryData.add(Tuple2.of(&quot;EUR&quot;, 116L)); 
-ratesHistoryData.add(Tuple2.of(&quot;USD&quot;, 105L));
+&lt;span class=&quot;n&quot;&gt;ratesHistoryData&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;of&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;USD&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt [...]
+&lt;span class=&quot;n&quot;&gt;ratesHistoryData&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;of&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;EUR&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt [...]
+&lt;span class=&quot;n&quot;&gt;ratesHistoryData&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;of&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;YEN&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt [...]
+&lt;span class=&quot;n&quot;&gt;ratesHistoryData&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;of&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;EUR&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt [...]
+&lt;span class=&quot;n&quot;&gt;ratesHistoryData&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;add&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;of&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;USD&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt [...]
  
-// Create and register an example table using the sample data set.
-DataStream&lt;Tuple2&lt;String, Long&gt;&gt; ratesHistoryStream = env.fromCollection(ratesHistoryData);
+&lt;span class=&quot;c1&quot;&gt;// Create and register an example table using the sample data set.&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Tuple2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Long&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ratesHistoryStream&lt;/span&gt; &lt;spa [...]
  
-Table ratesHistory = tEnv.fromDataStream(ratesHistoryStream, &quot;r_currency, r_rate, r_proctime.proctime&quot;);
+&lt;span class=&quot;n&quot;&gt;Table&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ratesHistory&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;fromDataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ratesHistoryStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&qu [...]
  
-tEnv.registerTable(&quot;RatesHistory&quot;, ratesHistory);
+&lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;registerTable&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;RatesHistory&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ratesHistory&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
  
-// Create and register the temporal table function &quot;rates&quot;.
-// Define &quot;r_proctime&quot; as the versioning field and &quot;r_currency&quot; as the primary key.
-TemporalTableFunction rates = ratesHistory.createTemporalTableFunction(&quot;r_proctime&quot;, &quot;r_currency&quot;);
+&lt;span class=&quot;c1&quot;&gt;// Create and register the temporal table function &amp;quot;rates&amp;quot;.&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;// Define &amp;quot;r_proctime&amp;quot; as the versioning field and &amp;quot;r_currency&amp;quot; as the primary key.&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;TemporalTableFunction&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rates&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ratesHistory&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;createTemporalTableFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;r_proctime&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&g [...]
  
-tEnv.registerFunction(&quot;Rates&quot;, rates);
+&lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;registerFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;Rates&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;rates&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
  
-(...)
-```
+&lt;span class=&quot;o&quot;&gt;(...)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-What does this **Rates** function do, in practice? Imagine we would like to check what the conversion rates looked like at a given time — say, 11:00. We could simply do something like:
+&lt;p&gt;What does this &lt;strong&gt;Rates&lt;/strong&gt; function do, in practice? Imagine we would like to check what the conversion rates looked like at a given time — say, 11:00. We could simply do something like:&lt;/p&gt;
 
-```sql
-SELECT * FROM Rates(&#39;11:00&#39;);
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-sql&quot;&gt;&lt;span class=&quot;k&quot;&gt;SELECT&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;FROM&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Rates&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s1&quot;&gt;&amp;#39;11:00&amp;#39;&lt;/span&gt;&lt;span class=&quot;p&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-05-13-temporal-tables/TemporalTables2.png&quot; width=&quot;650px&quot; alt=&quot;Point-in-time Querying&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-05-13-temporal-tables/TemporalTables2.png&quot; width=&quot;650px&quot; alt=&quot;Point-in-time Querying&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Even though Flink does not yet support querying temporal table functions with a constant time attribute parameter, these functions can be used to cover a much more interesting scenario: temporal table joins.
+&lt;p&gt;Even though Flink does not yet support querying temporal table functions with a constant time attribute parameter, these functions can be used to cover a much more interesting scenario: temporal table joins.&lt;/p&gt;
 
-## Streaming Joins using Temporal Tables
+&lt;h2 id=&quot;streaming-joins-using-temporal-tables&quot;&gt;Streaming Joins using Temporal Tables&lt;/h2&gt;
 
-Temporal tables reach their full potential when used in combination — erm, joined — with streaming data, for instance to power applications that must continuously whitelist against a reference dataset that changes over time for auditing or regulatory compliance. While efficient joins have long been an enduring challenge for query processors due to computational cost and resource consumption, joins over streaming data carry some additional challenges:
+&lt;p&gt;Temporal tables reach their full potential when used in combination — erm, joined — with streaming data, for instance to power applications that must continuously whitelist against a reference dataset that changes over time for auditing or regulatory compliance. While efficient joins have long been an enduring challenge for query processors due to computational cost and resource consumption, joins over streaming data carry some additional challenges:&lt;/p&gt;
 
-* The **unbounded** nature of streams means that inputs are continuously evaluated and intermediate join results can consume memory resources indefinitely. Flink gracefully manages its memory consumption out-of-the-box (even for heavier cases where joins require spilling to disk) and supports time-windowed joins to bound the amount of data that needs to be kept around as state;
-* Streaming data might be **out-of-order** and **late**, so it is not possible to enforce an ordering upfront and time handling requires some thinking to avoid unnecessary outputs and retractions.
+&lt;ul&gt;
+  &lt;li&gt;The &lt;strong&gt;unbounded&lt;/strong&gt; nature of streams means that inputs are continuously evaluated and intermediate join results can consume memory resources indefinitely. Flink gracefully manages its memory consumption out-of-the-box (even for heavier cases where joins require spilling to disk) and supports time-windowed joins to bound the amount of data that needs to be kept around as state;&lt;/li&gt;
+  &lt;li&gt;Streaming data might be &lt;strong&gt;out-of-order&lt;/strong&gt; and &lt;strong&gt;late&lt;/strong&gt;, so it is not possible to enforce an ordering upfront and time handling requires some thinking to avoid unnecessary outputs and retractions.&lt;/li&gt;
+&lt;/ul&gt;
 
-In the particular case of temporal data, time-windowed joins are not enough (well, at least not without getting into some expensive tweaking): sooner or later, each reference record will fall outside of the window and be wiped from state, no longer being considered for future join results. To address this limitation, Flink has introduced support for temporal table joins to cover time-varying relations.
+&lt;p&gt;In the particular case of temporal data, time-windowed joins are not enough (well, at least not without getting into some expensive tweaking): sooner or later, each reference record will fall outside of the window and be wiped from state, no longer being considered for future join results. To address this limitation, Flink has introduced support for temporal table joins to cover time-varying relations.&lt;/p&gt;
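
For context, a time-windowed (interval) join in Flink SQL looks roughly like the sketch below. The table and column names here are purely illustrative (they are not the tables registered earlier); the point is that rate versions older than the chosen interval eventually drop out of state and can no longer match future fares.

```sql
-- Sketch of a time-windowed (interval) join; names are illustrative.
-- Only rate updates from the hour preceding each fare can match, so older
-- rate versions are eventually evicted from state.
SELECT f.currency, f.fare, r.rate
FROM Fares AS f, Rates AS r
WHERE f.currency = r.currency
  AND r.rowtime BETWEEN f.rowtime - INTERVAL '1' HOUR AND f.rowtime
```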
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-05-13-temporal-tables/TemporalTables3.png&quot; width=&quot;500px&quot; alt=&quot;Temporal Table Join between Taxi Fares and Conversion Rates&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-05-13-temporal-tables/TemporalTables3.png&quot; width=&quot;500px&quot; alt=&quot;Temporal Table Join between Taxi Fares and Conversion Rates&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Each record from the append-only table on the probe side (```Taxi Fare```) is joined with the version of the record from the temporal table on the build side (```Conversion Rate```) that most closely matches the probe side record time attribute (```time```) for the same value of the primary key (```currency```). Remember the temporal table function (```Rates```) we registered earlier? It can now be used to express this join as a simple SQL statement that would otherwise require a heavier [...]
+&lt;p&gt;Each record from the append-only table on the probe side (&lt;code&gt;Taxi Fare&lt;/code&gt;) is joined with the version of the record from the temporal table on the build side (&lt;code&gt;Conversion Rate&lt;/code&gt;) that most closely matches the probe side record time attribute (&lt;code&gt;time&lt;/code&gt;) for the same value of the primary key (&lt;code&gt;currency&lt;/code&gt;). Remember the temporal table function (&lt;code&gt;Rates&lt;/code&gt;) we registered earlier?  [...]
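
Written out, such a temporal table join uses the `LATERAL TABLE` syntax. The snippet below is only a sketch: it assumes the probe side has been registered as a `taxiFares` table with `currency`, `fare` and a time attribute `time`, and it reuses the `Rates` function registered above.

```sql
SELECT
  t.currency,
  t.fare * r.r_rate AS convertedFare
FROM
  taxiFares AS t,
  LATERAL TABLE (Rates(t.time)) AS r
WHERE
  t.currency = r.r_currency
```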
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-05-13-temporal-tables/TemporalTables4.png&quot; width=&quot;700px&quot; alt=&quot;Regular Join vs. Temporal Table Join&quot;/&gt;
+&lt;img src=&quot;/img/blog/2019-05-13-temporal-tables/TemporalTables4.png&quot; width=&quot;700px&quot; alt=&quot;Regular Join vs. Temporal Table Join&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Temporal table joins support both [processing]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/streaming/joins.html#processing-time-temporal-joins) and [event time]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/streaming/joins.html#event-time-temporal-joins) semantics and effectively limit the amount of data kept in state while also allowing records on the build side to be arbitrarily old, as opposed to time-windowed joins. Probe-side records only need to be kept in s [...]
+&lt;p&gt;Temporal table joins support both &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/joins.html#processing-time-temporal-joins&quot;&gt;processing&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/joins.html#event-time-temporal-joins&quot;&gt;event time&lt;/a&gt; semantics and effectively limit the amount of data kept in state while also allowing records on the build side t [...]
 
-* Narrowing the **scope** of the join: only the time-matching version of ```ratesHistory``` is visible for a given ```taxiFare.time```;
-* Pruning **unneeded records** from state: for cases using event time, records between current time and the [watermark]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/event_time.html#event-time-and-watermarks) delay are persisted for both the probe and build side. These are discarded as soon as the watermark arrives and the results are emitted — allowing the join operation to move forward in time and the build table to “refresh” its version in state.
+&lt;ul&gt;
+  &lt;li&gt;Narrowing the &lt;strong&gt;scope&lt;/strong&gt; of the join: only the time-matching version of &lt;code&gt;ratesHistory&lt;/code&gt; is visible for a given &lt;code&gt;taxiFare.time&lt;/code&gt;;&lt;/li&gt;
+  &lt;li&gt;Pruning &lt;strong&gt;unneeded records&lt;/strong&gt; from state: for cases using event time, records between current time and the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/event_time.html#event-time-and-watermarks&quot;&gt;watermark&lt;/a&gt; delay are persisted for both the probe and build side. These are discarded as soon as the watermark arrives and the results are emitted — allowing the join operation to move forward in time and the [...]
+&lt;/ul&gt;
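
For the event-time case, only the registration of the temporal table function changes: the versioning field must be a rowtime attribute rather than the processing-time attribute used in the snippet above. A minimal sketch, assuming `ratesHistory` was defined with an event-time attribute named `r_rowtime`:

```java
// Sketch: event-time variant of the temporal table function registration.
// Assumes ratesHistory exposes a rowtime attribute "r_rowtime", e.g. created via
// tEnv.fromDataStream(ratesHistoryStream, "r_currency, r_rate, r_rowtime.rowtime").
TemporalTableFunction ratesEventTime =
    ratesHistory.createTemporalTableFunction("r_rowtime", "r_currency");

tEnv.registerFunction("RatesEventTime", ratesEventTime);
```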
 
-## Conclusion
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
 
-All this means it is now possible to express continuous stream enrichment in relational and time-varying terms using Flink without dabbling into syntactic patchwork or compromising performance. In other words: stream time-travelling minus the flux capacitor. Extending this syntax to batch processing for enriching historic data with proper (event) time semantics is also part of the Flink roadmap! 
+&lt;p&gt;All this means it is now possible to express continuous stream enrichment in relational and time-varying terms using Flink without dabbling into syntactic patchwork or compromising performance. In other words: stream time-travelling minus the flux capacitor. Extending this syntax to batch processing for enriching historic data with proper (event) time semantics is also part of the Flink roadmap!&lt;/p&gt;
 
-If you&#39;d like to get some **hands-on practice in joining streams with Flink SQL** (and Flink SQL in general), check out this [free training for Flink SQL](https://github.com/ververica/sql-training/wiki). The training environment is based on Docker and set up in just a few minutes.
+&lt;p&gt;If you’d like to get some &lt;strong&gt;hands-on practice in joining streams with Flink SQL&lt;/strong&gt; (and Flink SQL in general), check out this &lt;a href=&quot;https://github.com/ververica/sql-training/wiki&quot;&gt;free training for Flink SQL&lt;/a&gt;. The training environment is based on Docker and set up in just a few minutes.&lt;/p&gt;
 
-Subscribe to the [Apache Flink mailing lists]({{ site.baseurl }}/community.html#mailing-lists) to stay up-to-date with the latest developments in this space.
+&lt;p&gt;Subscribe to the &lt;a href=&quot;/community.html#mailing-lists&quot;&gt;Apache Flink mailing lists&lt;/a&gt; to stay up-to-date with the latest developments in this space.&lt;/p&gt;
 </description>
-<pubDate>Tue, 14 May 2019 12:00:00 +0000</pubDate>
+<pubDate>Tue, 14 May 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/2019/05/14/temporal-tables.html</link>
 <guid isPermaLink="true">/2019/05/14/temporal-tables.html</guid>
 </item>
 
 <item>
 <title>When Flink &amp; Pulsar Come Together</title>
-<description>The open source data technology frameworks [Apache Flink](https://flink.apache.org/) and [Apache Pulsar](https://pulsar.apache.org/en/) can integrate in different ways to provide elastic data processing at large scale. I recently gave a talk at [Flink Forward](https://www.flink-forward.org/) San Francisco 2019 and presented some of the integrations between the two frameworks for batch and streaming applications. In this post, I will give a short introduction to Apache Pulsar [...]
+<description>&lt;p&gt;The open source data technology frameworks &lt;a href=&quot;https://flink.apache.org/&quot;&gt;Apache Flink&lt;/a&gt; and &lt;a href=&quot;https://pulsar.apache.org/en/&quot;&gt;Apache Pulsar&lt;/a&gt; can integrate in different ways to provide elastic data processing at large scale. I recently gave a talk at &lt;a href=&quot;https://www.flink-forward.org/&quot;&gt;Flink Forward&lt;/a&gt; San Francisco 2019 and presented some of the integrations between the two fram [...]
 
-## A brief introduction to Apache Pulsar
+&lt;h2 id=&quot;a-brief-introduction-to-apache-pulsar&quot;&gt;A brief introduction to Apache Pulsar&lt;/h2&gt;
 
-[Apache Pulsar](https://pulsar.apache.org/en/) is an open-source distributed pub-sub messaging system under the stewardship of the [Apache Software Foundation](https://www.apache.org/). Pulsar is a multi-tenant, high-performance solution for server-to-server messaging including multiple features such as native support for multiple clusters in a Pulsar instance, with seamless [geo-replication](https://pulsar.apache.org/docs/en/administration-geo) of messages across clusters, very low publ [...]
+&lt;p&gt;&lt;a href=&quot;https://pulsar.apache.org/en/&quot;&gt;Apache Pulsar&lt;/a&gt; is an open-source distributed pub-sub messaging system under the stewardship of the &lt;a href=&quot;https://www.apache.org/&quot;&gt;Apache Software Foundation&lt;/a&gt;. Pulsar is a multi-tenant, high-performance solution for server-to-server messaging including multiple features such as native support for multiple clusters in a Pulsar instance, with seamless &lt;a href=&quot;https://pulsar.apache. [...]
 
-The first differentiating factor stems from the fact that although Pulsar provides a flexible pub-sub messaging system it is also backed by durable log storage — hence combining both messaging and storage under one framework. Because of that layered architecture, Pulsar provides instant failure recovery, independent scalability and balance-free cluster expansion. 
+&lt;p&gt;The first differentiating factor stems from the fact that although Pulsar provides a flexible pub-sub messaging system it is also backed by durable log storage — hence combining both messaging and storage under one framework. Because of that layered architecture, Pulsar provides instant failure recovery, independent scalability and balance-free cluster expansion.&lt;/p&gt;
 
-Pulsar’s architecture follows a similar pattern to other pub-sub systems as the framework is organized in topics as the main data entity, with producers sending data to, and consumers receiving data from a topic as shown in the diagram below.
+&lt;p&gt;Pulsar’s architecture follows a similar pattern to other pub-sub systems as the framework is organized in topics as the main data entity, with producers sending data to, and consumers receiving data from a topic as shown in the diagram below.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/pulsar-flink/image-1.png&quot; width=&quot;400px&quot; alt=&quot;Pulsar producers and consumers&quot;/&gt;
+&lt;img src=&quot;/img/blog/pulsar-flink/image-1.png&quot; width=&quot;400px&quot; alt=&quot;Pulsar producers and consumers&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
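
To make the pub-sub model above concrete, here is a minimal, self-contained sketch using Pulsar's Java client (not taken from the original post; the service URL, tenant/namespace and topic names are placeholders):

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.Message;
import org.apache.pulsar.client.api.Producer;
import org.apache.pulsar.client.api.PulsarClient;

public class PulsarPubSubSketch {
    public static void main(String[] args) throws Exception {
        // Placeholder service URL; topics follow the
        // {persistent|non-persistent}://tenant/namespace/topic naming scheme.
        PulsarClient client = PulsarClient.builder()
            .serviceUrl("pulsar://localhost:6650")
            .build();

        // A producer publishes messages to the topic ...
        Producer<byte[]> producer = client.newProducer()
            .topic("persistent://sample/ns1/taxi-fares")
            .create();
        producer.send("fare-event".getBytes());

        // ... and a consumer receives them through a named subscription.
        Consumer<byte[]> consumer = client.newConsumer()
            .topic("persistent://sample/ns1/taxi-fares")
            .subscriptionName("fare-processing")
            .subscribe();
        Message<byte[]> msg = consumer.receive();
        consumer.acknowledge(msg);

        consumer.close();
        producer.close();
        client.close();
    }
}
```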
 
-The second differentiator of Pulsar is that the framework is built from the get-go with [multi-tenancy](https://pulsar.apache.org/docs/en/concepts-multi-tenancy/) in mind. What that means is that each Pulsar topic has a hierarchical management structure making the allocation of resources as well as the resource management and coordination between teams efficient and easy. With Pulsar’s multi-tenancy structure, data platform maintainers can onboard new teams with no friction as Pulsar pro [...]
+&lt;p&gt;The second differentiator of Pulsar is that the framework is built from the get-go with &lt;a href=&quot;https://pulsar.apache.org/docs/en/concepts-multi-tenancy/&quot;&gt;multi-tenancy&lt;/a&gt; in mind. What that means is that each Pulsar topic has a hierarchical management structure making the allocation of resources as well as the resource management and coordination between teams efficient and easy. With Pulsar’s multi-tenancy structure, data platform maintainers can onboar [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/pulsar-flink/image-2.png&quot; width=&quot;640px&quot; alt=&quot;Apache Flink and Apache Pulsar&quot;/&gt;
+&lt;img src=&quot;/img/blog/pulsar-flink/image-2.png&quot; width=&quot;640px&quot; alt=&quot;Apache Flink and Apache Pulsar&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Finally, Pulsar’s flexible messaging framework unifies the streaming and queuing data consumption models and provides greater flexibility. As shown in the below diagram, Pulsar holds the data in the topic while multiple teams can consume the data independently depending on their workloads and data consumption patterns.
+&lt;p&gt;Finally, Pulsar’s flexible messaging framework unifies the streaming and queuing data consumption models and provides greater flexibility. As shown in the below diagram, Pulsar holds the data in the topic while multiple teams can consume the data independently depending on their workloads and data consumption patterns.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/pulsar-flink/image-3.png&quot; width=&quot;640px&quot; alt=&quot;Apache Flink and Apache Pulsar&quot;/&gt;
+&lt;img src=&quot;/img/blog/pulsar-flink/image-3.png&quot; width=&quot;640px&quot; alt=&quot;Apache Flink and Apache Pulsar&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
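
In client terms, the consumption model is chosen per subscription. Continuing the hypothetical client from the previous sketch: a `Failover` (or `Exclusive`) subscription gives ordered, streaming-style consumption by a single active consumer, while a `Shared` subscription load-balances messages across consumers like a work queue.

```java
import org.apache.pulsar.client.api.Consumer;
import org.apache.pulsar.client.api.SubscriptionType;

// Streaming-style consumption: a single active consumer reads the topic in order.
Consumer<byte[]> streamingConsumer = client.newConsumer()
    .topic("persistent://sample/ns1/taxi-fares")
    .subscriptionName("analytics-stream")
    .subscriptionType(SubscriptionType.Failover)
    .subscribe();

// Queue-style consumption: messages are spread across all consumers attached
// to the same shared subscription.
Consumer<byte[]> queueConsumer = client.newConsumer()
    .topic("persistent://sample/ns1/taxi-fares")
    .subscriptionName("worker-queue")
    .subscriptionType(SubscriptionType.Shared)
    .subscribe();
```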
 
-## Pulsar’s view on data: Segmented data streams
+&lt;h2 id=&quot;pulsars-view-on-data-segmented-data-streams&quot;&gt;Pulsar’s view on data: Segmented data streams&lt;/h2&gt;
 
-Apache Flink is a streaming-first computation framework that perceives [batch processing as a special case of streaming]({{ site.baseurl }}/news/2019/02/13/unified-batch-streaming-blink.html). Flink’s view on data streams distinguishes batch and stream processing between bounded and unbounded data streams, assuming that for batch workloads the data stream is finite, with a beginning and an end.
+&lt;p&gt;Apache Flink is a streaming-first computation framework that perceives &lt;a href=&quot;/news/2019/02/13/unified-batch-streaming-blink.html&quot;&gt;batch processing as a special case of streaming&lt;/a&gt;. Flink’s view on data streams distinguishes batch and stream processing between bounded and unbounded data streams, assuming that for batch workloads the data stream is finite, with a beginning and an end.&lt;/p&gt;
 
-Apache Pulsar has a similar perspective to that of Apache Flink with regards to the data layer. The framework also uses streams as a unified view on all data, while its layered architecture allows traditional pub-sub messaging for streaming workloads and continuous data processing or usage of *Segmented Streams* and bounded data stream for batch and static workloads. 
+&lt;p&gt;Apache Pulsar has a similar perspective to that of Apache Flink with regards to the data layer. The framework also uses streams as a unified view on all data, while its layered architecture allows traditional pub-sub messaging for streaming workloads and continuous data processing or usage of &lt;em&gt;Segmented Streams&lt;/em&gt; and bounded data stream for batch and static workloads.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/pulsar-flink/image-4.png&quot; width=&quot;640px&quot; alt=&quot;Apache Flink and Apache Pulsar&quot;/&gt;
+&lt;img src=&quot;/img/blog/pulsar-flink/image-4.png&quot; width=&quot;640px&quot; alt=&quot;Apache Flink and Apache Pulsar&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-With Pulsar, once a producer sends data to a topic, it is partitioned depending on the data traffic and then further segmented under those partitions — using Apache Bookkeeper as segment store —  to allow for parallel data processing as illustrated in the diagram below. This allows a combination of traditional pub-sub messaging and distributed parallel computations in one framework.
+&lt;p&gt;With Pulsar, once a producer sends data to a topic, it is partitioned depending on the data traffic and then further segmented under those partitions — using Apache Bookkeeper as segment store —  to allow for parallel data processing as illustrated in the diagram below. This allows a combination of traditional pub-sub messaging and distributed parallel computations in one framework.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/pulsar-flink/image-5.png&quot; width=&quot;640px&quot; alt=&quot;Apache Flink and Apache Pulsar&quot;/&gt;
+&lt;img src=&quot;/img/blog/pulsar-flink/image-5.png&quot; width=&quot;640px&quot; alt=&quot;Apache Flink and Apache Pulsar&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
-
-## When Flink + Pulsar come together
-
-Apache Flink and Apache Pulsar integrate in multiple ways already. In the following sections, I will present some potential future integrations between the frameworks and share examples of existing ways in which you can utilize the frameworks together.
-
-### Potential Integrations
-
-Pulsar can integrate with Apache Flink in different ways. Some potential integrations include providing support for streaming workloads with the use of *Streaming Connectors* and support for batch workloads with the use of *Batch Source Connectors*. Pulsar also comes with native support for schema that can integrate with Flink and provide structured access to the data, for example by using Flink SQL as a way of querying data in Pulsar. Finally, an alternative way of integrating the techn [...]
-
-From an architecture point of view, we can imagine the integration between the two frameworks as one that uses Apache Pulsar for a unified view of the data layer and Apache Flink as a unified computation and data processing framework and API. 
-
-
-### Existing Integrations
-
-Integration between the two frameworks is ongoing and developers can already use Pulsar with Flink in multiple ways. For example, Pulsar can be used as a streaming source and streaming sink in Flink DataStream applications. Developers can ingest data from Pulsar into a Flink job that makes computations and processes real-time data, to then send the data back to a Pulsar topic as a streaming sink. Such an example is shown below: 
-
-
-```java
-// create and configure Pulsar consumer
-PulsarSourceBuilder&lt;String&gt;builder = PulsarSourceBuilder
-   .builder(new SimpleStringSchema())
-   .serviceUrl(serviceUrl)
-   .topic(inputTopic)
-   .subscriptionName(subscription);
-SourceFunction&lt;String&gt; src = builder.build();
-// ingest DataStream with Pulsar consumer
-DataStream&lt;String&gt; words = env.addSource(src);
-
-// perform computation on DataStream (here a simple WordCount)
-DataStream&lt;WordWithCount&gt; wc = words
-   .flatMap((FlatMapFunction&lt;String, WordWithCount&gt;) (word, collector) -&gt; {
-       collector.collect(new WordWithCount(word, 1));
-   })
-   .returns(WordWithCount.class)
-   .keyBy(&quot;word&quot;)
-   .timeWindow(Time.seconds(5))
-   .reduce((ReduceFunction&lt;WordWithCount&gt;) (c1, c2) -&gt;
-       new WordWithCount(c1.word, c1.count + c2.count));
-
-// emit result via Pulsar producer
-wc.addSink(new FlinkPulsarProducer&lt;&gt;(
-   serviceUrl,
-   outputTopic,
-   new AuthenticationDisabled(),
-   wordWithCount -&gt; wordWithCount.toString().getBytes(UTF_8),
-   wordWithCount -&gt; wordWithCount.word)
-);
-```
-
-Another integration between the two frameworks that developers can take advantage of includes using Pulsar as both a streaming source and a streaming table sink for Flink SQL or Table API queries as shown in the example below:
-
-```java
-// obtain a DataStream with words
-DataStream&lt;String&gt; words = ...
-
-// register DataStream as Table &quot;words&quot; with two attributes (&quot;word&quot;, &quot;ts&quot;). 
-//   &quot;ts&quot; is an event-time timestamp.
-tableEnvironment.registerDataStream(&quot;words&quot;, words, &quot;word, ts.rowtime&quot;);
-
-// create a TableSink that produces to Pulsar
-TableSink sink = new PulsarJsonTableSink(
-   serviceUrl,
-   outputTopic,
-   new AuthenticationDisabled(),
-   ROUTING_KEY);
-
-// register Pulsar TableSink as table &quot;wc&quot;
-tableEnvironment.registerTableSink(
-   &quot;wc&quot;,
-   sink.configure(
-      new String[]{&quot;word&quot;, &quot;cnt&quot;},
-      new TypeInformation[]{Types.STRING, Types.LONG}));
-
-// count words per 5 seconds and write result to table &quot;wc&quot;
-tableEnvironment.sqlUpdate(
-   &quot;INSERT INTO wc &quot; +
-   &quot;SELECT word, COUNT(*) AS cnt &quot; +
-   &quot;FROM words &quot; +
-   &quot;GROUP BY word, TUMBLE(ts, INTERVAL &#39;5&#39; SECOND)&quot;);
-```
-
-Finally, Flink integrates with Pulsar for batch workloads as a batch sink where all results get pushed to Pulsar after Apache Flink has completed the computation in a static data set. Such an example is shown below: 
-
-```java
-// obtain DataSet from arbitrary computation
-DataSet&lt;WordWithCount&gt; wc = ...
-
-// create PulsarOutputFormat instance
-OutputFormat pulsarOutputFormat = new PulsarOutputFormat(
-   serviceUrl, 
-   topic, 
-   new AuthenticationDisabled(), 
-   wordWithCount -&gt; wordWithCount.toString().getBytes());
-// write DataSet to Pulsar
-wc.output(pulsarOutputFormat);
-```
-
-## Conclusion
-
-Both Pulsar and Flink share a similar view on how the data and the computation level of an application can be *“streaming-first”* with batch as a special case of streaming. With Pulsar’s Segmented Streams approach and Flink’s steps to unify batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies together to provide elastic data processing at massive scale. Subscribe to the [Apache Flink]({{ site.baseurl }}/community.html#mailing [...]
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
+
+&lt;h2 id=&quot;when-flink--pulsar-come-together&quot;&gt;When Flink + Pulsar come together&lt;/h2&gt;
+
+&lt;p&gt;Apache Flink and Apache Pulsar integrate in multiple ways already. In the following sections, I will present some potential future integrations between the frameworks and share examples of existing ways in which you can utilize the frameworks together.&lt;/p&gt;
+
+&lt;h3 id=&quot;potential-integrations&quot;&gt;Potential Integrations&lt;/h3&gt;
+
+&lt;p&gt;Pulsar can integrate with Apache Flink in different ways. Some potential integrations include providing support for streaming workloads with the use of &lt;em&gt;Streaming Connectors&lt;/em&gt; and support for batch workloads with the use of &lt;em&gt;Batch Source Connectors&lt;/em&gt;. Pulsar also comes with native support for schema that can integrate with Flink and provide structured access to the data, for example by using Flink SQL as a way of querying data in Pulsar. Final [...]
+
+&lt;p&gt;From an architecture point of view, we can imagine the integration between the two frameworks as one that uses Apache Pulsar for a unified view of the data layer and Apache Flink as a unified computation and data processing framework and API.&lt;/p&gt;
+
+&lt;h3 id=&quot;existing-integrations&quot;&gt;Existing Integrations&lt;/h3&gt;
+
+&lt;p&gt;Integration between the two frameworks is ongoing and developers can already use Pulsar with Flink in multiple ways. For example, Pulsar can be used as a streaming source and streaming sink in Flink DataStream applications. Developers can ingest data from Pulsar into a Flink job that makes computations and processes real-time data, to then send the data back to a Pulsar topic as a streaming sink. Such an example is shown below:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// create and configure Pulsar consumer&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;PulsarSourceBuilder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;builder&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;PulsarSourceBuilder&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;builder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;SimpleStringSchema&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;())&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;topic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;inputTopic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;subscriptionName&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;subscription&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;SourceFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;src&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;builder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&g [...]
+&lt;span class=&quot;c1&quot;&gt;// ingest DataStream with Pulsar consumer&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;words&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;addSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt; [...]
+
+&lt;span class=&quot;c1&quot;&gt;// perform computation on DataStream (here a simple WordCount)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wc&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;words&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;flatMap&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;FlatMapFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;)&lt;/span&gt; &lt;span class=&quo [...]
+       &lt;span class=&quot;n&quot;&gt;collector&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;word&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;mi&quot;&gt;1&l [...]
+   &lt;span class=&quot;o&quot;&gt;})&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;returns&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;class&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;keyBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;word&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;timeWindow&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Time&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;seconds&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;5&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+   &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;reduce&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ReduceFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c1&lt;/span&gt;&lt;span class=&quot;o&quo [...]
+       &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;word&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;c1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;count&lt;/span& [...]
+
+&lt;span class=&quot;c1&quot;&gt;// emit result via Pulsar producer&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;wc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;addSink&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;FlinkPulsarProducer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&amp;gt;(&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;outputTopic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+   &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;AuthenticationDisabled&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;toString&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getBytes&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;UTF_8&lt;/span&gt;&lt;span class=&quo [...]
+   &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;word&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;Another integration between the two frameworks that developers can take advantage of includes using Pulsar as both a streaming source and a streaming table sink for Flink SQL or Table API queries as shown in the example below:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// obtain a DataStream with words&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;words&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;...&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;// register DataStream as Table &amp;quot;words&amp;quot; with two attributes (&amp;quot;word&amp;quot;, &amp;quot;ts&amp;quot;). &lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;//   &amp;quot;ts&amp;quot; is an event-time timestamp.&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;tableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;registerDataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;words&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;words&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;word, ts.rowtime&am [...]
+
+&lt;span class=&quot;c1&quot;&gt;// create a TableSink that produces to Pulsar&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;TableSink&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;sink&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;PulsarJsonTableSink&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;outputTopic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+   &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;AuthenticationDisabled&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;ROUTING_KEY&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;// register Pulsar TableSink as table &amp;quot;wc&amp;quot;&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;tableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;registerTableSink&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+   &lt;span class=&quot;s&quot;&gt;&amp;quot;wc&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;sink&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;configure&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;[]{&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;word&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;cnt&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;},&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;TypeInformation&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;[]{&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;LONG& [...]
+
+&lt;span class=&quot;c1&quot;&gt;// count words per 5 seconds and write result to table &amp;quot;wc&amp;quot;&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;tableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;sqlUpdate&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+   &lt;span class=&quot;s&quot;&gt;&amp;quot;INSERT INTO wc &amp;quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt;
+   &lt;span class=&quot;s&quot;&gt;&amp;quot;SELECT word, COUNT(*) AS cnt &amp;quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt;
+   &lt;span class=&quot;s&quot;&gt;&amp;quot;FROM words &amp;quot;&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;+&lt;/span&gt;
+   &lt;span class=&quot;s&quot;&gt;&amp;quot;GROUP BY word, TUMBLE(ts, INTERVAL &amp;#39;5&amp;#39; SECOND)&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;Finally, Flink integrates with Pulsar for batch workloads as a batch sink where all results get pushed to Pulsar after Apache Flink has completed the computation in a static data set. Such an example is shown below:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// obtain DataSet from arbitrary computation&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;DataSet&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;WordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wc&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;...&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;// create PulsarOutputFormat instance&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;OutputFormat&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pulsarOutputFormat&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;PulsarOutputFormat&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+   &lt;span class=&quot;n&quot;&gt;serviceUrl&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; 
+   &lt;span class=&quot;n&quot;&gt;topic&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; 
+   &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;AuthenticationDisabled&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt; 
+   &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;wordWithCount&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;toString&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getBytes&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;());&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;// write DataSet to Pulsar&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;wc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;output&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;pulsarOutputFormat&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;);&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
+
+&lt;p&gt;Both Pulsar and Flink share a similar view on how the data and the computation level of an application can be &lt;em&gt;“streaming-first”&lt;/em&gt; with batch as a special case of streaming. With Pulsar’s Segmented Streams approach and Flink’s steps to unify batch and stream processing workloads under one framework, there are numerous ways of integrating the two technologies together to provide elastic data processing at massive scale. Subscribe to the &lt;a href=&quot;/community. [...]
 </description>
-<pubDate>Fri, 03 May 2019 12:00:00 +0000</pubDate>
+<pubDate>Fri, 03 May 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/2019/05/03/pulsar-flink.html</link>
 <guid isPermaLink="true">/2019/05/03/pulsar-flink.html</guid>
 </item>
 
 <item>
 <title>Apache Flink&#39;s Application to Season of Docs</title>
-<description>The Apache Flink community is happy to announce its application to the first edition of [Season of Docs](https://developers.google.com/season-of-docs/) by Google. The program is bringing together Open Source projects and technical writers to raise awareness for and improve documentation of Open Source projects. While the community is continuously looking for new contributors to collaborate on our documentation, we would like to take this chance to work with one or two techni [...]
+<description>&lt;p&gt;The Apache Flink community is happy to announce its application to the first edition of &lt;a href=&quot;https://developers.google.com/season-of-docs/&quot;&gt;Season of Docs&lt;/a&gt; by Google. The program is bringing together Open Source projects and technical writers to raise awareness for and improve documentation of Open Source projects. While the community is continuously looking for new contributors to collaborate on our documentation, we would like to take  [...]
 
-The community has discussed this opportunity on the [dev mailinglist](https://lists.apache.org/thread.html/3c789b6187da23ad158df59bbc598543b652e3cfc1010a14e294e16a@%3Cdev.flink.apache.org%3E) and agreed on three project ideas to submit to the program. We have a great team of mentors (Stephan, Fabian, David, Jark &amp; Konstantin) lined up and are very much looking forward to the first proposals by potential technical writers (given we are admitted to the program ;)). In case of questions [...]
+&lt;p&gt;The community has discussed this opportunity on the &lt;a href=&quot;https://lists.apache.org/thread.html/3c789b6187da23ad158df59bbc598543b652e3cfc1010a14e294e16a@%3Cdev.flink.apache.org%3E&quot;&gt;dev mailinglist&lt;/a&gt; and agreed on three project ideas to submit to the program. We have a great team of mentors (Stephan, Fabian, David, Jark &amp;amp; Konstantin) lined up and are very much looking forward to the first proposals by potential technical writers (given we are adm [...]
 
-## Project Ideas List
+&lt;h2 id=&quot;project-ideas-list&quot;&gt;Project Ideas List&lt;/h2&gt;
 
-### Project 1: Improve Documentation of Stream Processing Concepts
+&lt;h3 id=&quot;project-1-improve-documentation-of-stream-processing-concepts&quot;&gt;Project 1: Improve Documentation of Stream Processing Concepts&lt;/h3&gt;
 
-**Description:** Stream processing is the processing of data in motion―in other words, computing on data directly as it is produced or received. Apache Flink has pioneered the field of distributed, stateful stream processing over the last several years. As the community has pushed the boundaries of stream processing, we have introduced new concepts that users need to become familiar with to develop and operate Apache Flink applications efficiently.
-The Apache Flink documentation \[1\] already contains a “concepts” section, but it is a) incomplete and b) lacks an overall structure &amp; reading flow. In addition, “concepts”-content is also spread over the development \[2\] &amp; operations \[3\] documentation without references to the “concepts” section. An example of this can be found in \[4\] and \[5\].
+&lt;p&gt;&lt;strong&gt;Description:&lt;/strong&gt; Stream processing is the processing of data in motion―in other words, computing on data directly as it is produced or received. Apache Flink has pioneered the field of distributed, stateful stream processing over the last several years. As the community has pushed the boundaries of stream processing, we have introduced new concepts that users need to become familiar with to develop and operate Apache Flink applications efficiently.
+The Apache Flink documentation [1] already contains a “concepts” section, but it is a) incomplete and b) lacks an overall structure &amp;amp; reading flow. In addition, “concepts”-content is also spread over the development [2] &amp;amp; operations [3] documentation without references to the “concepts” section. An example of this can be found in [4] and [5].&lt;/p&gt;
 
-In this project, we would like to restructure, consolidate and extend the concepts documentation for Apache Flink to better guide users who want to become productive as quickly as possible. This includes better conceptual introductions to topics such as event time, state, and fault tolerance with proper linking to and from relevant deployment and development guides.
+&lt;p&gt;In this project, we would like to restructure, consolidate and extend the concepts documentation for Apache Flink to better guide users who want to become productive as quickly as possible. This includes better conceptual introductions to topics such as event time, state, and fault tolerance with proper linking to and from relevant deployment and development guides.&lt;/p&gt;
 
-**Related material:**
+&lt;p&gt;&lt;strong&gt;Related material:&lt;/strong&gt;&lt;/p&gt;
 
-1. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/)
-2. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev)
-3. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops)
-4. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/concepts/programming-model.html#time]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/concepts/programming-model.html#time)
-5. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/event_time.html]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/event_time.html)
+&lt;ol&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/&quot;&gt;https://ci.apache.org/projects/flink/flink-docs-release-1.8/&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev&quot;&gt;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops&quot;&gt;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/concepts/programming-model.html#time&quot;&gt;https://ci.apache.org/projects/flink/flink-docs-release-1.8/concepts/programming-model.html#time&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/event_time.html&quot;&gt;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/event_time.html&lt;/a&gt;&lt;/li&gt;
+&lt;/ol&gt;
 
-### Project 2: Improve Documentation of Flink Deployments &amp; Operations
+&lt;h3 id=&quot;project-2-improve-documentation-of-flink-deployments--operations&quot;&gt;Project 2: Improve Documentation of Flink Deployments &amp;amp; Operations&lt;/h3&gt;
 
-**Description:** Stream processing is the processing of data in motion―in other words, computing on data directly as it is produced or received. Apache Flink has pioneered the field of distributed, stateful stream processing for the last few years. As a stateful distributed system in general and a continuously running, low-latency system in particular, Apache Flink deployments are non-trivial to set up and manage.
-Unfortunately, the operations \[1\] and monitoring documentation \[2\] are arguably the weakest spots of the Apache Flink documentation. While it is comprehensive and often goes into a lot of detail, it lacks an overall structure and does not address common overarching concerns of operations teams in an efficient way.
+&lt;p&gt;&lt;strong&gt;Description:&lt;/strong&gt; Stream processing is the processing of data in motion―in other words, computing on data directly as it is produced or received. Apache Flink has pioneered the field of distributed, stateful stream processing for the last few years. As a stateful distributed system in general and a continuously running, low-latency system in particular, Apache Flink deployments are non-trivial to set up and manage.
+Unfortunately, the operations [1] and monitoring documentation [2] are arguably the weakest spots of the Apache Flink documentation. While it is comprehensive and often goes into a lot of detail, it lacks an overall structure and does not address common overarching concerns of operations teams in an efficient way.&lt;/p&gt;
 
-In this project, we would like to restructure this part of the documentation and extend it if possible. Ideas for extension include: discussion of session and per-job clusters, better documentation for containerized deployments (incl. K8s), capacity planning &amp; integration into CI/CD pipelines.
+&lt;p&gt;In this project, we would like to restructure this part of the documentation and extend it if possible. Ideas for extension include: discussion of session and per-job clusters, better documentation for containerized deployments (incl. K8s), capacity planning &amp;amp; integration into CI/CD pipelines.&lt;/p&gt;
 
-**Related material:**
+&lt;p&gt;&lt;strong&gt;Related material:&lt;/strong&gt;&lt;/p&gt;
 
-1. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/ops/)
-2. [{{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/monitoring)
+&lt;ol&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops/&quot;&gt;https://ci.apache.org/projects/flink/flink-docs-release-1.8/ops&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring&quot;&gt;https://ci.apache.org/projects/flink/flink-docs-release-1.8/monitoring&lt;/a&gt;&lt;/li&gt;
+&lt;/ol&gt;
 
-### Project 3: Improve Documentation for Relational APIs (Table API &amp; SQL)
+&lt;h3 id=&quot;project-3-improve-documentation-for-relational-apis-table-api--sql&quot;&gt;Project 3: Improve Documentation for Relational APIs (Table API &amp;amp; SQL)&lt;/h3&gt;
 
-**Description:** Apache Flink features APIs at different levels of abstraction which enables its users to trade conciseness for expressiveness. Flink’s relational APIs, SQL and the Table API, are “younger” than the DataStream and DataSet APIs, more high-level and focus on data analytics use cases. A core principle of Flink’s SQL and Table API is that they can be used to process static (batch) and continuous (streaming) data and that a program or query produces the same result in both cases.
-The documentation of Flink’s relational APIs has organically grown and can be improved in a few areas. There are several on-going development efforts (e.g. Hive Integration, Python Support or Support for Interactive Programming) that aim to extend the scope of the Table API and SQL.
+&lt;p&gt;&lt;strong&gt;Description:&lt;/strong&gt; Apache Flink features APIs at different levels of abstraction which enables its users to trade conciseness for expressiveness. Flink’s relational APIs, SQL and the Table API, are “younger” than the DataStream and DataSet APIs, more high-level and focus on data analytics use cases. A core principle of Flink’s SQL and Table API is that they can be used to process static (batch) and continuous (streaming) data and that a program or query pr [...]
+The documentation of Flink’s relational APIs has organically grown and can be improved in a few areas. There are several on-going development efforts (e.g. Hive Integration, Python Support or Support for Interactive Programming) that aim to extend the scope of the Table API and SQL.&lt;/p&gt;
 
-The existing documentation could be reorganized to prepare for covering the new features. Moreover, it could be improved by adding a concepts section that describes the use cases and internals of the APIs in more detail. Finally, the documentation of built-in functions could be improved by adding more concrete examples.
+&lt;p&gt;The existing documentation could be reorganized to prepare for covering the new features. Moreover, it could be improved by adding a concepts section that describes the use cases and internals of the APIs in more detail. Finally, the documentation of built-in functions could be improved by adding more concrete examples.&lt;/p&gt;
 
-**Related material:**
+&lt;p&gt;&lt;strong&gt;Related material:&lt;/strong&gt;&lt;/p&gt;
 
-1. [Table API &amp; SQL docs main page]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table)
-2. [Built-in functions]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/functions.html)
-3. [Concepts]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/common.html)
-4. [Streaming Concepts]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/dev/table/streaming/)
+&lt;ol&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table&quot;&gt;Table API &amp;amp; SQL docs main page&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/functions.html&quot;&gt;Built-in functions&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/common.html&quot;&gt;Concepts&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/dev/table/streaming/&quot;&gt;Streaming Concepts&lt;/a&gt;&lt;/li&gt;
+&lt;/ol&gt;
 
 </description>
-<pubDate>Wed, 17 Apr 2019 12:00:00 +0000</pubDate>
+<pubDate>Wed, 17 Apr 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/news/2019/04/17/sod.html</link>
 <guid isPermaLink="true">/news/2019/04/17/sod.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.8.0 Release Announcement</title>
-<description>The Apache Flink community is pleased to announce Apache Flink 1.8.0.  The
+<description>&lt;p&gt;The Apache Flink community is pleased to announce Apache Flink 1.8.0.  The
 latest release includes more than 420 resolved issues and some exciting
 additions to Flink that we describe in the following sections of this post.
-Please check the [complete changelog](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;version=12344274)
-for more details.
+Please check the &lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;amp;version=12344274&quot;&gt;complete changelog&lt;/a&gt;
+for more details.&lt;/p&gt;
 
-Flink 1.8.0 is API-compatible with previous 1.x.y releases for APIs annotated
-with the `@Public` annotation.  The release is available now and we encourage
-everyone to [download the release]({{ site.baseurl }}/downloads.html) and
+&lt;p&gt;Flink 1.8.0 is API-compatible with previous 1.x.y releases for APIs annotated
+with the &lt;code&gt;@Public&lt;/code&gt; annotation.  The release is available now and we encourage
+everyone to &lt;a href=&quot;/downloads.html&quot;&gt;download the release&lt;/a&gt; and
 check out the updated
-[documentation]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/).
-Feedback through the Flink [mailing
-lists]({{ site.baseurl }}/community.html#mailing-lists) or
-[JIRA](https://issues.apache.org/jira/projects/FLINK/summary) is, as always,
-very much appreciated!
-
-You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html) on the Flink project site.
+&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/&quot;&gt;documentation&lt;/a&gt;.
+Feedback through the Flink &lt;a href=&quot;/community.html#mailing-lists&quot;&gt;mailing
+lists&lt;/a&gt; or
+&lt;a href=&quot;https://issues.apache.org/jira/projects/FLINK/summary&quot;&gt;JIRA&lt;/a&gt; is, as always,
+very much appreciated!&lt;/p&gt;
+
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt; on the Flink project site.&lt;/p&gt;
+
+&lt;div class=&quot;page-toc&quot;&gt;
+&lt;ul id=&quot;markdown-toc&quot;&gt;
+  &lt;li&gt;&lt;a href=&quot;#new-features-and-improvements&quot; id=&quot;markdown-toc-new-features-and-improvements&quot;&gt;New Features and Improvements&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#important-changes&quot; id=&quot;markdown-toc-important-changes&quot;&gt;Important Changes&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#known-issues&quot; id=&quot;markdown-toc-known-issues&quot;&gt;Known Issues&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#release-notes&quot; id=&quot;markdown-toc-release-notes&quot;&gt;Release Notes&lt;/a&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;a href=&quot;#list-of-contributors&quot; id=&quot;markdown-toc-list-of-contributors&quot;&gt;List of Contributors&lt;/a&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
-{% toc %}
+&lt;/div&gt;
 
-With Flink 1.8.0 we come closer to our goals of enabling fast data processing
+&lt;p&gt;With Flink 1.8.0 we come closer to our goals of enabling fast data processing
 and building data-intensive applications for the Flink community in a seamless
 way. We do this by cleaning up and refactoring Flink under the hood to allow
 more efficient feature development in the future. This includes removal of the
-legacy runtime components that were subsumed in the major rework of Flink&#39;s
+legacy runtime components that were subsumed in the major rework of Flink’s
 underlying distributed system architecture
-([FLIP-6](https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077))
+(&lt;a href=&quot;https://cwiki.apache.org/confluence/pages/viewpage.action?pageId=65147077&quot;&gt;FLIP-6&lt;/a&gt;)
 as well as refactorings on the Table API that prepare it for the future
 addition of the Blink enhancements
-([FLINK-11439](https://issues.apache.org/jira/browse/FLINK-11439)).
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11439&quot;&gt;FLINK-11439&lt;/a&gt;).&lt;/p&gt;
 
-Nevertheless, this release includes some important new features and bug fixes.
+&lt;p&gt;Nevertheless, this release includes some important new features and bug fixes.
 The most interesting of those are highlighted below. Please consult the
-[complete changelog](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;version=12344274)
-and the [release notes]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/release-notes/flink-1.8.html)
-for more details.
-
-
-## New Features and Improvements
-
-* **Finalized State Schema Evolution Story**: This release completes
-  the community driven effort to provide a schema evolution story for
-  user state managed by Flink. This has been an effort that spanned 2
-  releases, starting from 1.7.0 with the introduction of support for
-  Avro state schema evolution as well as a revamped serialization
-  compatibility abstraction.
-
-  Flink 1.8.0 finalizes this effort by extending support for schema
-  evolution to POJOs, upgrading all Flink built-in serializers to use
-  the new serialization compatibility abstractions, as well as making it
-  easier for advanced users who use custom state serializers to
-  implement the abstractions.  These different aspects for a complete
-  out-of-the-box schema evolution story are explained in detail below:
-
-  1. Support for POJO state schema evolution: The pool of data types
-     that support state schema evolution has been expanded to include
-     POJOs. For state types that use POJOs, you can now add or remove
-     fields from your POJO while retaining backwards
-     compatibility. For a full overview of the list of data types that
-     now support schema evolution as well as their evolution
-     specifications and limitations, please refer to the State Schema
-     Evolution documentation page.
-
-
-  2. Upgrade all Flink serializers to use new serialization
-     compatibility abstractions: Back in 1.7.0, we introduced the new
-     serialization compatibility abstractions `TypeSerializerSnapshot`
-     and `TypeSerializerSchemaCompatibility`. Besides providing a more
-     expressible API to reflect schema compatibility between the data
-     stored in savepoints and the data registered at runtime, another
-     important aspect about the new abstraction is that it avoids the
-     need for Flink to Java-serialize the state serializer as state
-     metadata in savepoints.
-
-     In 1.8.0, all of Flink&#39;s built-in serializers have been upgraded to
-     use the new abstractions, and therefore the serializers
-     themselves are no longer Java-serialized into savepoints. This
-     greatly improves interoperability of Flink savepoints, in terms
-     of state schema evolvability. For example, one outcome was the
-     support for POJO schema evolution, as previously mentioned
-     above. Another outcome is that all composite data types supported
-     by Flink (such as `Either`, Scala case classes, Flink Java
-     `Tuple`s, etc.) are generally evolve-able as well when they have
-     a nested evolvable type, such as a POJO. For example, the `MyPojo`
-     type in `ValueState&lt;Tuple2&lt;Integer, MyPojo&gt;&gt;` or
-     `ListState&lt;Either&lt;Integer, MyPojo&gt;&gt;`, which is a POJO, is allowed
-     to evolve its schema.
-
-     For users who are using custom `TypeSerializer` implementations
-     for their state serializer and are still using the outdated
-     abstractions (i.e. `TypeSerializerConfigSnapshot` and
-     `CompatiblityResult`), we highly recommend upgrading to the new
-     abstractions to be future proof. Please refer to the Custom State
-     Serialization documentation page for a detailed description on
-     the new abstractions.
-
-  3. Provide pre-defined snapshot implementations for common
-     serializers: For convenience, Flink 1.8.0 comes with two
-     predefined implementations for the `TypeSerializerSnapshot` that
-     make the task of implementing these new abstractions easier
-     for most implementations of `TypeSerializer`s -
-     `SimpleTypeSerializerSnapshot` and
-     `CompositeTypeSerializerSnapshot`. This section in the
-     documentation provides information on how to use these classes.
-
-* **Continuous cleanup of old state based on TTL
-  ([FLINK-7811](https://issues.apache.org/jira/browse/FLINK-7811))**: We
-  introduced TTL (time-to-live) for Keyed state in Flink 1.6
-  ([FLINK-9510](https://issues.apache.org/jira/browse/FLINK-9510)). This
-  feature enabled cleanup and made keyed state entries inaccessible after a
-  defined timeout. In addition state would now also be cleaned up when
-  writing a savepoint/checkpoint.
-
-  Flink 1.8 introduces continuous cleanup of old entries for both the RocksDB
-  state backend
-  ([FLINK-10471](https://issues.apache.org/jira/browse/FLINK-10471)) and the heap
-  state backend
-  ([FLINK-10473](https://issues.apache.org/jira/browse/FLINK-10473)). This means
-  that old entries (according to the TTL setting) are continuously cleaned up.
-
-* **SQL pattern detection with user-defined functions and
-  aggregations**: The support of the MATCH_RECOGNIZE clause has been
-  extended by multiple features.  The addition of user-defined
-  functions allows for custom logic during pattern detection
-  ([FLINK-10597](https://issues.apache.org/jira/browse/FLINK-10597)),
-  while adding aggregations allows for more complex CEP definitions,
-  such as the following
-  ([FLINK-7599](https://issues.apache.org/jira/browse/FLINK-7599)).
-
-  ```
-  SELECT *
-  FROM Ticker
-      MATCH_RECOGNIZE (
-          ORDER BY rowtime
-          MEASURES
-              AVG(A.price) AS avgPrice
-          ONE ROW PER MATCH
-          AFTER MATCH SKIP TO FIRST B
-          PATTERN (A+ B)
-          DEFINE
-              A AS AVG(A.price) &lt; 15
-      ) MR;
-  ```
-
-* **RFC-compliant CSV format ([FLINK-9964](https://issues.apache.org/jira/browse/FLINK-9964))**: The SQL tables can now be read and written in
-  an RFC-4180 standard compliant CSV table format. The format might also be
-  useful for general DataStream API users.
-
-* **New KafkaDeserializationSchema that gives direct access to ConsumerRecord
-  ([FLINK-8354](https://issues.apache.org/jira/browse/FLINK-8354))**: For the
-  Flink `KafkaConsumers`, we introduced a new `KafkaDeserializationSchema` that
-  gives direct access to the Kafka `ConsumerRecord`. This now allows access to
-  all data that Kafka provides for a record, including the headers. This
-  subsumes the `KeyedDeserializationSchema` functionality, which is deprecated but
-  still available for now.
-
-* **Per-shard watermarking option in FlinkKinesisConsumer
-  ([FLINK-5697](https://issues.apache.org/jira/browse/FLINK-5697))**: The Kinesis
-  Consumer can now emit periodic watermarks that are derived from per-shard watermarks,
-  for correct event time processing with subtasks that consume multiple Kinesis shards.
-
-* **New consumer for DynamoDB Streams to capture table changes
-  ([FLINK-4582](https://issues.apache.org/jira/browse/FLINK-4582))**: `FlinkDynamoDBStreamsConsumer`
-  is a variant of the Kinesis consumer that supports retrieval of CDC-like streams from DynamoDB tables.
-
-* **Support for global aggregates for subtask coordination
-  ([FLINK-10887](https://issues.apache.org/jira/browse/FLINK-10887))**:
-  Designed as a solution for global source watermark tracking, `GlobalAggregateManager`
-  allows sharing of information between parallel subtasks. This feature will
-  be integrated into streaming connectors for watermark synchronization and
-  can be used for other purposes with a user defined aggregator.
-
-## Important Changes
-
-* **Changes to bundling of Hadoop libraries with Flink
-  ([FLINK-11266](https://issues.apache.org/jira/browse/FLINK-11266))**:
-  Convenience binaries that include hadoop are no longer released.
-
-  If a deployment relies on `flink-shaded-hadoop2` being included in
-  `flink-dist`, then you must manually download a pre-packaged Hadoop
-  jar from the optional components section of the [download
-  page]({{ site.baseurl }}/downloads.html) and copy it into the
-  `/lib` directory.  Alternatively, a Flink distribution that includes
-  hadoop can be built by packaging `flink-dist` and activating the
-  `include-hadoop` maven profile.
-
-  As hadoop is no longer included in `flink-dist` by default, specifying
-  `-DwithoutHadoop` when packaging `flink-dist` no longer impacts the build.
-
-* **FlinkKafkaConsumer will now filter restored partitions based on topic
-  specification
-  ([FLINK-10342](https://issues.apache.org/jira/browse/FLINK-10342))**:
-  Starting from Flink 1.8.0, the `FlinkKafkaConsumer` now always filters out
-  restored partitions that are no longer associated with a specified topic to
-  subscribe to in the restored execution. This behaviour did not exist in
-  previous versions of the `FlinkKafkaConsumer`. If you wish to retain the
-  previous behaviour, please use the
-  `disableFilterRestoredPartitionsWithSubscribedTopics()` configuration method
-  on the `FlinkKafkaConsumer`.
-
-  Consider this example: if you had a Kafka Consumer that was consuming from
-  topic `A`, you did a savepoint, then changed your Kafka consumer to instead
-  consume from topic `B`, and then restarted your job from the savepoint.
-  Before this change, your consumer would now consume from both topic `A` and
-  `B` because it was stored in state that the consumer was consuming from topic
-  `A`. With the change, your consumer would only consume from topic `B` after
-  restore because it now filters the topics that are stored in state using the
-  configured topics.
-
- * **Change in the Maven modules of Table API
-   ([FLINK-11064](https://issues.apache.org/jira/browse/FLINK-11064))**: Users
-   who had a `flink-table` dependency before need to update their
-   dependencies to `flink-table-planner` and the correct dependency of
-   `flink-table-api-*`, depending on whether Java or Scala is used: one of
-   `flink-table-api-java-bridge` or `flink-table-api-scala-bridge`.
-
-## Known Issues
-
-* **Discarded checkpoint can cause Tasks to fail
-  ([FLINK-11662](https://issues.apache.org/jira/browse/FLINK-11662))**: There is
-  a race condition that can lead to erroneous checkpoint failures. This mostly
-  occurs when restarting from a savepoint or checkpoint takes a long time at the
-  sources of a job. If you see random checkpointing failures that don&#39;t seem to
-  have a good explanation you might be affected. Please see the Jira issue for
-  more details and a workaround for the problem.
-
-
-## Release Notes
-
-Please review the [release
-notes]({{ site.DOCS_BASE_URL }}flink-docs-release-1.8/release-notes/flink-1.8.html)
+&lt;a href=&quot;https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315522&amp;amp;version=12344274&quot;&gt;complete changelog&lt;/a&gt;
+and the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/release-notes/flink-1.8.html&quot;&gt;release notes&lt;/a&gt;
+for more details.&lt;/p&gt;
+
+&lt;h2 id=&quot;new-features-and-improvements&quot;&gt;New Features and Improvements&lt;/h2&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Finalized State Schema Evolution Story&lt;/strong&gt;: This release completes
+the community driven effort to provide a schema evolution story for
+user state managed by Flink. This has been an effort that spanned 2
+releases, starting from 1.7.0 with the introduction of support for
+Avro state schema evolution as well as a revamped serialization
+compatibility abstraction.&lt;/p&gt;
+
+    &lt;p&gt;Flink 1.8.0 finalizes this effort by extending support for schema
+evolution to POJOs, upgrading all Flink built-in serializers to use
+the new serialization compatibility abstractions, as well as making it
+easier for advanced users who use custom state serializers to
+implement the abstractions.  These different aspects for a complete
+out-of-the-box schema evolution story are explained in detail below:&lt;/p&gt;
+
+    &lt;ol&gt;
+      &lt;li&gt;
+        &lt;p&gt;Support for POJO state schema evolution: The pool of data types
+that support state schema evolution has been expanded to include
+POJOs. For state types that use POJOs, you can now add or remove
+fields from your POJO while retaining backwards
+compatibility. For a full overview of the list of data types that
+now support schema evolution as well as their evolution
+specifications and limitations, please refer to the State Schema
+Evolution documentation page.&lt;/p&gt;
+      &lt;/li&gt;
+      &lt;li&gt;
+        &lt;p&gt;Upgrade all Flink serializers to use new serialization
+compatibility abstractions: Back in 1.7.0, we introduced the new
+serialization compatibility abstractions &lt;code&gt;TypeSerializerSnapshot&lt;/code&gt;
+and &lt;code&gt;TypeSerializerSchemaCompatibility&lt;/code&gt;. Besides providing a more
+expressive API to reflect schema compatibility between the data
+stored in savepoints and the data registered at runtime, another
+important aspect about the new abstraction is that it avoids the
+need for Flink to Java-serialize the state serializer as state
+metadata in savepoints.&lt;/p&gt;
+
+        &lt;p&gt;In 1.8.0, all of Flink’s built-in serializers have been upgraded to
+use the new abstractions, and therefore the serializers
+themselves are no longer Java-serialized into savepoints. This
+greatly improves interoperability of Flink savepoints, in terms
+of state schema evolvability. For example, one outcome was the
+support for POJO schema evolution, as previously mentioned
+above. Another outcome is that all composite data types supported
+by Flink (such as &lt;code&gt;Either&lt;/code&gt;, Scala case classes, Flink Java
+&lt;code&gt;Tuple&lt;/code&gt;s, etc.) are generally evolvable as well when they have
+a nested evolvable type, such as a POJO. For example, the &lt;code&gt;MyPojo&lt;/code&gt;
+type in &lt;code&gt;ValueState&amp;lt;Tuple2&amp;lt;Integer, MyPojo&amp;gt;&amp;gt;&lt;/code&gt; or
+&lt;code&gt;ListState&amp;lt;Either&amp;lt;Integer, MyPojo&amp;gt;&amp;gt;&lt;/code&gt;, which is a POJO, is allowed
+to evolve its schema.&lt;/p&gt;
+
+        &lt;p&gt;For users who are using custom &lt;code&gt;TypeSerializer&lt;/code&gt; implementations
+for their state serializer and are still using the outdated
+abstractions (i.e. &lt;code&gt;TypeSerializerConfigSnapshot&lt;/code&gt; and
+&lt;code&gt;CompatibilityResult&lt;/code&gt;), we highly recommend upgrading to the new
+abstractions to be future proof. Please refer to the Custom State
+Serialization documentation page for a detailed description on
+the new abstractions.&lt;/p&gt;
+      &lt;/li&gt;
+      &lt;li&gt;
+        &lt;p&gt;Provide pre-defined snapshot implementations for common
+serializers: For convenience, Flink 1.8.0 comes with two
+predefined implementations for the &lt;code&gt;TypeSerializerSnapshot&lt;/code&gt; that
+make the task of implementing these new abstractions easier
+for most implementations of &lt;code&gt;TypeSerializer&lt;/code&gt;s -
+&lt;code&gt;SimpleTypeSerializerSnapshot&lt;/code&gt; and
+&lt;code&gt;CompositeTypeSerializerSnapshot&lt;/code&gt;. This section in the
+documentation provides information on how to use these classes.&lt;/p&gt;
+      &lt;/li&gt;
+    &lt;/ol&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Continuous cleanup of old state based on TTL
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-7811&quot;&gt;FLINK-7811&lt;/a&gt;)&lt;/strong&gt;: We
+introduced TTL (time-to-live) for Keyed state in Flink 1.6
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9510&quot;&gt;FLINK-9510&lt;/a&gt;). This
+feature enabled cleanup and made keyed state entries inaccessible after a
+defined timeout. In addition state would now also be cleaned up when
+writing a savepoint/checkpoint.&lt;/p&gt;
+
+    &lt;p&gt;Flink 1.8 introduces continuous cleanup of old entries for both the RocksDB
+state backend
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10471&quot;&gt;FLINK-10471&lt;/a&gt;) and the heap
+state backend
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10473&quot;&gt;FLINK-10473&lt;/a&gt;). This means
+that old entries (according to the TTL setting) are continuously cleaned up. A minimal configuration sketch follows this feature list.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;SQL pattern detection with user-defined functions and
+aggregations&lt;/strong&gt;: The support of the MATCH_RECOGNIZE clause has been
+extended by multiple features.  The addition of user-defined
+functions allows for custom logic during pattern detection
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10597&quot;&gt;FLINK-10597&lt;/a&gt;),
+while adding aggregations allows for more complex CEP definitions,
+such as the following
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-7599&quot;&gt;FLINK-7599&lt;/a&gt;).&lt;/p&gt;
+
+    &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;SELECT *
+FROM Ticker
+    MATCH_RECOGNIZE (
+        ORDER BY rowtime
+        MEASURES
+            AVG(A.price) AS avgPrice
+        ONE ROW PER MATCH
+        AFTER MATCH SKIP TO FIRST B
+        PATTERN (A+ B)
+        DEFINE
+            A AS AVG(A.price) &amp;lt; 15
+    ) MR;
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;RFC-compliant CSV format (&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9964&quot;&gt;FLINK-9964&lt;/a&gt;)&lt;/strong&gt;: The SQL tables can now be read and written in
+an RFC-4180 standard compliant CSV table format. The format might also be
+useful for general DataStream API users.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;New KafkaDeserializationSchema that gives direct access to ConsumerRecord
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-8354&quot;&gt;FLINK-8354&lt;/a&gt;)&lt;/strong&gt;: For the
+Flink &lt;code&gt;KafkaConsumers&lt;/code&gt;, we introduced a new &lt;code&gt;KafkaDeserializationSchema&lt;/code&gt; that
+gives direct access to the Kafka &lt;code&gt;ConsumerRecord&lt;/code&gt;. This now allows access to
+all data that Kafka provides for a record, including the headers. This
+subsumes the &lt;code&gt;KeyedDeserializationSchema&lt;/code&gt; functionality, which is deprecated but
+still available for now.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Per-shard watermarking option in FlinkKinesisConsumer
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-5697&quot;&gt;FLINK-5697&lt;/a&gt;)&lt;/strong&gt;: The Kinesis
+Consumer can now emit periodic watermarks that are derived from per-shard watermarks,
+for correct event time processing with subtasks that consume multiple Kinesis shards.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;New consumer for DynamoDB Streams to capture table changes
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-4582&quot;&gt;FLINK-4582&lt;/a&gt;)&lt;/strong&gt;: &lt;code&gt;FlinkDynamoDBStreamsConsumer&lt;/code&gt;
+is a variant of the Kinesis consumer that supports retrieval of CDC-like streams from DynamoDB tables.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Support for global aggregates for subtask coordination
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10887&quot;&gt;FLINK-10887&lt;/a&gt;)&lt;/strong&gt;:
+Designed as a solution for global source watermark tracking, &lt;code&gt;GlobalAggregateManager&lt;/code&gt;
+allows sharing of information between parallel subtasks. This feature will
+be integrated into streaming connectors for watermark synchronization and
+can be used for other purposes with a user defined aggregator.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
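+
+&lt;p&gt;To make the state TTL feature above a bit more concrete, the following is a minimal sketch of enabling a time-to-live on a keyed state descriptor. The state name and retention period are made up for illustration, and the continuous cleanup strategies added in 1.8 are configured through additional cleanup options on the same builder (see the linked Jira issues and the State TTL documentation for details).&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;// Sketch: expire entries 7 days after they were created or last updated
+StateTtlConfig ttlConfig = StateTtlConfig
+    .newBuilder(Time.days(7))
+    .setUpdateType(StateTtlConfig.UpdateType.OnCreateAndWrite)
+    .cleanupFullSnapshot()   // cleanup on full snapshots; 1.8 adds continuous cleanup options
+    .build();
+
+ValueStateDescriptor&amp;lt;Long&amp;gt; lastLogin =
+    new ValueStateDescriptor&amp;lt;&amp;gt;(&quot;lastLogin&quot;, Long.class);
+lastLogin.enableTimeToLive(ttlConfig);
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;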
+
+&lt;h2 id=&quot;important-changes&quot;&gt;Important Changes&lt;/h2&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Changes to bundling of Hadoop libraries with Flink
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11266&quot;&gt;FLINK-11266&lt;/a&gt;)&lt;/strong&gt;:
+Convenience binaries that include hadoop are no longer released.&lt;/p&gt;
+
+    &lt;p&gt;If a deployment relies on &lt;code&gt;flink-shaded-hadoop2&lt;/code&gt; being included in
+&lt;code&gt;flink-dist&lt;/code&gt;, then you must manually download a pre-packaged Hadoop
+jar from the optional components section of the &lt;a href=&quot;/downloads.html&quot;&gt;download
+page&lt;/a&gt; and copy it into the
+&lt;code&gt;/lib&lt;/code&gt; directory.  Alternatively, a Flink distribution that includes
+hadoop can be built by packaging &lt;code&gt;flink-dist&lt;/code&gt; and activating the
+&lt;code&gt;include-hadoop&lt;/code&gt; maven profile.&lt;/p&gt;
+
+    &lt;p&gt;As hadoop is no longer included in &lt;code&gt;flink-dist&lt;/code&gt; by default, specifying
+&lt;code&gt;-DwithoutHadoop&lt;/code&gt; when packaging &lt;code&gt;flink-dist&lt;/code&gt; no longer impacts the build.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;FlinkKafkaConsumer will now filter restored partitions based on topic
+specification
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10342&quot;&gt;FLINK-10342&lt;/a&gt;)&lt;/strong&gt;:
+Starting from Flink 1.8.0, the &lt;code&gt;FlinkKafkaConsumer&lt;/code&gt; now always filters out
+restored partitions that are no longer associated with a specified topic to
+subscribe to in the restored execution. This behaviour did not exist in
+previous versions of the &lt;code&gt;FlinkKafkaConsumer&lt;/code&gt;. If you wish to retain the
+previous behaviour, please use the
+&lt;code&gt;disableFilterRestoredPartitionsWithSubscribedTopics()&lt;/code&gt; configuration method
+on the &lt;code&gt;FlinkKafkaConsumer&lt;/code&gt;.&lt;/p&gt;
+
+    &lt;p&gt;Consider this example: if you had a Kafka Consumer that was consuming from
+topic &lt;code&gt;A&lt;/code&gt;, you did a savepoint, then changed your Kafka consumer to instead
+consume from topic &lt;code&gt;B&lt;/code&gt;, and then restarted your job from the savepoint.
+Before this change, your consumer would now consume from both topic &lt;code&gt;A&lt;/code&gt; and
+&lt;code&gt;B&lt;/code&gt; because it was stored in state that the consumer was consuming from topic
+&lt;code&gt;A&lt;/code&gt;. With the change, your consumer would only consume from topic &lt;code&gt;B&lt;/code&gt; after
+restore because it now filters the topics that are stored in state using the
+configured topics.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Change in the Maven modules of Table API
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11064&quot;&gt;FLINK-11064&lt;/a&gt;)&lt;/strong&gt;: Users
+who had a &lt;code&gt;flink-table&lt;/code&gt; dependency before need to update their
+dependencies to &lt;code&gt;flink-table-planner&lt;/code&gt; and the correct dependency of
+&lt;code&gt;flink-table-api-*&lt;/code&gt;, depending on whether Java or Scala is used: one of
+&lt;code&gt;flink-table-api-java-bridge&lt;/code&gt; or &lt;code&gt;flink-table-api-scala-bridge&lt;/code&gt;. A dependency sketch follows this list.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
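+
+&lt;p&gt;As an illustration of the dependency change above, a pom.xml excerpt for a Java Table API program could look roughly like the following sketch. The artifact names follow the new module split; the Scala suffix and version shown here are assumptions, so please double-check them against the downloads page for your setup.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;&amp;lt;dependency&amp;gt;
+  &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt;
+  &amp;lt;artifactId&amp;gt;flink-table-api-java-bridge_2.11&amp;lt;/artifactId&amp;gt;
+  &amp;lt;version&amp;gt;1.8.0&amp;lt;/version&amp;gt;
+&amp;lt;/dependency&amp;gt;
+&amp;lt;dependency&amp;gt;
+  &amp;lt;groupId&amp;gt;org.apache.flink&amp;lt;/groupId&amp;gt;
+  &amp;lt;artifactId&amp;gt;flink-table-planner_2.11&amp;lt;/artifactId&amp;gt;
+  &amp;lt;version&amp;gt;1.8.0&amp;lt;/version&amp;gt;
+&amp;lt;/dependency&amp;gt;
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;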
+
+&lt;h2 id=&quot;known-issues&quot;&gt;Known Issues&lt;/h2&gt;
+
+&lt;ul&gt;
+  &lt;li&gt;&lt;strong&gt;Discarded checkpoint can cause Tasks to fail
+(&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11662&quot;&gt;FLINK-11662&lt;/a&gt;)&lt;/strong&gt;: There is
+a race condition that can lead to erroneous checkpoint failures. This mostly
+occurs when restarting from a savepoint or checkpoint takes a long time at the
+sources of a job. If you see random checkpointing failures that don’t seem to
+have a good explanation you might be affected. Please see the Jira issue for
+more details and a workaround for the problem.&lt;/li&gt;
+&lt;/ul&gt;
+
+&lt;h2 id=&quot;release-notes&quot;&gt;Release Notes&lt;/h2&gt;
+
+&lt;p&gt;Please review the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.8/release-notes/flink-1.8.html&quot;&gt;release
+notes&lt;/a&gt;
 for a more detailed list of changes and new features if you plan to upgrade
-your Flink setup to Flink 1.8.
+your Flink setup to Flink 1.8.&lt;/p&gt;
 
-## List of Contributors
+&lt;h2 id=&quot;list-of-contributors&quot;&gt;List of Contributors&lt;/h2&gt;
 
-We would like to acknowledge all community members for contributing to this
+&lt;p&gt;We would like to acknowledge all community members for contributing to this
 release.  Special credits go to the following members for contributing to the
-1.8.0 release (according to `git log --pretty=&quot;%an&quot; release-1.7.0..release-1.8.0 | sort | uniq` without manual deduplication):
+1.8.0 release (according to &lt;code&gt;git log --pretty=&quot;%an&quot; release-1.7.0..release-1.8.0 | sort | uniq&lt;/code&gt; without manual deduplication):&lt;/p&gt;
 
-Addison Higham, Aitozi, Aleksey Pak, Alexander Fedulov, Alexey Trenikhin, Aljoscha Krettek, Andrey Zagrebin, Artsem Semianenka, Asura7969, Avi, Barisa Obradovic, Benchao Li, Bo WANG, Chesnay Schepler, Congxian Qiu, Cristian, David Anderson, Dawid Wysakowicz, Dian Fu, DuBin, EAlexRojas, EronWright, Eugen Yushin, Fabian Hueske, Fokko Driesprong, Gary Yao, Hequn Cheng, Igal Shilman, Jamie Grier, JaryZhen, Jeff Zhang, Jihyun Cho, Jinhu Wu, Joerg Schad, KarmaGYZ, Kezhu Wang, Konstantin Knauf, [...]
+&lt;p&gt;Addison Higham, Aitozi, Aleksey Pak, Alexander Fedulov, Alexey Trenikhin, Aljoscha Krettek, Andrey Zagrebin, Artsem Semianenka, Asura7969, Avi, Barisa Obradovic, Benchao Li, Bo WANG, Chesnay Schepler, Congxian Qiu, Cristian, David Anderson, Dawid Wysakowicz, Dian Fu, DuBin, EAlexRojas, EronWright, Eugen Yushin, Fabian Hueske, Fokko Driesprong, Gary Yao, Hequn Cheng, Igal Shilman, Jamie Grier, JaryZhen, Jeff Zhang, Jihyun Cho, Jinhu Wu, Joerg Schad, KarmaGYZ, Kezhu Wang, Konstant [...]
 
 </description>
-<pubDate>Tue, 09 Apr 2019 12:00:00 +0000</pubDate>
+<pubDate>Tue, 09 Apr 2019 14:00:00 +0200</pubDate>
 <link>https://flink.apache.org/news/2019/04/09/release-1.8.0.html</link>
 <guid isPermaLink="true">/news/2019/04/09/release-1.8.0.html</guid>
 </item>
 
 <item>
 <title>Flink and Prometheus: Cloud-native monitoring of streaming applications</title>
-<description>This blog post describes how developers can leverage Apache Flink&#39;s built-in [metrics system]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html) together with [Prometheus](https://prometheus.io/) to observe and monitor streaming applications in an effective way. This is a follow-up post from my [Flink Forward](https://flink-forward.org/) Berlin 2018 talk ([slides](https://www.slideshare.net/MaximilianBode1/monitoring-flink-with-prometheus), [video](h [...]
+<description>&lt;p&gt;This blog post describes how developers can leverage Apache Flink’s built-in &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html&quot;&gt;metrics system&lt;/a&gt; together with &lt;a href=&quot;https://prometheus.io/&quot;&gt;Prometheus&lt;/a&gt; to observe and monitor streaming applications in an effective way. This is a follow-up post from my &lt;a href=&quot;https://flink-forward.org/&quot;&gt;Flink Forward&lt;/a&g [...]
 
-## Why Prometheus?
+&lt;h2 id=&quot;why-prometheus&quot;&gt;Why Prometheus?&lt;/h2&gt;
 
-Prometheus is a metrics-based monitoring system that was originally created in 2012. The system is completely open-source (under the Apache License 2) with a vibrant community behind it and it has graduated from the Cloud Native Computing Foundation last year – a sign of maturity, stability and production-readiness. As we mentioned, the system is based on metrics and it is designed to measure the overall health, behavior and performance of a service. Prometheus features a multi-dimensional data mo [...]
+&lt;p&gt;Prometheus is a metrics-based monitoring system that was originally created in 2012. The system is completely open-source (under the Apache License 2) with a vibrant community behind it and it has graduated from the Cloud Native Computing Foundation last year – a sign of maturity, stability and production-readiness. As we mentioned, the system is based on metrics and it is designed to measure the overall health, behavior and performance of a service. Prometheus features a multi-dimensiona [...]
 
-* **Metrics:** Prometheus defines metrics as floating point values that change over time. These time series have millisecond precision.
-
-* **Labels** are the key-value pairs associated with time series that support Prometheus&#39; flexible and powerful data model – in contrast to hierarchical data structures that one might experience with traditional metrics systems.
-
-* **Scrape:** Prometheus is a pull-based system and fetches (&quot;scrapes&quot;) metrics data from specified sources that expose HTTP endpoints with a text-based format.
-
-* **PromQL** is Prometheus&#39; [query language](https://prometheus.io/docs/prometheus/latest/querying/basics/). It can be used for both building dashboards and setting up alert rules that will trigger when specific conditions are met.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Metrics:&lt;/strong&gt; Prometheus defines metrics as floating point values that change over time. These time series have millisecond precision.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Labels&lt;/strong&gt; are the key-value pairs associated with time series that support Prometheus’ flexible and powerful data model – in contrast to hierarchical data structures that one might experience with traditional metrics systems.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;Scrape:&lt;/strong&gt; Prometheus is a pull-based system and fetches (“scrapes”) metrics data from specified sources that expose HTTP endpoints with a text-based format.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;strong&gt;PromQL&lt;/strong&gt; is Prometheus’ &lt;a href=&quot;https://prometheus.io/docs/prometheus/latest/querying/basics/&quot;&gt;query language&lt;/a&gt;. It can be used for both building dashboards and setting up alert rules that will trigger when specific conditions are met.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-When considering metrics and monitoring systems for your Flink jobs, there are many [options]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html). Flink offers native support for exposing data to Prometheus via the `PrometheusReporter` configuration. Setting up this integration is very easy.
+&lt;p&gt;When considering metrics and monitoring systems for your Flink jobs, there are many &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html&quot;&gt;options&lt;/a&gt;. Flink offers native support for exposing data to Prometheus via the &lt;code&gt;PrometheusReporter&lt;/code&gt; configuration. Setting up this integration is very easy.&lt;/p&gt;
 
-Prometheus is a great choice as usually Flink jobs are not running in isolation but in a greater context of microservices. For making metrics available to Prometheus from other parts of a larger system, there are two options: There exist [libraries for all major languages](https://prometheus.io/docs/instrumenting/clientlibs/) to instrument other applications. Additionally, there is a wide variety of [exporters](https://prometheus.io/docs/instrumenting/exporters/), which are tools that ex [...]
+&lt;p&gt;Prometheus is a great choice as usually Flink jobs are not running in isolation but in a greater context of microservices. For making metrics available to Prometheus from other parts of a larger system, there are two options: There exist &lt;a href=&quot;https://prometheus.io/docs/instrumenting/clientlibs/&quot;&gt;libraries for all major languages&lt;/a&gt; to instrument other applications. Additionally, there is a wide variety of &lt;a href=&quot;https://prometheus.io/docs/ins [...]
 
-## Prometheus and Flink in Action
+&lt;h2 id=&quot;prometheus-and-flink-in-action&quot;&gt;Prometheus and Flink in Action&lt;/h2&gt;
 
-We have provided a [GitHub repository](https://github.com/mbode/flink-prometheus-example) that demonstrates the integration described above. To have a look, clone the repository, make sure [Docker](https://docs.docker.com/install/) is installed and run: 
+&lt;p&gt;We have provided a &lt;a href=&quot;https://github.com/mbode/flink-prometheus-example&quot;&gt;GitHub repository&lt;/a&gt; that demonstrates the integration described above. To have a look, clone the repository, make sure &lt;a href=&quot;https://docs.docker.com/install/&quot;&gt;Docker&lt;/a&gt; is installed and run:&lt;/p&gt;
 
-```
-./gradlew composeUp
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;./gradlew composeUp
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-This builds a Flink job using the build tool [Gradle](https://gradle.org/) and starts up a local environment based on [Docker Compose](https://docs.docker.com/compose/) running the job in a [Flink job cluster]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/ops/deployment/docker.html#flink-job-cluster) (reachable at [http://localhost:8081](http://localhost:8081/)) as well as a Prometheus instance ([http://localhost:9090](http://localhost:9090/)).
+&lt;p&gt;This builds a Flink job using the build tool &lt;a href=&quot;https://gradle.org/&quot;&gt;Gradle&lt;/a&gt; and starts up a local environment based on &lt;a href=&quot;https://docs.docker.com/compose/&quot;&gt;Docker Compose&lt;/a&gt; running the job in a &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/deployment/docker.html#flink-job-cluster&quot;&gt;Flink job cluster&lt;/a&gt; (reachable at &lt;a href=&quot;http://localhost:8081/&quot;&gt;http: [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-03-11-prometheus-monitoring/prometheusexamplejob.png&quot; width=&quot;600px&quot; alt=&quot;PrometheusExampleJob in Flink Web UI&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-03-11-prometheus-monitoring/prometheusexamplejob.png&quot; width=&quot;600px&quot; alt=&quot;PrometheusExampleJob in Flink Web UI&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Job graph and custom metric for example job in Flink web interface.&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-The `PrometheusExampleJob` has three operators: Random numbers up to 10,000 are generated, then a map counts the events and creates a histogram of the values passed through. Finally, the events are discarded without further output. The very simple code below is from the second operator. It illustrates how easy it is to add custom metrics relevant to your business logic into your Flink job.
+&lt;p&gt;The &lt;code&gt;PrometheusExampleJob&lt;/code&gt; has three operators: Random numbers up to 10,000 are generated, then a map counts the events and creates a histogram of the values passed through. Finally, the events are discarded without further output. The very simple code below is from the second operator. It illustrates how easy it is to add custom metrics relevant to your business logic into your Flink job.&lt;/p&gt;
 
-```java
-class FlinkMetricsExposingMapFunction extends RichMapFunction&lt;Integer, Integer&gt; {
-  private transient Counter eventCounter;
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-java&quot;&gt;&lt;span class=&quot;kd&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;FlinkMetricsExposingMapFunction&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;RichMapFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;&amp;lt;&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &l [...]
+  &lt;span class=&quot;kd&quot;&gt;private&lt;/span&gt; &lt;span class=&quot;kd&quot;&gt;transient&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Counter&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;eventCounter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
 
-  @Override
-  public void open(Configuration parameters) {
-    eventCounter = getRuntimeContext().getMetricGroup().counter(&quot;events&quot;);
-  }
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;void&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;open&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Configuration&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parameters&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;eventCounter&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;getRuntimeContext&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;getMetricGroup&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;().&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;counter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;events&amp;quot;&lt;/spa [...]
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
 
-  @Override
-  public Integer map(Integer value) {
-    eventCounter.inc();
-    return value;
-  }
-}
-```
+  &lt;span class=&quot;nd&quot;&gt;@Override&lt;/span&gt;
+  &lt;span class=&quot;kd&quot;&gt;public&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt; &lt;span class=&quot;nf&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;Integer&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;eventCounter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;na&quot;&gt;inc&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;();&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;return&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;value&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;;&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 &lt;center&gt;&lt;i&gt;&lt;small&gt;Excerpt from &lt;a href=&quot;https://github.com/mbode/flink-prometheus-example/blob/master/src/main/java/com/github/mbode/flink_prometheus_example/FlinkMetricsExposingMapFunction.java&quot;&gt;FlinkMetricsExposingMapFunction.java&lt;/a&gt; demonstrating custom Flink metric.&lt;/small&gt;&lt;/i&gt;&lt;/center&gt;
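+
+&lt;p&gt;For orientation, wiring such a function into a job looks roughly like the sketch below. The source class is a stand-in for the random-number generator used in the example repository (its actual name may differ), and the sink simply discards the events, mirroring the job structure described above.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+env.addSource(new RandomNumberSource())           // hypothetical source emitting random integers
+    .map(new FlinkMetricsExposingMapFunction())   // exposes the custom metrics shown above
+    .addSink(new DiscardingSink&amp;lt;&amp;gt;());           // events are discarded without further output
+
+env.execute();
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;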
 
-## Configuring Prometheus with Flink
-
-To start monitoring Flink with Prometheus, the following steps are necessary:
-
-1. Make the `PrometheusReporter` jar available to the classpath of the Flink cluster (it comes with the Flink distribution):
+&lt;h2 id=&quot;configuring-prometheus-with-flink&quot;&gt;Configuring Prometheus with Flink&lt;/h2&gt;
 
-        cp /opt/flink/opt/flink-metrics-prometheus-1.7.2.jar /opt/flink/lib
+&lt;p&gt;To start monitoring Flink with Prometheus, the following steps are necessary:&lt;/p&gt;
 
-2. [Configure the reporter]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#reporter) in Flink&#39;s _flink-conf.yaml_. All job managers and task managers will expose the metrics on the configured port.
+&lt;ol&gt;
+  &lt;li&gt;
+    &lt;p&gt;Make the &lt;code&gt;PrometheusReporter&lt;/code&gt; jar available to the classpath of the Flink cluster (it comes with the Flink distribution):&lt;/p&gt;
 
-        metrics.reporters: prom
-        metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
-        metrics.reporter.prom.port: 9999
+    &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt; cp /opt/flink/opt/flink-metrics-prometheus-1.7.2.jar /opt/flink/lib
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#reporter&quot;&gt;Configure the reporter&lt;/a&gt; in Flink’s &lt;em&gt;flink-conf.yaml&lt;/em&gt;. All job managers and task managers will expose the metrics on the configured port.&lt;/p&gt;
 
-3. Prometheus needs to know where to scrape metrics. In a static scenario, you can simply [configure Prometheus](https://prometheus.io/docs/prometheus/latest/configuration/configuration/) in _prometheus.yml_ with the following:
+    &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt; metrics.reporters: prom
+ metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
+ metrics.reporter.prom.port: 9999
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Prometheus needs to know where to scrape metrics. In a static scenario, you can simply &lt;a href=&quot;https://prometheus.io/docs/prometheus/latest/configuration/configuration/&quot;&gt;configure Prometheus&lt;/a&gt; in &lt;em&gt;prometheus.yml&lt;/em&gt; with the following:&lt;/p&gt;
 
-        scrape_configs:
-        - job_name: &#39;flink&#39;
-          static_configs:
-          - targets: [&#39;job-cluster:9999&#39;, &#39;taskmanager1:9999&#39;, &#39;taskmanager2:9999&#39;]
+    &lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt; scrape_configs:
+ - job_name: &#39;flink&#39;
+   static_configs:
+   - targets: [&#39;job-cluster:9999&#39;, &#39;taskmanager1:9999&#39;, &#39;taskmanager2:9999&#39;]
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-    In more dynamic scenarios we recommend using Prometheus&#39; service discovery support for different platforms such as Kubernetes, AWS EC2 and more.
+    &lt;p&gt;In more dynamic scenarios we recommend using Prometheus’ service discovery support for different platforms such as Kubernetes, AWS EC2 and more. A Kubernetes-based sketch follows this list.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ol&gt;
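+
+&lt;p&gt;As a sketch of such a dynamic setup, a Kubernetes-based scrape configuration could look like the following. It assumes the Flink pods carry the usual prometheus.io/scrape annotation; adapt the relabeling rules to your environment.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;scrape_configs:
+- job_name: flink
+  kubernetes_sd_configs:
+  - role: pod
+  relabel_configs:
+  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
+    action: keep
+    regex: true
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;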
 
-Both custom metrics are now available in Prometheus:
+&lt;p&gt;Both custom metrics are now available in Prometheus:&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-03-11-prometheus-monitoring/prometheus.png&quot; width=&quot;600px&quot; alt=&quot;Prometheus web UI with example metric&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-03-11-prometheus-monitoring/prometheus.png&quot; width=&quot;600px&quot; alt=&quot;Prometheus web UI with example metric&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Example metric in Prometheus web UI.&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-More technical metrics from the Flink cluster (like checkpoint sizes or duration, Kafka offsets or resource consumption) are also available. If you are interested, you can check out the HTTP endpoints exposing all Prometheus metrics for the job managers and the two task managers on [http://localhost:9249](http://localhost:9249/metrics), [http://localhost:9250](http://localhost:9250/metrics) and [http://localhost:9251](http://localhost:9251/metrics), respectively.
+&lt;p&gt;More technical metrics from the Flink cluster (like checkpoint sizes or duration, Kafka offsets or resource consumption) are also available. If you are interested, you can check out the HTTP endpoints exposing all Prometheus metrics for the job managers and the two task managers on &lt;a href=&quot;http://localhost:9249/metrics&quot;&gt;http://localhost:9249&lt;/a&gt;, &lt;a href=&quot;http://localhost:9250/metrics&quot;&gt;http://localhost:9250&lt;/a&gt; and &lt;a href=&quot;ht [...]
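+
+&lt;p&gt;As a quick sanity check, the custom counter can also be queried with PromQL. Assuming the default metric scope, the counter registered above ends up under a name like &lt;code&gt;flink_taskmanager_job_task_operator_events&lt;/code&gt; (the exact name depends on your scope configuration), so the per-second event rate over the last minute would be:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;rate(flink_taskmanager_job_task_operator_events[1m])
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;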
 
-To test Prometheus&#39; alerting feature, kill one of the Flink task managers via
+&lt;p&gt;To test Prometheus’ alerting feature, kill one of the Flink task managers via&lt;/p&gt;
 
-```
-docker kill taskmanager1
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;docker kill taskmanager1
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-Our Flink job can recover from this partial failure via the mechanism of [Checkpointing]({{ site.DOCS_BASE_URL }}flink-docs-release-1.7/dev/stream/state/checkpointing.html). Nevertheless, after roughly one minute (as configured in the alert rule) the following alert will fire:
+&lt;p&gt;Our Flink job can recover from this partial failure via the mechanism of &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/state/checkpointing.html&quot;&gt;Checkpointing&lt;/a&gt;. Nevertheless, after roughly one minute (as configured in the alert rule) the following alert will fire:&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-03-11-prometheus-monitoring/prometheusalerts.png&quot; width=&quot;600px&quot; alt=&quot;Prometheus web UI with example alert&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-03-11-prometheus-monitoring/prometheusalerts.png&quot; width=&quot;600px&quot; alt=&quot;Prometheus web UI with example alert&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Example alert in Prometheus web UI.&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
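+
+&lt;p&gt;Such an alert is defined in a Prometheus rules file. The following is a minimal sketch, not necessarily the exact rule shipped with the example repository: it fires once a scrape target has been down for one minute.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;groups:
+- name: flink
+  rules:
+  - alert: FlinkScrapeTargetDown
+    expr: up == 0
+    for: 1m
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;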
 
-In real-world situations alerts like this one can be routed through a component called [Alertmanager](https://prometheus.io/docs/alerting/alertmanager/) and be grouped into notifications to systems like email, PagerDuty or Slack.
+&lt;p&gt;In real-world situations alerts like this one can be routed through a component called &lt;a href=&quot;https://prometheus.io/docs/alerting/alertmanager/&quot;&gt;Alertmanager&lt;/a&gt; and be grouped into notifications to systems like email, PagerDuty or Slack.&lt;/p&gt;
 
-Go ahead and play around with the setup, and check out the [Grafana](https://grafana.com/grafana) instance reachable at [http://localhost:3000](http://localhost:3000/) (credentials _admin:flink_) for visualizing Prometheus metrics. If there are any questions or problems, feel free to [create an issue](https://github.com/mbode/flink-prometheus-example/issues). Once finished, do not forget to tear down the setup via
+&lt;p&gt;Go ahead and play around with the setup, and check out the &lt;a href=&quot;https://grafana.com/grafana&quot;&gt;Grafana&lt;/a&gt; instance reachable at &lt;a href=&quot;http://localhost:3000/&quot;&gt;http://localhost:3000&lt;/a&gt; (credentials &lt;em&gt;admin:flink&lt;/em&gt;) for visualizing Prometheus metrics. If there are any questions or problems, feel free to &lt;a href=&quot;https://github.com/mbode/flink-prometheus-example/issues&quot;&gt;create an issue&lt;/a&gt;. Onc [...]
 
-```
-./gradlew composeDown
-```
-&lt;br/&gt;
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;./gradlew composeDown
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-## Conclusion
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
 
-Using Prometheus together with Flink provides an easy way for effective monitoring and alerting of your Flink jobs. Both projects have exciting and vibrant communities behind them with new developments and additions scheduled for upcoming releases. We encourage you to try the two technologies together as it has immensely improved our insights into Flink jobs running in production.
+&lt;p&gt;Using Prometheus together with Flink provides an easy way for effective monitoring and alerting of your Flink jobs. Both projects have exciting and vibrant communities behind them with new developments and additions scheduled for upcoming releases. We encourage you to try the two technologies together as it has immensely improved our insights into Flink jobs running in production.&lt;/p&gt;
 </description>
-<pubDate>Mon, 11 Mar 2019 12:00:00 +0000</pubDate>
+<pubDate>Mon, 11 Mar 2019 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/features/2019/03/11/prometheus-monitoring.html</link>
 <guid isPermaLink="true">/features/2019/03/11/prometheus-monitoring.html</guid>
 </item>
 
 <item>
 <title>What to expect from Flink Forward San Francisco 2019</title>
-<description>The third annual Flink Forward San Francisco is just a few weeks away! As always, Flink Forward will be the right place to meet and mingle with experienced Flink users, contributors, and committers. Attendees will hear and chat about the latest developments around Flink and learn from technical deep-dive sessions and exciting use cases that were put into production with Flink. The event will take place on April 1-2, 2019 at Hotel Nikko in San Francisco. The [program committe [...]
+<description>&lt;p&gt;The third annual Flink Forward San Francisco is just a few weeks away! As always, Flink Forward will be the right place to meet and mingle with experienced Flink users, contributors, and committers. Attendees will hear and chat about the latest developments around Flink and learn from technical deep-dive sessions and exciting use cases that were put into production with Flink. The event will take place on April 1-2, 2019 at Hotel Nikko in San Francisco. The &lt;a hr [...]
 
-Some highlights of the program are:
+&lt;p&gt;Some highlights of the program are:&lt;/p&gt;
 
-* [Realtime Store Visit Predictions at Scale](https://sf-2019.flink-forward.org/conference-program#realtime-store-visit-predictions-at-scale): Luca Giovagnoli from *Yelp* will talk about a &quot;multidisciplinary&quot; Flink application that combines geospatial clustering algorithms, Machine Learning models, and cutting-edge stream-processing technology.
-
-* [Real-time Processing with Flink for Machine Learning at Netflix](https://sf-2019.flink-forward.org/conference-program#real-time-processing-with-flink-for-machine-learning-at-netflix): Elliot Chow will discuss the practical aspects of using Apache Flink to power Machine Learning algorithms for video recommendations, search results ranking, and selection of artwork images at *Netflix*.
-
-* [Building production Flink jobs with Airstream at Airbnb](https://sf-2019.flink-forward.org/conference-program#building-production-flink-jobs-with-airstream-at-airbnb): Pala Muthiah and Hao Wang will reveal how *Airbnb* builds real time data pipelines with Airstream, Airbnb&#39;s computation framework that is powered by Flink SQL.
-
-* [When Table meets AI: Build Flink AI Ecosystem on Table API](https://sf-2019.flink-forward.org/conference-program#when-table-meets-ai--build-flink-ai-ecosystem-on-table-api): Shaoxuan Wang from *Alibaba* will discuss how they are building a solid AI ecosystem for unified batch/streaming Machine Learning data pipelines on top of Flink&#39;s Table API.
-
-* [Adventures in Scaling from Zero to 5 Billion Data Points per Day](https://sf-2019.flink-forward.org/conference-program#adventures-in-scaling-from-zero-to-5-billion-data-points-per-day): Dave Torok will take us through *Comcast&#39;s* journey in scaling the company&#39;s operationalized Machine Learning framework from the very early days in production to processing more than 5 billion data points per day.
-
-If you&#39;re new to Apache Flink or want to deepen your knowledge of the framework, Flink Forward again features a full day of training.
-
-You can choose from 3 training tracks:
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://sf-2019.flink-forward.org/conference-program#realtime-store-visit-predictions-at-scale&quot;&gt;Realtime Store Visit Predictions at Scale&lt;/a&gt;: Luca Giovagnoli from &lt;em&gt;Yelp&lt;/em&gt; will talk about a “multidisciplinary” Flink application that combines geospatial clustering algorithms, Machine Learning models, and cutting-edge stream-processing technology.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://sf-2019.flink-forward.org/conference-program#real-time-processing-with-flink-for-machine-learning-at-netflix&quot;&gt;Real-time Processing with Flink for Machine Learning at Netflix&lt;/a&gt;: Elliot Chow will discuss the practical aspects of using Apache Flink to power Machine Learning algorithms for video recommendations, search results ranking, and selection of artwork images at &lt;em&gt;Netflix&lt;/em&gt;.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://sf-2019.flink-forward.org/conference-program#building-production-flink-jobs-with-airstream-at-airbnb&quot;&gt;Building production Flink jobs with Airstream at Airbnb&lt;/a&gt;: Pala Muthiah and Hao Wang will reveal how &lt;em&gt;Airbnb&lt;/em&gt; builds real time data pipelines with Airstream, Airbnb’s computation framework that is powered by Flink SQL.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://sf-2019.flink-forward.org/conference-program#when-table-meets-ai--build-flink-ai-ecosystem-on-table-api&quot;&gt;When Table meets AI: Build Flink AI Ecosystem on Table API&lt;/a&gt;: Shaoxuan Wang from &lt;em&gt;Alibaba&lt;/em&gt; will discuss how they are building a solid AI ecosystem for unified batch/streaming Machine Learning data pipelines on top of Flink’s Table API.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://sf-2019.flink-forward.org/conference-program#adventures-in-scaling-from-zero-to-5-billion-data-points-per-day&quot;&gt;Adventures in Scaling from Zero to 5 Billion Data Points per Day&lt;/a&gt;: Dave Torok will take us through &lt;em&gt;Comcast’s&lt;/em&gt; journey in scaling the company’s operationalized Machine Learning framework from the very early days in production to processing more than 5 billion data points per day.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-* [Introduction to Streaming with Apache Flink](https://sf-2019.flink-forward.org/training-program#introduction-to-streaming-with-apache-flink): A hands-on, in-depth introduction to stream processing and Apache Flink, this course emphasizes those features of Flink that make it easy to build and manage accurate, fault tolerant applications on streams.
+&lt;p&gt;If you’re new to Apache Flink or want to deepen your knowledge of the framework, Flink Forward again features a full day of training.&lt;/p&gt;
 
-* [Analyzing Streaming Data with Flink SQL](https://sf-2019.flink-forward.org/training-program#analyzing-streaming-data-with-flink-sql): In this hands-on training, you will learn what it means to run SQL queries on data streams and how to fully leverage the potential of SQL on Flink. We&#39;ll also cover some of the more recent features such as time-versioned joins and the MATCH RECOGNIZE clause.
+&lt;p&gt;You can choose from 3 training tracks:&lt;/p&gt;
 
-* [Troubleshooting and Operating Flink at large scale](https://sf-2019.flink-forward.org/training-program#apache-flink-troubleshooting---operations): In this training, we will focus on everything you need to run Apache Flink applications reliably and efficiently in production including topics like capacity planning, monitoring, troubleshooting and tuning Apache Flink.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://sf-2019.flink-forward.org/training-program#introduction-to-streaming-with-apache-flink&quot;&gt;Introduction to Streaming with Apache Flink&lt;/a&gt;: A hands-on, in-depth introduction to stream processing and Apache Flink, this course emphasizes those features of Flink that make it easy to build and manage accurate, fault tolerant applications on streams.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://sf-2019.flink-forward.org/training-program#analyzing-streaming-data-with-flink-sql&quot;&gt;Analyzing Streaming Data with Flink SQL&lt;/a&gt;: In this hands-on training, you will learn what it means to run SQL queries on data streams and how to fully leverage the potential of SQL on Flink. We’ll also cover some of the more recent features such as time-versioned joins and the MATCH RECOGNIZE clause.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;a href=&quot;https://sf-2019.flink-forward.org/training-program#apache-flink-troubleshooting---operations&quot;&gt;Troubleshooting and Operating Flink at large scale&lt;/a&gt;: In this training, we will focus on everything you need to run Apache Flink applications reliably and efficiently in production including topics like capacity planning, monitoring, troubleshooting and tuning Apache Flink.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-If you haven&#39;t done so yet, check out the [full schedule](http://sf-2019.flink-forward.org/conference-program) and [register](https://sf-2019.flink-forward.org/register) your attendance. &lt;br&gt;
-I&#39;m looking forward to meeting you at Flink Forward San Francisco.
+&lt;p&gt;If you haven’t done so yet, check out the &lt;a href=&quot;http://sf-2019.flink-forward.org/conference-program&quot;&gt;full schedule&lt;/a&gt; and &lt;a href=&quot;https://sf-2019.flink-forward.org/register&quot;&gt;register&lt;/a&gt; your attendance. &lt;br /&gt;
+I’m looking forward to meeting you at Flink Forward San Francisco.&lt;/p&gt;
 
-*Fabian*</description>
-<pubDate>Wed, 06 Mar 2019 11:00:00 +0000</pubDate>
+&lt;p&gt;&lt;em&gt;Fabian&lt;/em&gt;&lt;/p&gt;
+</description>
+<pubDate>Wed, 06 Mar 2019 12:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2019/03/06/ffsf-preview.html</link>
 <guid isPermaLink="true">/news/2019/03/06/ffsf-preview.html</guid>
 </item>
@@ -5026,163 +5226,210 @@ I&#39;m looking forward to meet you at Flink Forward San Francisco.
   td { vertical-align: top }
 &lt;/style&gt;
 
-This blog post provides an introduction to Apache Flink’s built-in monitoring
+&lt;p&gt;This blog post provides an introduction to Apache Flink’s built-in monitoring
 and metrics system, which allows developers to effectively monitor their Flink
 jobs. Oftentimes, the task of picking the relevant metrics to monitor a
 Flink application can be overwhelming for a DevOps team that is just starting
 with stream processing and Apache Flink. Having worked with many organizations
 that deploy Flink at scale, I would like to share my experience and some best
-practices with the community.
+practices with the community.&lt;/p&gt;
 
-With business-critical applications running on Apache Flink, performance monitoring
+&lt;p&gt;With business-critical applications running on Apache Flink, performance monitoring
 becomes an increasingly important part of a successful production deployment. It 
 ensures that any degradation or downtime is immediately identified and resolved
-as quickly as possible.
+as quickly as possible.&lt;/p&gt;
 
-Monitoring goes hand-in-hand with observability, which is a prerequisite for
+&lt;p&gt;Monitoring goes hand-in-hand with observability, which is a prerequisite for
 troubleshooting and performance tuning. Nowadays, with the complexity of modern
 enterprise applications and the speed of delivery increasing, an engineering
 team must understand and have a complete overview of its applications’ status at
-any given point in time.
+any given point in time.&lt;/p&gt;
 
-## Flink’s Metrics System
+&lt;h2 id=&quot;flinks-metrics-system&quot;&gt;Flink’s Metrics System&lt;/h2&gt;
 
-The foundation for monitoring Flink jobs is its [metrics
-system](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html&gt;)
-which consists of two components: `Metrics` and `MetricsReporters`.
+&lt;p&gt;The foundation for monitoring Flink jobs is its &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html&quot;&gt;metrics
+system&lt;/a&gt;
+which consists of two components: &lt;code&gt;Metrics&lt;/code&gt; and &lt;code&gt;MetricsReporters&lt;/code&gt;.&lt;/p&gt;
 
-### Metrics
+&lt;h3 id=&quot;metrics&quot;&gt;Metrics&lt;/h3&gt;
 
-Flink comes with a comprehensive set of built-in metrics such as:
+&lt;p&gt;Flink comes with a comprehensive set of built-in metrics such as:&lt;/p&gt;
 
-* Used JVM Heap / NonHeap / Direct Memory (per Task-/JobManager)
-* Number of Job Restarts (per Job)
-* Number of Records Per Second (per Operator)
-* ...
+&lt;ul&gt;
+  &lt;li&gt;Used JVM Heap / NonHeap / Direct Memory (per Task-/JobManager)&lt;/li&gt;
+  &lt;li&gt;Number of Job Restarts (per Job)&lt;/li&gt;
+  &lt;li&gt;Number of Records Per Second (per Operator)&lt;/li&gt;
+  &lt;li&gt;…&lt;/li&gt;
+&lt;/ul&gt;
 
-These metrics have different scopes and measure both general aspects (e.g. JVM or
-operating system) and Flink-specific aspects.
+&lt;p&gt;These metrics have different scopes and measure both general aspects (e.g. JVM or
+operating system) and Flink-specific aspects.&lt;/p&gt;
 
-As a user, you can and should add application-specific metrics to your
+&lt;p&gt;As a user, you can and should add application-specific metrics to your
 functions. Typically these include counters for the number of invalid records or
 the number of records temporarily buffered in managed state. Besides counters,
 Flink offers additional metrics types like gauges and histograms. For
 instructions on how to register your own metrics with Flink’s metrics system
-please check out [Flink’s
-documentation](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#registering-metrics&gt;).
+please check out &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#registering-metrics&quot;&gt;Flink’s
+documentation&lt;/a&gt;.
 In this blog post, we will focus on how to get the most out of Flink’s built-in
-metrics.
+metrics.&lt;/p&gt;
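
As a rough, minimal sketch of what registering such a metric could look like (the class name `RecordValidator` and the metric name `invalidRecords` are made up for this illustration), a counter can be created in the `open()` method of a rich function and incremented in the processing method:

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.metrics.Counter;

// Hypothetical mapper that counts records it considers invalid.
public class RecordValidator extends RichMapFunction<String, String> {

    private transient Counter invalidRecords;

    @Override
    public void open(Configuration parameters) {
        // Register the counter with Flink's metrics system; it is then exposed
        // via the REST API and any configured MetricsReporter.
        this.invalidRecords = getRuntimeContext()
            .getMetricGroup()
            .counter("invalidRecords");
    }

    @Override
    public String map(String value) {
        if (value == null || value.isEmpty()) {
            invalidRecords.inc();
        }
        return value;
    }
}
```

Gauges and histograms are registered analogously via `gauge(...)` and `histogram(...)` on the same metric group.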
 
-### MetricsReporters
+&lt;h3 id=&quot;metricsreporters&quot;&gt;MetricsReporters&lt;/h3&gt;
 
-All metrics can be queried via Flink’s REST API. However, users can configure
+&lt;p&gt;All metrics can be queried via Flink’s REST API. However, users can configure
 MetricsReporters to send the metrics to external systems. Apache Flink provides
 reporters to the most common monitoring tools out-of-the-box including JMX,
 Prometheus, Datadog, Graphite and InfluxDB. For information about how to
-configure a reporter, check out Flink’s [MetricsReporter
-documentation](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#reporter&gt;).
+configure a reporter, check out Flink’s &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#reporter&quot;&gt;MetricsReporter
+documentation&lt;/a&gt;.&lt;/p&gt;
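
For illustration, a minimal reporter configuration for Prometheus might look like the following flink-conf.yaml sketch; the reporter name `prom` and the port range are arbitrary choices, and the `flink-metrics-prometheus` jar is assumed to be available in Flink's `lib/` folder:

```yaml
# flink-conf.yaml -- sketch of a Prometheus reporter configuration
metrics.reporter.prom.class: org.apache.flink.metrics.prometheus.PrometheusReporter
# Port (or port range) the reporter's HTTP endpoint binds to on each Job-/TaskManager process.
metrics.reporter.prom.port: 9249-9260
```

The same pattern applies to the other shipped reporters; only the reporter class and its options change.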
 
-In the remaining part of this blog post, we will go over some of the most
-important metrics to monitor your Apache Flink application.
+&lt;p&gt;In the remaining part of this blog post, we will go over some of the most
+important metrics to monitor your Apache Flink application.&lt;/p&gt;
 
-## Monitoring General Health
+&lt;h2 id=&quot;monitoring-general-health&quot;&gt;Monitoring General Health&lt;/h2&gt;
 
-The first thing you want to monitor is whether your job is actually in a *RUNNING*
+&lt;p&gt;The first thing you want to monitor is whether your job is actually in a &lt;em&gt;RUNNING&lt;/em&gt;
 state. In addition, it pays off to monitor the number of restarts and the time
-since the last restart.
+since the last restart.&lt;/p&gt;
 
-Generally speaking, successful checkpointing is a strong indicator of the
+&lt;p&gt;Generally speaking, successful checkpointing is a strong indicator of the
 general health of your application. For each checkpoint, checkpoint barriers
 need to flow through the whole topology of your Flink job, and events and
 barriers cannot overtake each other. Therefore, a successful checkpoint shows
-that no channel is fully congested.
+that no channel is fully congested.&lt;/p&gt;
 
-**Key Metrics**
+&lt;p&gt;&lt;strong&gt;Key Metrics&lt;/strong&gt;&lt;/p&gt;
 
-Metric | Scope | Description |
------- | ----- | ----------- |
-`uptime` | job | The time that the job has been running without interruption. |
-`fullRestarts` | job | The total number of full restarts since this job was submitted. |
-`numberOfCompletedCheckpoints` | job | The number of successfully completed checkpoints. |
-`numberOfFailedCheckpoints` | job | The number of failed checkpoints. |
+&lt;table&gt;
+  &lt;thead&gt;
+    &lt;tr&gt;
+      &lt;th&gt;Metric&lt;/th&gt;
+      &lt;th&gt;Scope&lt;/th&gt;
+      &lt;th&gt;Description&lt;/th&gt;
+    &lt;/tr&gt;
+  &lt;/thead&gt;
+  &lt;tbody&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;uptime&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job&lt;/td&gt;
+      &lt;td&gt;The time that the job has been running without interruption.&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;fullRestarts&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job&lt;/td&gt;
+      &lt;td&gt;The total number of full restarts since this job was submitted.&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;numberOfCompletedCheckpoints&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job&lt;/td&gt;
+      &lt;td&gt;The number of successfully completed checkpoints.&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;numberOfFailedCheckpoints&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job&lt;/td&gt;
+      &lt;td&gt;The number of failed checkpoints.&lt;/td&gt;
+    &lt;/tr&gt;
+  &lt;/tbody&gt;
+&lt;/table&gt;
 
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Example Dashboard Panels**
+&lt;p&gt;&lt;strong&gt;Example Dashboard Panels&lt;/strong&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-02-21-monitoring-best-practices/fig-1.png&quot; width=&quot;800px&quot; alt=&quot;Uptime (35 minutes), Restarting Time (3 milliseconds) and Number of Full Restarts (7)&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-02-21-monitoring-best-practices/fig-1.png&quot; width=&quot;800px&quot; alt=&quot;Uptime (35 minutes), Restarting Time (3 milliseconds) and Number of Full Restarts (7)&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Uptime (35 minutes), Restarting Time (3 milliseconds) and Number of Full Restarts (7)&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-02-21-monitoring-best-practices/fig-2.png&quot; width=&quot;800px&quot; alt=&quot;Completed Checkpoints (18336), Failed (14)&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-02-21-monitoring-best-practices/fig-2.png&quot; width=&quot;800px&quot; alt=&quot;Completed Checkpoints (18336), Failed (14)&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Completed Checkpoints (18336), Failed (14)&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Possible Alerts**
-
-* `ΔfullRestarts` &amp;gt; `threshold`
-* `ΔnumberOfFailedCheckpoints` &amp;gt; `threshold`
+&lt;p&gt;&lt;strong&gt;Possible Alerts&lt;/strong&gt;&lt;/p&gt;
 
+&lt;ul&gt;
+  &lt;li&gt;&lt;code&gt;ΔfullRestarts&lt;/code&gt; &amp;gt; &lt;code&gt;threshold&lt;/code&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;ΔnumberOfFailedCheckpoints&lt;/code&gt; &amp;gt; &lt;code&gt;threshold&lt;/code&gt;&lt;/li&gt;
+&lt;/ul&gt;
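
If the metrics are exported to Prometheus, such alerts can be expressed as alerting rules. The sketch below assumes the metric names produced by the default scope configuration of Flink's Prometheus reporter (e.g. `flink_jobmanager_job_fullRestarts`); the exact names and labels depend on your setup, so treat this only as a starting point:

```yaml
groups:
  - name: flink-job-health
    rules:
      - alert: FlinkJobFullRestarts
        # ΔfullRestarts > threshold (here: any full restart within the last 5 minutes)
        expr: delta(flink_jobmanager_job_fullRestarts[5m]) > 0
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "Flink job {{ $labels.job_name }} restarted"
      - alert: FlinkFailedCheckpoints
        # ΔnumberOfFailedCheckpoints > threshold
        expr: delta(flink_jobmanager_job_numberOfFailedCheckpoints[15m]) > 0
        for: 1m
        labels:
          severity: warning
        annotations:
          summary: "Flink job {{ $labels.job_name }} has failing checkpoints"
```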
 
-## Monitoring Progress &amp; Throughput
+&lt;h2 id=&quot;monitoring-progress--throughput&quot;&gt;Monitoring Progress &amp;amp; Throughput&lt;/h2&gt;
 
-Knowing that your application is RUNNING and checkpointing is working fine is good,
+&lt;p&gt;Knowing that your application is RUNNING and checkpointing is working fine is good,
 but it does not tell you whether the application is actually making progress and
-keeping up with the upstream systems.
+keeping up with the upstream systems.&lt;/p&gt;
 
-### Throughput
+&lt;h3 id=&quot;throughput&quot;&gt;Throughput&lt;/h3&gt;
 
-Flink provides multiple metrics to measure the throughput of your application.
-For each operator or task (remember: a task can contain multiple [chained
-tasks](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/dev/stream/operators/#task-chaining-and-resource-groups&gt;)),
+&lt;p&gt;Flink provides multiple metrics to measure the throughput of your application.
+For each operator or task (remember: a task can contain multiple &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/dev/stream/operators/#task-chaining-and-resource-groups&quot;&gt;chained
+tasks&lt;/a&gt;),
 Flink counts the number of records and bytes going in and out. Out of those
 metrics, the rate of outgoing records per operator is often the most intuitive
-and easiest to reason about.
+and easiest to reason about.&lt;/p&gt;
 
-**Key Metrics**
+&lt;p&gt;&lt;strong&gt;Key Metrics&lt;/strong&gt;&lt;/p&gt;
 
-Metric | Scope | Description |
------- | ----- | ----------- |
-`numRecordsOutPerSecond` | task | The number of records this operator/task sends per second. |
-`numRecordsOutPerSecond` | operator | The number of records this operator sends per second. |
+&lt;table&gt;
+  &lt;thead&gt;
+    &lt;tr&gt;
+      &lt;th&gt;Metric&lt;/th&gt;
+      &lt;th&gt;Scope&lt;/th&gt;
+      &lt;th&gt;Description&lt;/th&gt;
+    &lt;/tr&gt;
+  &lt;/thead&gt;
+  &lt;tbody&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;numRecordsOutPerSecond&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;task&lt;/td&gt;
+      &lt;td&gt;The number of records this operator/task sends per second.&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;numRecordsOutPerSecond&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;operator&lt;/td&gt;
+      &lt;td&gt;The number of records this operator sends per second.&lt;/td&gt;
+    &lt;/tr&gt;
+  &lt;/tbody&gt;
+&lt;/table&gt;
 
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Example Dashboard Panels**
+&lt;p&gt;&lt;strong&gt;Example Dashboard Panels&lt;/strong&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-02-21-monitoring-best-practices/fig-3.png&quot; width=&quot;800px&quot; alt=&quot;Mean Records Out per Second per Operator&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-02-21-monitoring-best-practices/fig-3.png&quot; width=&quot;800px&quot; alt=&quot;Mean Records Out per Second per Operator&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Mean Records Out per Second per Operator&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Possible Alerts**
+&lt;p&gt;&lt;strong&gt;Possible Alerts&lt;/strong&gt;&lt;/p&gt;
 
-* `recordsOutPerSecond` = `0` (for a non-Sink operator)
+&lt;ul&gt;
+  &lt;li&gt;&lt;code&gt;recordsOutPerSecond&lt;/code&gt; = &lt;code&gt;0&lt;/code&gt; (for a non-Sink operator)&lt;/li&gt;
+&lt;/ul&gt;
 
-_Note_: Source operators always have zero incoming records. Sink operators
+&lt;p&gt;&lt;em&gt;Note&lt;/em&gt;: Source operators always have zero incoming records. Sink operators
 always have zero outgoing records because the metrics only count
-Flink-internal communication. There is a [JIRA
-ticket](&lt;https://issues.apache.org/jira/browse/FLINK-7286&gt;) to change this
-behavior.
+Flink-internal communication. There is a &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-7286&quot;&gt;JIRA
+ticket&lt;/a&gt; to change this
+behavior.&lt;/p&gt;
 
-### Progress
+&lt;h3 id=&quot;progress&quot;&gt;Progress&lt;/h3&gt;
 
-For applications that use event time semantics, it is important that watermarks
-progress over time. A watermark of time _t_ tells the framework that it
-should no longer expect to receive events with a timestamp earlier than _t_
-and that it should, in turn, trigger all operations that were scheduled for a timestamp &amp;lt; _t_.
-For example, an event time window that ends at _t_ = 30 will be closed and
-evaluated once the watermark passes 30.
+&lt;p&gt;For applications that use event time semantics, it is important that watermarks
+progress over time. A watermark of time &lt;em&gt;t&lt;/em&gt; tells the framework that it
+should no longer expect to receive events with a timestamp earlier than &lt;em&gt;t&lt;/em&gt;
+and that it should, in turn, trigger all operations that were scheduled for a timestamp &amp;lt; &lt;em&gt;t&lt;/em&gt;.
+For example, an event time window that ends at &lt;em&gt;t&lt;/em&gt; = 30 will be closed and
+evaluated once the watermark passes 30.&lt;/p&gt;
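
For context, watermarks are typically generated close to the sources. The following sketch shows one common way to assign them with a bounded out-of-orderness extractor; the `MyEvent` type and its timestamp accessor are hypothetical placeholders:

```java
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.functions.timestamps.BoundedOutOfOrdernessTimestampExtractor;
import org.apache.flink.streaming.api.windowing.time.Time;

public class WatermarkAssignmentExample {

    // 'MyEvent' and its timestamp accessor are made up; 'events' is an existing stream.
    public static DataStream<MyEvent> withEventTime(DataStream<MyEvent> events) {
        return events.assignTimestampsAndWatermarks(
            // Watermarks trail the largest seen timestamp by 10 seconds of allowed out-of-orderness.
            new BoundedOutOfOrdernessTimestampExtractor<MyEvent>(Time.seconds(10)) {
                @Override
                public long extractTimestamp(MyEvent event) {
                    return event.getEventTimestampMillis();
                }
            });
    }
}
```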
 
-As a consequence, you should monitor the watermark at event time-sensitive
+&lt;p&gt;As a consequence, you should monitor the watermark at event time-sensitive
 operators in your application, such as process functions and windows. If the
 difference between the current processing time and the watermark, known as
 event-time skew, is unusually high, then it typically implies one of two issues.
@@ -5191,195 +5438,294 @@ during catch-up after a downtime or when your job is simply not able to keep up
 and events are queuing up. Second, it could mean a single upstream sub-task has
 not sent a watermark for a long time (for example because it did not receive any
 events to base the watermark on), which also prevents the watermark in
-downstream operators from progressing. This [JIRA
-ticket](&lt;https://issues.apache.org/jira/browse/FLINK-5017&gt;) provides further
-information and a workaround for the latter.
+downstream operators from progressing. This &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-5017&quot;&gt;JIRA
+ticket&lt;/a&gt; provides further
+information and a workaround for the latter.&lt;/p&gt;
 
-**Key Metrics**
+&lt;p&gt;&lt;strong&gt;Key Metrics&lt;/strong&gt;&lt;/p&gt;
 
-Metric | Scope | Description |
------- | ----- | ----------- |
-`currentOutputWatermark` | operator | The last watermark this operator has emitted. |
+&lt;table&gt;
+  &lt;thead&gt;
+    &lt;tr&gt;
+      &lt;th&gt;Metric&lt;/th&gt;
+      &lt;th&gt;Scope&lt;/th&gt;
+      &lt;th&gt;Description&lt;/th&gt;
+    &lt;/tr&gt;
+  &lt;/thead&gt;
+  &lt;tbody&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;currentOutputWatermark&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;operator&lt;/td&gt;
+      &lt;td&gt;The last watermark this operator has emitted.&lt;/td&gt;
+    &lt;/tr&gt;
+  &lt;/tbody&gt;
+&lt;/table&gt;
 
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Example Dashboard Panels**
+&lt;p&gt;&lt;strong&gt;Example Dashboard Panels&lt;/strong&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-02-21-monitoring-best-practices/fig-4.png&quot; width=&quot;800px&quot; alt=&quot;Event Time Lag per Subtask of a single operator in the topology. In this case, the watermark is lagging a few seconds behind for each subtask.&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-02-21-monitoring-best-practices/fig-4.png&quot; width=&quot;800px&quot; alt=&quot;Event Time Lag per Subtask of a single operator in the topology. In this case, the watermark is lagging a few seconds behind for each subtask.&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Event Time Lag per Subtask of a single operator in the topology. In this case, the watermark is lagging a few seconds behind for each subtask.&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Possible Alerts**
+&lt;p&gt;&lt;strong&gt;Possible Alerts&lt;/strong&gt;&lt;/p&gt;
 
-* `currentProcessingTime - currentOutputWatermark` &amp;gt; `threshold`
+&lt;ul&gt;
+  &lt;li&gt;&lt;code&gt;currentProcessingTime - currentOutputWatermark&lt;/code&gt; &amp;gt; &lt;code&gt;threshold&lt;/code&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
-### &quot;Keeping Up&quot;
+&lt;h3 id=&quot;keeping-up&quot;&gt;“Keeping Up”&lt;/h3&gt;
 
-When consuming from a message queue, there is often a direct way to monitor if
+&lt;p&gt;When consuming from a message queue, there is often a direct way to monitor if
 your application is keeping up. By using connector-specific metrics you can
 monitor how far behind the head of the message queue your current consumer group
-is. Flink forwards the underlying metrics from most sources.
+is. Flink forwards the underlying metrics from most sources.&lt;/p&gt;
 
-**Key Metrics**
+&lt;p&gt;&lt;strong&gt;Key Metrics&lt;/strong&gt;&lt;/p&gt;
 
-Metric | Scope | Description |
------- | ----- | ----------- |
-`records-lag-max` | user | applies to `FlinkKafkaConsumer`. The maximum lag in terms of the number of records for any partition in this window. An increasing value over time is your best indication that the consumer group is not keeping up with the producers. |
-`millisBehindLatest` | user | applies to `FlinkKinesisConsumer`. The number of milliseconds a consumer is behind the head of the stream. For any consumer and Kinesis shard, this indicates how far it is behind the current time. |
+&lt;table&gt;
+  &lt;thead&gt;
+    &lt;tr&gt;
+      &lt;th&gt;Metric&lt;/th&gt;
+      &lt;th&gt;Scope&lt;/th&gt;
+      &lt;th&gt;Description&lt;/th&gt;
+    &lt;/tr&gt;
+  &lt;/thead&gt;
+  &lt;tbody&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;records-lag-max&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;user&lt;/td&gt;
+      &lt;td&gt;applies to &lt;code&gt;FlinkKafkaConsumer&lt;/code&gt;. The maximum lag in terms of the number of records for any partition in this window. An increasing value over time is your best indication that the consumer group is not keeping up with the producers.&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;millisBehindLatest&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;user&lt;/td&gt;
+      &lt;td&gt;applies to &lt;code&gt;FlinkKinesisConsumer&lt;/code&gt;. The number of milliseconds a consumer is behind the head of the stream. For any consumer and Kinesis shard, this indicates how far it is behind the current time.&lt;/td&gt;
+    &lt;/tr&gt;
+  &lt;/tbody&gt;
+&lt;/table&gt;
 
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Possible Alerts**
+&lt;p&gt;&lt;strong&gt;Possible Alerts&lt;/strong&gt;&lt;/p&gt;
 
-* `records-lag-max`  &amp;gt; `threshold`
-* `millisBehindLatest` &amp;gt; `threshold`
+&lt;ul&gt;
+  &lt;li&gt;&lt;code&gt;records-lag-max&lt;/code&gt;  &amp;gt; &lt;code&gt;threshold&lt;/code&gt;&lt;/li&gt;
+  &lt;li&gt;&lt;code&gt;millisBehindLatest&lt;/code&gt; &amp;gt; &lt;code&gt;threshold&lt;/code&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
-## Monitoring Latency
+&lt;h2 id=&quot;monitoring-latency&quot;&gt;Monitoring Latency&lt;/h2&gt;
 
-Generally speaking, latency is the delay between the creation of an event and
+&lt;p&gt;Generally speaking, latency is the delay between the creation of an event and
 the time at which results based on this event become visible. Once the event is
 created it is usually stored in a persistent message queue, before it is
 processed by Apache Flink, which then writes the results to a database or calls
 a downstream system. In such a pipeline, latency can be introduced at each stage
-and for various reasons including the following:
-
-1. It might take a varying amount of time until events are persisted in the
-message queue.
-2. During periods of high load or during recovery, events might spend some time
-in the message queue until they are processed by Flink (see previous section).
-3. Some operators in a streaming topology need to buffer events for some time
-(e.g. in a time window) for functional reasons.
-4. Each computation in your Flink topology (framework or user code), as well as
-each network shuffle, takes time and adds to latency.
-5. If the application emits through a transactional sink, the sink will only
+and for various reasons including the following:&lt;/p&gt;
+
+&lt;ol&gt;
+  &lt;li&gt;It might take a varying amount of time until events are persisted in the
+message queue.&lt;/li&gt;
+  &lt;li&gt;During periods of high load or during recovery, events might spend some time
+in the message queue until they are processed by Flink (see previous section).&lt;/li&gt;
+  &lt;li&gt;Some operators in a streaming topology need to buffer events for some time
+(e.g. in a time window) for functional reasons.&lt;/li&gt;
+  &lt;li&gt;Each computation in your Flink topology (framework or user code), as well as
+each network shuffle, takes time and adds to latency.&lt;/li&gt;
+  &lt;li&gt;If the application emits through a transactional sink, the sink will only
 commit and publish transactions upon successful checkpoints of Flink, adding
-latency usually up to the checkpointing interval for each record.
+latency usually up to the checkpointing interval for each record.&lt;/li&gt;
+&lt;/ol&gt;
 
-In practice, it has proven invaluable to add timestamps to your events at
+&lt;p&gt;In practice, it has proven invaluable to add timestamps to your events at
 multiple stages (at least at creation, persistence, ingestion by Flink,
 publication by Flink, possibly sampling those to save bandwidth). The
 differences between these timestamps can be exposed as a user-defined metric in
-your Flink topology to derive the latency distribution of each stage.
+your Flink topology to derive the latency distribution of each stage.&lt;/p&gt;
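
One way to expose such stage-to-stage latencies is a user-defined histogram, for example via the Dropwizard wrapper shipped in `flink-metrics-dropwizard`. The sketch below assumes a hypothetical `MyEvent` type that carries its creation timestamp in milliseconds:

```java
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.dropwizard.metrics.DropwizardHistogramWrapper;
import org.apache.flink.metrics.Histogram;

import com.codahale.metrics.SlidingWindowReservoir;

// Tracks how long events took from creation until they reached this operator.
public class IngestionLatencyTracker extends RichMapFunction<MyEvent, MyEvent> {

    private transient Histogram ingestionLatency;

    @Override
    public void open(Configuration parameters) {
        // Wrap a Dropwizard histogram so it can be registered with Flink's metrics system.
        com.codahale.metrics.Histogram dropwizardHistogram =
            new com.codahale.metrics.Histogram(new SlidingWindowReservoir(500));
        this.ingestionLatency = getRuntimeContext()
            .getMetricGroup()
            .histogram("ingestionLatencyMs", new DropwizardHistogramWrapper(dropwizardHistogram));
    }

    @Override
    public MyEvent map(MyEvent event) {
        // Difference between "now" (arrival at this operator) and the event's creation time.
        ingestionLatency.update(System.currentTimeMillis() - event.getCreationTimestampMillis());
        return event;
    }
}
```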
 
-In the rest of this section, we will only consider latency that is introduced
+&lt;p&gt;In the rest of this section, we will only consider latency that is introduced
 inside the Flink topology and cannot be attributed to transactional sinks or
-events being buffered for functional reasons (4.).
+events being buffered for functional reasons (4.).&lt;/p&gt;
 
-To this end, Flink comes with a feature called [Latency
-Tracking](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#latency-tracking&gt;).
+&lt;p&gt;To this end, Flink comes with a feature called &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#latency-tracking&quot;&gt;Latency
+Tracking&lt;/a&gt;.
 When enabled, Flink will insert so-called latency markers periodically at all
 sources. For each sub-task, a latency distribution from each source to this
 operator will be reported. The granularity of these histograms can be further
-controlled by setting _metrics.latency.granularity_ as desired.
+controlled by setting &lt;em&gt;metrics.latency.granularity&lt;/em&gt; as desired.&lt;/p&gt;
 
-Due to the potentially high number of histograms (in particular for
-_metrics.latency.granularity: subtask_), enabling latency tracking can
+&lt;p&gt;Due to the potentially high number of histograms (in particular for
+&lt;em&gt;metrics.latency.granularity: subtask&lt;/em&gt;), enabling latency tracking can
 significantly impact the performance of the cluster. It is recommended to only
-enable it to locate sources of latency during debugging.
+enable it to locate sources of latency during debugging.&lt;/p&gt;
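
As a minimal sketch, latency tracking can be switched on programmatically through the `ExecutionConfig` (the 30 second interval below is an arbitrary example):

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class LatencyTrackingExample {
    public static void main(String[] args) {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        // Emit a latency marker from every source every 30 seconds (value in milliseconds).
        // Setting the interval to 0 or a negative value disables latency tracking again.
        env.getConfig().setLatencyTrackingInterval(30_000L);

        // ... define sources, transformations and sinks here, then call env.execute(...)
    }
}
```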
 
-**Metrics**
+&lt;p&gt;&lt;strong&gt;Metrics&lt;/strong&gt;&lt;/p&gt;
 
-Metric | Scope | Description |
------- | ----- | ----------- |
-`latency` | operator | The latency from the source operator to this operator. |
-`restartingTime` | job | The time it took to restart the job, or how long the current restart has been in progress. |
+&lt;table&gt;
+  &lt;thead&gt;
+    &lt;tr&gt;
+      &lt;th&gt;Metric&lt;/th&gt;
+      &lt;th&gt;Scope&lt;/th&gt;
+      &lt;th&gt;Description&lt;/th&gt;
+    &lt;/tr&gt;
+  &lt;/thead&gt;
+  &lt;tbody&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;latency&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;operator&lt;/td&gt;
+      &lt;td&gt;The latency from the source operator to this operator.&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;restartingTime&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job&lt;/td&gt;
+      &lt;td&gt;The time it took to restart the job, or how long the current restart has been in progress.&lt;/td&gt;
+    &lt;/tr&gt;
+  &lt;/tbody&gt;
+&lt;/table&gt;
 
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Example Dashboard Panel**
+&lt;p&gt;&lt;strong&gt;Example Dashboard Panel&lt;/strong&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-02-21-monitoring-best-practices/fig-5.png&quot; width=&quot;800px&quot; alt=&quot;Latency distribution between a source and a single sink subtask.&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-02-21-monitoring-best-practices/fig-5.png&quot; width=&quot;800px&quot; alt=&quot;Latency distribution between a source and a single sink subtask.&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;Latency distribution between a source and a single sink subtask.&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-## JVM Metrics
+&lt;h2 id=&quot;jvm-metrics&quot;&gt;JVM Metrics&lt;/h2&gt;
 
-So far we have only looked at Flink-specific metrics. As long as latency &amp;
+&lt;p&gt;So far we have only looked at Flink-specific metrics. As long as latency &amp;amp;
 throughput of your application are in line with your expectations and it is
 checkpointing consistently, this is probably everything you need. On the other
 hand, if your job’s performance is starting to degrade, among the first metrics you
-want to look at are memory consumption and CPU load of your Task- &amp; JobManager
-JVMs.
+want to look at are memory consumption and CPU load of your Task- &amp;amp; JobManager
+JVMs.&lt;/p&gt;
 
-### Memory
+&lt;h3 id=&quot;memory&quot;&gt;Memory&lt;/h3&gt;
 
-Flink reports the usage of Heap, NonHeap, Direct &amp; Mapped memory for JobManagers
-and TaskManagers. 
+&lt;p&gt;Flink reports the usage of Heap, NonHeap, Direct &amp;amp; Mapped memory for JobManagers
+and TaskManagers.&lt;/p&gt;
 
-* Heap memory - as with most JVM applications - is the most volatile and important
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Heap memory - as with most JVM applications - is the most volatile and important
 metric to watch. This is especially true when using Flink’s filesystem
 statebackend as it keeps all state objects on the JVM Heap. If the size of
 long-living objects on the Heap increases significantly, this can usually be
 attributed to the size of your application state (check the 
-[checkpointing metrics](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#checkpointing&gt;)
+&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#checkpointing&quot;&gt;checkpointing metrics&lt;/a&gt;
 for an estimated size of the on-heap state). The possible reasons for growing
 state are very application-specific. Typically, an increasing number of keys, a
 large event-time skew between different input streams or simply missing state
-cleanup may cause growing state.
-
-* NonHeap memory is dominated by the metaspace, which is unlimited in size by default
+cleanup may cause growing state.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;NonHeap memory is dominated by the metaspace, which is unlimited in size by default
 and holds class metadata as well as static content. There is a 
-[JIRA Ticket](&lt;https://issues.apache.org/jira/browse/FLINK-10317&gt;) to limit the size
-to 250 megabytes by default.
-
-* The biggest driver of Direct memory is by far the
+&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10317&quot;&gt;JIRA Ticket&lt;/a&gt; to limit the size
+to 250 megabytes by default.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;The biggest driver of Direct memory is by far the
 number of Flink’s network buffers, which can be
-[configured](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/ops/config.html#configuring-the-network-buffers&gt;).
-
-* Mapped memory is usually close to zero as Flink does not use memory-mapped files.
+&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/ops/config.html#configuring-the-network-buffers&quot;&gt;configured&lt;/a&gt;.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Mapped memory is usually close to zero as Flink does not use memory-mapped files.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-In a containerized environment you should additionally monitor the overall
+&lt;p&gt;In a containerized environment you should additionally monitor the overall
 memory consumption of the Job- and TaskManager containers to ensure they don’t
 exceed their resource limits. This is particularly important, when using the
 RocksDB statebackend, since RocksDB allocates a considerable amount of
 memory off heap. To understand how much memory RocksDB might use, you can
-check out [this blog
-post](&lt;https://www.da-platform.com/blog/manage-rocksdb-memory-size-apache-flink&gt;)
-by Stefan Richter.
+check out &lt;a href=&quot;https://www.da-platform.com/blog/manage-rocksdb-memory-size-apache-flink&quot;&gt;this blog
+post&lt;/a&gt;
+by Stefan Richter.&lt;/p&gt;
 
-**Key Metrics**
+&lt;p&gt;&lt;strong&gt;Key Metrics&lt;/strong&gt;&lt;/p&gt;
 
-Metric | Scope | Description |
------- | ----- | ----------- |
-`Status.JVM.Memory.NonHeap.Committed` | job-/taskmanager | The amount of non-heap memory guaranteed to be available to the JVM (in bytes). |
-`Status.JVM.Memory.Heap.Used` | job-/taskmanager | The amount of heap memory currently used (in bytes). |
-`Status.JVM.Memory.Heap.Committed` | job-/taskmanager | The amount of heap memory guaranteed to be available to the JVM (in bytes). |
-`Status.JVM.Memory.Direct.MemoryUsed` | job-/taskmanager | The amount of memory used by the JVM for the direct buffer pool (in bytes). |
-`Status.JVM.Memory.Mapped.MemoryUsed` | job-/taskmanager | The amount of memory used by the JVM for the mapped buffer pool (in bytes). |
-`Status.JVM.GarbageCollector.G1 Young Generation.Time` | job-/taskmanager | The total time spent performing G1 Young Generation garbage collection. |
-`Status.JVM.GarbageCollector.G1 Old Generation.Time` | job-/taskmanager | The total time spent performing G1 Old Generation garbage collection. |
+&lt;table&gt;
+  &lt;thead&gt;
+    &lt;tr&gt;
+      &lt;th&gt;Metric&lt;/th&gt;
+      &lt;th&gt;Scope&lt;/th&gt;
+      &lt;th&gt;Description&lt;/th&gt;
+    &lt;/tr&gt;
+  &lt;/thead&gt;
+  &lt;tbody&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;Status.JVM.Memory.NonHeap.Committed&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job-/taskmanager&lt;/td&gt;
+      &lt;td&gt;The amount of non-heap memory guaranteed to be available to the JVM (in bytes).&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;Status.JVM.Memory.Heap.Used&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job-/taskmanager&lt;/td&gt;
+      &lt;td&gt;The amount of heap memory currently used (in bytes).&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;Status.JVM.Memory.Heap.Committed&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job-/taskmanager&lt;/td&gt;
+      &lt;td&gt;The amount of heap memory guaranteed to be available to the JVM (in bytes).&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;Status.JVM.Memory.Direct.MemoryUsed&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job-/taskmanager&lt;/td&gt;
+      &lt;td&gt;The amount of memory used by the JVM for the direct buffer pool (in bytes).&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;Status.JVM.Memory.Mapped.MemoryUsed&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job-/taskmanager&lt;/td&gt;
+      &lt;td&gt;The amount of memory used by the JVM for the mapped buffer pool (in bytes).&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;Status.JVM.GarbageCollector.G1 Young Generation.Time&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job-/taskmanager&lt;/td&gt;
+      &lt;td&gt;The total time spent performing G1 Young Generation garbage collection.&lt;/td&gt;
+    &lt;/tr&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;Status.JVM.GarbageCollector.G1 Old Generation.Time&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job-/taskmanager&lt;/td&gt;
+      &lt;td&gt;The total time spent performing G1 Old Generation garbage collection.&lt;/td&gt;
+    &lt;/tr&gt;
+  &lt;/tbody&gt;
+&lt;/table&gt;
 
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Example Dashboard Panel**
+&lt;p&gt;&lt;strong&gt;Example Dashboard Panel&lt;/strong&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-02-21-monitoring-best-practices/fig-6.png&quot; width=&quot;800px&quot; alt=&quot;TaskManager memory consumption and garbage collection times.&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-02-21-monitoring-best-practices/fig-6.png&quot; width=&quot;800px&quot; alt=&quot;TaskManager memory consumption and garbage collection times.&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;TaskManager memory consumption and garbage collection times.&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-02-21-monitoring-best-practices/fig-7.png&quot; width=&quot;800px&quot; alt=&quot;JobManager memory consumption and garbage collection times.&quot;/&gt;
-&lt;br/&gt;
+&lt;img src=&quot;/img/blog/2019-02-21-monitoring-best-practices/fig-7.png&quot; width=&quot;800px&quot; alt=&quot;JobManager memory consumption and garbage collection times.&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;&lt;small&gt;JobManager memory consumption and garbage collection times.&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Possible Alerts**
+&lt;p&gt;&lt;strong&gt;Possible Alerts&lt;/strong&gt;&lt;/p&gt;
 
-* `container memory limit` &amp;lt; `container memory + safety margin`
+&lt;ul&gt;
+  &lt;li&gt;&lt;code&gt;container memory limit&lt;/code&gt; &amp;lt; &lt;code&gt;container memory + safety margin&lt;/code&gt;&lt;/li&gt;
+&lt;/ul&gt;
 
-### CPU
+&lt;h3 id=&quot;cpu&quot;&gt;CPU&lt;/h3&gt;
 
-Besides memory, you should also monitor the CPU load of the TaskManagers. If
+&lt;p&gt;Besides memory, you should also monitor the CPU load of the TaskManagers. If
 your TaskManagers are constantly under very high load, you might be able to
 improve the overall performance by decreasing the number of task slots per
 TaskManager (in case of a Standalone setup), by providing more resources to the
@@ -5387,48 +5733,61 @@ TaskManager (in case of a containerized setup), or by providing more
 TaskManagers. In general, a system already running under very high load during
 normal operations will need much more time to catch up after recovering from a
 downtime. During this time, you will see a much higher latency (event-time skew) than
-usual.
+usual.&lt;/p&gt;
 
-A sudden increase in the CPU load might also be attributed to high garbage
-collection pressure, which should be visible in the JVM memory metrics as well.
+&lt;p&gt;A sudden increase in the CPU load might also be attributed to high garbage
+collection pressure, which should be visible in the JVM memory metrics as well.&lt;/p&gt;
 
-If one or a few TaskManagers are constantly under very high load, this can slow
+&lt;p&gt;If one or a few TaskManagers are constantly under very high load, this can slow
 down the whole topology due to long checkpoint alignment times and increasing
 event-time skew. A common reason is skew in the partition key of the data, which
 can be mitigated by pre-aggregating before the shuffle or keying on a more
-evenly distributed key.
+evenly distributed key.&lt;/p&gt;
 
-**Key Metrics**
+&lt;p&gt;&lt;strong&gt;Key Metrics&lt;/strong&gt;&lt;/p&gt;
 
-Metric | Scope | Description |
------- | ----- | ----------- |
-`Status.JVM.CPU.Load` | job-/taskmanager | The recent CPU usage of the JVM. |
+&lt;table&gt;
+  &lt;thead&gt;
+    &lt;tr&gt;
+      &lt;th&gt;Metric&lt;/th&gt;
+      &lt;th&gt;Scope&lt;/th&gt;
+      &lt;th&gt;Description&lt;/th&gt;
+    &lt;/tr&gt;
+  &lt;/thead&gt;
+  &lt;tbody&gt;
+    &lt;tr&gt;
+      &lt;td&gt;&lt;code&gt;Status.JVM.CPU.Load&lt;/code&gt;&lt;/td&gt;
+      &lt;td&gt;job-/taskmanager&lt;/td&gt;
+      &lt;td&gt;The recent CPU usage of the JVM.&lt;/td&gt;
+    &lt;/tr&gt;
+  &lt;/tbody&gt;
+&lt;/table&gt;
 
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**Example Dashboard Panel**
+&lt;p&gt;&lt;strong&gt;Example Dashboard Panel&lt;/strong&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/2019-02-21-monitoring-best-practices/fig-8.png&quot; width=&quot;800px&quot; alt=&quot;TaskManager &amp; JobManager CPU load.&quot;/&gt;
-&lt;br/&gt;
-&lt;i&gt;&lt;small&gt;TaskManager &amp; JobManager CPU load.&lt;/small&gt;&lt;/i&gt;
+&lt;img src=&quot;/img/blog/2019-02-21-monitoring-best-practices/fig-8.png&quot; width=&quot;800px&quot; alt=&quot;TaskManager &amp;amp; JobManager CPU load.&quot; /&gt;
+&lt;br /&gt;
+&lt;i&gt;&lt;small&gt;TaskManager &amp;amp; JobManager CPU load.&lt;/small&gt;&lt;/i&gt;
 &lt;/center&gt;
-&lt;br/&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-## System Resources
+&lt;h2 id=&quot;system-resources&quot;&gt;System Resources&lt;/h2&gt;
 
-In addition to the JVM metrics above, it is also possible to use Flink’s metrics
-system to gather insights about system resources, i.e. memory, CPU &amp;
+&lt;p&gt;In addition to the JVM metrics above, it is also possible to use Flink’s metrics
+system to gather insights about system resources, i.e. memory, CPU &amp;amp;
 network-related metrics for the whole machine as opposed to the Flink processes
 alone. System resource monitoring is disabled by default and requires additional
 dependencies on the classpath. Please check out the 
-[Flink system resource metrics documentation](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html#system-resources&gt;) for
+&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html#system-resources&quot;&gt;Flink system resource metrics documentation&lt;/a&gt; for
 additional guidance and details. System resource monitoring in Flink can be very
-helpful in setups without existing host monitoring capabilities.
+helpful in setups without existing host monitoring capabilities.&lt;/p&gt;
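
As a sketch of what enabling this could look like in flink-conf.yaml (the probing interval below is an arbitrary example, and the exact set of required jars is listed in the linked documentation):

```yaml
# flink-conf.yaml -- enable system-wide resource metrics
# (requires extra dependencies, e.g. oshi-core and JNA, on the classpath)
metrics.system-resource: true
# How often the system probes are refreshed, in milliseconds.
metrics.system-resource-probing-interval: 5000
```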
 
-## Conclusion
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
 
-This post tries to shed some light on Flink’s metrics and monitoring system. You
+&lt;p&gt;This post tries to shed some light on Flink’s metrics and monitoring system. You
 can utilise it as a starting point when you first think about how to
 successfully monitor your Flink application. I highly recommend that you start
 monitoring your Flink application early on in the development phase. This way
@@ -5436,3138 +5795,3283 @@ you will be able to improve your dashboards and alerts over time and, more
 importantly, observe the performance impact of the changes to your application
 throughout the development phase. By doing so, you can ask the right questions
 about the runtime behaviour of your application, and learn much more about
-Flink’s internals early on.
+Flink’s internals early on.&lt;/p&gt;
 
-Last but not least, this post only scratches the surface of the overall metrics
+&lt;p&gt;Last but not least, this post only scratches the surface of the overall metrics
 and monitoring capabilities of Apache Flink. I highly recommend going over
-[Flink’s metrics documentation](&lt;{{ site.DOCS_BASE_URL }}flink-docs-release-1.7/monitoring/metrics.html&gt;)
-for a full reference of Flink’s metrics system.</description>
-<pubDate>Mon, 25 Feb 2019 12:00:00 +0000</pubDate>
+&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.7/monitoring/metrics.html&quot;&gt;Flink’s metrics documentation&lt;/a&gt;
+for a full reference of Flink’s metrics system.&lt;/p&gt;
+</description>
+<pubDate>Mon, 25 Feb 2019 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2019/02/25/monitoring-best-practices.html</link>
 <guid isPermaLink="true">/news/2019/02/25/monitoring-best-practices.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.6.4 Released</title>
-<description>The Apache Flink community released the fourth bugfix version of the Apache Flink 1.6 series.
+<description>&lt;p&gt;The Apache Flink community released the fourth bugfix version of the Apache Flink 1.6 series.&lt;/p&gt;
 
-This release includes more than 25 fixes and minor improvements for Flink 1.6.3. The list below includes a detailed list of all fixes.
+&lt;p&gt;This release includes more than 25 fixes and minor improvements for Flink 1.6.3. The list below includes a detailed list of all fixes.&lt;/p&gt;
 
-We highly recommend all users to upgrade to Flink 1.6.4.
+&lt;p&gt;We highly recommend all users to upgrade to Flink 1.6.4.&lt;/p&gt;
 
-Updated Maven dependencies:
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
 
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.6.4&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.6.4&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.6.4&lt;/version&gt;
-&lt;/dependency&gt;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.6.4&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.6.4&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.6.4&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
-List of resolved issues:
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
 
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10721&#39;&gt;FLINK-10721&lt;/a&gt;] -         Kafka discovery-loop exceptions may be swallowed
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10721&quot;&gt;FLINK-10721&lt;/a&gt;] -         Kafka discovery-loop exceptions may be swallowed
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10761&#39;&gt;FLINK-10761&lt;/a&gt;] -         MetricGroup#getAllVariables can deadlock
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10761&quot;&gt;FLINK-10761&lt;/a&gt;] -         MetricGroup#getAllVariables can deadlock
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10774&#39;&gt;FLINK-10774&lt;/a&gt;] -         connection leak when partition discovery is disabled and open throws exception
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10774&quot;&gt;FLINK-10774&lt;/a&gt;] -         connection leak when partition discovery is disabled and open throws exception
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10848&#39;&gt;FLINK-10848&lt;/a&gt;] -         Flink&amp;#39;s Yarn ResourceManager can allocate too many excess containers
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10848&quot;&gt;FLINK-10848&lt;/a&gt;] -         Flink&amp;#39;s Yarn ResourceManager can allocate too many excess containers
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11022&#39;&gt;FLINK-11022&lt;/a&gt;] -         Update LICENSE and NOTICE files for older releases
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11022&quot;&gt;FLINK-11022&lt;/a&gt;] -         Update LICENSE and NOTICE files for older releases
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11071&#39;&gt;FLINK-11071&lt;/a&gt;] -         Dynamic proxy classes cannot be resolved when deserializing job graph
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11071&quot;&gt;FLINK-11071&lt;/a&gt;] -         Dynamic proxy classes cannot be resolved when deserializing job graph
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11084&#39;&gt;FLINK-11084&lt;/a&gt;] -         Incorrect ouput after two consecutive split and select
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11084&quot;&gt;FLINK-11084&lt;/a&gt;] -         Incorrect ouput after two consecutive split and select
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11119&#39;&gt;FLINK-11119&lt;/a&gt;] -         Incorrect Scala example for Table Function
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11119&quot;&gt;FLINK-11119&lt;/a&gt;] -         Incorrect Scala example for Table Function
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11134&#39;&gt;FLINK-11134&lt;/a&gt;] -         Invalid REST API request should not log the full exception in Flink logs
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11134&quot;&gt;FLINK-11134&lt;/a&gt;] -         Invalid REST API request should not log the full exception in Flink logs
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11151&#39;&gt;FLINK-11151&lt;/a&gt;] -         FileUploadHandler stops working if the upload directory is removed
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11151&quot;&gt;FLINK-11151&lt;/a&gt;] -         FileUploadHandler stops working if the upload directory is removed
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11173&#39;&gt;FLINK-11173&lt;/a&gt;] -         Proctime attribute validation throws an incorrect exception message
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11173&quot;&gt;FLINK-11173&lt;/a&gt;] -         Proctime attribute validation throws an incorrect exception message
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11224&#39;&gt;FLINK-11224&lt;/a&gt;] -         Log is missing in scala-shell
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11224&quot;&gt;FLINK-11224&lt;/a&gt;] -         Log is missing in scala-shell
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11232&#39;&gt;FLINK-11232&lt;/a&gt;] -         Empty Start Time of sub-task on web dashboard
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11232&quot;&gt;FLINK-11232&lt;/a&gt;] -         Empty Start Time of sub-task on web dashboard
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11234&#39;&gt;FLINK-11234&lt;/a&gt;] -         ExternalTableCatalogBuilder unable to build a batch-only table
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11234&quot;&gt;FLINK-11234&lt;/a&gt;] -         ExternalTableCatalogBuilder unable to build a batch-only table
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11235&#39;&gt;FLINK-11235&lt;/a&gt;] -         Elasticsearch connector leaks threads if no connection could be established
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11235&quot;&gt;FLINK-11235&lt;/a&gt;] -         Elasticsearch connector leaks threads if no connection could be established
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11251&#39;&gt;FLINK-11251&lt;/a&gt;] -         Incompatible metric name on prometheus reporter
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11251&quot;&gt;FLINK-11251&lt;/a&gt;] -         Incompatible metric name on prometheus reporter
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11389&#39;&gt;FLINK-11389&lt;/a&gt;] -         Incorrectly use job information when call getSerializedTaskInformation in class TaskDeploymentDescriptor
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11389&quot;&gt;FLINK-11389&lt;/a&gt;] -         Incorrectly use job information when call getSerializedTaskInformation in class TaskDeploymentDescriptor
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11584&#39;&gt;FLINK-11584&lt;/a&gt;] -         ConfigDocsCompletenessITCase fails when DescriptionBuilder#linebreak() is used
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11584&quot;&gt;FLINK-11584&lt;/a&gt;] -         ConfigDocsCompletenessITCase fails when DescriptionBuilder#linebreak() is used
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11585&#39;&gt;FLINK-11585&lt;/a&gt;] -         Prefix matching in ConfigDocsGenerator can result in wrong assignments
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11585&quot;&gt;FLINK-11585&lt;/a&gt;] -         Prefix matching in ConfigDocsGenerator can result in wrong assignments
 &lt;/li&gt;
 &lt;/ul&gt;
-                
+
 &lt;h2&gt;        Improvement
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10910&#39;&gt;FLINK-10910&lt;/a&gt;] -         Harden Kubernetes e2e test
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10910&quot;&gt;FLINK-10910&lt;/a&gt;] -         Harden Kubernetes e2e test
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11079&#39;&gt;FLINK-11079&lt;/a&gt;] -         Skip deployment for flink-storm-examples
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11079&quot;&gt;FLINK-11079&lt;/a&gt;] -         Skip deployment for flink-storm-examples
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11207&#39;&gt;FLINK-11207&lt;/a&gt;] -         Update Apache commons-compress from 1.4.1 to 1.18
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11207&quot;&gt;FLINK-11207&lt;/a&gt;] -         Update Apache commons-compress from 1.4.1 to 1.18
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11262&#39;&gt;FLINK-11262&lt;/a&gt;] -         Bump jython-standalone to 2.7.1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11262&quot;&gt;FLINK-11262&lt;/a&gt;] -         Bump jython-standalone to 2.7.1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11289&#39;&gt;FLINK-11289&lt;/a&gt;] -         Rework example module structure to account for licensing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11289&quot;&gt;FLINK-11289&lt;/a&gt;] -         Rework example module structure to account for licensing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11304&#39;&gt;FLINK-11304&lt;/a&gt;] -         Typo in time attributes doc
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11304&quot;&gt;FLINK-11304&lt;/a&gt;] -         Typo in time attributes doc
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11469&#39;&gt;FLINK-11469&lt;/a&gt;] -         fix Tuning Checkpoints and Large State doc
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11469&quot;&gt;FLINK-11469&lt;/a&gt;] -         fix Tuning Checkpoints and Large State doc
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Mon, 25 Feb 2019 00:00:00 +0000</pubDate>
+<pubDate>Mon, 25 Feb 2019 01:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2019/02/25/release-1.6.4.html</link>
 <guid isPermaLink="true">/news/2019/02/25/release-1.6.4.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.7.2 Released</title>
-<description>The Apache Flink community released the second bugfix version of the Apache Flink 1.7 series.
-
-This release includes more than 40 fixes and minor improvements for Flink 1.7.1, covering several critical
-recovery issues as well as problems in the Flink streaming connectors.
-
-The list below includes a detailed list of all fixes.
-We highly recommend all users to upgrade to Flink 1.7.2.
-
-Updated Maven dependencies:
-
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.7.2&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.7.2&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.7.2&lt;/version&gt;
-&lt;/dependency&gt;
-```
-
-You can find the binaries on the updated [Downloads page]({{ site.baseurl }}/downloads.html).
-
-List of resolved issues:
+<description>&lt;p&gt;The Apache Flink community released the second bugfix version of the Apache Flink 1.7 series.&lt;/p&gt;
+
+&lt;p&gt;This release includes more than 40 fixes and minor improvements for Flink 1.7.1, covering several critical
+recovery issues as well as problems in the Flink streaming connectors.&lt;/p&gt;
+
+&lt;p&gt;The list below includes a detailed list of all fixes.
+We highly recommend all users to upgrade to Flink 1.7.2.&lt;/p&gt;
+
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.7.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.7.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.7.2&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
 
 &lt;h2&gt;        Sub-task
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11179&#39;&gt;FLINK-11179&lt;/a&gt;] -          JoinCancelingITCase#testCancelSortMatchWhileDoingHeavySorting test error
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11179&quot;&gt;FLINK-11179&lt;/a&gt;] -          JoinCancelingITCase#testCancelSortMatchWhileDoingHeavySorting test error
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11180&#39;&gt;FLINK-11180&lt;/a&gt;] -         ProcessFailureCancelingITCase#testCancelingOnProcessFailure
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11180&quot;&gt;FLINK-11180&lt;/a&gt;] -         ProcessFailureCancelingITCase#testCancelingOnProcessFailure
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11181&#39;&gt;FLINK-11181&lt;/a&gt;] -         SimpleRecoveryITCaseBase test error
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11181&quot;&gt;FLINK-11181&lt;/a&gt;] -         SimpleRecoveryITCaseBase test error
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10721&#39;&gt;FLINK-10721&lt;/a&gt;] -         Kafka discovery-loop exceptions may be swallowed
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10721&quot;&gt;FLINK-10721&lt;/a&gt;] -         Kafka discovery-loop exceptions may be swallowed
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10761&#39;&gt;FLINK-10761&lt;/a&gt;] -         MetricGroup#getAllVariables can deadlock
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10761&quot;&gt;FLINK-10761&lt;/a&gt;] -         MetricGroup#getAllVariables can deadlock
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10774&#39;&gt;FLINK-10774&lt;/a&gt;] -         connection leak when partition discovery is disabled and open throws exception
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10774&quot;&gt;FLINK-10774&lt;/a&gt;] -         connection leak when partition discovery is disabled and open throws exception
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10848&#39;&gt;FLINK-10848&lt;/a&gt;] -         Flink&amp;#39;s Yarn ResourceManager can allocate too many excess containers
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10848&quot;&gt;FLINK-10848&lt;/a&gt;] -         Flink&amp;#39;s Yarn ResourceManager can allocate too many excess containers
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11046&#39;&gt;FLINK-11046&lt;/a&gt;] -         ElasticSearch6Connector cause thread blocked when index failed with retry
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11046&quot;&gt;FLINK-11046&lt;/a&gt;] -         ElasticSearch6Connector cause thread blocked when index failed with retry
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11071&#39;&gt;FLINK-11071&lt;/a&gt;] -         Dynamic proxy classes cannot be resolved when deserializing job graph
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11071&quot;&gt;FLINK-11071&lt;/a&gt;] -         Dynamic proxy classes cannot be resolved when deserializing job graph
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11083&#39;&gt;FLINK-11083&lt;/a&gt;] -         CRowSerializerConfigSnapshot is not instantiable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11083&quot;&gt;FLINK-11083&lt;/a&gt;] -         CRowSerializerConfigSnapshot is not instantiable
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11084&#39;&gt;FLINK-11084&lt;/a&gt;] -         Incorrect output after two consecutive split and select
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11084&quot;&gt;FLINK-11084&lt;/a&gt;] -         Incorrect output after two consecutive split and select
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11100&#39;&gt;FLINK-11100&lt;/a&gt;] -         Presto S3 FileSystem E2E test broken
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11100&quot;&gt;FLINK-11100&lt;/a&gt;] -         Presto S3 FileSystem E2E test broken
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11119&#39;&gt;FLINK-11119&lt;/a&gt;] -         Incorrect Scala example for Table Function
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11119&quot;&gt;FLINK-11119&lt;/a&gt;] -         Incorrect Scala example for Table Function
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11134&#39;&gt;FLINK-11134&lt;/a&gt;] -         Invalid REST API request should not log the full exception in Flink logs
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11134&quot;&gt;FLINK-11134&lt;/a&gt;] -         Invalid REST API request should not log the full exception in Flink logs
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11145&#39;&gt;FLINK-11145&lt;/a&gt;] -         Fix Hadoop version handling in binary release script
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11145&quot;&gt;FLINK-11145&lt;/a&gt;] -         Fix Hadoop version handling in binary release script
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11151&#39;&gt;FLINK-11151&lt;/a&gt;] -         FileUploadHandler stops working if the upload directory is removed
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11151&quot;&gt;FLINK-11151&lt;/a&gt;] -         FileUploadHandler stops working if the upload directory is removed
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11168&#39;&gt;FLINK-11168&lt;/a&gt;] -         LargePlanTest times out on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11168&quot;&gt;FLINK-11168&lt;/a&gt;] -         LargePlanTest times out on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11173&#39;&gt;FLINK-11173&lt;/a&gt;] -         Proctime attribute validation throws an incorrect exception message
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11173&quot;&gt;FLINK-11173&lt;/a&gt;] -         Proctime attribute validation throws an incorrect exception message
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11187&#39;&gt;FLINK-11187&lt;/a&gt;] -         StreamingFileSink with S3 backend transient socket timeout issues 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11187&quot;&gt;FLINK-11187&lt;/a&gt;] -         StreamingFileSink with S3 backend transient socket timeout issues 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11191&#39;&gt;FLINK-11191&lt;/a&gt;] -         Exception in code generation when ambiguous columns in MATCH_RECOGNIZE
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11191&quot;&gt;FLINK-11191&lt;/a&gt;] -         Exception in code generation when ambiguous columns in MATCH_RECOGNIZE
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11194&#39;&gt;FLINK-11194&lt;/a&gt;] -         missing Scala 2.12 build of HBase connector 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11194&quot;&gt;FLINK-11194&lt;/a&gt;] -         missing Scala 2.12 build of HBase connector 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11201&#39;&gt;FLINK-11201&lt;/a&gt;] -         Document SBT dependency requirements when using MiniClusterResource
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11201&quot;&gt;FLINK-11201&lt;/a&gt;] -         Document SBT dependency requirements when using MiniClusterResource
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11224&#39;&gt;FLINK-11224&lt;/a&gt;] -         Log is missing in scala-shell
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11224&quot;&gt;FLINK-11224&lt;/a&gt;] -         Log is missing in scala-shell
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11227&#39;&gt;FLINK-11227&lt;/a&gt;] -         The DescriptorProperties contains some bounds checking errors
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11227&quot;&gt;FLINK-11227&lt;/a&gt;] -         The DescriptorProperties contains some bounds checking errors
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11232&#39;&gt;FLINK-11232&lt;/a&gt;] -         Empty Start Time of sub-task on web dashboard
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11232&quot;&gt;FLINK-11232&lt;/a&gt;] -         Empty Start Time of sub-task on web dashboard
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11234&#39;&gt;FLINK-11234&lt;/a&gt;] -         ExternalTableCatalogBuilder unable to build a batch-only table
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11234&quot;&gt;FLINK-11234&lt;/a&gt;] -         ExternalTableCatalogBuilder unable to build a batch-only table
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11235&#39;&gt;FLINK-11235&lt;/a&gt;] -         Elasticsearch connector leaks threads if no connection could be established
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11235&quot;&gt;FLINK-11235&lt;/a&gt;] -         Elasticsearch connector leaks threads if no connection could be established
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11246&#39;&gt;FLINK-11246&lt;/a&gt;] -         Fix distinct AGG visibility issues
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11246&quot;&gt;FLINK-11246&lt;/a&gt;] -         Fix distinct AGG visibility issues
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11251&#39;&gt;FLINK-11251&lt;/a&gt;] -         Incompatible metric name on prometheus reporter
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11251&quot;&gt;FLINK-11251&lt;/a&gt;] -         Incompatible metric name on prometheus reporter
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11279&#39;&gt;FLINK-11279&lt;/a&gt;] -         Invalid week interval parsing in ExpressionParser
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11279&quot;&gt;FLINK-11279&lt;/a&gt;] -         Invalid week interval parsing in ExpressionParser
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11302&#39;&gt;FLINK-11302&lt;/a&gt;] -         FlinkS3FileSystem uses an incorrect path for temporary files.
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11302&quot;&gt;FLINK-11302&lt;/a&gt;] -         FlinkS3FileSystem uses an incorrect path for temporary files.
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11389&#39;&gt;FLINK-11389&lt;/a&gt;] -         Incorrectly use job information when call getSerializedTaskInformation in class TaskDeploymentDescriptor
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11389&quot;&gt;FLINK-11389&lt;/a&gt;] -         Incorrectly use job information when call getSerializedTaskInformation in class TaskDeploymentDescriptor
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11419&#39;&gt;FLINK-11419&lt;/a&gt;] -         StreamingFileSink fails to recover after taskmanager failure
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11419&quot;&gt;FLINK-11419&lt;/a&gt;] -         StreamingFileSink fails to recover after taskmanager failure
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11436&#39;&gt;FLINK-11436&lt;/a&gt;] -         Java deserialization failure of the AvroSerializer when used in an old CompositeSerializers
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11436&quot;&gt;FLINK-11436&lt;/a&gt;] -         Java deserialization failure of the AvroSerializer when used in an old CompositeSerializers
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        New Feature
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10457&#39;&gt;FLINK-10457&lt;/a&gt;] -         Support SequenceFile for StreamingFileSink
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10457&quot;&gt;FLINK-10457&lt;/a&gt;] -         Support SequenceFile for StreamingFileSink
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        Improvement
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10910&#39;&gt;FLINK-10910&lt;/a&gt;] -         Harden Kubernetes e2e test
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10910&quot;&gt;FLINK-10910&lt;/a&gt;] -         Harden Kubernetes e2e test
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11023&#39;&gt;FLINK-11023&lt;/a&gt;] -         Update LICENSE and NOTICE files for flink-connectors
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11023&quot;&gt;FLINK-11023&lt;/a&gt;] -         Update LICENSE and NOTICE files for flink-connectors
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11079&#39;&gt;FLINK-11079&lt;/a&gt;] -         Skip deployment for flink-storm-examples
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11079&quot;&gt;FLINK-11079&lt;/a&gt;] -         Skip deployment for flink-storm-examples
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11207&#39;&gt;FLINK-11207&lt;/a&gt;] -         Update Apache commons-compress from 1.4.1 to 1.18
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11207&quot;&gt;FLINK-11207&lt;/a&gt;] -         Update Apache commons-compress from 1.4.1 to 1.18
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11216&#39;&gt;FLINK-11216&lt;/a&gt;] -         Back to top button is missing in the Joining document and is not properly placed in the Process Function document
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11216&quot;&gt;FLINK-11216&lt;/a&gt;] -         Back to top button is missing in the Joining document and is not properly placed in the Process Function document
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11262&#39;&gt;FLINK-11262&lt;/a&gt;] -         Bump jython-standalone to 2.7.1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11262&quot;&gt;FLINK-11262&lt;/a&gt;] -         Bump jython-standalone to 2.7.1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11289&#39;&gt;FLINK-11289&lt;/a&gt;] -         Rework example module structure to account for licensing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11289&quot;&gt;FLINK-11289&lt;/a&gt;] -         Rework example module structure to account for licensing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11304&#39;&gt;FLINK-11304&lt;/a&gt;] -         Typo in time attributes doc
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11304&quot;&gt;FLINK-11304&lt;/a&gt;] -         Typo in time attributes doc
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11331&#39;&gt;FLINK-11331&lt;/a&gt;] -         Fix errors in tableApi.md and functions.md
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11331&quot;&gt;FLINK-11331&lt;/a&gt;] -         Fix errors in tableApi.md and functions.md
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11469&#39;&gt;FLINK-11469&lt;/a&gt;] -         fix  Tuning Checkpoints and Large State doc
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11469&quot;&gt;FLINK-11469&lt;/a&gt;] -         fix  Tuning Checkpoints and Large State doc
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11473&#39;&gt;FLINK-11473&lt;/a&gt;] -         Clarify Documentation on Latency Tracking
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11473&quot;&gt;FLINK-11473&lt;/a&gt;] -         Clarify Documentation on Latency Tracking
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11628&#39;&gt;FLINK-11628&lt;/a&gt;] -         Cache maven on travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11628&quot;&gt;FLINK-11628&lt;/a&gt;] -         Cache maven on travis
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Fri, 15 Feb 2019 12:00:00 +0000</pubDate>
+<pubDate>Fri, 15 Feb 2019 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2019/02/15/release-1.7.2.html</link>
 <guid isPermaLink="true">/news/2019/02/15/release-1.7.2.html</guid>
 </item>
 
 <item>
 <title>Batch as a Special Case of Streaming and Alibaba&#39;s contribution of Blink</title>
-<description>Last week, we [broke the news](https://lists.apache.org/thread.html/2f7330e85d702a53b4a2b361149930b50f2e89d8e8a572f8ee2a0e6d@%3Cdev.flink.apache.org%3E) that Alibaba decided to contribute its Flink-fork, called Blink, back to the Apache Flink project. Why is that a big thing for Flink, what will it mean for users and the community, and how does it fit into Flink’s overall vision? Let&#39;s take a step back to understand this better...
+<description>&lt;p&gt;Last week, we &lt;a href=&quot;https://lists.apache.org/thread.html/2f7330e85d702a53b4a2b361149930b50f2e89d8e8a572f8ee2a0e6d@%3Cdev.flink.apache.org%3E&quot;&gt;broke the news&lt;/a&gt; that Alibaba decided to contribute its Flink-fork, called Blink, back to the Apache Flink project. Why is that a big thing for Flink, what will it mean for users and the community, and how does it fit into Flink’s overall vision? Let’s take a step back to understand this better…&lt;/p&gt;
 
-## A Unified Approach to Batch and Streaming
+&lt;h2 id=&quot;a-unified-approach-to-batch-and-streaming&quot;&gt;A Unified Approach to Batch and Streaming&lt;/h2&gt;
 
-Since its early days, Apache Flink has followed the philosophy of taking a unified approach to batch and streaming data processing. The core building block is *&quot;continuous processing of unbounded data streams&quot;*: if you can do that, you can also do offline processing of bounded data sets (batch processing use cases), because these are just streams that happen to end at some point.
+&lt;p&gt;Since its early days, Apache Flink has followed the philosophy of taking a unified approach to batch and streaming data processing. The core building block is &lt;em&gt;“continuous processing of unbounded data streams”&lt;/em&gt;: if you can do that, you can also do offline processing of bounded data sets (batch processing use cases), because these are just streams that happen to end at some point.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/unified-batch-streaming-blink/bounded-unbounded.png&quot; width=&quot;600px&quot; alt=&quot;Processing of bounded and unbounded data.&quot;/&gt;
+&lt;img src=&quot;/img/blog/unified-batch-streaming-blink/bounded-unbounded.png&quot; width=&quot;600px&quot; alt=&quot;Processing of bounded and unbounded data.&quot; /&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-The *&quot;streaming first, with batch as a special case of streaming&quot;* philosophy is supported by various projects (for example [Flink](https://flink.apache.org), [Beam](https://beam.apache.org), etc.) and often been cited as a powerful way to build data applications that [generalize across real-time and offline processing](https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101) and to help greatly reduce the complexity of data infrastructures.
+&lt;p&gt;The &lt;em&gt;“streaming first, with batch as a special case of streaming”&lt;/em&gt; philosophy is supported by various projects (for example &lt;a href=&quot;https://flink.apache.org&quot;&gt;Flink&lt;/a&gt;, &lt;a href=&quot;https://beam.apache.org&quot;&gt;Beam&lt;/a&gt;, etc.) and often been cited as a powerful way to build data applications that &lt;a href=&quot;https://www.oreilly.com/ideas/the-world-beyond-batch-streaming-101&quot;&gt;generalize across real-time and offl [...]
 
-### Why are there still batch processors?
+&lt;h3 id=&quot;why-are-there-still-batch-processors&quot;&gt;Why are there still batch processors?&lt;/h3&gt;
 
-However, *&quot;batch is just a special case of streaming&quot;* does not mean that any stream processor is now the right tool for your batch processing use cases - the introduction of stream processors did not render batch processors obsolete:
+&lt;p&gt;However, &lt;em&gt;“batch is just a special case of streaming”&lt;/em&gt; does not mean that any stream processor is now the right tool for your batch processing use cases - the introduction of stream processors did not render batch processors obsolete:&lt;/p&gt;
 
-* Pure stream processing systems are very slow at batch processing workloads. No one would consider it a good idea to use a stream processor that shuffles through message queues to analyze large amounts of available data.
-
-* Unified APIs like [Apache Beam](https://beam.apache.org) often delegate to different runtimes depending on whether the data is continuous/unbounded or fixed/bounded. For example, the implementations of the batch and streaming runtime of Google Cloud Dataflow are different, to get the desired performance and resilience in each case.
-
-* *Apache Flink* has a streaming API that can do bounded/unbounded use cases, but still offers a separate DataSet API and runtime stack that is faster for batch use cases.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;Pure stream processing systems are very slow at batch processing workloads. No one would consider it a good idea to use a stream processor that shuffles through message queues to analyze large amounts of available data.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Unified APIs like &lt;a href=&quot;https://beam.apache.org&quot;&gt;Apache Beam&lt;/a&gt; often delegate to different runtimes depending on whether the data is continuous/unbounded or fixed/bounded. For example, the implementations of the batch and streaming runtime of Google Cloud Dataflow are different, to get the desired performance and resilience in each case.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;&lt;em&gt;Apache Flink&lt;/em&gt; has a streaming API that can do bounded/unbounded use cases, but still offers a separate DataSet API and runtime stack that is faster for batch use cases.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-What is the reason for the above? Where did *&quot;batch is just a special case of streaming&quot;* go wrong?
+&lt;p&gt;What is the reason for the above? Where did &lt;em&gt;“batch is just a special case of streaming”&lt;/em&gt; go wrong?&lt;/p&gt;
 
-The answer is simple, nothing is wrong with that paradigm. Unifying batch and streaming in the API is one aspect. One needs to also exploit certain characteristics of the special case “bounded data” in the runtime to competitively handle batch processing use cases. After all, batch processors have been built specifically for that special case.
+&lt;p&gt;The answer is simple, nothing is wrong with that paradigm. Unifying batch and streaming in the API is one aspect. One needs to also exploit certain characteristics of the special case “bounded data” in the runtime to competitively handle batch processing use cases. After all, batch processors have been built specifically for that special case.&lt;/p&gt;
 
-## Batch on top of a Streaming Runtime
+&lt;h2 id=&quot;batch-on-top-of-a-streaming-runtime&quot;&gt;Batch on top of a Streaming Runtime&lt;/h2&gt;
 
-We always believed that it is possible to have a runtime that is state-of-the-art for both stream processing and batch processing use cases at the same time. A runtime that is streaming-first, but can exploit just the right amount of special properties of bounded streams to be as fast for batch use cases as dedicated batch processors. **This is the unique approach that Flink takes.**
+&lt;p&gt;We always believed that it is possible to have a runtime that is state-of-the-art for both stream processing and batch processing use cases at the same time. A runtime that is streaming-first, but can exploit just the right amount of special properties of bounded streams to be as fast for batch use cases as dedicated batch processors. &lt;strong&gt;This is the unique approach that Flink takes.&lt;/strong&gt;&lt;/p&gt;
 
-Apache Flink has a network stack that supports both [low-latency/high-throughput streaming data exchanges](https://www.ververica.com/flink-forward-berlin/resources/improving-throughput-and-latency-with-flinks-network-stack), as well as high-throughput batch shuffles. Flink has streaming runtime operators for many operations, but also specialized operators for bounded inputs, which get used when you choose the DataSet API or select the batch environment in the Table API.
+&lt;p&gt;Apache Flink has a network stack that supports both &lt;a href=&quot;https://www.ververica.com/flink-forward-berlin/resources/improving-throughput-and-latency-with-flinks-network-stack&quot;&gt;low-latency/high-throughput streaming data exchanges&lt;/a&gt;, as well as high-throughput batch shuffles. Flink has streaming runtime operators for many operations, but also specialized operators for bounded inputs, which get used when you choose the DataSet API or select the batch envir [...]
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/unified-batch-streaming-blink/stream-batch-joins.png&quot; width=&quot;500px&quot; alt=&quot;Streaming and batch joins.&quot;/&gt;
-&lt;br&gt;
+&lt;img src=&quot;/img/blog/unified-batch-streaming-blink/stream-batch-joins.png&quot; width=&quot;500px&quot; alt=&quot;Streaming and batch joins.&quot; /&gt;
+&lt;br /&gt;
 &lt;i&gt;The figure illustrates a streaming join and a batch join. The batch join can read one input fully into a hash table and then probe with the other input. The stream join needs to build tables for both sides, because it needs to continuously process both inputs. 
 For data larger than memory, the batch join can partition both data sets into subsets that fit in memory (data hits disk once) whereas the continuous nature of the stream join requires it to always keep all data in the table and repeatedly hit disk on cache misses.&lt;/i&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Because of that, Apache Flink has been actually demonstrating some pretty impressive batch processing performance since its early days. The below benchmark is a bit older, but validated our architectural approach early on.
+&lt;p&gt;Because of that, Apache Flink has been actually demonstrating some pretty impressive batch processing performance since its early days. The below benchmark is a bit older, but validated our architectural approach early on.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/unified-batch-streaming-blink/sort-performance.png&quot; width=&quot;500px&quot; alt=&quot;Sort performance.&quot;/&gt;
-&lt;br&gt;
-&lt;i&gt;Time to sort 3.2 TB (80 GB/node), in seconds&lt;br&gt;
+&lt;img src=&quot;/img/blog/unified-batch-streaming-blink/sort-performance.png&quot; width=&quot;500px&quot; alt=&quot;Sort performance.&quot; /&gt;
+&lt;br /&gt;
+&lt;i&gt;Time to sort 3.2 TB (80 GB/node), in seconds&lt;br /&gt;
 (&lt;a href=&quot;https://www.slideshare.net/FlinkForward/dongwon-kim-a-comparative-performance-evaluation-of-flink&quot; target=&quot;blank&quot;&gt;Presentation by Dongwon Kim, Flink Forward Berlin 2015&lt;/a&gt;.)&lt;/i&gt;
 &lt;/center&gt;
-&lt;br&gt;
-
-## What is still missing?
-
-To conclude the approach and make Flink&#39;s experience on bounded data (batch) state-of-the-art, we need to add a few more enhancements. We believe that these features are key to realizing our vision:
-
-**(1) A truly unified runtime operator stack**: Currently the bounded and unbounded operators have a different network and threading model and don&#39;t mix and match. The original reason was that batch operators followed a &quot;pull model&quot; (easier for batch algorithms), while streaming operators followed a &quot;push model&quot; (better latency/throughput characteristics). In a unified stack, continuous streaming operators are the foundation. When operating on bounded data without [...]
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-**(2) Exploiting bounded streams to reduce the scope of fault tolerance**: When input data is bounded, it is possible to completely buffer data during shuffles (memory or disk) and replay that data after a failure. This makes recovery more fine grained and thus much more efficient.
+&lt;h2 id=&quot;what-is-still-missing&quot;&gt;What is still missing?&lt;/h2&gt;
 
-**(3) Exploiting bounded stream operator properties for scheduling**: A continuous unbounded streaming application needs (by definition) all operators running at the same time. An application on bounded data can schedule operations after another, depending on how the operators consume data (e.g., first build hash table, then probe hash table). This increases resource efficiency.
+&lt;p&gt;To conclude the approach and make Flink’s experience on bounded data (batch) state-of-the-art, we need to add a few more enhancements. We believe that these features are key to realizing our vision:&lt;/p&gt;
 
-**(4) Enabling these special case optimizations for the DataStream API**: Currently, only the Table API (which is unified across bounded/unbounded streams) activates these optimizations when working on bounded data.
+&lt;p&gt;&lt;strong&gt;(1) A truly unified runtime operator stack&lt;/strong&gt;: Currently the bounded and unbounded operators have a different network and threading model and don’t mix and match. The original reason was that batch operators followed a “pull model” (easier for batch algorithms), while streaming operators followed a “push model” (better latency/throughput characteristics). In a unified stack, continuous streaming operators are the foundation. When operating on bounded da [...]
 
-**(5) Performance and coverage for SQL**: SQL is the de-facto standard data language, and while it is also being rapidly adopted for continuous streaming use cases, there is absolutely no way past it for bounded/batch use cases. To be competitive with the best batch engines, Flink needs more coverage and performance for the SQL query execution. While the core data-plane in Flink is high performance, the speed of SQL execution ultimately depends a lot also on optimizer rules, a rich set o [...]
+&lt;p&gt;&lt;strong&gt;(2) Exploiting bounded streams to reduce the scope of fault tolerance&lt;/strong&gt;: When input data is bounded, it is possible to completely buffer data during shuffles (memory or disk) and replay that data after a failure. This makes recovery more fine grained and thus much more efficient.&lt;/p&gt;
 
-## Enter Blink
+&lt;p&gt;&lt;strong&gt;(3) Exploiting bounded stream operator properties for scheduling&lt;/strong&gt;: A continuous unbounded streaming application needs (by definition) all operators running at the same time. An application on bounded data can schedule operations after another, depending on how the operators consume data (e.g., first build hash table, then probe hash table). This increases resource efficiency.&lt;/p&gt;
 
-Blink is a fork of Apache Flink, originally created inside Alibaba to improve Flink’s behavior for internal use cases. Blink adds a series of improvements and integrations (see the [Readme](https://github.com/apache/flink/blob/blink/README.md) for details), many of which fall into the category of improved bounded-data/batch processing and SQL. In fact, of the above list of features for a unified batch/streaming system, Blink implements significant steps forward in all except (4):
+&lt;p&gt;&lt;strong&gt;(4) Enabling these special case optimizations for the DataStream API&lt;/strong&gt;: Currently, only the Table API (which is unified across bounded/unbounded streams) activates these optimizations when working on bounded data.&lt;/p&gt;
 
-**Unified Stream Operators:** Blink extends the Flink streaming runtime operator model to support selectively reading from different inputs, while keeping the push model for very low latency. This control over the inputs helps to now support algorithms like hybrid hash-joins on the same operator and threading model as continuous symmetric joins through RocksDB. These operators also form the basis for future features like [“Side Inputs”](https://cwiki.apache.org/confluence/display/FLINK/F [...]
+&lt;p&gt;&lt;strong&gt;(5) Performance and coverage for SQL&lt;/strong&gt;: SQL is the de-facto standard data language, and while it is also being rapidly adopted for continuous streaming use cases, there is absolutely no way past it for bounded/batch use cases. To be competitive with the best batch engines, Flink needs more coverage and performance for the SQL query execution. While the core data-plane in Flink is high performance, the speed of SQL execution ultimately depends a lot als [...]
 
-**Table API &amp; SQL Query Processor:** The SQL query processor is the component that changed the most compared to the latest Flink master branch:
+&lt;h2 id=&quot;enter-blink&quot;&gt;Enter Blink&lt;/h2&gt;
 
-- While Flink currently translates queries either into DataSet or DataStream programs (depending on the characteristics of their inputs), Blink translates queries to a data flow of the aforementioned stream operators.
+&lt;p&gt;Blink is a fork of Apache Flink, originally created inside Alibaba to improve Flink’s behavior for internal use cases. Blink adds a series of improvements and integrations (see the &lt;a href=&quot;https://github.com/apache/flink/blob/blink/README.md&quot;&gt;Readme&lt;/a&gt; for details), many of which fall into the category of improved bounded-data/batch processing and SQL. In fact, of the above list of features for a unified batch/streaming system, Blink implements significan [...]
 
-- Blink adds many more runtime operators for common SQL operations like semi-joins, anti-joins, etc.
+&lt;p&gt;&lt;strong&gt;Unified Stream Operators:&lt;/strong&gt; Blink extends the Flink streaming runtime operator model to support selectively reading from different inputs, while keeping the push model for very low latency. This control over the inputs helps to now support algorithms like hybrid hash-joins on the same operator and threading model as continuous symmetric joins through RocksDB. These operators also form the basis for future features like &lt;a href=&quot;https://cwiki.ap [...]
 
-- The query planner (optimizer) is still based on Apache Calcite, but has many more optimization rules (incl. join reordering) and uses a proper cost model for planning.
+&lt;p&gt;&lt;strong&gt;Table API &amp;amp; SQL Query Processor:&lt;/strong&gt; The SQL query processor is the component that changed the most compared to the latest Flink master branch:&lt;/p&gt;
 
-- Stream operators are more aggressively chained.
-
-- The common data structures (sorters, hash tables) and serializers are extended to go even further in operating on binary data and saving serialization overhead. Code generation is used for the row serializers.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;While Flink currently translates queries either into DataSet or DataStream programs (depending on the characteristics of their inputs), Blink translates queries to a data flow of the aforementioned stream operators.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Blink adds many more runtime operators for common SQL operations like semi-joins, anti-joins, etc.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;The query planner (optimizer) is still based on Apache Calcite, but has many more optimization rules (incl. join reordering) and uses a proper cost model for planning.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;Stream operators are more aggressively chained.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;The common data structures (sorters, hash tables) and serializers are extended to go even further in operating on binary data and saving serialization overhead. Code generation is used for the row serializers.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-**Improved Scheduling and Failure Recovery:** Finally, Blink implements several improvements for task scheduling and fault tolerance. The scheduling strategies use resources better by exploiting how the operators process their input data. The failover strategies recover more fine-grained along the boundaries of persistent shuffles. A failed JobManager can be replaced without restarting a running application.
+&lt;p&gt;&lt;strong&gt;Improved Scheduling and Failure Recovery:&lt;/strong&gt; Finally, Blink implements several improvements for task scheduling and fault tolerance. The scheduling strategies use resources better by exploiting how the operators process their input data. The failover strategies recover more fine-grained along the boundaries of persistent shuffles. A failed JobManager can be replaced without restarting a running application.&lt;/p&gt;
 
-The changes in Blink result in a big improvement in performance. The below numbers were reported by the developers of Blink to give a rough impression of the performance gains.
+&lt;p&gt;The changes in Blink result in a big improvement in performance. The below numbers were reported by the developers of Blink to give a rough impression of the performance gains.&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/unified-batch-streaming-blink/blink-flink-tpch.png&quot; width=&quot;600px&quot; alt=&quot;TPC-H performance of Blink and Flink.&quot;/&gt;
-&lt;br&gt;
-&lt;i&gt;Relative performance of Blink versus Flink 1.6.0 in the TPC-H benchmark, query by query.&lt;br&gt;
-The performance improvement is on average 10x.&lt;br&gt;
+&lt;img src=&quot;/img/blog/unified-batch-streaming-blink/blink-flink-tpch.png&quot; width=&quot;600px&quot; alt=&quot;TPC-H performance of Blink and Flink.&quot; /&gt;
+&lt;br /&gt;
+&lt;i&gt;Relative performance of Blink versus Flink 1.6.0 in the TPC-H benchmark, query by query.&lt;br /&gt;
+The performance improvement is on average 10x.&lt;br /&gt;
 &lt;a href=&quot;https://www.ververica.com/flink-forward-berlin/resources/unified-engine-for-data-processing-and-ai&quot; target=&quot;blank&quot;&gt;Presentation by Xiaowei Jiang at Flink Forward Berlin, 2018&lt;/a&gt;.)&lt;/i&gt;
 &lt;/center&gt;
-&lt;br&gt;
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
 &lt;center&gt;
-&lt;img src=&quot;{{ site.baseurl }}/img/blog/unified-batch-streaming-blink/blink-spark-tpcds.png&quot; width=&quot;600px&quot; alt=&quot;TPC-DS performance of Blink and Spark.&quot;/&gt;
-&lt;br&gt;
-&lt;i&gt;Performance of Blink versus Spark in the TPC-DS benchmark, aggregate time for all queries together.&lt;br&gt;
+&lt;img src=&quot;/img/blog/unified-batch-streaming-blink/blink-spark-tpcds.png&quot; width=&quot;600px&quot; alt=&quot;TPC-DS performance of Blink and Spark.&quot; /&gt;
+&lt;br /&gt;
+&lt;i&gt;Performance of Blink versus Spark in the TPC-DS benchmark, aggregate time for all queries together.&lt;br /&gt;
 &lt;a href=&quot;https://www.bilibili.com/video/av42325467/?p=3&quot; target=&quot;blank&quot;&gt;Presentation by Xiaowei Jiang at Flink Forward Beijing, 2018&lt;/a&gt;.&lt;/i&gt;
 &lt;/center&gt;
-&lt;br&gt;
-
-## How do we plan to merge Blink and Flink?
+&lt;p&gt;&lt;br /&gt;&lt;/p&gt;
 
-Blink’s code is currently available as a [branch](https://github.com/apache/flink/tree/blink) in the Apache Flink repository. It is a challenge to merge such a big amount of changes, while making the merge process as non-disruptive as possible and keeping public APIs as stable as possible.
+&lt;h2 id=&quot;how-do-we-plan-to-merge-blink-and-flink&quot;&gt;How do we plan to merge Blink and Flink?&lt;/h2&gt;
 
-The community’s [merge plan](https://lists.apache.org/thread.html/6066abd0f09fc1c41190afad67770ede8efd0bebc36f00938eecc118@%3Cdev.flink.apache.org%3E) focuses initially on the bounded/batch processing features mentioned above and follows the following approach to ensure a smooth integration:
+&lt;p&gt;Blink’s code is currently available as a &lt;a href=&quot;https://github.com/apache/flink/tree/blink&quot;&gt;branch&lt;/a&gt; in the Apache Flink repository. It is a challenge to merge such a big amount of changes, while making the merge process as non-disruptive as possible and keeping public APIs as stable as possible.&lt;/p&gt;
 
-* To merge Blink’s _SQL/Table API query processor_ enhancements, we exploit the fact that both Flink and Blink have the same APIs: SQL and the Table API.
-Following some restructuring of the Table/SQL module ([FLIP-32](https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions)) we plan to merge the Blink query planner (optimizer) and runtime (operators) as an additional query processor next to the current SQL runtime. Think of it as two different runners for the same APIs.&lt;br&gt;
-Initially, users will be able to select which query processor to use. After a transition period in which the new query processor will be developed to subsume the current query processor, the current processor will most likely be deprecated and eventually dropped. Given that SQL is such a well defined interface, we anticipate that this transition has little friction for users. Mostly a pleasant surprise to have broader SQL feature coverage and a boost in performance.
+&lt;p&gt;The community’s &lt;a href=&quot;https://lists.apache.org/thread.html/6066abd0f09fc1c41190afad67770ede8efd0bebc36f00938eecc118@%3Cdev.flink.apache.org%3E&quot;&gt;merge plan&lt;/a&gt; focuses initially on the bounded/batch processing features mentioned above and follows the following approach to ensure a smooth integration:&lt;/p&gt;
 
-* To support the merge of Blink’s _enhancements to scheduling and recovery_ for jobs on bounded data, the Flink community is already working on refactoring its current scheduler and adding support for [pluggable scheduling and fail-over strategies](https://issues.apache.org/jira/browse/FLINK-10429).&lt;br&gt;
-Once this effort is finished, we can add Blink’s scheduling and recovery strategies as a new scheduling strategy that is used by the new query processor. Eventually, we plan to use the new scheduling strategy also for bounded DataStream programs.
-
-* The extended catalog support, DDL support, as well as support for Hive’s catalog and integrations is currently going through separate design discussions. We plan to leverage existing code here whenever it makes sense.
+&lt;ul&gt;
+  &lt;li&gt;
+    &lt;p&gt;To merge Blink’s &lt;em&gt;SQL/Table API query processor&lt;/em&gt; enhancements, we exploit the fact that both Flink and Blink have the same APIs: SQL and the Table API.
+Following some restructuring of the Table/SQL module (&lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-32%3A+Restructure+flink-table+for+future+contributions&quot;&gt;FLIP-32&lt;/a&gt;) we plan to merge the Blink query planner (optimizer) and runtime (operators) as an additional query processor next to the current SQL runtime. Think of it as two different runners for the same APIs.&lt;br /&gt;
+Initially, users will be able to select which query processor to use. After a transition period in which the new query processor will be developed to subsume the current query processor, the current processor will most likely be deprecated and eventually dropped. Given that SQL is such a well defined interface, we anticipate that this transition has little friction for users. Mostly a pleasant surprise to have broader SQL feature coverage and a boost in performance.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;To support the merge of Blink’s &lt;em&gt;enhancements to scheduling and recovery&lt;/em&gt; for jobs on bounded data, the Flink community is already working on refactoring its current scheduler and adding support for &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10429&quot;&gt;pluggable scheduling and fail-over strategies&lt;/a&gt;.&lt;br /&gt;
+Once this effort is finished, we can add Blink’s scheduling and recovery strategies as a new scheduling strategy that is used by the new query processor. Eventually, we plan to use the new scheduling strategy also for bounded DataStream programs.&lt;/p&gt;
+  &lt;/li&gt;
+  &lt;li&gt;
+    &lt;p&gt;The extended catalog support, DDL support, as well as support for Hive’s catalog and integrations is currently going through separate design discussions. We plan to leverage existing code here whenever it makes sense.&lt;/p&gt;
+  &lt;/li&gt;
+&lt;/ul&gt;
 
-## Summary
+&lt;h2 id=&quot;summary&quot;&gt;Summary&lt;/h2&gt;
 
-We believe that the data processing stack of the future is based on stream processing: The elegance of stream processing with its ability to model offline processing (batch), real-time data processing, and event-driven applications in the same way, while offering high performance and consistency is simply too compelling.
+&lt;p&gt;We believe that the data processing stack of the future is based on stream processing: The elegance of stream processing with its ability to model offline processing (batch), real-time data processing, and event-driven applications in the same way, while offering high performance and consistency is simply too compelling.&lt;/p&gt;
 
-Exploiting certain properties of bounded data is important for a stream processor to achieve the same performance as dedicated batch processors. While Flink always supported batch processing, the project is taking the next step in building a unified runtime and towards **becoming a stream processor that is competitive with batch processing systems even on their home turf: OLAP SQL.** The contribution of Alibaba’s Blink code helps the Flink community to pick up the speed on this developme [...]
-<pubDate>Wed, 13 Feb 2019 12:00:00 +0000</pubDate>
+&lt;p&gt;Exploiting certain properties of bounded data is important for a stream processor to achieve the same performance as dedicated batch processors. While Flink always supported batch processing, the project is taking the next step in building a unified runtime and towards &lt;strong&gt;becoming a stream processor that is competitive with batch processing systems even on their home turf: OLAP SQL.&lt;/strong&gt; The contribution of Alibaba’s Blink code helps the Flink community to p [...]
+</description>
+<pubDate>Wed, 13 Feb 2019 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2019/02/13/unified-batch-streaming-blink.html</link>
 <guid isPermaLink="true">/news/2019/02/13/unified-batch-streaming-blink.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.5.6 Released</title>
-<description>The Apache Flink community released the sixth and last bugfix version of the Apache Flink 1.5 series.
+<description>&lt;p&gt;The Apache Flink community released the sixth and last bugfix version of the Apache Flink 1.5 series.&lt;/p&gt;
 
-This release includes more than 47 fixes and minor improvements for Flink 1.5.5. The list below includes a detailed list of all fixes.
+&lt;p&gt;This release includes more than 47 fixes and minor improvements for Flink 1.5.5. The list below provides a detailed overview of all fixes and improvements.&lt;/p&gt;
 
-We highly recommend all users to upgrade to Flink 1.5.6.
+&lt;p&gt;We highly recommend that all users upgrade to Flink 1.5.6.&lt;/p&gt;
 
-Updated Maven dependencies:
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
 
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.5.6&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.5.6&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.5.6&lt;/version&gt;
-&lt;/dependency&gt;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.5.6&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.5.6&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.5.6&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html).
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
-List of resolved issues:
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
 
 &lt;h2&gt;        Sub-task
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10252&#39;&gt;FLINK-10252&lt;/a&gt;] -         Handle oversized metric messages
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10252&quot;&gt;FLINK-10252&lt;/a&gt;] -         Handle oversized metric messages
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10863&#39;&gt;FLINK-10863&lt;/a&gt;] -         Assign uids to all operators
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10863&quot;&gt;FLINK-10863&lt;/a&gt;] -         Assign uids to all operators
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-8336&#39;&gt;FLINK-8336&lt;/a&gt;] -         YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3 test instability
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-8336&quot;&gt;FLINK-8336&lt;/a&gt;] -         YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3 test instability
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9646&#39;&gt;FLINK-9646&lt;/a&gt;] -         ExecutionGraphCoLocationRestartTest.testConstraintsAfterRestart failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9646&quot;&gt;FLINK-9646&lt;/a&gt;] -         ExecutionGraphCoLocationRestartTest.testConstraintsAfterRestart failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10166&#39;&gt;FLINK-10166&lt;/a&gt;] -         Dependency problems when executing SQL query in sql-client
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10166&quot;&gt;FLINK-10166&lt;/a&gt;] -         Dependency problems when executing SQL query in sql-client
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10309&#39;&gt;FLINK-10309&lt;/a&gt;] -         Cancel with savepoint fails with java.net.ConnectException when using the per job-mode
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10309&quot;&gt;FLINK-10309&lt;/a&gt;] -         Cancel with savepoint fails with java.net.ConnectException when using the per job-mode
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10419&#39;&gt;FLINK-10419&lt;/a&gt;] -         ClassNotFoundException while deserializing user exceptions from checkpointing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10419&quot;&gt;FLINK-10419&lt;/a&gt;] -         ClassNotFoundException while deserializing user exceptions from checkpointing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10455&#39;&gt;FLINK-10455&lt;/a&gt;] -         Potential Kafka producer leak in case of failures
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10455&quot;&gt;FLINK-10455&lt;/a&gt;] -         Potential Kafka producer leak in case of failures
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10482&#39;&gt;FLINK-10482&lt;/a&gt;] -         java.lang.IllegalArgumentException: Negative number of in progress checkpoints
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10482&quot;&gt;FLINK-10482&lt;/a&gt;] -         java.lang.IllegalArgumentException: Negative number of in progress checkpoints
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10491&#39;&gt;FLINK-10491&lt;/a&gt;] -         Deadlock during spilling data in SpillableSubpartition 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10491&quot;&gt;FLINK-10491&lt;/a&gt;] -         Deadlock during spilling data in SpillableSubpartition 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10566&#39;&gt;FLINK-10566&lt;/a&gt;] -         Flink Planning is exponential in the number of stages
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10566&quot;&gt;FLINK-10566&lt;/a&gt;] -         Flink Planning is exponential in the number of stages
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10581&#39;&gt;FLINK-10581&lt;/a&gt;] -         YarnConfigurationITCase.testFlinkContainerMemory test instability
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10581&quot;&gt;FLINK-10581&lt;/a&gt;] -         YarnConfigurationITCase.testFlinkContainerMemory test instability
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10642&#39;&gt;FLINK-10642&lt;/a&gt;] -         CodeGen split fields errors when maxGeneratedCodeLength equals 1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10642&quot;&gt;FLINK-10642&lt;/a&gt;] -         CodeGen split fields errors when maxGeneratedCodeLength equals 1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10655&#39;&gt;FLINK-10655&lt;/a&gt;] -         RemoteRpcInvocation not overwriting ObjectInputStream&amp;#39;s ClassNotFoundException
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10655&quot;&gt;FLINK-10655&lt;/a&gt;] -         RemoteRpcInvocation not overwriting ObjectInputStream&amp;#39;s ClassNotFoundException
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10669&#39;&gt;FLINK-10669&lt;/a&gt;] -         Exceptions &amp;amp; errors are not properly checked in logs in e2e tests
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10669&quot;&gt;FLINK-10669&lt;/a&gt;] -         Exceptions &amp;amp; errors are not properly checked in logs in e2e tests
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10670&#39;&gt;FLINK-10670&lt;/a&gt;] -         Fix Correlate codegen error
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10670&quot;&gt;FLINK-10670&lt;/a&gt;] -         Fix Correlate codegen error
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10674&#39;&gt;FLINK-10674&lt;/a&gt;] -         Fix handling of retractions after clean up
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10674&quot;&gt;FLINK-10674&lt;/a&gt;] -         Fix handling of retractions after clean up
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10690&#39;&gt;FLINK-10690&lt;/a&gt;] -         Tests leak resources via Files.list
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10690&quot;&gt;FLINK-10690&lt;/a&gt;] -         Tests leak resources via Files.list
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10693&#39;&gt;FLINK-10693&lt;/a&gt;] -         Fix Scala EitherSerializer duplication
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10693&quot;&gt;FLINK-10693&lt;/a&gt;] -         Fix Scala EitherSerializer duplication
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10715&#39;&gt;FLINK-10715&lt;/a&gt;] -         E2e tests fail with ConcurrentModificationException in MetricRegistryImpl
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10715&quot;&gt;FLINK-10715&lt;/a&gt;] -         E2e tests fail with ConcurrentModificationException in MetricRegistryImpl
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10750&#39;&gt;FLINK-10750&lt;/a&gt;] -         SocketClientSinkTest.testRetry fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10750&quot;&gt;FLINK-10750&lt;/a&gt;] -         SocketClientSinkTest.testRetry fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10752&#39;&gt;FLINK-10752&lt;/a&gt;] -         Result of AbstractYarnClusterDescriptor#validateClusterResources is ignored
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10752&quot;&gt;FLINK-10752&lt;/a&gt;] -         Result of AbstractYarnClusterDescriptor#validateClusterResources is ignored
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10753&#39;&gt;FLINK-10753&lt;/a&gt;] -         Propagate and log snapshotting exceptions
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10753&quot;&gt;FLINK-10753&lt;/a&gt;] -         Propagate and log snapshotting exceptions
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10770&#39;&gt;FLINK-10770&lt;/a&gt;] -         Some generated functions are not opened properly.
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10770&quot;&gt;FLINK-10770&lt;/a&gt;] -         Some generated functions are not opened properly.
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10773&#39;&gt;FLINK-10773&lt;/a&gt;] -         Resume externalized checkpoint end-to-end test fails
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10773&quot;&gt;FLINK-10773&lt;/a&gt;] -         Resume externalized checkpoint end-to-end test fails
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10821&#39;&gt;FLINK-10821&lt;/a&gt;] -         Resuming Externalized Checkpoint E2E test does not resume from Externalized Checkpoint
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10821&quot;&gt;FLINK-10821&lt;/a&gt;] -         Resuming Externalized Checkpoint E2E test does not resume from Externalized Checkpoint
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10839&#39;&gt;FLINK-10839&lt;/a&gt;] -         Fix implementation of PojoSerializer.duplicate() w.r.t. subclass serializer
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10839&quot;&gt;FLINK-10839&lt;/a&gt;] -         Fix implementation of PojoSerializer.duplicate() w.r.t. subclass serializer
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10856&#39;&gt;FLINK-10856&lt;/a&gt;] -         Harden resume from externalized checkpoint E2E test
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10856&quot;&gt;FLINK-10856&lt;/a&gt;] -         Harden resume from externalized checkpoint E2E test
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10857&#39;&gt;FLINK-10857&lt;/a&gt;] -         Conflict between JMX and Prometheus Metrics reporter
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10857&quot;&gt;FLINK-10857&lt;/a&gt;] -         Conflict between JMX and Prometheus Metrics reporter
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10880&#39;&gt;FLINK-10880&lt;/a&gt;] -         Failover strategies should not be applied to Batch Execution
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10880&quot;&gt;FLINK-10880&lt;/a&gt;] -         Failover strategies should not be applied to Batch Execution
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10913&#39;&gt;FLINK-10913&lt;/a&gt;] -         ExecutionGraphRestartTest.testRestartAutomatically unstable on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10913&quot;&gt;FLINK-10913&lt;/a&gt;] -         ExecutionGraphRestartTest.testRestartAutomatically unstable on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10925&#39;&gt;FLINK-10925&lt;/a&gt;] -         NPE in PythonPlanStreamer
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10925&quot;&gt;FLINK-10925&lt;/a&gt;] -         NPE in PythonPlanStreamer
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10990&#39;&gt;FLINK-10990&lt;/a&gt;] -         Enforce minimum timespan in MeterView
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10990&quot;&gt;FLINK-10990&lt;/a&gt;] -         Enforce minimum timespan in MeterView
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10998&#39;&gt;FLINK-10998&lt;/a&gt;] -         flink-metrics-ganglia has LGPL dependency
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10998&quot;&gt;FLINK-10998&lt;/a&gt;] -         flink-metrics-ganglia has LGPL dependency
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11011&#39;&gt;FLINK-11011&lt;/a&gt;] -         Elasticsearch 6 sink end-to-end test unstable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11011&quot;&gt;FLINK-11011&lt;/a&gt;] -         Elasticsearch 6 sink end-to-end test unstable
 &lt;/li&gt;
 &lt;/ul&gt;
-                
+
 &lt;h2&gt;        Improvement
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-4173&#39;&gt;FLINK-4173&lt;/a&gt;] -         Replace maven-assembly-plugin by maven-shade-plugin in flink-metrics
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-4173&quot;&gt;FLINK-4173&lt;/a&gt;] -         Replace maven-assembly-plugin by maven-shade-plugin in flink-metrics
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9869&#39;&gt;FLINK-9869&lt;/a&gt;] -         Send PartitionInfo in batch to Improve perfornance
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9869&quot;&gt;FLINK-9869&lt;/a&gt;] -         Send PartitionInfo in batch to Improve perfornance
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10613&#39;&gt;FLINK-10613&lt;/a&gt;] -         Remove logger casts in HBaseConnectorITCase
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10613&quot;&gt;FLINK-10613&lt;/a&gt;] -         Remove logger casts in HBaseConnectorITCase
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10614&#39;&gt;FLINK-10614&lt;/a&gt;] -         Update test_batch_allround.sh e2e to new testing infrastructure
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10614&quot;&gt;FLINK-10614&lt;/a&gt;] -         Update test_batch_allround.sh e2e to new testing infrastructure
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10637&#39;&gt;FLINK-10637&lt;/a&gt;] -         Start MiniCluster with random REST port
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10637&quot;&gt;FLINK-10637&lt;/a&gt;] -         Start MiniCluster with random REST port
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10678&#39;&gt;FLINK-10678&lt;/a&gt;] -         Add a switch to run_test to configure if logs should be checked for errors/excepions
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10678&quot;&gt;FLINK-10678&lt;/a&gt;] -         Add a switch to run_test to configure if logs should be checked for errors/excepions
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10906&#39;&gt;FLINK-10906&lt;/a&gt;] -         docker-entrypoint.sh logs credentails during startup
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10906&quot;&gt;FLINK-10906&lt;/a&gt;] -         docker-entrypoint.sh logs credentails during startup
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10916&#39;&gt;FLINK-10916&lt;/a&gt;] -         Include duplicated user-specified uid into error message
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10916&quot;&gt;FLINK-10916&lt;/a&gt;] -         Include duplicated user-specified uid into error message
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-11005&#39;&gt;FLINK-11005&lt;/a&gt;] -         Define flink-sql-client uber-jar dependencies via artifactSet
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-11005&quot;&gt;FLINK-11005&lt;/a&gt;] -         Define flink-sql-client uber-jar dependencies via artifactSet
 &lt;/li&gt;
 &lt;/ul&gt;
-    
+
 &lt;h2&gt;        Test
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10606&#39;&gt;FLINK-10606&lt;/a&gt;] -         Construct NetworkEnvironment simple for tests
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10606&quot;&gt;FLINK-10606&lt;/a&gt;] -         Construct NetworkEnvironment simple for tests
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10607&#39;&gt;FLINK-10607&lt;/a&gt;] -         Unify to remove duplicated NoOpResultPartitionConsumableNotifier
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10607&quot;&gt;FLINK-10607&lt;/a&gt;] -         Unify to remove duplicated NoOpResultPartitionConsumableNotifier
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10827&#39;&gt;FLINK-10827&lt;/a&gt;] -         Add test for duplicate() to SerializerTestBase
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10827&quot;&gt;FLINK-10827&lt;/a&gt;] -         Add test for duplicate() to SerializerTestBase
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Wed, 26 Dec 2018 12:00:00 +0000</pubDate>
+<pubDate>Wed, 26 Dec 2018 13:00:00 +0100</pubDate>
 <link>https://flink.apache.org/news/2018/12/26/release-1.5.6.html</link>
 <guid isPermaLink="true">/news/2018/12/26/release-1.5.6.html</guid>
 </item>
 
 <item>
 <title>Apache Flink 1.6.3 Released</title>
-<description>The Apache Flink community released the third bugfix version of the Apache Flink 1.6 series.
+<description>&lt;p&gt;The Apache Flink community released the third bugfix version of the Apache Flink 1.6 series.&lt;/p&gt;
 
-This release includes more than 80 fixes and minor improvements for Flink 1.6.2. The list below includes a detailed list of all fixes.
+&lt;p&gt;This release includes more than 80 fixes and minor improvements for Flink 1.6.2. The list below provides a detailed overview of all fixes and improvements.&lt;/p&gt;
 
-We highly recommend all users to upgrade to Flink 1.6.3.
+&lt;p&gt;We highly recommend that all users upgrade to Flink 1.6.3.&lt;/p&gt;
 
-Updated Maven dependencies:
+&lt;p&gt;Updated Maven dependencies:&lt;/p&gt;
 
-```xml
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-java&lt;/artifactId&gt;
-  &lt;version&gt;1.6.3&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-streaming-java_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.6.3&lt;/version&gt;
-&lt;/dependency&gt;
-&lt;dependency&gt;
-  &lt;groupId&gt;org.apache.flink&lt;/groupId&gt;
-  &lt;artifactId&gt;flink-clients_2.11&lt;/artifactId&gt;
-  &lt;version&gt;1.6.3&lt;/version&gt;
-&lt;/dependency&gt;
-```
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-xml&quot;&gt;&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-java&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.6.3&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-streaming-java_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.6.3&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;dependency&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;groupId&amp;gt;&lt;/span&gt;org.apache.flink&lt;span class=&quot;nt&quot;&gt;&amp;lt;/groupId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;artifactId&amp;gt;&lt;/span&gt;flink-clients_2.11&lt;span class=&quot;nt&quot;&gt;&amp;lt;/artifactId&amp;gt;&lt;/span&gt;
+  &lt;span class=&quot;nt&quot;&gt;&amp;lt;version&amp;gt;&lt;/span&gt;1.6.3&lt;span class=&quot;nt&quot;&gt;&amp;lt;/version&amp;gt;&lt;/span&gt;
+&lt;span class=&quot;nt&quot;&gt;&amp;lt;/dependency&amp;gt;&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
 
-You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html).
+&lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 
-List of resolved issues:
+&lt;p&gt;List of resolved issues:&lt;/p&gt;
 
 &lt;h2&gt;        Sub-task
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10097&#39;&gt;FLINK-10097&lt;/a&gt;] -         More tests to increase StreamingFileSink test coverage
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10097&quot;&gt;FLINK-10097&lt;/a&gt;] -         More tests to increase StreamingFileSink test coverage
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10252&#39;&gt;FLINK-10252&lt;/a&gt;] -         Handle oversized metric messages
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10252&quot;&gt;FLINK-10252&lt;/a&gt;] -         Handle oversized metric messages
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10367&#39;&gt;FLINK-10367&lt;/a&gt;] -         Avoid recursion stack overflow during releasing SingleInputGate
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10367&quot;&gt;FLINK-10367&lt;/a&gt;] -         Avoid recursion stack overflow during releasing SingleInputGate
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10863&#39;&gt;FLINK-10863&lt;/a&gt;] -         Assign uids to all operators in general purpose testing job
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10863&quot;&gt;FLINK-10863&lt;/a&gt;] -         Assign uids to all operators in general purpose testing job
 &lt;/li&gt;
 &lt;/ul&gt;
-        
+
 &lt;h2&gt;        Bug
 &lt;/h2&gt;
 &lt;ul&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-8336&#39;&gt;FLINK-8336&lt;/a&gt;] -         YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3 test instability
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-8336&quot;&gt;FLINK-8336&lt;/a&gt;] -         YarnFileStageTestS3ITCase.testRecursiveUploadForYarnS3 test instability
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9635&#39;&gt;FLINK-9635&lt;/a&gt;] -         Local recovery scheduling can cause spread out of tasks
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9635&quot;&gt;FLINK-9635&lt;/a&gt;] -         Local recovery scheduling can cause spread out of tasks
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9646&#39;&gt;FLINK-9646&lt;/a&gt;] -         ExecutionGraphCoLocationRestartTest.testConstraintsAfterRestart failed on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9646&quot;&gt;FLINK-9646&lt;/a&gt;] -         ExecutionGraphCoLocationRestartTest.testConstraintsAfterRestart failed on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-9878&#39;&gt;FLINK-9878&lt;/a&gt;] -         IO worker threads BLOCKED on SSL Session Cache while CMS full gc
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-9878&quot;&gt;FLINK-9878&lt;/a&gt;] -         IO worker threads BLOCKED on SSL Session Cache while CMS full gc
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10149&#39;&gt;FLINK-10149&lt;/a&gt;] -         Fink Mesos allocates extra port when not configured to do so.
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10149&quot;&gt;FLINK-10149&lt;/a&gt;] -         Fink Mesos allocates extra port when not configured to do so.
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10166&#39;&gt;FLINK-10166&lt;/a&gt;] -         Dependency problems when executing SQL query in sql-client
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10166&quot;&gt;FLINK-10166&lt;/a&gt;] -         Dependency problems when executing SQL query in sql-client
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10309&#39;&gt;FLINK-10309&lt;/a&gt;] -         Cancel with savepoint fails with java.net.ConnectException when using the per job-mode
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10309&quot;&gt;FLINK-10309&lt;/a&gt;] -         Cancel with savepoint fails with java.net.ConnectException when using the per job-mode
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10357&#39;&gt;FLINK-10357&lt;/a&gt;] -         Streaming File Sink end-to-end test failed with mismatch
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10357&quot;&gt;FLINK-10357&lt;/a&gt;] -         Streaming File Sink end-to-end test failed with mismatch
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10359&#39;&gt;FLINK-10359&lt;/a&gt;] -         Scala example in DataSet docs is broken
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10359&quot;&gt;FLINK-10359&lt;/a&gt;] -         Scala example in DataSet docs is broken
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10364&#39;&gt;FLINK-10364&lt;/a&gt;] -         Test instability in NonHAQueryableStateFsBackendITCase#testMapState
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10364&quot;&gt;FLINK-10364&lt;/a&gt;] -         Test instability in NonHAQueryableStateFsBackendITCase#testMapState
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10419&#39;&gt;FLINK-10419&lt;/a&gt;] -         ClassNotFoundException while deserializing user exceptions from checkpointing
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10419&quot;&gt;FLINK-10419&lt;/a&gt;] -         ClassNotFoundException while deserializing user exceptions from checkpointing
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10425&#39;&gt;FLINK-10425&lt;/a&gt;] -         taskmanager.host is not respected
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10425&quot;&gt;FLINK-10425&lt;/a&gt;] -         taskmanager.host is not respected
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10455&#39;&gt;FLINK-10455&lt;/a&gt;] -         Potential Kafka producer leak in case of failures
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10455&quot;&gt;FLINK-10455&lt;/a&gt;] -         Potential Kafka producer leak in case of failures
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10463&#39;&gt;FLINK-10463&lt;/a&gt;] -         Null literal cannot be properly parsed in Java Table API function call
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10463&quot;&gt;FLINK-10463&lt;/a&gt;] -         Null literal cannot be properly parsed in Java Table API function call
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10481&#39;&gt;FLINK-10481&lt;/a&gt;] -         Wordcount end-to-end test in docker env unstable
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10481&quot;&gt;FLINK-10481&lt;/a&gt;] -         Wordcount end-to-end test in docker env unstable
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10482&#39;&gt;FLINK-10482&lt;/a&gt;] -         java.lang.IllegalArgumentException: Negative number of in progress checkpoints
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10482&quot;&gt;FLINK-10482&lt;/a&gt;] -         java.lang.IllegalArgumentException: Negative number of in progress checkpoints
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10491&#39;&gt;FLINK-10491&lt;/a&gt;] -         Deadlock during spilling data in SpillableSubpartition 
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10491&quot;&gt;FLINK-10491&lt;/a&gt;] -         Deadlock during spilling data in SpillableSubpartition 
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10566&#39;&gt;FLINK-10566&lt;/a&gt;] -         Flink Planning is exponential in the number of stages
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10566&quot;&gt;FLINK-10566&lt;/a&gt;] -         Flink Planning is exponential in the number of stages
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10567&#39;&gt;FLINK-10567&lt;/a&gt;] -         Lost serialize fields when ttl state store with the mutable serializer
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10567&quot;&gt;FLINK-10567&lt;/a&gt;] -         Lost serialize fields when ttl state store with the mutable serializer
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10570&#39;&gt;FLINK-10570&lt;/a&gt;] -         State grows unbounded when &amp;quot;within&amp;quot; constraint not applied
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10570&quot;&gt;FLINK-10570&lt;/a&gt;] -         State grows unbounded when &amp;quot;within&amp;quot; constraint not applied
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10581&#39;&gt;FLINK-10581&lt;/a&gt;] -         YarnConfigurationITCase.testFlinkContainerMemory test instability
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10581&quot;&gt;FLINK-10581&lt;/a&gt;] -         YarnConfigurationITCase.testFlinkContainerMemory test instability
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10642&#39;&gt;FLINK-10642&lt;/a&gt;] -         CodeGen split fields errors when maxGeneratedCodeLength equals 1
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10642&quot;&gt;FLINK-10642&lt;/a&gt;] -         CodeGen split fields errors when maxGeneratedCodeLength equals 1
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10655&#39;&gt;FLINK-10655&lt;/a&gt;] -         RemoteRpcInvocation not overwriting ObjectInputStream&amp;#39;s ClassNotFoundException
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10655&quot;&gt;FLINK-10655&lt;/a&gt;] -         RemoteRpcInvocation not overwriting ObjectInputStream&amp;#39;s ClassNotFoundException
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10663&#39;&gt;FLINK-10663&lt;/a&gt;] -         Closing StreamingFileSink can cause NPE
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10663&quot;&gt;FLINK-10663&lt;/a&gt;] -         Closing StreamingFileSink can cause NPE
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10669&#39;&gt;FLINK-10669&lt;/a&gt;] -         Exceptions &amp;amp; errors are not properly checked in logs in e2e tests
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10669&quot;&gt;FLINK-10669&lt;/a&gt;] -         Exceptions &amp;amp; errors are not properly checked in logs in e2e tests
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10670&#39;&gt;FLINK-10670&lt;/a&gt;] -         Fix Correlate codegen error
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10670&quot;&gt;FLINK-10670&lt;/a&gt;] -         Fix Correlate codegen error
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10674&#39;&gt;FLINK-10674&lt;/a&gt;] -         Fix handling of retractions after clean up
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10674&quot;&gt;FLINK-10674&lt;/a&gt;] -         Fix handling of retractions after clean up
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10681&#39;&gt;FLINK-10681&lt;/a&gt;] -         elasticsearch6.ElasticsearchSinkITCase fails if wrong JNA library installed
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10681&quot;&gt;FLINK-10681&lt;/a&gt;] -         elasticsearch6.ElasticsearchSinkITCase fails if wrong JNA library installed
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10690&#39;&gt;FLINK-10690&lt;/a&gt;] -         Tests leak resources via Files.list
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10690&quot;&gt;FLINK-10690&lt;/a&gt;] -         Tests leak resources via Files.list
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10693&#39;&gt;FLINK-10693&lt;/a&gt;] -         Fix Scala EitherSerializer duplication
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10693&quot;&gt;FLINK-10693&lt;/a&gt;] -         Fix Scala EitherSerializer duplication
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10715&#39;&gt;FLINK-10715&lt;/a&gt;] -         E2e tests fail with ConcurrentModificationException in MetricRegistryImpl
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10715&quot;&gt;FLINK-10715&lt;/a&gt;] -         E2e tests fail with ConcurrentModificationException in MetricRegistryImpl
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10750&#39;&gt;FLINK-10750&lt;/a&gt;] -         SocketClientSinkTest.testRetry fails on Travis
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10750&quot;&gt;FLINK-10750&lt;/a&gt;] -         SocketClientSinkTest.testRetry fails on Travis
 &lt;/li&gt;
-&lt;li&gt;[&lt;a href=&#39;https://issues.apache.org/jira/browse/FLINK-10752&#39;&gt;FLINK-10752&lt;/a&gt;] -         Result of AbstractYarnClusterDescriptor#validateClusterResources is ignored
+&lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-10752&quot;&gt;FLINK-10752&lt;/a&gt;] -         Result of AbstractYarnClusterDescriptor#validateClusterResources is ignored
... 15232 lines suppressed ...