Posted to commits@flink.apache.org by sj...@apache.org on 2021/05/06 14:42:11 UTC

[flink-web] 01/02: [hotfix] Link to StateBackend migration guide from the release notes

This is an automated email from the ASF dual-hosted git repository.

sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git

commit 9a12e1fafc58797a288308df67c0d725e1bf4d74
Author: Seth Wiesman <sj...@gmail.com>
AuthorDate: Tue May 4 15:46:06 2021 -0500

    [hotfix] Link to StateBackend migration guide from the release notes
---
 _posts/2021-05-03-release-1.13.0.md | 14 +++++++++-----
 1 file changed, 9 insertions(+), 5 deletions(-)

diff --git a/_posts/2021-05-03-release-1.13.0.md b/_posts/2021-05-03-release-1.13.0.md
index fdf58e2..791ab24 100644
--- a/_posts/2021-05-03-release-1.13.0.md
+++ b/_posts/2021-05-03-release-1.13.0.md
@@ -245,11 +245,11 @@ many built-in functions. But sometimes, you need to *escape* to the DataStream A
 expressiveness, flexibility, and explicit control over the state.
 
 The new methods `StreamTableEnvironment.toDataStream()/.fromDataStream()` can model
-a `DataStream` from the DataStream API as a table source or sink. Types are automatically
-converted, event-time, and watermarks carry across. In addition, the `Row` class (representing
-row events from the Table API) has received a major overhaul (improving the behavior of
-`toString()`/`hashCode()`/`equals()` methods) and now supports accessing fields by name, with
-support for sparse representations.
+a `DataStream` from the DataStream API as a table source or sink. 
+Notable improvements include:
+  * Automatic type conversion between the DataStream and Table API type systems
+  * Seamless integration of event-time configuration; watermarks flow across the DataStream/Table boundary
+  * The `Row` class (representing row events from the Table API) has received a major overhaul: improved behavior of the `toString()`/`hashCode()`/`equals()` methods, access to fields by name, and support for sparse representations.
 
 ```java
 Table table=tableEnv.fromDataStream(
@@ -547,6 +547,10 @@ pipeline utilization and throughput.
 * [FLINK-22133](https://issues.apache.org/jira/browse/FLINK-22133) The unified source API for connectors
   has a minor breaking change: The `SplitEnumerator.snapshotState()` method was adjusted to accept the
   *Checkpoint ID* of the checkpoint for which the snapshot is created.
+* [FLINK-19463](https://issues.apache.org/jira/browse/FLINK-19463) - The old `StateBackend` interfaces were deprecated
+  because their overloaded semantics proved confusing to many users. This is a pure API change and does not affect
+  the runtime characteristics of applications.
+  For full details on how to update existing pipelines, please see the [migration guide]({{ site.DOCS_BASE_URL }}flink-docs-release-1.13/docs/ops/state/state_backends/#migrating-from-legacy-backends).
 
 # Resources