Posted to commits@flink.apache.org by sj...@apache.org on 2021/05/06 14:42:12 UTC
[flink-web] 02/02: rebuild site
This is an automated email from the ASF dual-hosted git repository.
sjwiesman pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/flink-web.git
commit ee733f7eb546cc8896f0063c52b461e2882c9658
Author: Seth Wiesman <sj...@gmail.com>
AuthorDate: Thu May 6 09:41:52 2021 -0500
rebuild site
---
content/blog/feed.xml | 14 +++++++++-----
content/news/2021/05/03/release-1.13.0.html | 14 +++++++++-----
2 files changed, 18 insertions(+), 10 deletions(-)
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index e0eef96..b6712f8 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -274,11 +274,11 @@ many built-in functions. But sometimes, you need to <em>escape</em>
expressiveness, flexibility, and explicit control over the state.</p>
<p>The new methods <code>StreamTableEnvironment.toDataStream()/.fromDataStream()</code> can model
-a <code>DataStream</code> from the DataStream API as a table source or sink. Types are automatically
-converted, event-time, and watermarks carry across. In addition, the <code>Row</code> class (representing
-row events from the Table API) has received a major overhaul (improving the behavior of
-<code>toString()</code>/<code>hashCode()</code>/<code>equals()</code> methods) and now supports accessing fields by name, with
-support for sparse representations.</p>
+a <code>DataStream</code> from the DataStream API as a table source or sink.
+Notable improvements include:
+ * Automatic type conversion between the DataStream and Table API type systems
+ * Seamless integration of event-time configuration; watermarks flow across the API boundary, keeping event-time semantics consistent
+ * A major overhaul of the <code>Row</code> class (representing row events from the Table API), improving the behavior of the <code>toString()</code>/<code>hashCode()</code>/<code>equals()</code> methods and adding support for accessing fields by name and for sparse representations.</p>
<div class="highlight"><pre><code class="language-java"><span class="n">Table</span> <span class="n">table</span><span class="o">=</span><span class="n">tableEnv</span><span class="o">.</span><span class="na">fromDataStream</span><span class="o">(</span>
<span class="n">dataStream</span><span class="o">,</span><span class="n">Schema</span><span class="o">.</span><span class="na">newBuilder</span><span class="o">()</span>
@@ -566,6 +566,10 @@ NUMERIC type and the TIMESTAMP type is problematic and therefore no longer suppo
<li><a href="https://issues.apache.org/jira/browse/FLINK-22133">FLINK-22133</a> The unified source API for connectors
has a minor breaking change: The <code>SplitEnumerator.snapshotState()</code> method was adjusted to accept the
<em>Checkpoint ID</em> of the checkpoint for which the snapshot is created.</li>
+ <li><a href="https://issues.apache.org/jira/browse/FLINK-19463">FLINK-19463</a> - The old <code>StateBackend</code> interfaces were deprecated
+as they had overloaded semantics which many users found confusing. This is a pure API change and does not affect
+runtime characteristics of applications.
+For full details on how to update existing pipelines, please see the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/#migrating-from-legacy-backends">migration guide</a>.</li>
</ul>
<h1 id="resources">Resources</h1>
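The Row improvements described in the diff above can be illustrated with a small self-contained sketch. Note that MiniRow and its map-based storage are hypothetical stand-ins for illustration only, not Flink's actual Row implementation; the real class lives in the Flink codebase and differs in detail.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Objects;

// Hypothetical stand-in for the overhauled Row class: fields are
// addressable by name, unset fields are simply absent from the map
// (a sparse representation), and equals()/hashCode()/toString() are
// well-defined over the field contents.
public class MiniRow {
    private final Map<String, Object> fieldsByName = new HashMap<>();

    public MiniRow setField(String name, Object value) {
        fieldsByName.put(name, value);
        return this;
    }

    public Object getField(String name) {
        // Returns null for a field that was never set (sparse access).
        return fieldsByName.get(name);
    }

    @Override
    public boolean equals(Object o) {
        return o instanceof MiniRow
                && fieldsByName.equals(((MiniRow) o).fieldsByName);
    }

    @Override
    public int hashCode() {
        return Objects.hash(fieldsByName);
    }

    @Override
    public String toString() {
        return fieldsByName.toString();
    }
}
```

For example, `new MiniRow().setField("user", "alice").getField("user")` yields `"alice"`, and two rows with the same named fields compare equal.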
diff --git a/content/news/2021/05/03/release-1.13.0.html b/content/news/2021/05/03/release-1.13.0.html
index 1283cbc..b9c6368 100644
--- a/content/news/2021/05/03/release-1.13.0.html
+++ b/content/news/2021/05/03/release-1.13.0.html
@@ -468,11 +468,11 @@ many built-in functions. But sometimes, you need to <em>escape</em> to the DataS
expressiveness, flexibility, and explicit control over the state.</p>
<p>The new methods <code>StreamTableEnvironment.toDataStream()/.fromDataStream()</code> can model
-a <code>DataStream</code> from the DataStream API as a table source or sink. Types are automatically
-converted, event-time, and watermarks carry across. In addition, the <code>Row</code> class (representing
-row events from the Table API) has received a major overhaul (improving the behavior of
-<code>toString()</code>/<code>hashCode()</code>/<code>equals()</code> methods) and now supports accessing fields by name, with
-support for sparse representations.</p>
+a <code>DataStream</code> from the DataStream API as a table source or sink.
+Notable improvements include:
+ * Automatic type conversion between the DataStream and Table API type systems
+ * Seamless integration of event-time configuration; watermarks flow across the API boundary, keeping event-time semantics consistent
+ * A major overhaul of the <code>Row</code> class (representing row events from the Table API), improving the behavior of the <code>toString()</code>/<code>hashCode()</code>/<code>equals()</code> methods and adding support for accessing fields by name and for sparse representations.</p>
<div class="highlight"><pre><code class="language-java"><span class="n">Table</span> <span class="n">table</span><span class="o">=</span><span class="n">tableEnv</span><span class="o">.</span><span class="na">fromDataStream</span><span class="o">(</span>
<span class="n">dataStream</span><span class="o">,</span><span class="n">Schema</span><span class="o">.</span><span class="na">newBuilder</span><span class="o">()</span>
@@ -760,6 +760,10 @@ NUMERIC type and the TIMESTAMP type is problematic and therefore no longer suppo
<li><a href="https://issues.apache.org/jira/browse/FLINK-22133">FLINK-22133</a> The unified source API for connectors
has a minor breaking change: The <code>SplitEnumerator.snapshotState()</code> method was adjusted to accept the
<em>Checkpoint ID</em> of the checkpoint for which the snapshot is created.</li>
+ <li><a href="https://issues.apache.org/jira/browse/FLINK-19463">FLINK-19463</a> - The old <code>StateBackend</code> interfaces were deprecated
+as they had overloaded semantics which many users found confusing. This is a pure API change and does not affect
+runtime characteristics of applications.
+For full details on how to update existing pipelines, please see the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.13/docs/ops/state/state_backends/#migrating-from-legacy-backends">migration guide</a>.</li>
</ul>
<h1 id="resources">Resources</h1>
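The FLINK-22133 breaking change mentioned above can be sketched in isolation. MySplitEnumerator below is a hypothetical stand-in that mirrors only the shape of the signature change, not Flink's real SplitEnumerator interface:

```java
import java.util.ArrayList;
import java.util.List;

// Minimal sketch of the FLINK-22133 signature change: snapshotState()
// now receives the ID of the checkpoint for which the snapshot is
// taken, so the enumerator can correlate its state with that
// checkpoint. MySplitEnumerator is a hypothetical illustration.
public class MySplitEnumerator {
    private final List<String> unassignedSplits = new ArrayList<>();

    public void addSplit(String split) {
        unassignedSplits.add(split);
    }

    // Old shape (pre-1.13): List<String> snapshotState()
    // New shape: the framework passes in the checkpoint ID.
    public List<String> snapshotState(long checkpointId) {
        // A real enumerator might key internal bookkeeping by
        // checkpointId; here we just return the pending splits.
        return new ArrayList<>(unassignedSplits);
    }
}
```

Implementations written against the old no-argument signature need to add the parameter to compile against 1.13, even if they ignore the ID.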