Posted to commits@flink.apache.org by tw...@apache.org on 2017/03/29 16:47:38 UTC

[1/3] flink-web git commit: New blog post about Table & SQL API's current state

Repository: flink-web
Updated Branches:
  refs/heads/asf-site 98e9d7640 -> 661f76489


New blog post about Table & SQL API's current state


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/d3e398ee
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/d3e398ee
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/d3e398ee

Branch: refs/heads/asf-site
Commit: d3e398eefd55f444d78b3e2e7b99bc8c66f8c266
Parents: 98e9d76
Author: twalthr <tw...@apache.org>
Authored: Wed Mar 29 18:25:39 2017 +0200
Committer: twalthr <tw...@apache.org>
Committed: Wed Mar 29 18:25:39 2017 +0200

----------------------------------------------------------------------
 _posts/2017-03-29-table-sql-api-update.md | 195 +++++++++++++++++++++++++
 1 file changed, 195 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/d3e398ee/_posts/2017-03-29-table-sql-api-update.md
----------------------------------------------------------------------
diff --git a/_posts/2017-03-29-table-sql-api-update.md b/_posts/2017-03-29-table-sql-api-update.md
new file mode 100644
index 0000000..aa501b1
--- /dev/null
+++ b/_posts/2017-03-29-table-sql-api-update.md
@@ -0,0 +1,195 @@
+---
+layout: post
+title:  "From Streams to Tables and Back Again: An Update on Flink's Table & SQL API"
+excerpt: "<p>Broadening the user base and unifying batch & streaming with relational APIs</p>"
+date:   2017-03-29 12:00:00
+author: "Timo Walther"
+author-twitter: "twalthr"
+categories: news
+---
+Stream processing can deliver a lot of value. Many organizations have recognized the benefit of managing large volumes of data in real-time, reacting quickly to trends, and providing customers with live services at scale. Streaming applications with well-defined business logic can deliver a competitive advantage.
+
+Flink's [DataStream](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html) abstraction is a powerful API which lets you flexibly define both basic and complex streaming pipelines. Additionally, it offers low-level operations such as [Async IO](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/asyncio.html) and [ProcessFunctions](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/process_function.html). However, many users do not need such a deep level of flexibility. They need an API that quickly solves 80% of their use cases, where simple tasks can be expressed with little code.
+
+To deliver the power of stream processing to a broader set of users, the Apache Flink community is developing APIs that provide simpler abstractions and more concise syntax so that users can focus on their business logic instead of advanced streaming concepts. Along with other APIs (such as [CEP](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/libs/cep.html) for complex event processing on streams), Flink offers a relational API that aims to unify stream and batch processing: the [Table & SQL API](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/table_api.html), often referred to as the Table API.
+
+Recently, contributors from companies such as Alibaba, Huawei, data Artisans, and others decided to further develop the Table API. Over the past year, the Table API has been rewritten entirely. Since Flink 1.1, its core has been based on [Apache Calcite](http://calcite.apache.org/), which parses SQL and optimizes all relational queries. Today, the Table API can address a wide range of use cases in both batch and stream environments with unified semantics.
+
+This blog post summarizes the current status of Flink's Table API and showcases some of the recently added features in Apache Flink. Among the features presented here are the unified access to batch and streaming data, data transformation, and window operators.
+The following paragraphs not only give a general overview of the Table API, but also illustrate the potential of relational APIs going forward.
+
+Because the Table API is built on top of Flink's core APIs, [DataStreams](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html) and [DataSets](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html) can be converted to a Table and vice versa without much overhead. In the following, we show how to create tables from different sources and specify programs that can be executed locally or in a distributed setting. In this post, we will use the Scala version of the Table API, but there is also a Java version, as well as a SQL API with an equivalent set of features.
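+
+As a quick (hypothetical) illustration of how cheap these conversions are, the following sketch converts a DataStream into a Table and back again; it assumes the `env` and `tEnv` environments that are created in the next section:
+
+```scala
+// sketch: a round trip between the DataStream and Table abstractions
+val stream = env.fromElements((1L, "bob"), (2L, "alice"))
+
+// convert the stream into a table and name its fields
+val table = stream.toTable(tEnv, 'id, 'name)
+
+// ...relational operations could be applied here...
+
+// convert the table back into a data stream
+val backAgain = table.toDataStream[(Long, String)]
+```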
+
+## Data Transformation and ETL
+
+A common task in every data processing pipeline is importing data from one or multiple systems, applying some transformations to it, and then exporting the data to another system. The Table API can help to manage these recurring tasks. For reading data, the API provides a set of ready-to-use `TableSources`, such as a `CsvTableSource` and a `KafkaTableSource`. However, it also allows the implementation of custom `TableSources` that can hide configuration specifics (e.g. watermark generation) from users who are less familiar with streaming concepts.
+
+Let's assume we have a CSV file that stores customer information. The values are delimited by a "|" character and contain a customer identifier, name, timestamp of the last update, and preferences encoded in a comma-separated key-value string:
+
+    42|Bob Smith|2016-07-23 16:10:11|color=12,length=200,size=200
+
+The following example illustrates how to read a CSV file and perform some data cleansing before converting it to a regular DataStream program.
+
+```scala
+// imports for Flink 1.2 (package names may differ in other releases)
+import org.apache.flink.streaming.api.scala._
+import org.apache.flink.table.api.{TableEnvironment, Types}
+import org.apache.flink.table.api.scala._
+import org.apache.flink.table.sources.CsvTableSource
+import org.apache.flink.types.Row
+
+// set up execution environment
+val env = StreamExecutionEnvironment.getExecutionEnvironment
+val tEnv = TableEnvironment.getTableEnvironment(env)
+
+// configure table source
+val customerSource = CsvTableSource.builder()
+  .path("/path/to/customer_data.csv")
+  .ignoreFirstLine()
+  .fieldDelimiter("|")
+  .field("id", Types.LONG)
+  .field("name", Types.STRING)
+  .field("last_update", Types.TIMESTAMP)
+  .field("prefs", Types.STRING)
+  .build()
+
+// name your table source
+tEnv.registerTableSource("customers", customerSource)
+
+// define your table program
+val table = tEnv
+  .scan("customers")
+  .filter('name.isNotNull && 'last_update > "2016-01-01 00:00:00".toTimestamp)
+  .select('id, 'name.lowerCase(), 'prefs)
+
+// convert it to a data stream
+val ds = table.toDataStream[Row]
+
+ds.print()
+env.execute()
+```
+
+The Table API comes with a large set of built-in functions that make it easy to specify business logic using a language-integrated query (LINQ) syntax. In the example above, we filter out customers with invalid names and only select those that updated their preferences recently. We convert names to lowercase for normalization. For debugging purposes, we convert the table into a DataStream and print it.
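+
+Because the Table API and SQL share the same Calcite foundation, the registered "customers" table can also be queried with plain SQL. As a hedged sketch (using the `sql()` entry point as of Flink 1.2), the cleansing query above would look like this:
+
+```scala
+// sketch: the same query expressed via the SQL API
+val sqlResult = tEnv.sql(
+  """
+    |SELECT id, LOWER(name), prefs
+    |FROM customers
+    |WHERE name IS NOT NULL
+    |  AND last_update > TIMESTAMP '2016-01-01 00:00:00'
+  """.stripMargin)
+```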
+
+The `CsvTableSource` supports both batch and stream environments. If you want to execute the program above in a batch application, all you have to do is replace the `StreamExecutionEnvironment` with an `ExecutionEnvironment` and change the output conversion from `DataStream` to `DataSet`. The Table API program itself doesn't change.
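+
+A minimal sketch of that batch variant, assuming the same `customerSource` definition as above:
+
+```scala
+// sketch: the same table program executed in a batch environment
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tEnv = TableEnvironment.getTableEnvironment(env)
+
+tEnv.registerTableSource("customers", customerSource)
+
+val table = tEnv
+  .scan("customers")
+  .filter('name.isNotNull && 'last_update > "2016-01-01 00:00:00".toTimestamp)
+  .select('id, 'name.lowerCase(), 'prefs)
+
+// the only change: convert to a DataSet instead of a DataStream
+val ds = table.toDataSet[Row]
+ds.print()
+```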
+
+In the example, we converted the table program to a data stream of `Row` objects. However, we are not limited to row data types. The Table API supports all types from the underlying APIs, such as Java and Scala tuples, case classes, POJOs, or generic types that are serialized using Kryo. Let's assume that we want to have a regular object (POJO) with the following format instead of generic rows:
+
+```scala
+class Customer {
+  var id: Int = _
+  var name: String = _
+  var update: Long = _
+  var prefs: java.util.Properties = _
+}
+```
+We can use the following table program to convert the CSV file into Customer objects. Flink takes care of creating objects and mapping fields for us.
+
+```scala
+val ds = tEnv
+  .scan("customers")
+  .select('id, 'name, 'last_update as 'update, parseProperties('prefs) as 'prefs)
+  .toDataStream[Customer]
+```
+
+You might have noticed that the query above uses a function to parse the preferences field. Even though Flink's Table API ships with a large set of built-in functions, it is often necessary to define custom user-defined scalar functions. In the example above, we use the user-defined function `parseProperties`. The following code snippet shows how easily we can implement a scalar function.
+
+```scala
+import java.util.Properties
+
+import org.apache.flink.table.functions.ScalarFunction
+
+// parses a comma-separated key-value string into java.util.Properties
+object parseProperties extends ScalarFunction {
+  def eval(str: String): Properties = {
+    val props = new Properties()
+    str
+      .split(",")
+      .map(_.split("="))
+      .foreach(split => props.setProperty(split(0), split(1)))
+    props
+  }
+}
+```
+
+Scalar functions can be used to deserialize, extract, or convert values (and more). By overriding the `open()` method, we can even access runtime information such as distributed cache files or metrics. The `open()` method is called only once during the runtime's [task lifecycle](https://ci.apache.org/projects/flink/flink-docs-release-1.3/internals/task_lifecycle.html), so it is a good place for expensive initialization work.
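+
+As a hedged sketch of this (assuming the `FunctionContext` and metrics APIs as documented for Flink 1.3; the counter name is illustrative), the function above could be extended to count how many preference strings it parses:
+
+```scala
+import java.util.Properties
+
+import org.apache.flink.metrics.Counter
+import org.apache.flink.table.functions.{FunctionContext, ScalarFunction}
+
+// sketch: a scalar function that registers a metric in open()
+object parsePropertiesWithMetrics extends ScalarFunction {
+
+  private var parsedCounter: Counter = _
+
+  // called once per task before the first evaluation
+  override def open(context: FunctionContext): Unit = {
+    parsedCounter = context.getMetricGroup.counter("parsedPrefs")
+  }
+
+  def eval(str: String): Properties = {
+    parsedCounter.inc()
+    val props = new Properties()
+    str.split(",").map(_.split("=")).foreach(s => props.setProperty(s(0), s(1)))
+    props
+  }
+}
+```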
+
+## Unified Windowing for Static and Streaming Data
+
+Another very common task, especially when working with continuous data, is the definition of windows that split a stream into pieces of finite size, over which we can apply computations. At the moment, the Table API supports three types of windows: sliding windows, tumbling windows, and session windows (for general definitions of the different types of windows, we recommend [Flink's documentation](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/windows.html)). All three window types work on [event or processing time](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/event_time.html). Session windows can be defined over time intervals; sliding and tumbling windows can be defined over time intervals or a number of rows.
+
+Let's assume that our customer data from the example above is an event stream of updates, generated whenever a customer updates his or her preferences. We assume that the events come from a TableSource that has assigned timestamps and watermarks. The definition of a window again happens in a LINQ-style fashion. The following example counts the preference updates per day.
+
+```scala
+table
+  .window(Tumble over 1.day on 'rowtime as 'w)
+  .groupBy('id, 'w)
+  .select('id, 'w.start as 'from, 'w.end as 'to, 'prefs.count as 'updates)
+```
+
+By using the `on()` parameter, we can specify whether the window is supposed to work on event time or not. The Table API assumes that timestamps and watermarks are assigned correctly when event time is used. Elements with timestamps smaller than the last received watermark are dropped. Since the extraction of timestamps and the generation of watermarks depend on the data source and require some deeper knowledge of its origin, the TableSource or the upstream DataStream is usually responsible for assigning these properties.
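+
+For illustration, here is a sketch of how the upstream DataStream might take care of this; the event type, the field names, and the `updateStream` variable are hypothetical:
+
+```scala
+// hypothetical event type for preference updates
+case class Update(id: Long, prefs: String, timestamp: Long)
+
+// updateStream is an assumed DataStream[Update]; with monotonically
+// increasing timestamps, Flink can generate the watermarks itself
+val updatesWithTimestamps = updateStream
+  .assignAscendingTimestamps(_.timestamp)
+
+// the group window above can now be evaluated on 'rowtime
+val table = updatesWithTimestamps.toTable(tEnv, 'id, 'prefs)
+```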
+
+The following code shows how to define other types of windows:
+
+```scala
+// using processing-time
+table.window(Tumble over 100.rows as 'manyRowWindow)
+// using event-time
+table.window(Session withGap 15.minutes on 'rowtime as 'sessionWindow)
+table.window(Slide over 1.day every 1.hour on 'rowtime as 'dailyWindow)
+```
+
+Since batch is just a special case of streaming (where a batch happens to have a defined start and end point), it is also possible to apply all of these windows in a batch execution environment. Without any modification of the table program itself, we can run the code on a DataSet, given that we specified a column named "rowtime". This is particularly interesting if we want to compute exact results from time to time, so that late events that are heavily out-of-order can be included in the computation.
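+
+As a sketch of what this might look like, assume a registered batch table "customer_history" (hypothetical) whose "rowtime" column holds the update timestamps:
+
+```scala
+// sketch: the same daily tumbling window computed over static data
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tEnv = TableEnvironment.getTableEnvironment(env)
+
+val dailyCounts = tEnv
+  .scan("customer_history") // assumed batch table with a "rowtime" column
+  .window(Tumble over 1.day on 'rowtime as 'w)
+  .groupBy('id, 'w)
+  .select('id, 'w.start as 'from, 'w.end as 'to, 'prefs.count as 'updates)
+```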
+
+At the moment, the Table API only supports so-called "group windows" that also exist in the DataStream API. Other windows, such as SQL's OVER clause windows, are in development and [planned for Flink 1.3](https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations).
+
+In order to demonstrate the expressiveness and capabilities of the API, here's a snippet with a more advanced example of an exponentially decaying moving average over a sliding window of one hour which returns aggregated results every second. The table program weighs recent orders more heavily than older orders. This example is borrowed from [Apache Calcite](https://calcite.apache.org/docs/stream.html#hopping-windows) and shows what will be possible in future Flink releases for both the Table API and SQL.
+
+```scala
+table
+  .window(Slide over 1.hour every 1.second as 'w)
+  .groupBy('productId, 'w)
+  .select(
+    'w.end,
+    'productId,
+    ('unitPrice * ('rowtime - 'w.start).exp() / 1.hour).sum / (('rowtime - 'w.start).exp() / 1.hour).sum)
+```
+
+## User-defined Table Functions
+
+[User-defined table functions](https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/table_api.html#user-defined-table-functions) were added in Flink 1.2. These can be quite useful for table columns containing non-atomic values which need to be extracted and mapped to separate fields before processing. Table functions take an arbitrary number of scalar values and allow for returning an arbitrary number of rows as output instead of a single value, similar to a flatMap function in the DataStream or DataSet API. The output of a table function can then be joined with the original row in the table by using either a left-outer join or cross join.
+
+Using the previously-mentioned customer table, let's assume we want to produce a table that contains the color and size preferences as separate columns. The table program would look like this:
+
+```scala
+// create an instance of the table function
+val extractPrefs = new PropertiesExtractor()
+
+// derive rows and join them with original row
+table
+  .join(extractPrefs('prefs) as ('color, 'size))
+  .select('id, 'name, 'color, 'size)
+```
+
+The `PropertiesExtractor` is a user-defined table function that extracts the color and size. We are not interested in customers that haven't set these preferences and thus don't emit anything unless both properties are present in the string value. Since we are using a (cross) join in the program, customers without a result on the right side of the join are filtered out.
+
+```scala
+import org.apache.flink.api.java.typeutils.RowTypeInfo
+import org.apache.flink.table.api.Types
+import org.apache.flink.table.functions.TableFunction
+import org.apache.flink.types.Row
+
+class PropertiesExtractor extends TableFunction[Row] {
+  def eval(prefs: String): Unit = {
+    // split string into (key, value) pairs
+    val pairs = prefs
+      .split(",")
+      .map { kv =>
+        val split = kv.split("=")
+        (split(0), split(1))
+      }
+
+    val color = pairs.find(_._1 == "color").map(_._2)
+    val size = pairs.find(_._1 == "size").map(_._2)
+
+    // emit a row if color and size are specified
+    (color, size) match {
+      case (Some(c), Some(s)) => collect(Row.of(c, s))
+      case _ => // skip
+    }
+  }
+
+  override def getResultType = new RowTypeInfo(Types.STRING, Types.STRING)
+}
+```
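+
+If customers without both preferences should be kept instead of filtered out, a left outer join can be used; the missing values are then returned as nulls. A minimal sketch:
+
+```scala
+// sketch: keep all customers; 'color and 'size are null whenever
+// PropertiesExtractor emits no row for 'prefs
+table
+  .leftOuterJoin(extractPrefs('prefs) as ('color, 'size))
+  .select('id, 'name, 'color, 'size)
+```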
+
+## Conclusion
+
+There is significant interest in making streaming more accessible and easier to use. Flink's Table API development is happening quickly, and we believe that soon, you will be able to implement large batch or streaming pipelines using purely relational APIs or even convert existing Flink jobs to table programs. The Table API is already a very useful tool since you can work around limitations and missing features at any time by switching back and forth between the DataSet/DataStream abstractions and the Table abstraction.
+
+Contributions such as support for Apache Hive UDFs, external catalogs, more TableSources, additional windows, and more operators will make the Table API an even more useful tool. In particular, the upcoming introduction of Dynamic Tables, which is worth a blog post of its own, shows that even in 2017, new relational APIs open the door to a number of possibilities.
+
+Try it out, or even better, join the design discussions on the [mailing lists](http://flink.apache.org/community.html#mailing-lists) and [JIRA](https://issues.apache.org/jira/browse/FLINK/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel) and start contributing!


[2/3] flink-web git commit: Rebuild website and update date of release 1.1.5

Posted by tw...@apache.org.
http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/content/blog/page5/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page5/index.html b/content/blog/page5/index.html
index 3d3cc18..e212840 100644
--- a/content/blog/page5/index.html
+++ b/content/blog/page5/index.html
@@ -142,6 +142,19 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2014/10/03/upcoming_events.html">Upcoming Events</a></h2>
+      <p>03 Oct 2014</p>
+
+      <p><p>We are happy to announce several upcoming Flink events both in Europe and the US. Starting with a <strong>Flink hackathon in Stockholm</strong> (Oct 8-9) and a talk about Flink at the <strong>Stockholm Hadoop User Group</strong> (Oct 8). This is followed by the very first <strong>Flink Meetup in Berlin</strong> (Oct 15). In the US, there will be two Flink Meetup talks: the first one at the <strong>Pasadena Big Data User Group</strong> (Oct 29) and the second one at <strong>Silicon Valley Hands On Programming Events</strong> (Nov 4).</p>
+
+</p>
+
+      <p><a href="/news/2014/10/03/upcoming_events.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2014/09/26/release-0.6.1.html">Apache Flink 0.6.1 available</a></h2>
       <p>26 Sep 2014</p>
 
@@ -202,6 +215,16 @@ academic and open source project that Flink originates from.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2017/03/29/table-sql-api-update.html">From Streams to Tables and Back Again: An Update on Flink's Table & SQL API</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2017/03/23/release-1.1.5.html">Apache Flink 1.1.5 Released</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/content/index.html
----------------------------------------------------------------------
diff --git a/content/index.html b/content/index.html
index ccb17f7..f0ec841 100644
--- a/content/index.html
+++ b/content/index.html
@@ -168,6 +168,9 @@
 
   <dl>
       
+        <dt> <a href="/news/2017/03/29/table-sql-api-update.html">From Streams to Tables and Back Again: An Update on Flink's Table &amp; SQL API</a></dt>
+        <dd><p>Broadening the user base and unifying batch &amp; streaming with relational APIs</p></dd>
+      
         <dt> <a href="/news/2017/03/23/release-1.1.5.html">Apache Flink 1.1.5 Released</a></dt>
         <dd><p>The Apache Flink community released the next bugfix version of the Apache Flink 1.1 series.</p>
 
@@ -183,11 +186,6 @@
       
         <dt> <a href="/news/2016/12/19/2016-year-in-review.html">Apache Flink in 2016: Year in Review</a></dt>
         <dd><p>As 2016 comes to a close, let's take a moment to look back on the Flink community's great work during the past year.</p></dd>
-      
-        <dt> <a href="/news/2016/10/12/release-1.1.3.html">Apache Flink 1.1.3 Released</a></dt>
-        <dd><p>The Apache Flink community released the next bugfix version of the Apache Flink 1.1. series.</p>
-
-</dd>
     
   </dl>
 

http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/content/news/2017/03/29/table-sql-api-update.html
----------------------------------------------------------------------
diff --git a/content/news/2017/03/29/table-sql-api-update.html b/content/news/2017/03/29/table-sql-api-update.html
new file mode 100644
index 0000000..0c95fb0
--- /dev/null
+++ b/content/news/2017/03/29/table-sql-api-update.html
@@ -0,0 +1,364 @@
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <meta http-equiv="X-UA-Compatible" content="IE=edge">
+    <meta name="viewport" content="width=device-width, initial-scale=1">
+    <!-- The above 3 meta tags *must* come first in the head; any other head content must come *after* these tags -->
+    <title>Apache Flink: From Streams to Tables and Back Again: An Update on Flink's Table & SQL API</title>
+    <link rel="shortcut icon" href="/favicon.ico" type="image/x-icon">
+    <link rel="icon" href="/favicon.ico" type="image/x-icon">
+
+    <!-- Bootstrap -->
+    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/css/bootstrap.min.css">
+    <link rel="stylesheet" href="/css/flink.css">
+    <link rel="stylesheet" href="/css/syntax.css">
+
+    <!-- Blog RSS feed -->
+    <link href="/blog/feed.xml" rel="alternate" type="application/rss+xml" title="Apache Flink Blog: RSS feed" />
+
+    <!-- jQuery (necessary for Bootstrap's JavaScript plugins) -->
+    <!-- We need to load Jquery in the header for custom google analytics event tracking-->
+    <script src="https://ajax.googleapis.com/ajax/libs/jquery/1.11.2/jquery.min.js"></script>
+
+    <!-- HTML5 shim and Respond.js for IE8 support of HTML5 elements and media queries -->
+    <!-- WARNING: Respond.js doesn't work if you view the page via file:// -->
+    <!--[if lt IE 9]>
+      <script src="https://oss.maxcdn.com/html5shiv/3.7.2/html5shiv.min.js"></script>
+      <script src="https://oss.maxcdn.com/respond/1.4.2/respond.min.js"></script>
+    <![endif]-->
+  </head>
+  <body>  
+    
+
+    <!-- Main content. -->
+    <div class="container">
+    <div class="row">
+
+      
+     <div id="sidebar" class="col-sm-3">
+          <!-- Top navbar. -->
+    <nav class="navbar navbar-default">
+        <!-- The logo. -->
+        <div class="navbar-header">
+          <button type="button" class="navbar-toggle collapsed" data-toggle="collapse" data-target="#bs-example-navbar-collapse-1">
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+            <span class="icon-bar"></span>
+          </button>
+          <div class="navbar-logo">
+            <a href="/">
+              <img alt="Apache Flink" src="/img/flink-header-logo.svg" width="147px" height="73px">
+            </a>
+          </div>
+        </div><!-- /.navbar-header -->
+
+        <!-- The navigation links. -->
+        <div class="collapse navbar-collapse" id="bs-example-navbar-collapse-1">
+          <ul class="nav navbar-nav navbar-main">
+
+            <!-- Downloads -->
+            <li class=""><a class="btn btn-info" href="/downloads.html">Download Flink</a></li>
+
+            <!-- Overview -->
+            <li><a href="/index.html">Home</a></li>
+
+            <!-- Intro -->
+            <li><a href="/introduction.html">Introduction to Flink</a></li>
+
+            <!-- Use cases -->
+            <li><a href="/usecases.html">Flink Use Cases</a></li>
+
+            <!-- Powered by -->
+            <li><a href="/poweredby.html">Powered by Flink</a></li>
+
+            <!-- Ecosystem -->
+            <li><a href="/ecosystem.html">Ecosystem</a></li>
+
+            <!-- Community -->
+            <li><a href="/community.html">Community &amp; Project Info</a></li>
+
+            <!-- Contribute -->
+            <li><a href="/how-to-contribute.html">How to Contribute</a></li>
+
+            <!-- Blog -->
+            <li class=" active hidden-md hidden-sm"><a href="/blog/"><b>Flink Blog</b></a></li>
+
+            <hr />
+
+
+
+            <!-- Documentation -->
+            <!-- <li>
+              <a href="http://ci.apache.org/projects/flink/flink-docs-release-1.2" target="_blank">Documentation <small><span class="glyphicon glyphicon-new-window"></span></small></a>
+            </li> -->
+            <li class="dropdown">
+              <a class="dropdown-toggle" data-toggle="dropdown" href="#">Documentation
+                <span class="caret"></span></a>
+                <ul class="dropdown-menu">
+                  <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.2" target="_blank">1.2 (Latest stable release) <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
+                  <li><a href="http://ci.apache.org/projects/flink/flink-docs-release-1.3" target="_blank">1.3 (Snapshot) <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
+                </ul>
+              </li>
+
+            <!-- Quickstart -->
+            <li>
+              <a href="http://ci.apache.org/projects/flink/flink-docs-release-1.2/quickstart/setup_quickstart.html" target="_blank">Quickstart <small><span class="glyphicon glyphicon-new-window"></span></small></a>
+            </li>
+
+            <!-- GitHub -->
+            <li>
+              <a href="https://github.com/apache/flink" target="_blank">Flink on GitHub <small><span class="glyphicon glyphicon-new-window"></span></small></a>
+            </li>
+
+          </ul>
+
+
+
+          <ul class="nav navbar-nav navbar-bottom">
+          <hr />
+
+            <!-- FAQ -->
+            <li ><a href="/faq.html">Project FAQ</a></li>
+
+            <!-- Twitter -->
+            <li><a href="https://twitter.com/apacheflink" target="_blank">@ApacheFlink <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
+
+            <!-- Visualizer -->
+            <li class=" hidden-md hidden-sm"><a href="/visualizer/" target="_blank">Plan Visualizer <small><span class="glyphicon glyphicon-new-window"></span></small></a></li>
+
+          </ul>
+        </div><!-- /.navbar-collapse -->
+    </nav>
+
+      </div>
+      <div class="col-sm-9">
+      <div class="row-fluid">
+  <div class="col-sm-12">
+    <div class="row">
+      <h1>From Streams to Tables and Back Again: An Update on Flink's Table & SQL API</h1>
+
+      <article>
+        <p>29 Mar 2017 by Timo Walther (<a href="https://twitter.com/twalthr">@twalthr</a>)</p>
+
+<p>Stream processing can deliver a lot of value. Many organizations have recognized the benefit of managing large volumes of data in real-time, reacting quickly to trends, and providing customers with live services at scale. Streaming applications with well-defined business logic can deliver a competitive advantage.</p>
+
+<p>Flink\u2019s <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html">DataStream</a> abstraction is a powerful API which lets you flexibly define both basic and complex streaming pipelines. Additionally, it offers low-level operations such as <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/asyncio.html">Async IO</a> and <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/process_function.html">ProcessFunctions</a>. However, many users do not need such a deep level of flexibility. They need an API which quickly solves 80% of their use cases where simple tasks can be defined using little code.</p>
+
+<p>To deliver the power of stream processing to a broader set of users, the Apache Flink community is developing APIs that provide simpler abstractions and more concise syntax so that users can focus on their business logic instead of advanced streaming concepts. Along with other APIs (such as <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/libs/cep.html">CEP</a> for complex event processing on streams), Flink offers a relational API that aims to unify stream and batch processing: the <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/table_api.html">Table &amp; SQL API</a>, often referred to as the Table API.</p>
+
+<p>Recently, contributors working for companies such as Alibaba, Huawei, data Artisans, and more decided to further develop the Table API. Over the past year, the Table API has been rewritten entirely. Since Flink 1.1, its core has been based on <a href="http://calcite.apache.org/">Apache Calcite</a>, which parses SQL and optimizes all relational queries. Today, the Table API can address a wide range of use cases in both batch and stream environments with unified semantics.</p>
+
+<p>This blog post summarizes the current status of Flink\u2019s Table API and showcases some of the recently-added features in Apache Flink. Among the features presented here are the unified access to batch and streaming data, data transformation, and window operators.
+The following paragraphs are not only supposed to give you a general overview of the Table API, but also to illustrate the potential of relational APIs in the future.</p>
+
+<p>Because the Table API is built on top of Flink\u2019s core APIs, <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html">DataStreams</a> and <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html">DataSets</a> can be converted to a Table and vice-versa without much overhead. Hereafter, we show how to create tables from different sources and specify programs that can be executed locally or in a distributed setting. In this post, we will use the Scala version of the Table API, but there is also a Java version as well as a SQL API with an equivalent set of features.</p>
+
+<h2 id="data-transformation-and-etl">Data Transformation and ETL</h2>
+
+<p>A common task in every data processing pipeline is importing data from one or multiple systems, applying some transformations to it, then exporting the data to another system. The Table API can help to manage these recurring tasks. For reading data, the API provides a set of ready-to-use <code>TableSources</code> such as a <code>CsvTableSource</code> and <code>KafkaTableSource</code>, however, it also allows the implementation of custom <code>TableSources</code> that can hide configuration specifics (e.g. watermark generation) from users who are less familiar with streaming concepts.</p>
+
+<p>Let\u2019s assume we have a CSV file that stores customer information. The values are delimited by a \u201c|\u201d-character and contain a customer identifier, name, timestamp of the last update, and preferences encoded in a comma-separated key-value string:</p>
+
+<div class="highlight"><pre><code>42|Bob Smith|2016-07-23 16:10:11|color=12,length=200,size=200
+</code></pre></div>
+
+<p>The following example illustrates how to read a CSV file and perform some data cleansing before converting it to a regular DataStream program.</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="c1">// set up execution environment</span>
+<span class="k">val</span> <span class="n">env</span> <span class="k">=</span> <span class="nc">StreamExecutionEnvironment</span><span class="o">.</span><span class="n">getExecutionEnvironment</span>
+<span class="k">val</span> <span class="n">tEnv</span> <span class="k">=</span> <span class="nc">TableEnvironment</span><span class="o">.</span><span class="n">getTableEnvironment</span><span class="o">(</span><span class="n">env</span><span class="o">)</span>
+
+<span class="c1">// configure table source</span>
+<span class="k">val</span> <span class="n">customerSource</span> <span class="k">=</span> <span class="nc">CsvTableSource</span><span class="o">.</span><span class="n">builder</span><span class="o">()</span>
+  <span class="o">.</span><span class="n">path</span><span class="o">(</span><span class="s">&quot;/path/to/customer_data.csv&quot;</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">ignoreFirstLine</span><span class="o">()</span>
+  <span class="o">.</span><span class="n">fieldDelimiter</span><span class="o">(</span><span class="s">&quot;|&quot;</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">field</span><span class="o">(</span><span class="s">&quot;id&quot;</span><span class="o">,</span> <span class="nc">Types</span><span class="o">.</span><span class="nc">LONG</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">field</span><span class="o">(</span><span class="s">&quot;name&quot;</span><span class="o">,</span> <span class="nc">Types</span><span class="o">.</span><span class="nc">STRING</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">field</span><span class="o">(</span><span class="s">&quot;last_update&quot;</span><span class="o">,</span> <span class="nc">Types</span><span class="o">.</span><span class="nc">TIMESTAMP</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">field</span><span class="o">(</span><span class="s">&quot;prefs&quot;</span><span class="o">,</span> <span class="nc">Types</span><span class="o">.</span><span class="nc">STRING</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">build</span><span class="o">()</span>
+
+<span class="c1">// name your table source</span>
+<span class="n">tEnv</span><span class="o">.</span><span class="n">registerTableSource</span><span class="o">(</span><span class="s">&quot;customers&quot;</span><span class="o">,</span> <span class="n">customerSource</span><span class="o">)</span>
+
+<span class="c1">// define your table program</span>
+<span class="k">val</span> <span class="n">table</span> <span class="k">=</span> <span class="n">tEnv</span>
+  <span class="o">.</span><span class="n">scan</span><span class="o">(</span><span class="s">&quot;customers&quot;</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">filter</span><span class="o">(</span><span class="-Symbol">&#39;name</span><span class="o">.</span><span class="n">isNotNull</span> <span class="o">&amp;&amp;</span> <span class="-Symbol">&#39;last_update</span> <span class="o">&gt;</span> <span class="s">&quot;2016-01-01 00:00:00&quot;</span><span class="o">.</span><span class="n">toTimestamp</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">select</span><span class="o">(</span><span class="-Symbol">&#39;id</span><span class="o">,</span> <span class="-Symbol">&#39;name</span><span class="o">.</span><span class="n">lowerCase</span><span class="o">(),</span> <span class="-Symbol">&#39;prefs</span><span class="o">)</span>
+
+<span class="c1">// convert it to a data stream</span>
+<span class="k">val</span> <span class="n">ds</span> <span class="k">=</span> <span class="n">table</span><span class="o">.</span><span class="n">toDataStream</span><span class="o">[</span><span class="kt">Row</span><span class="o">]</span>
+
+<span class="n">ds</span><span class="o">.</span><span class="n">print</span><span class="o">()</span>
+<span class="n">env</span><span class="o">.</span><span class="n">execute</span><span class="o">()</span></code></pre></div>
+
+<p>The Table API comes with a large set of built-in functions that make it easy to specify  business logic using a language integrated query (LINQ) syntax. In the example above, we filter out customers with invalid names and only select those that updated their preferences recently. We convert names to lowercase for normalization. For debugging purposes, we convert the table into a DataStream and print it.</p>
+
+<p>The <code>CsvTableSource</code> supports both batch and stream environments. If the programmer wants to execute the program above in a batch application, all he or she has to do is to replace the environment via <code>ExecutionEnvironment</code> and change the output conversion from <code>DataStream</code> to <code>DataSet</code>. The Table API program itself doesn\u2019t change.</p>
+
+<p>In the example, we converted the table program to a data stream of <code>Row</code> objects. However, we are not limited to row data types. The Table API supports all types from the underlying APIs such as Java and Scala Tuples, Case Classes, POJOs, or generic types that are serialized using Kryo. Let\u2019s assume that we want to have regular object (POJO) with the following format instead of generic rows:</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="k">class</span> <span class="nc">Customer</span> <span class="o">{</span>
+  <span class="k">var</span> <span class="n">id</span><span class="k">:</span> <span class="kt">Int</span> <span class="o">=</span> <span class="k">_</span>
+  <span class="k">var</span> <span class="n">name</span><span class="k">:</span> <span class="kt">String</span> <span class="o">=</span> <span class="k">_</span>
+  <span class="k">var</span> <span class="n">update</span><span class="k">:</span> <span class="kt">Long</span> <span class="o">=</span> <span class="k">_</span>
+  <span class="k">var</span> <span class="n">prefs</span><span class="k">:</span> <span class="kt">java.util.Properties</span> <span class="o">=</span> <span class="k">_</span>
+<span class="o">}</span></code></pre></div>
+<p>We can use the following table program to convert the CSV file into Customer objects. Flink takes care of creating objects and mapping fields for us.</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="k">val</span> <span class="n">ds</span> <span class="k">=</span> <span class="n">tEnv</span>
+  <span class="o">.</span><span class="n">scan</span><span class="o">(</span><span class="s">&quot;customers&quot;</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">select</span><span class="o">(</span><span class="-Symbol">&#39;id</span><span class="o">,</span> <span class="-Symbol">&#39;name</span><span class="o">,</span> <span class="-Symbol">&#39;last_update</span> <span class="n">as</span> <span class="-Symbol">&#39;update</span><span class="o">,</span> <span class="n">parseProperties</span><span class="o">(</span><span class="-Symbol">&#39;prefs</span><span class="o">)</span> <span class="n">as</span> <span class="-Symbol">&#39;prefs</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">toDataStream</span><span class="o">[</span><span class="kt">Customer</span><span class="o">]</span></code></pre></div>
+
+<p>You might have noticed that the query above uses a function to parse the preferences field. Even though Flink\u2019s Table API is shipped with a large set of built-in functions, is often necessary to define custom user-defined scalar functions. In the above example we use a user-defined function <code>parseProperties</code>. The following code snippet shows how easily we can implement a scalar function.</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="k">object</span> <span class="nc">parseProperties</span> <span class="k">extends</span> <span class="nc">ScalarFunction</span> <span class="o">{</span>
+  <span class="k">def</span> <span class="n">eval</span><span class="o">(</span><span class="n">str</span><span class="k">:</span> <span class="kt">String</span><span class="o">)</span><span class="k">:</span> <span class="kt">Properties</span> <span class="o">=</span> <span class="o">{</span>
+    <span class="k">val</span> <span class="n">props</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">Properties</span><span class="o">()</span>
+    <span class="n">str</span>
+      <span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="s">&quot;,&quot;</span><span class="o">)</span>
+      <span class="o">.</span><span class="n">map</span><span class="o">(\</span><span class="k">_</span><span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="s">&quot;=&quot;</span><span class="o">))</span>
+      <span class="o">.</span><span class="n">foreach</span><span class="o">(</span><span class="n">split</span> <span class="k">=&gt;</span> <span class="n">props</span><span class="o">.</span><span class="n">setProperty</span><span class="o">(</span><span class="n">split</span><span class="o">(</span><span class="mi">0</span><span class="o">),</span> <span class="n">split</span><span class="o">(</span><span class="mi">1</span><span class="o">)))</span>
+    <span class="n">props</span>
+  <span class="o">}</span>
+<span class="o">}</span></code></pre></div>
+
+<p>Scalar functions can be used to deserialize, extract, or convert values (and more). By overwriting the <code>open()</code> method we can even have access to runtime information such as distributed cached files or metrics. Even the <code>open()</code> method is only called once during the runtime\u2019s <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.3/internals/task_lifecycle.html">task lifecycle</a>.</p>
+
+<h2 id="unified-windowing-for-static-and-streaming-data">Unified Windowing for Static and Streaming Data</h2>
+
+<p>Another very common task, especially when working with continuous data, is the definition of windows to split a stream into pieces of finite size, over which we can apply computations. At the moment, the Table API supports three types of windows: sliding windows, tumbling windows, and session windows (for general definitions of the different types of windows, we recommend <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/windows.html">Flink\u2019s documentation</a>). All three window types work on <a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/event_time.html">event or processing time</a>. Session windows can be defined over time intervals, sliding and tumbling windows can be defined over time intervals or a number of rows.</p>
+
+<p>Let\u2019s assume that our customer data from the example above is an event stream of updates generated whenever the customer updated his or her preferences. We assume that events come from a TableSource that has assigned timestamps and watermarks. The definition of a window happens again in a LINQ-style fashion. The following example could be used to count the updates to the preferences during one day.</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="n">table</span>
+  <span class="o">.</span><span class="n">window</span><span class="o">(</span><span class="nc">Tumble</span> <span class="n">over</span> <span class="mf">1.d</span><span class="n">ay</span> <span class="n">on</span> <span class="-Symbol">&#39;rowtime</span> <span class="n">as</span> <span class="-Symbol">&#39;w</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">groupBy</span><span class="o">(</span><span class="-Symbol">&#39;id</span><span class="o">,</span> <span class="-Symbol">&#39;w</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">select</span><span class="o">(</span><span class="-Symbol">&#39;id</span><span class="o">,</span> <span class="-Symbol">&#39;w</span><span class="o">.</span><span class="n">start</span> <span class="n">as</span> <span class="-Symbol">&#39;from</span><span class="o">,</span> <span class="-Symbol">&#39;w</span><span class="o">.</span><span class="n">end</span> <span class="n">as</span> <span class="-Symbol">&#39;to</span><span class="o">,</span> <span class="-Symbol">&#39;prefs</span><span class="o">.</span><span class="n">count</span> <span class="n">as</span> <span class="-Symbol">&#39;updates</span><span class="o">)</span></code></pre></div>
+
+<p>By using the <code>on()</code> parameter, we can specify whether the window is supposed to work on event-time or not. The Table API assumes that timestamps and watermarks are assigned correctly when using event-time. Elements with timestamps smaller than the last received watermark are dropped. Since the extraction of timestamps and generation of watermarks depends on the data source and requires some deeper knowledge of their origin, the TableSource or the upstream DataStream is usually responsible for assigning these properties.</p>
+
+<p>The following code shows how to define other types of windows:</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="c1">// using processing-time</span>
+<span class="n">table</span><span class="o">.</span><span class="n">window</span><span class="o">(</span><span class="nc">Tumble</span> <span class="n">over</span> <span class="mf">100.</span><span class="n">rows</span> <span class="n">as</span> <span class="-Symbol">&#39;manyRowWindow</span><span class="o">)</span>
+<span class="c1">// using event-time</span>
+<span class="n">table</span><span class="o">.</span><span class="n">window</span><span class="o">(</span><span class="nc">Session</span> <span class="n">withGap</span> <span class="mf">15.</span><span class="n">minutes</span> <span class="n">on</span> <span class="-Symbol">&#39;rowtime</span> <span class="n">as</span> <span class="-Symbol">&#39;sessionWindow</span><span class="o">)</span>
+<span class="n">table</span><span class="o">.</span><span class="n">window</span><span class="o">(</span><span class="nc">Slide</span> <span class="n">over</span> <span class="mf">1.d</span><span class="n">ay</span> <span class="n">every</span> <span class="mf">1.</span><span class="n">hour</span> <span class="n">on</span> <span class="-Symbol">&#39;rowtime</span> <span class="n">as</span> <span class="-Symbol">&#39;dailyWindow</span><span class="o">)</span></code></pre></div>
+
+<p>Since batch is just a special case of streaming (where a batch happens to have a defined start and end point), it is also possible to apply all of these windows in a batch execution environment. Without any modification of the table program itself, we can run the code on a DataSet given that we specified a column named \u201crowtime\u201d. This is particularly interesting if we want to compute exact results from time-to-time, so that late events that are heavily out-of-order can be included in the computation.</p>
+
+<p>At the moment, the Table API only supports so-called \u201cgroup windows\u201d that also exist in the DataStream API. Other windows such as SQL\u2019s OVER clause windows are in development and <a href="https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations">planned for Flink 1.3</a>.</p>
+
+<p>In order to demonstrate the expressiveness and capabilities of the API, here\u2019s a snippet with a more advanced example of an exponentially decaying moving average over a sliding window of one hour which returns aggregated results every second. The table program weighs recent orders more heavily than older orders. This example is borrowed from <a href="https://calcite.apache.org/docs/stream.html#hopping-windows">Apache Calcite</a> and shows what will be possible in future Flink releases for both the Table API and SQL.</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="n">table</span>
+  <span class="o">.</span><span class="n">window</span><span class="o">(</span><span class="nc">Slide</span> <span class="n">over</span> <span class="mf">1.</span><span class="n">hour</span> <span class="n">every</span> <span class="mf">1.</span><span class="n">second</span> <span class="n">as</span> <span class="-Symbol">&#39;w</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">groupBy</span><span class="o">(</span><span class="-Symbol">&#39;productId</span><span class="o">,</span> <span class="-Symbol">&#39;w</span><span class="o">)</span>
+  <span class="o">.</span><span class="n">select</span><span class="o">(</span>
+    <span class="-Symbol">&#39;w</span><span class="o">.</span><span class="n">end</span><span class="o">,</span>
+    <span class="-Symbol">&#39;productId</span><span class="o">,</span>
+    <span class="o">(</span><span class="-Symbol">&#39;unitPrice</span> <span class="o">*</span> <span class="o">(</span><span class="-Symbol">&#39;rowtime</span> <span class="o">-</span> <span class="-Symbol">&#39;w</span><span class="o">.</span><span class="n">start</span><span class="o">).</span><span class="n">exp</span><span class="o">()</span> <span class="o">/</span> <span class="mf">1.</span><span class="n">hour</span><span class="o">).</span><span class="n">sum</span> <span class="o">/</span> <span class="o">((</span><span class="-Symbol">&#39;rowtime</span> <span class="o">-</span> <span class="-Symbol">&#39;w</span><span class="o">.</span><span class="n">start</span><span class="o">).</span><span class="n">exp</span><span class="o">()</span> <span class="o">/</span> <span class="mf">1.</span><span class="n">hour</span><span class="o">).</span><span class="n">sum</span><span class="o">)</span></code></pre></div>
+
+<h2 id="user-defined-table-functions">User-defined Table Functions</h2>
+
+<p><a href="https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/table_api.html#user-defined-table-functions">User-defined table functions</a> were added in Flink 1.2. These can be quite useful for table columns containing non-atomic values which need to be extracted and mapped to separate fields before processing. Table functions take an arbitrary number of scalar values and allow for returning an arbitrary number of rows as output instead of a single value, similar to a flatMap function in the DataStream or DataSet API. The output of a table function can then be joined with the original row in the table by using either a left-outer join or cross join.</p>
+
+<p>Using the previously-mentioned customer table, let\u2019s assume we want to produce a table that contains the color and size preferences as separate columns. The table program would look like this:</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="c1">// create an instance of the table function</span>
+<span class="k">val</span> <span class="n">extractPrefs</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">PropertiesExtractor</span><span class="o">()</span>
+
+<span class="c1">// derive rows and join them with original row</span>
+<span class="n">table</span>
+  <span class="o">.</span><span class="n">join</span><span class="o">(</span><span class="n">extractPrefs</span><span class="o">(</span><span class="-Symbol">&#39;prefs</span><span class="o">)</span> <span class="n">as</span> <span class="o">(</span><span class="-Symbol">&#39;color</span><span class="o">,</span> <span class="-Symbol">&#39;size</span><span class="o">))</span>
+  <span class="o">.</span><span class="n">select</span><span class="o">(</span><span class="-Symbol">&#39;id</span><span class="o">,</span> <span class="-Symbol">&#39;username</span><span class="o">,</span> <span class="-Symbol">&#39;color</span><span class="o">,</span> <span class="-Symbol">&#39;size</span><span class="o">)</span></code></pre></div>
+
+<p>The <code>PropertiesExtractor</code> is a user-defined table function that extracts the color and size. We are not interested in customers that haven\u2019t set these preferences and thus don\u2019t emit anything if both properties are not present in the string value. Since we are using a (cross) join in the program, customers without a result on the right side of the join will be filtered out.</p>
+
+<div class="highlight"><pre><code class="language-scala"><span class="k">class</span> <span class="nc">PropertiesExtractor</span> <span class="k">extends</span> <span class="nc">TableFunction</span><span class="o">[</span><span class="kt">Row</span><span class="o">]</span> <span class="o">{</span>
+  <span class="k">def</span> <span class="n">eval</span><span class="o">(</span><span class="n">prefs</span><span class="k">:</span> <span class="kt">String</span><span class="o">)</span><span class="k">:</span> <span class="kt">Unit</span> <span class="o">=</span> <span class="o">{</span>
+    <span class="c1">// split string into (key, value) pairs</span>
+    <span class="k">val</span> <span class="n">pairs</span> <span class="k">=</span> <span class="n">prefs</span>
+      <span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="s">&quot;,&quot;</span><span class="o">)</span>
+      <span class="o">.</span><span class="n">map</span> <span class="o">{</span> <span class="n">kv</span> <span class="k">=&gt;</span>
+        <span class="k">val</span> <span class="n">split</span> <span class="k">=</span> <span class="n">kv</span><span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="s">&quot;=&quot;</span><span class="o">)</span>
+        <span class="o">(</span><span class="n">split</span><span class="o">(</span><span class="mi">0</span><span class="o">),</span> <span class="n">split</span><span class="o">(</span><span class="mi">1</span><span class="o">))</span>
+      <span class="o">}</span>
+
+    <span class="k">val</span> <span class="n">color</span> <span class="k">=</span> <span class="n">pairs</span><span class="o">.</span><span class="n">find</span><span class="o">(\</span><span class="k">_</span><span class="o">.\</span><span class="n">_1</span> <span class="o">==</span> <span class="s">&quot;color&quot;</span><span class="o">).</span><span class="n">map</span><span class="o">(\</span><span class="k">_</span><span class="o">.\</span><span class="n">_2</span><span class="o">)</span>
+    <span class="k">val</span> <span class="n">size</span> <span class="k">=</span> <span class="n">pairs</span><span class="o">.</span><span class="n">find</span><span class="o">(\</span><span class="k">_</span><span class="o">.\</span><span class="n">_1</span> <span class="o">==</span> <span class="s">&quot;size&quot;</span><span class="o">).</span><span class="n">map</span><span class="o">(\</span><span class="k">_</span><span class="o">.\</span><span class="n">_2</span><span class="o">)</span>
+
+    <span class="c1">// emit a row if color and size are specified</span>
+    <span class="o">(</span><span class="n">color</span><span class="o">,</span> <span class="n">size</span><span class="o">)</span> <span class="k">match</span> <span class="o">{</span>
+      <span class="k">case</span> <span class="o">(</span><span class="nc">Some</span><span class="o">(</span><span class="n">c</span><span class="o">),</span> <span class="nc">Some</span><span class="o">(</span><span class="n">s</span><span class="o">))</span> <span class="k">=&gt;</span> <span class="n">collect</span><span class="o">(</span><span class="nc">Row</span><span class="o">.</span><span class="n">of</span><span class="o">(</span><span class="n">c</span><span class="o">,</span> <span class="n">s</span><span class="o">))</span>
+      <span class="k">case</span> <span class="k">_</span> <span class="k">=&gt;</span> <span class="c1">// skip</span>
+    <span class="o">}</span>
+  <span class="o">}</span>
+
+  <span class="k">override</span> <span class="k">def</span> <span class="n">getResultType</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">RowTypeInfo</span><span class="o">(</span><span class="nc">Types</span><span class="o">.</span><span class="nc">STRING</span><span class="o">,</span> <span class="nc">Types</span><span class="o">.</span><span class="nc">STRING</span><span class="o">)</span>
+<span class="o">}</span></code></pre></div>
+
+<h2 id="conclusion">Conclusion</h2>
+
+<p>There is significant interest in making streaming more accessible and easier to use. Flink\u2019s Table API development is happening quickly, and we believe that soon, you will be able to implement large batch or streaming pipelines using purely relational APIs or even convert existing Flink jobs to table programs. The Table API is already a very useful tool since you can work around limitations and missing features at any time by switching back-and-forth between the DataSet/DataStream abstraction to the Table abstraction.</p>
+
+<p>Contributions like support of Apache Hive UDFs, external catalogs, more TableSources, additional windows, and more operators will make the Table API an even more useful tool. Particularly, the upcoming introduction of Dynamic Tables, which is worth a blog post of its own, shows that even in 2017, new relational APIs open the door to a number of possibilities.</p>
+
+<p>Try it out, or even better, join the design discussions on the <a href="http://flink.apache.org/community.html#mailing-lists">mailing lists</a> and <a href="https://issues.apache.org/jira/browse/FLINK/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel">JIRA</a> and start contributing!</p>
+
+      </article>
+    </div>
+
+    <div class="row">
+      <div id="disqus_thread"></div>
+      <script type="text/javascript">
+        /* * * CONFIGURATION VARIABLES: EDIT BEFORE PASTING INTO YOUR WEBPAGE * * */
+        var disqus_shortname = 'stratosphere-eu'; // required: replace example with your forum shortname
+
+        /* * * DON'T EDIT BELOW THIS LINE * * */
+        (function() {
+            var dsq = document.createElement('script'); dsq.type = 'text/javascript'; dsq.async = true;
+            dsq.src = '//' + disqus_shortname + '.disqus.com/embed.js';
+             (document.getElementsByTagName('head')[0] || document.getElementsByTagName('body')[0]).appendChild(dsq);
+        })();
+      </script>
+    </div>
+  </div>
+</div>
+      </div>
+    </div>
+
+    <hr />
+
+    <div class="row">
+      <div class="footer text-center col-sm-12">
+        <p>Copyright © 2014-2016 <a href="http://apache.org">The Apache Software Foundation</a>. All Rights Reserved.</p>
+        <p>Apache Flink, Apache, and the Apache feather logo are either registered trademarks or trademarks of The Apache Software Foundation.</p>
+        <p><a href="/privacy-policy.html">Privacy Policy</a> &middot; <a href="/blog/feed.xml">RSS feed</a></p>
+      </div>
+    </div>
+    </div><!-- /.container -->
+
+    <!-- Include all compiled plugins (below), or include individual files as needed -->
+    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.3.4/js/bootstrap.min.js"></script>
+    <script src="/js/codetabs.js"></script>
+    <script src="/js/stickysidebar.js"></script>
+
+
+    <!-- Google Analytics -->
+    <script>
+      (function(i,s,o,g,r,a,m){i['GoogleAnalyticsObject']=r;i[r]=i[r]||function(){
+      (i[r].q=i[r].q||[]).push(arguments)},i[r].l=1*new Date();a=s.createElement(o),
+      m=s.getElementsByTagName(o)[0];a.async=1;a.src=g;m.parentNode.insertBefore(a,m)
+      })(window,document,'script','//www.google-analytics.com/analytics.js','ga');
+
+      ga('create', 'UA-52545728-1', 'auto');
+      ga('send', 'pageview');
+    </script>
+  </body>
+</html>


[3/3] flink-web git commit: Rebuild website and update date of release 1.1.5

Posted by tw...@apache.org.
Rebuild website and update date of release 1.1.5


Project: http://git-wip-us.apache.org/repos/asf/flink-web/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink-web/commit/661f7648
Tree: http://git-wip-us.apache.org/repos/asf/flink-web/tree/661f7648
Diff: http://git-wip-us.apache.org/repos/asf/flink-web/diff/661f7648

Branch: refs/heads/asf-site
Commit: 661f76489b9a1a1cf7b19b94e729f2cf7974a1f4
Parents: d3e398e
Author: twalthr <tw...@apache.org>
Authored: Wed Mar 29 18:34:21 2017 +0200
Committer: twalthr <tw...@apache.org>
Committed: Wed Mar 29 18:34:21 2017 +0200

----------------------------------------------------------------------
 _posts/2017-03-22-release-1.1.5.md              |  74 ----
 _posts/2017-03-23-release-1.1.5.md              |  74 ++++
 content/blog/feed.xml                           | 261 ++++++++++---
 content/blog/index.html                         |  33 +-
 content/blog/page2/index.html                   |  34 +-
 content/blog/page3/index.html                   |  40 +-
 content/blog/page4/index.html                   |  41 ++-
 content/blog/page5/index.html                   |  23 ++
 content/index.html                              |   8 +-
 .../news/2017/03/29/table-sql-api-update.html   | 364 +++++++++++++++++++
 10 files changed, 776 insertions(+), 176 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/_posts/2017-03-22-release-1.1.5.md
----------------------------------------------------------------------
diff --git a/_posts/2017-03-22-release-1.1.5.md b/_posts/2017-03-22-release-1.1.5.md
deleted file mode 100644
index 8c62cbe..0000000
--- a/_posts/2017-03-22-release-1.1.5.md
+++ /dev/null
@@ -1,74 +0,0 @@
----
-layout: post
-title:  "Apache Flink 1.1.5 Released"
-date:   2017-03-22 18:00:00
-categories: news
----
-
-The Apache Flink community released the next bugfix version of the Apache Flink 1.1 series.
-
-This release includes critical fixes for HA recovery robustness, fault tolerance
-guarantees of the Flink Kafka Connector, as well as classloading issues with the Kryo serializer.
-We highly recommend all users to upgrade to Flink 1.1.5.
-
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-java</artifactId>
-  <version>1.1.5</version>
-</dependency>
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-java_2.10</artifactId>
-  <version>1.1.5</version>
-</dependency>
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-clients_2.10</artifactId>
-  <version>1.1.5</version>
-</dependency>
-```
-
-You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html).
-
-## Release Notes - Flink - Version 1.1.5
-
-### Bug
-<ul>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5701'>FLINK-5701</a>] -         FlinkKafkaProducer should check asyncException on checkpoints
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-6006'>FLINK-6006</a>] -         Kafka Consumer can lose state if queried partition list is incomplete on restore
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5940'>FLINK-5940</a>] -         ZooKeeperCompletedCheckpointStore cannot handle broken state handles
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5942'>FLINK-5942</a>] -         Harden ZooKeeperStateHandleStore to deal with corrupted data
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-6025'>FLINK-6025</a>] -         User code ClassLoader not used when KryoSerializer fallbacks to serialization for copying
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5945'>FLINK-5945</a>] -         Close function in OuterJoinOperatorBase#executeOnCollections
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5934'>FLINK-5934</a>] -         Scheduler in ExecutionGraph null if failure happens in ExecutionGraph.restoreLatestCheckpointedState
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5771'>FLINK-5771</a>] -         DelimitedInputFormat does not correctly handle multi-byte delimiters
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5647'>FLINK-5647</a>] -         Fix RocksDB Backend Cleanup
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-2662'>FLINK-2662</a>] -         CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5585'>FLINK-5585</a>] -         NullPointer Exception in JobManager.updateAccumulators
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5484'>FLINK-5484</a>] -         Add test for registered Kryo types
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5518'>FLINK-5518</a>] -         HadoopInputFormat throws NPE when close() is called before open()
-</li>
-</ul>
-
-### Improvement
-<ul>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5575'>FLINK-5575</a>] -         in old releases, warn users and guide them to the latest stable docs
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5639'>FLINK-5639</a>] -         Clarify License implications of RabbitMQ Connector
-</li>
-<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5466'>FLINK-5466</a>] -         Make production environment default in gulpfile
-</li>
-</ul>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/_posts/2017-03-23-release-1.1.5.md
----------------------------------------------------------------------
diff --git a/_posts/2017-03-23-release-1.1.5.md b/_posts/2017-03-23-release-1.1.5.md
new file mode 100644
index 0000000..847f3c7
--- /dev/null
+++ b/_posts/2017-03-23-release-1.1.5.md
@@ -0,0 +1,74 @@
+---
+layout: post
+title:  "Apache Flink 1.1.5 Released"
+date:   2017-03-23 18:00:00
+categories: news
+---
+
+The Apache Flink community released the next bugfix version of the Apache Flink 1.1 series.
+
+This release includes critical fixes for HA recovery robustness, fault tolerance
+guarantees of the Flink Kafka Connector, as well as classloading issues with the Kryo serializer.
+We highly recommend that all users upgrade to Flink 1.1.5.
+
+```xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-java</artifactId>
+  <version>1.1.5</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java_2.10</artifactId>
+  <version>1.1.5</version>
+</dependency>
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-clients_2.10</artifactId>
+  <version>1.1.5</version>
+</dependency>
+```
+
+You can find the binaries on the updated [Downloads page](http://flink.apache.org/downloads.html).
+
+## Release Notes - Flink - Version 1.1.5
+
+### Bug
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5701'>FLINK-5701</a>] -         FlinkKafkaProducer should check asyncException on checkpoints
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-6006'>FLINK-6006</a>] -         Kafka Consumer can lose state if queried partition list is incomplete on restore
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5940'>FLINK-5940</a>] -         ZooKeeperCompletedCheckpointStore cannot handle broken state handles
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5942'>FLINK-5942</a>] -         Harden ZooKeeperStateHandleStore to deal with corrupted data
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-6025'>FLINK-6025</a>] -         User code ClassLoader not used when KryoSerializer fallbacks to serialization for copying
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5945'>FLINK-5945</a>] -         Close function in OuterJoinOperatorBase#executeOnCollections
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5934'>FLINK-5934</a>] -         Scheduler in ExecutionGraph null if failure happens in ExecutionGraph.restoreLatestCheckpointedState
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5771'>FLINK-5771</a>] -         DelimitedInputFormat does not correctly handle multi-byte delimiters
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5647'>FLINK-5647</a>] -         Fix RocksDB Backend Cleanup
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-2662'>FLINK-2662</a>] -         CompilerException: "Bug: Plan generation for Unions picked a ship strategy between binary plan operators."
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5585'>FLINK-5585</a>] -         NullPointer Exception in JobManager.updateAccumulators
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5484'>FLINK-5484</a>] -         Add test for registered Kryo types
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5518'>FLINK-5518</a>] -         HadoopInputFormat throws NPE when close() is called before open()
+</li>
+</ul>
+
+### Improvement
+<ul>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5575'>FLINK-5575</a>] -         in old releases, warn users and guide them to the latest stable docs
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5639'>FLINK-5639</a>] -         Clarify License implications of RabbitMQ Connector
+</li>
+<li>[<a href='https://issues.apache.org/jira/browse/FLINK-5466'>FLINK-5466</a>] -         Make production environment default in gulpfile
+</li>
+</ul>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/content/blog/feed.xml
----------------------------------------------------------------------
diff --git a/content/blog/feed.xml b/content/blog/feed.xml
index 0f50aab..b677b23 100644
--- a/content/blog/feed.xml
+++ b/content/blog/feed.xml
@@ -7,6 +7,183 @@
 <atom:link href="http://flink.apache.org/blog/feed.xml" rel="self" type="application/rss+xml" />
 
 <item>
+<title>From Streams to Tables and Back Again: An Update on Flink&#39;s Table &amp; SQL API</title>
+<description>&lt;p&gt;Stream processing can deliver a lot of value. Many organizations have recognized the benefit of managing large volumes of data in real-time, reacting quickly to trends, and providing customers with live services at scale. Streaming applications with well-defined business logic can deliver a competitive advantage.&lt;/p&gt;
+
+&lt;p&gt;Flink’s &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html&quot;&gt;DataStream&lt;/a&gt; abstraction is a powerful API which lets you flexibly define both basic and complex streaming pipelines. Additionally, it offers low-level operations such as &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/asyncio.html&quot;&gt;Async IO&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/stream/process_function.html&quot;&gt;ProcessFunctions&lt;/a&gt;. However, many users do not need such a deep level of flexibility. They need an API which quickly solves 80% of their use cases where simple tasks can be defined using little code.&lt;/p&gt;
+
+&lt;p&gt;To deliver the power of stream processing to a broader set of users, the Apache Flink community is developing APIs that provide simpler abstractions and more concise syntax so that users can focus on their business logic instead of advanced streaming concepts. Along with other APIs (such as &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/libs/cep.html&quot;&gt;CEP&lt;/a&gt; for complex event processing on streams), Flink offers a relational API that aims to unify stream and batch processing: the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/table_api.html&quot;&gt;Table &amp;amp; SQL API&lt;/a&gt;, often referred to as the Table API.&lt;/p&gt;
+
+&lt;p&gt;Recently, contributors working for companies such as Alibaba, Huawei, data Artisans, and more decided to further develop the Table API. Over the past year, the Table API has been rewritten entirely. Since Flink 1.1, its core has been based on &lt;a href=&quot;http://calcite.apache.org/&quot;&gt;Apache Calcite&lt;/a&gt;, which parses SQL and optimizes all relational queries. Today, the Table API can address a wide range of use cases in both batch and stream environments with unified semantics.&lt;/p&gt;
+
+&lt;p&gt;This blog post summarizes the current status of Flink’s Table API and showcases some of the recently added features in Apache Flink. Among the features presented here are unified access to batch and streaming data, data transformation, and window operators.
+The following paragraphs are intended not only to give you a general overview of the Table API, but also to illustrate the potential of relational APIs in the future.&lt;/p&gt;
+
+&lt;p&gt;Because the Table API is built on top of Flink’s core APIs, &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/datastream_api.html&quot;&gt;DataStreams&lt;/a&gt; and &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/batch/index.html&quot;&gt;DataSets&lt;/a&gt; can be converted to a Table and vice versa without much overhead. Hereafter, we show how to create tables from different sources and specify programs that can be executed locally or in a distributed setting. In this post, we will use the Scala version of the Table API, but there is also a Java version as well as a SQL API with an equivalent set of features.&lt;/p&gt;
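+
+&lt;p&gt;As a small sketch (assuming the implicit conversions of the Scala Table API are imported; the input stream is purely illustrative), the round trip between both worlds looks like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// sketch: a DataStream becomes a Table with named fields, and back again
+val stream: DataStream[(Long, String)] = env.fromElements((42L, &amp;quot;Bob Smith&amp;quot;))
+val customers: Table = stream.toTable(tEnv, &amp;#39;id, &amp;#39;name)
+val rows: DataStream[Row] = customers.toDataStream[Row]&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;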
+
+&lt;h2 id=&quot;data-transformation-and-etl&quot;&gt;Data Transformation and ETL&lt;/h2&gt;
+
+&lt;p&gt;A common task in every data processing pipeline is importing data from one or multiple systems, applying some transformations to it, and then exporting the data to another system. The Table API can help to manage these recurring tasks. For reading data, the API provides a set of ready-to-use &lt;code&gt;TableSources&lt;/code&gt; such as a &lt;code&gt;CsvTableSource&lt;/code&gt; and &lt;code&gt;KafkaTableSource&lt;/code&gt;; however, it also allows the implementation of custom &lt;code&gt;TableSources&lt;/code&gt; that can hide configuration specifics (e.g. watermark generation) from users who are less familiar with streaming concepts.&lt;/p&gt;
+
+&lt;p&gt;Let’s assume we have a CSV file that stores customer information. The values are delimited by a “|” character and contain a customer identifier, name, timestamp of the last update, and preferences encoded in a comma-separated key-value string:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code&gt;42|Bob Smith|2016-07-23 16:10:11|color=12,length=200,size=200
+&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The following example illustrates how to read a CSV file and perform some data cleansing before converting it to a regular DataStream program.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// set up execution environment&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;env&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;StreamExecutionEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;getExecutionEnvironment&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;TableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;getTableEnvironment&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;// configure table source&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;customerSource&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;CsvTableSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;builder&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;path&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;/path/to/customer_data.csv&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ignoreFirstLine&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;fieldDelimiter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;|&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;field&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;id&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;LONG&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;field&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;name&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;field&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;last_update&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;TIMESTAMP&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;field&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;prefs&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;build&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;// name your table source&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;registerTableSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;customers&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;customerSource&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;// define your table program&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;table&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;scan&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;customers&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;filter&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;isNotNull&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;amp;&amp;amp;&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;last_update&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;&amp;gt;&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;2016-01-01 00:00:00&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;toTimestamp&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;select&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;lowerCase&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(),&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;prefs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;// convert it to a data stream&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ds&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;table&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;toDataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;kt&quot;&gt;Row&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;
+
+&lt;span class=&quot;n&quot;&gt;ds&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;print&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;env&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;execute&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The Table API comes with a large set of built-in functions that make it easy to specify business logic using a language-integrated query (LINQ) syntax. In the example above, we filter out customers with invalid names and select only those that updated their preferences recently. We convert names to lowercase for normalization. For debugging purposes, we convert the table into a DataStream and print it.&lt;/p&gt;
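+
+&lt;p&gt;As a small sketch (the expression names follow the Table API documentation; this is an illustration, not an exhaustive list), a few more of these built-in functions in action:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// sketch: further built-in expressions applied to the customers table
+table.select(&amp;#39;name.trim().upperCase(), &amp;#39;prefs.charLength())&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;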
+
+&lt;p&gt;The &lt;code&gt;CsvTableSource&lt;/code&gt; supports both batch and stream environments. To execute the program above in a batch application, all the programmer has to do is replace the environment with an &lt;code&gt;ExecutionEnvironment&lt;/code&gt; and change the output conversion from &lt;code&gt;DataStream&lt;/code&gt; to &lt;code&gt;DataSet&lt;/code&gt;. The Table API program itself doesn’t change.&lt;/p&gt;
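+
+&lt;p&gt;As a sketch (reusing the &lt;code&gt;customerSource&lt;/code&gt; defined above), the batch variant differs only in the environment setup and the final conversion:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// sketch: the same table program in a batch environment
+val env = ExecutionEnvironment.getExecutionEnvironment
+val tEnv = TableEnvironment.getTableEnvironment(env)
+tEnv.registerTableSource(&amp;quot;customers&amp;quot;, customerSource)
+
+val ds = tEnv
+  .scan(&amp;quot;customers&amp;quot;)
+  .select(&amp;#39;id, &amp;#39;name.lowerCase(), &amp;#39;prefs)
+  .toDataSet[Row] // DataSet instead of DataStream&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;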
+
+&lt;p&gt;In the example, we converted the table program to a data stream of &lt;code&gt;Row&lt;/code&gt; objects. However, we are not limited to row data types. The Table API supports all types from the underlying APIs such as Java and Scala Tuples, Case Classes, POJOs, or generic types that are serialized using Kryo. Let’s assume that we want to have a regular object (POJO) with the following format instead of generic rows:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Customer&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;k&quot;&gt;var&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;id&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;Int&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;
+  &lt;span class=&quot;k&quot;&gt;var&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;name&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;String&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;
+  &lt;span class=&quot;k&quot;&gt;var&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;update&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;Long&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;
+  &lt;span class=&quot;k&quot;&gt;var&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prefs&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;java.util.Properties&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+&lt;p&gt;We can use the following table program to convert the CSV file into Customer objects. Flink takes care of creating objects and mapping fields for us.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;ds&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;tEnv&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;scan&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;customers&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;select&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;name&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;last_update&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;update&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;parseProperties&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;prefs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;prefs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;toDataStream&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;kt&quot;&gt;Customer&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;You might have noticed that the query above uses a function to parse the preferences field. Even though Flink’s Table API is shipped with a large set of built-in functions, it is often necessary to define custom user-defined scalar functions. In the above example, we use the user-defined function &lt;code&gt;parseProperties&lt;/code&gt;. The following code snippet shows how easily we can implement a scalar function.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;k&quot;&gt;object&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;parseProperties&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;ScalarFunction&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;eval&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;str&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;Properties&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;props&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Properties&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;str&lt;/span&gt;
+      &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;,&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+      &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;=&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+      &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;foreach&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;props&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;setProperty&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)))&lt;/span&gt;
+    &lt;span class=&quot;n&quot;&gt;props&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;Scalar functions can be used to deserialize, extract, or convert values (and more). By overriding the &lt;code&gt;open()&lt;/code&gt; method, we even have access to runtime information such as distributed cache files or metrics. The &lt;code&gt;open()&lt;/code&gt; method is called only once during the runtime’s &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.3/internals/task_lifecycle.html&quot;&gt;task lifecycle&lt;/a&gt;.&lt;/p&gt;
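+
+&lt;p&gt;As a sketch (assuming the &lt;code&gt;FunctionContext&lt;/code&gt; of Flink’s UDF API; the class and metric names are illustrative), a scalar function that registers a metric in &lt;code&gt;open()&lt;/code&gt; could look as follows:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;import java.util.Properties
+import org.apache.flink.metrics.Counter
+import org.apache.flink.table.functions.{FunctionContext, ScalarFunction}
+
+class CountingParser extends ScalarFunction {
+  private var parsed: Counter = _
+
+  // called once per task before the first eval()
+  override def open(context: FunctionContext): Unit = {
+    parsed = context.getMetricGroup.counter(&amp;quot;parsedPrefs&amp;quot;)
+  }
+
+  def eval(str: String): Properties = {
+    parsed.inc()
+    val props = new Properties()
+    str.split(&amp;quot;,&amp;quot;).map(_.split(&amp;quot;=&amp;quot;)).foreach(p =&amp;gt; props.setProperty(p(0), p(1)))
+    props
+  }
+}&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;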
+
+&lt;h2 id=&quot;unified-windowing-for-static-and-streaming-data&quot;&gt;Unified Windowing for Static and Streaming Data&lt;/h2&gt;
+
+&lt;p&gt;Another very common task, especially when working with continuous data, is the definition of windows to split a stream into pieces of finite size, over which we can apply computations. At the moment, the Table API supports three types of windows: sliding windows, tumbling windows, and session windows (for general definitions of the different types of windows, we recommend &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/windows.html&quot;&gt;Flink’s documentation&lt;/a&gt;). All three window types work on &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/event_time.html&quot;&gt;event or processing time&lt;/a&gt;. Session windows can be defined over time intervals; sliding and tumbling windows can be defined over time intervals or a number of rows.&lt;/p&gt;
+
+&lt;p&gt;Let’s assume that our customer data from the example above is an event stream of updates, generated whenever a customer updates his or her preferences. We assume that the events come from a TableSource that has assigned timestamps and watermarks. The definition of a window happens again in a LINQ-style fashion. The following example counts the updates to the preferences during one day.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;n&quot;&gt;table&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;window&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Tumble&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;over&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;1.d&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ay&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;on&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;rowtime&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;groupBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;select&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;start&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;from&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;end&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;to&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;prefs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;count&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;updates&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;By using the &lt;code&gt;on()&lt;/code&gt; parameter, we can specify whether the window should work on event time or not. The Table API assumes that timestamps and watermarks are assigned correctly when using event time. Elements with timestamps smaller than the last received watermark are dropped. Since the extraction of timestamps and the generation of watermarks depend on the data source and require some deeper knowledge of its origin, the TableSource or the upstream DataStream is usually responsible for assigning these properties.&lt;/p&gt;
+
+&lt;p&gt;The following code shows how to define other types of windows:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// using processing-time&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;table&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;window&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Tumble&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;over&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;100.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;rows&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;manyRowWindow&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;c1&quot;&gt;// using event-time&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;table&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;window&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Session&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;withGap&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;15.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;minutes&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;on&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;rowtime&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;sessionWindow&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;table&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;window&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Slide&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;over&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;1.d&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;ay&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;every&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;1.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hour&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;on&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;rowtime&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;dailyWindow&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;Since batch is just a special case of streaming (where a batch happens to have a defined start and end point), it is also possible to apply all of these windows in a batch execution environment. Without any modification of the table program itself, we can run the code on a DataSet, given that we specified a column named “rowtime”. This is particularly interesting if we want to compute exact results from time to time, so that late events that are heavily out-of-order can be included in the computation.&lt;/p&gt;
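+
+&lt;p&gt;As a sketch (with a hypothetical &lt;code&gt;customerHistory&lt;/code&gt; table registered in a batch &lt;code&gt;TableEnvironment&lt;/code&gt;), the daily-update count from above runs unchanged:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// sketch: the same window definition, executed on a DataSet
+tEnv.scan(&amp;quot;customerHistory&amp;quot;) // batch table with a rowtime TIMESTAMP column
+  .window(Tumble over 1.day on &amp;#39;rowtime as &amp;#39;w)
+  .groupBy(&amp;#39;id, &amp;#39;w)
+  .select(&amp;#39;id, &amp;#39;w.start as &amp;#39;from, &amp;#39;w.end as &amp;#39;to, &amp;#39;prefs.count as &amp;#39;updates)
+  .toDataSet[Row]&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;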
+
+&lt;p&gt;At the moment, the Table API only supports so-called “group windows” that also exist in the DataStream API. Other windows such as SQL’s OVER clause windows are in development and &lt;a href=&quot;https://cwiki.apache.org/confluence/display/FLINK/FLIP-11%3A+Table+API+Stream+Aggregations&quot;&gt;planned for Flink 1.3&lt;/a&gt;.&lt;/p&gt;
+
+&lt;p&gt;In order to demonstrate the expressiveness and capabilities of the API, here’s a snippet with a more advanced example of an exponentially decaying moving average over a sliding window of one hour that returns aggregated results every second. The table program weights recent orders more heavily than older orders. This example is borrowed from &lt;a href=&quot;https://calcite.apache.org/docs/stream.html#hopping-windows&quot;&gt;Apache Calcite&lt;/a&gt; and shows what will be possible in future Flink releases for both the Table API and SQL.&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;n&quot;&gt;table&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;window&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Slide&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;over&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;1.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hour&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;every&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;1.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;second&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;groupBy&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;productId&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;select&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;
+    &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;end&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+    &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;productId&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;unitPrice&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;*&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;rowtime&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;start&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;exp&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;1.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hour&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;sum&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;((&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;rowtime&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;-&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;w&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;start&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;exp&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;/&lt;/span&gt; &lt;span class=&quot;mf&quot;&gt;1.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;hour&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;sum&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;h2 id=&quot;user-defined-table-functions&quot;&gt;User-defined Table Functions&lt;/h2&gt;
+
+&lt;p&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.2/dev/table_api.html#user-defined-table-functions&quot;&gt;User-defined table functions&lt;/a&gt; were added in Flink 1.2. These can be quite useful for table columns containing non-atomic values which need to be extracted and mapped to separate fields before processing. Table functions take an arbitrary number of scalar values and allow for returning an arbitrary number of rows as output instead of a single value, similar to a flatMap function in the DataStream or DataSet API. The output of a table function can then be joined with the original row in the table by using either a left-outer join or cross join.&lt;/p&gt;
+
+&lt;p&gt;Using the previously mentioned customer table, let’s assume we want to produce a table that contains the color and size preferences as separate columns. The table program would look like this:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;c1&quot;&gt;// create an instance of the table function&lt;/span&gt;
+&lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;extractPrefs&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;PropertiesExtractor&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;()&lt;/span&gt;
+
+&lt;span class=&quot;c1&quot;&gt;// derive rows and join them with original row&lt;/span&gt;
+&lt;span class=&quot;n&quot;&gt;table&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;join&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;extractPrefs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;prefs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;as&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;color&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;size&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;select&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;id&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;username&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;color&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;-Symbol&quot;&gt;&amp;#39;size&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
+
+&lt;p&gt;The &lt;code&gt;PropertiesExtractor&lt;/code&gt; is a user-defined table function that extracts the color and size. We are not interested in customers that haven’t set these preferences and thus don’t emit anything unless both properties are present in the string value. Since we are using a (cross) join in the program, customers without a result on the right side of the join will be filtered out (a left outer join, sketched after the function definition below, would keep them).&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;&lt;span class=&quot;k&quot;&gt;class&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;PropertiesExtractor&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;extends&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;TableFunction&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;[&lt;/span&gt;&lt;span class=&quot;kt&quot;&gt;Row&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;]&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+  &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;eval&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;prefs&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;String&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;:&lt;/span&gt; &lt;span class=&quot;kt&quot;&gt;Unit&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+    &lt;span class=&quot;c1&quot;&gt;// split string into (key, value) pairs&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pairs&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;prefs&lt;/span&gt;
+      &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;,&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+      &lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;map&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;kv&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&amp;gt;&lt;/span&gt;
+        &lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;split&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;kv&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;s&quot;&gt;&amp;quot;=&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+        &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;0&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;split&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;mi&quot;&gt;1&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+      &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+
+    &lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;color&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pairs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;find&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;color&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;_2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+    &lt;span class=&quot;k&quot;&gt;val&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;size&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;pairs&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;find&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;_1&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;==&lt;/span&gt; &lt;span class=&quot;s&quot;&gt;&amp;quot;size&amp;quot;&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;).&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;map&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;k&quot;&gt;_&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;_2&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+
+    &lt;span class=&quot;c1&quot;&gt;// emit a row if color and size are specified&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;color&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;size&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;match&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;{&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;case&lt;/span&gt; &lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Some&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;),&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Some&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;collect&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Row&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;of&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;n&quot;&gt;c&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;s&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;))&lt;/span&gt;
+      &lt;span class=&quot;k&quot;&gt;case&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;_&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&amp;gt;&lt;/span&gt; &lt;span class=&quot;c1&quot;&gt;// skip&lt;/span&gt;
+    &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+  &lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;
+
+  &lt;span class=&quot;k&quot;&gt;override&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;def&lt;/span&gt; &lt;span class=&quot;n&quot;&gt;getResultType&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;=&lt;/span&gt; &lt;span class=&quot;k&quot;&gt;new&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;RowTypeInfo&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;(&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;,&lt;/span&gt; &lt;span class=&quot;nc&quot;&gt;Types&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;.&lt;/span&gt;&lt;span class=&quot;nc&quot;&gt;STRING&lt;/span&gt;&lt;span class=&quot;o&quot;&gt;)&lt;/span&gt;
+&lt;span class=&quot;o&quot;&gt;}&lt;/span&gt;&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;
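+
+&lt;p&gt;If customers without extractable preferences should be kept, a left outer join can be used instead of the cross join (a sketch follows); the missing fields are then null:&lt;/p&gt;
+
+&lt;div class=&quot;highlight&quot;&gt;&lt;pre&gt;&lt;code class=&quot;language-scala&quot;&gt;// sketch: retain all customers, padding missing preferences with nulls
+table
+  .leftOuterJoin(extractPrefs(&amp;#39;prefs) as (&amp;#39;color, &amp;#39;size))
+  .select(&amp;#39;id, &amp;#39;username, &amp;#39;color, &amp;#39;size)&lt;/code&gt;&lt;/pre&gt;&lt;/div&gt;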
+
+&lt;h2 id=&quot;conclusion&quot;&gt;Conclusion&lt;/h2&gt;
+
+&lt;p&gt;There is significant interest in making streaming more accessible and easier to use. Flink’s Table API development is happening quickly, and we believe that soon, you will be able to implement large batch or streaming pipelines using purely relational APIs or even convert existing Flink jobs to table programs. The Table API is already a very useful tool since you can work around limitations and missing features at any time by switching back and forth between the DataSet/DataStream abstraction and the Table abstraction.&lt;/p&gt;
+
+&lt;p&gt;Contributions like support for Apache Hive UDFs, external catalogs, more TableSources, additional windows, and more operators will make the Table API an even more useful tool. In particular, the upcoming introduction of Dynamic Tables, which is worth a blog post of its own, shows that even in 2017, new relational APIs open the door to a number of possibilities.&lt;/p&gt;
+
+&lt;p&gt;Try it out, or even better, join the design discussions on the &lt;a href=&quot;http://flink.apache.org/community.html#mailing-lists&quot;&gt;mailing lists&lt;/a&gt; and &lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK/?selectedTab=com.atlassian.jira.jira-projects-plugin:summary-panel&quot;&gt;JIRA&lt;/a&gt; and start contributing!&lt;/p&gt;
+</description>
+<pubDate>Wed, 29 Mar 2017 14:00:00 +0200</pubDate>
+<link>http://flink.apache.org/news/2017/03/29/table-sql-api-update.html</link>
+<guid isPermaLink="true">/news/2017/03/29/table-sql-api-update.html</guid>
+</item>
+
+<item>
 <title>Apache Flink 1.1.5 Released</title>
 <description>&lt;p&gt;The Apache Flink community released the next bugfix version of the Apache Flink 1.1 series.&lt;/p&gt;
 
@@ -74,7 +251,7 @@ We highly recommend all users to upgrade to Flink 1.1.5.&lt;/p&gt;
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Thu, 23 Mar 2017 02:00:00 +0800</pubDate>
+<pubDate>Thu, 23 Mar 2017 19:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2017/03/23/release-1.1.5.html</link>
 <guid isPermaLink="true">/news/2017/03/23/release-1.1.5.html</guid>
 </item>
@@ -348,7 +525,7 @@ If you have, for example, a flatMap() operator that keeps a running aggregate pe
  &lt;li&gt;魏偉哲&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Mon, 06 Feb 2017 20:00:00 +0800</pubDate>
+<pubDate>Mon, 06 Feb 2017 13:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2017/02/06/release-1.2.0.html</link>
 <guid isPermaLink="true">/news/2017/02/06/release-1.2.0.html</guid>
 </item>
@@ -563,7 +740,7 @@ If you have, for example, a flatMap() operator that keeps a running aggregate pe
 &lt;/ul&gt;
 
 </description>
-<pubDate>Wed, 21 Dec 2016 17:00:00 +0800</pubDate>
+<pubDate>Wed, 21 Dec 2016 10:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2016/12/21/release-1.1.4.html</link>
 <guid isPermaLink="true">/news/2016/12/21/release-1.1.4.html</guid>
 </item>
@@ -757,7 +934,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
 
 &lt;p&gt;Lastly, we’d like to extend a sincere thank you to all of the Flink community for making 2016 a great year!&lt;/p&gt;
 </description>
-<pubDate>Mon, 19 Dec 2016 17:00:00 +0800</pubDate>
+<pubDate>Mon, 19 Dec 2016 10:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2016/12/19/2016-year-in-review.html</link>
 <guid isPermaLink="true">/news/2016/12/19/2016-year-in-review.html</guid>
 </item>
@@ -861,7 +1038,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Wed, 12 Oct 2016 17:00:00 +0800</pubDate>
+<pubDate>Wed, 12 Oct 2016 11:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/10/12/release-1.1.3.html</link>
 <guid isPermaLink="true">/news/2016/10/12/release-1.1.3.html</guid>
 </item>
@@ -935,7 +1112,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
 &lt;/ul&gt;
 
 </description>
-<pubDate>Mon, 05 Sep 2016 17:00:00 +0800</pubDate>
+<pubDate>Mon, 05 Sep 2016 11:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/09/05/release-1.1.2.html</link>
 <guid isPermaLink="true">/news/2016/09/05/release-1.1.2.html</guid>
 </item>
@@ -955,7 +1132,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
 &lt;p&gt;We hope to see many community members at Flink Forward 2016. Registration is available online: &lt;a href=&quot;http://flink-forward.org/registration/&quot;&gt;flink-forward.org/registration&lt;/a&gt;
 &lt;/p&gt;
 </description>
-<pubDate>Wed, 24 Aug 2016 17:00:00 +0800</pubDate>
+<pubDate>Wed, 24 Aug 2016 11:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/08/24/ff16-keynotes-panels.html</link>
 <guid isPermaLink="true">/news/2016/08/24/ff16-keynotes-panels.html</guid>
 </item>
@@ -986,7 +1163,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
 
 &lt;p&gt;You can find the binaries on the updated &lt;a href=&quot;http://flink.apache.org/downloads.html&quot;&gt;Downloads page&lt;/a&gt;.&lt;/p&gt;
 </description>
-<pubDate>Thu, 11 Aug 2016 17:00:00 +0800</pubDate>
+<pubDate>Thu, 11 Aug 2016 11:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/08/11/release-1.1.1.html</link>
 <guid isPermaLink="true">/news/2016/08/11/release-1.1.1.html</guid>
 </item>
@@ -1208,7 +1385,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
  &lt;li&gt;卫乐&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Mon, 08 Aug 2016 21:00:00 +0800</pubDate>
+<pubDate>Mon, 08 Aug 2016 15:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/08/08/release-1.1.0.html</link>
 <guid isPermaLink="true">/news/2016/08/08/release-1.1.0.html</guid>
 </item>
@@ -1337,7 +1514,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
 
 &lt;p&gt;If this post made you curious and you want to try out Flink’s SQL interface and the new Table API, we encourage you to do so! Simply clone the SNAPSHOT &lt;a href=&quot;https://github.com/apache/flink/tree/master&quot;&gt;master branch&lt;/a&gt; and check out the &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/apis/table.html&quot;&gt;Table API documentation for the SNAPSHOT version&lt;/a&gt;. Please note that the branch is under heavy development, and hence some code examples in this blog post might not work. We are looking forward to your feedback and welcome contributions.&lt;/p&gt;
 </description>
-<pubDate>Tue, 24 May 2016 18:00:00 +0800</pubDate>
+<pubDate>Tue, 24 May 2016 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/05/24/stream-sql.html</link>
 <guid isPermaLink="true">/news/2016/05/24/stream-sql.html</guid>
 </item>
@@ -1381,7 +1558,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
   &lt;li&gt;[streaming-contrib] Fix port clash in DbStateBackend tests&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Wed, 11 May 2016 16:00:00 +0800</pubDate>
+<pubDate>Wed, 11 May 2016 10:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/05/11/release-1.0.3.html</link>
 <guid isPermaLink="true">/news/2016/05/11/release-1.0.3.html</guid>
 </item>
@@ -1427,7 +1604,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
   &lt;li&gt;[&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-3716&quot;&gt;FLINK-3716&lt;/a&gt;] [kafka consumer] Decreasing socket timeout so testFailOnNoBroker() will pass before JUnit timeout&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Fri, 22 Apr 2016 16:00:00 +0800</pubDate>
+<pubDate>Fri, 22 Apr 2016 10:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/04/22/release-1.0.2.html</link>
 <guid isPermaLink="true">/news/2016/04/22/release-1.0.2.html</guid>
 </item>
@@ -1440,7 +1617,7 @@ enable the joining of a main, high-throughput stream with one more more inputs w
 
 &lt;p&gt;Read more &lt;a href=&quot;http://flink-forward.org/&quot;&gt;here&lt;/a&gt;.&lt;/p&gt;
 </description>
-<pubDate>Thu, 14 Apr 2016 18:00:00 +0800</pubDate>
+<pubDate>Thu, 14 Apr 2016 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/04/14/flink-forward-announce.html</link>
 <guid isPermaLink="true">/news/2016/04/14/flink-forward-announce.html</guid>
 </item>
@@ -1633,7 +1810,7 @@ This feature will allow to prune unpromising event sequences early.&lt;/p&gt;
 &lt;p&gt;&lt;em&gt;Note:&lt;/em&gt; The example code requires Flink 1.0.1 or higher.&lt;/p&gt;
 
 </description>
-<pubDate>Wed, 06 Apr 2016 18:00:00 +0800</pubDate>
+<pubDate>Wed, 06 Apr 2016 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/04/06/cep-monitoring.html</link>
 <guid isPermaLink="true">/news/2016/04/06/cep-monitoring.html</guid>
 </item>
@@ -1704,7 +1881,7 @@ This feature will allow to prune unpromising event sequences early.&lt;/p&gt;
 &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Wed, 06 Apr 2016 16:00:00 +0800</pubDate>
+<pubDate>Wed, 06 Apr 2016 10:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2016/04/06/release-1.0.1.html</link>
 <guid isPermaLink="true">/news/2016/04/06/release-1.0.1.html</guid>
 </item>
@@ -1831,7 +2008,7 @@ When using this backend, active state in streaming programs can grow well beyond
   &lt;li&gt;zhangminglei&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Tue, 08 Mar 2016 21:00:00 +0800</pubDate>
+<pubDate>Tue, 08 Mar 2016 14:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2016/03/08/release-1.0.0.html</link>
 <guid isPermaLink="true">/news/2016/03/08/release-1.0.0.html</guid>
 </item>
@@ -1868,7 +2045,7 @@ When using this backend, active state in streaming programs can grow well beyond
   &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-3020&quot;&gt;FLINK-3020&lt;/a&gt;: Set number of task slots to maximum parallelism in local execution&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Thu, 11 Feb 2016 16:00:00 +0800</pubDate>
+<pubDate>Thu, 11 Feb 2016 09:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2016/02/11/release-0.10.2.html</link>
 <guid isPermaLink="true">/news/2016/02/11/release-0.10.2.html</guid>
 </item>
@@ -2092,7 +2269,7 @@ discussion&lt;/a&gt;
 on the Flink mailing lists.&lt;/p&gt;
 
 </description>
-<pubDate>Fri, 18 Dec 2015 18:00:00 +0800</pubDate>
+<pubDate>Fri, 18 Dec 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/12/18/a-year-in-review.html</link>
 <guid isPermaLink="true">/news/2015/12/18/a-year-in-review.html</guid>
 </item>
@@ -2239,7 +2416,7 @@ While you can embed Spouts/Bolts in a Flink program and mix-and-match them with
 &lt;p&gt;&lt;sup id=&quot;fn1&quot;&gt;1. We confess, there are three lines changed compared to a Storm project &lt;img class=&quot;emoji&quot; style=&quot;width:16px;height:16px;align:absmiddle&quot; src=&quot;/img/blog/smirk.png&quot; /&gt;, because the example covers local &lt;em&gt;and&lt;/em&gt; remote execution. &lt;a href=&quot;#ref1&quot; title=&quot;Back to text.&quot;&gt;↩&lt;/a&gt;&lt;/sup&gt;&lt;/p&gt;
 
 </description>
-<pubDate>Fri, 11 Dec 2015 18:00:00 +0800</pubDate>
+<pubDate>Fri, 11 Dec 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/12/11/storm-compatibility.html</link>
 <guid isPermaLink="true">/news/2015/12/11/storm-compatibility.html</guid>
 </item>
@@ -2396,7 +2573,7 @@ While you can embed Spouts/Bolts in a Flink program and mix-and-match them with
 
 &lt;p&gt;Support for various types of windows over continuous data streams is a must-have for modern stream processors. Apache Flink is a stream processor with a very strong feature set, including a very flexible mechanism to build and evaluate windows over continuous data streams. Flink provides pre-defined window operators for common use cases as well as a toolbox that allows you to define highly customized windowing logic. The Flink community will add more pre-defined window operators as we learn the requirements from our users.&lt;/p&gt;
 </description>
-<pubDate>Fri, 04 Dec 2015 18:00:00 +0800</pubDate>
+<pubDate>Fri, 04 Dec 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/12/04/Introducing-windows.html</link>
 <guid isPermaLink="true">/news/2015/12/04/Introducing-windows.html</guid>
 </item>
@@ -2455,7 +2632,7 @@ While you can embed Spouts/Bolts in a Flink program and mix-and-match them with
 &lt;/ul&gt;
 
 </description>
-<pubDate>Fri, 27 Nov 2015 16:00:00 +0800</pubDate>
+<pubDate>Fri, 27 Nov 2015 09:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/11/27/release-0.10.1.html</link>
 <guid isPermaLink="true">/news/2015/11/27/release-0.10.1.html</guid>
 </item>
@@ -2630,7 +2807,7 @@ Also note that some methods in the DataStream API had to be renamed as part of t
 &lt;/ul&gt;
 
 </description>
-<pubDate>Mon, 16 Nov 2015 16:00:00 +0800</pubDate>
+<pubDate>Mon, 16 Nov 2015 09:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/11/16/release-0.10.0.html</link>
 <guid isPermaLink="true">/news/2015/11/16/release-0.10.0.html</guid>
 </item>
@@ -3521,7 +3698,7 @@ Either &lt;code&gt;0 + absolutePointer&lt;/code&gt; or &lt;code&gt;objectRefAddr
 &lt;/div&gt;
 
 </description>
-<pubDate>Wed, 16 Sep 2015 16:00:00 +0800</pubDate>
+<pubDate>Wed, 16 Sep 2015 10:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/09/16/off-heap-memory.html</link>
 <guid isPermaLink="true">/news/2015/09/16/off-heap-memory.html</guid>
 </item>
@@ -3573,7 +3750,7 @@ fault tolerance, the internal runtime architecture, and others.&lt;/p&gt;
 register for the conference.&lt;/p&gt;
 
 </description>
-<pubDate>Thu, 03 Sep 2015 16:00:00 +0800</pubDate>
+<pubDate>Thu, 03 Sep 2015 10:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/09/03/flink-forward.html</link>
 <guid isPermaLink="true">/news/2015/09/03/flink-forward.html</guid>
 </item>
@@ -3634,7 +3811,7 @@ for this release:&lt;/p&gt;
   &lt;li&gt;&lt;a href=&quot;https://issues.apache.org/jira/browse/FLINK-2584&quot;&gt;FLINK-2584&lt;/a&gt; ASM dependency is not shaded away&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Tue, 01 Sep 2015 16:00:00 +0800</pubDate>
+<pubDate>Tue, 01 Sep 2015 10:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/09/01/release-0.9.1.html</link>
 <guid isPermaLink="true">/news/2015/09/01/release-0.9.1.html</guid>
 </item>
@@ -4091,7 +4268,7 @@ tools, graph database systems and sampling techniques.&lt;/p&gt;
 &lt;h2 id=&quot;links&quot;&gt;Links&lt;/h2&gt;
 &lt;p&gt;&lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/libs/gelly_guide.html&quot;&gt;Gelly Documentation&lt;/a&gt;&lt;/p&gt;
 </description>
-<pubDate>Mon, 24 Aug 2015 00:00:00 +0800</pubDate>
+<pubDate>Mon, 24 Aug 2015 00:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/08/24/introducing-flink-gelly.html</link>
 <guid isPermaLink="true">/news/2015/08/24/introducing-flink-gelly.html</guid>
 </item>
@@ -4330,7 +4507,7 @@ tools, graph database systems and sampling techniques.&lt;/p&gt;
 
 &lt;p&gt;Flink will require at least Java 7 in major releases after 0.9.0.&lt;/p&gt;
 </description>
-<pubDate>Wed, 24 Jun 2015 22:00:00 +0800</pubDate>
+<pubDate>Wed, 24 Jun 2015 16:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/06/24/announcing-apache-flink-0.9.0-release.html</link>
 <guid isPermaLink="true">/news/2015/06/24/announcing-apache-flink-0.9.0-release.html</guid>
 </item>
@@ -4369,7 +4546,7 @@ including Apache Flink.&lt;/p&gt;
 
 &lt;p&gt;Stay tuned for a wealth of upcoming events! Two Flink talks will be presented at &lt;a href=&quot;http://berlinbuzzwords.de/15/sessions&quot;&gt;Berlin Buzzwords&lt;/a&gt;, and Flink will be presented at the &lt;a href=&quot;http://2015.hadoopsummit.org/san-jose/&quot;&gt;Hadoop Summit in San Jose&lt;/a&gt;. A &lt;a href=&quot;http://www.meetup.com/Apache-Flink-Meetup/events/220557545/&quot;&gt;training workshop on Apache Flink&lt;/a&gt; is being organized in Berlin. Finally, &lt;a href=&quot;http://2015.flink-forward.org/&quot;&gt;Flink Forward&lt;/a&gt;, the first conference to bring together the whole Flink community, is taking place in Berlin in October 2015.&lt;/p&gt;
 </description>
-<pubDate>Thu, 14 May 2015 18:00:00 +0800</pubDate>
+<pubDate>Thu, 14 May 2015 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/05/14/Community-update-April.html</link>
 <guid isPermaLink="true">/news/2015/05/14/Community-update-April.html</guid>
 </item>
@@ -4560,7 +4737,7 @@ The following figure shows how two objects are compared.&lt;/p&gt;
  &lt;li&gt;Flink’s DBMS-style operators operate natively on binary data yielding high performance in-memory and destage gracefully to disk if necessary.&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Mon, 11 May 2015 18:00:00 +0800</pubDate>
+<pubDate>Mon, 11 May 2015 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/05/11/Juggling-with-Bits-and-Bytes.html</link>
 <guid isPermaLink="true">/news/2015/05/11/Juggling-with-Bits-and-Bytes.html</guid>
 </item>
@@ -4812,7 +4989,7 @@ Improve usability of command line interface&lt;/p&gt;
   &lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Mon, 13 Apr 2015 18:00:00 +0800</pubDate>
+<pubDate>Mon, 13 Apr 2015 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/04/13/release-0.9.0-milestone1.html</link>
 <guid isPermaLink="true">/news/2015/04/13/release-0.9.0-milestone1.html</guid>
 </item>
@@ -4879,7 +5056,7 @@ limited in that it does not yet handle large state and iterative
 programs.&lt;/p&gt;
 
 </description>
-<pubDate>Tue, 07 Apr 2015 18:00:00 +0800</pubDate>
+<pubDate>Tue, 07 Apr 2015 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2015/04/07/march-in-flink.html</link>
 <guid isPermaLink="true">/news/2015/04/07/march-in-flink.html</guid>
 </item>
@@ -5066,7 +5243,7 @@ programs.&lt;/p&gt;
 [4] &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/index.html#semantic-annotations&quot;&gt;Flink 1.0 documentation: Semantic annotations&lt;/a&gt; &lt;br /&gt;
 [5] &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-release-1.0/apis/batch/dataset_transformations.html#join-algorithm-hints&quot;&gt;Flink 1.0 documentation: Optimizer join hints&lt;/a&gt; &lt;br /&gt;&lt;/p&gt;
 </description>
-<pubDate>Fri, 13 Mar 2015 18:00:00 +0800</pubDate>
+<pubDate>Fri, 13 Mar 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html</link>
 <guid isPermaLink="true">/news/2015/03/13/peeking-into-Apache-Flinks-Engine-Room.html</guid>
 </item>
@@ -5182,7 +5359,7 @@ Hadoop clusters.  Also, basic support for accessing secured HDFS with
 a standalone Flink setup is now available.&lt;/p&gt;
 
 </description>
-<pubDate>Mon, 02 Mar 2015 18:00:00 +0800</pubDate>
+<pubDate>Mon, 02 Mar 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/03/02/february-2015-in-flink.html</link>
 <guid isPermaLink="true">/news/2015/03/02/february-2015-in-flink.html</guid>
 </item>
@@ -5827,7 +6004,7 @@ internally, fault tolerance, and performance measurements!&lt;/p&gt;
 
 &lt;p&gt;&lt;a href=&quot;#top&quot;&gt;Back to top&lt;/a&gt;&lt;/p&gt;
 </description>
-<pubDate>Mon, 09 Feb 2015 20:00:00 +0800</pubDate>
+<pubDate>Mon, 09 Feb 2015 13:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/02/09/streaming-example.html</link>
 <guid isPermaLink="true">/news/2015/02/09/streaming-example.html</guid>
 </item>
@@ -5876,7 +6053,7 @@ internally, fault tolerance, and performance measurements!&lt;/p&gt;
 
 &lt;p&gt;The improved YARN client of Flink now allows users to deploy Flink on YARN for executing a single job. Older versions only supported a long-running YARN session. The code of the YARN client has been refactored to provide an (internal) Java API for controlling YARN clusters more easily.&lt;/p&gt;
 </description>
-<pubDate>Wed, 04 Feb 2015 18:00:00 +0800</pubDate>
+<pubDate>Wed, 04 Feb 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/02/04/january-in-flink.html</link>
 <guid isPermaLink="true">/news/2015/02/04/january-in-flink.html</guid>
 </item>
@@ -5962,7 +6139,7 @@ internally, fault tolerance, and performance measurements!&lt;/p&gt;
   &lt;li&gt;Chen Xu&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Wed, 21 Jan 2015 18:00:00 +0800</pubDate>
+<pubDate>Wed, 21 Jan 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/01/21/release-0.8.html</link>
 <guid isPermaLink="true">/news/2015/01/21/release-0.8.html</guid>
 </item>
@@ -6024,7 +6201,7 @@ Flink serialization system improved a lot over time and by now surpasses the cap
 
 &lt;p&gt;The community is working hard together with the Apache infra team to migrate the Flink infrastructure to a top-level project. At the same time, the Flink community is working on the Flink 0.8.0 release which should be out very soon.&lt;/p&gt;
 </description>
-<pubDate>Tue, 06 Jan 2015 18:00:00 +0800</pubDate>
+<pubDate>Tue, 06 Jan 2015 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2015/01/06/december-in-flink.html</link>
 <guid isPermaLink="true">/news/2015/01/06/december-in-flink.html</guid>
 </item>
@@ -6111,7 +6288,7 @@ Flink serialization system improved a lot over time and by now surpasses the cap
 
 &lt;p&gt;If you want to use Flink’s Hadoop compatibility package, check out our &lt;a href=&quot;https://ci.apache.org/projects/flink/flink-docs-master/apis/batch/hadoop_compatibility.html&quot;&gt;documentation&lt;/a&gt;.&lt;/p&gt;
 </description>
-<pubDate>Tue, 18 Nov 2014 18:00:00 +0800</pubDate>
+<pubDate>Tue, 18 Nov 2014 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2014/11/18/hadoop-compatibility.html</link>
 <guid isPermaLink="true">/news/2014/11/18/hadoop-compatibility.html</guid>
 </item>
@@ -6183,7 +6360,7 @@ Flink serialization system improved a lot over time and by now surpasses the cap
   &lt;li&gt;Yingjun Wu&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Tue, 04 Nov 2014 18:00:00 +0800</pubDate>
+<pubDate>Tue, 04 Nov 2014 11:00:00 +0100</pubDate>
 <link>http://flink.apache.org/news/2014/11/04/release-0.7.0.html</link>
 <guid isPermaLink="true">/news/2014/11/04/release-0.7.0.html</guid>
 </item>
@@ -6282,7 +6459,7 @@ properties, some algorithms)&lt;/p&gt;
 &lt;p&gt;http://www.meetup.com/HandsOnProgrammingEvents/events/210504392/&lt;/p&gt;
 
 </description>
-<pubDate>Fri, 03 Oct 2014 18:00:00 +0800</pubDate>
+<pubDate>Fri, 03 Oct 2014 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2014/10/03/upcoming_events.html</link>
 <guid isPermaLink="true">/news/2014/10/03/upcoming_events.html</guid>
 </item>
@@ -6296,7 +6473,7 @@ of the system. We suggest all users of Flink to work with this newest version.&l
 
 &lt;p&gt;&lt;a href=&quot;/downloads.html&quot;&gt;Download&lt;/a&gt; the release today.&lt;/p&gt;
 </description>
-<pubDate>Fri, 26 Sep 2014 18:00:00 +0800</pubDate>
+<pubDate>Fri, 26 Sep 2014 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2014/09/26/release-0.6.1.html</link>
 <guid isPermaLink="true">/news/2014/09/26/release-0.6.1.html</guid>
 </item>
@@ -6379,7 +6556,7 @@ robust, as well as breaking API changes.&lt;/p&gt;
   &lt;li&gt;Tobias Wiens&lt;/li&gt;
 &lt;/ul&gt;
 </description>
-<pubDate>Tue, 26 Aug 2014 18:00:00 +0800</pubDate>
+<pubDate>Tue, 26 Aug 2014 12:00:00 +0200</pubDate>
 <link>http://flink.apache.org/news/2014/08/26/release-0.6.html</link>
 <guid isPermaLink="true">/news/2014/08/26/release-0.6.html</guid>
 </item>

http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/content/blog/index.html
----------------------------------------------------------------------
diff --git a/content/blog/index.html b/content/blog/index.html
index c7b2173..e9f15ae 100644
--- a/content/blog/index.html
+++ b/content/blog/index.html
@@ -142,6 +142,17 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2017/03/29/table-sql-api-update.html">From Streams to Tables and Back Again: An Update on Flink's Table & SQL API</a></h2>
+      <p>29 Mar 2017 by Timo Walther (<a href="https://twitter.com/twalthr">@twalthr</a>)</p>
+
+      <p><p>Broadening the user base and unifying batch & streaming with relational APIs</p></p>
+
+      <p><a href="/news/2017/03/29/table-sql-api-update.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2017/03/23/release-1.1.5.html">Apache Flink 1.1.5 Released</a></h2>
       <p>23 Mar 2017</p>
 
@@ -254,18 +265,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2016/05/24/stream-sql.html">Stream Processing for Everyone with SQL and Apache Flink</a></h2>
-      <p>24 May 2016 by Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
-
-      <p><p>About six months ago, the Apache Flink community started an effort to add a SQL interface for stream data analysis. SQL is <i>the</i> standard language to access and process data. Everybody who occasionally analyzes data is familiar with SQL. Consequently, a SQL interface for stream data processing will make this technology accessible to a much wider audience. Moreover, SQL support for streaming data will also enable new use cases such as interactive and ad-hoc stream analysis and significantly simplify many applications including stream ingestion and simple transformations.</p>
-<p>In this blog post, we report on the current status, architectural design, and future plans of the Apache Flink community to implement support for SQL as a language for analyzing data streams.</p></p>
-
-      <p><a href="/news/2016/05/24/stream-sql.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -298,6 +297,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2017/03/29/table-sql-api-update.html">From Streams to Tables and Back Again: An Update on Flink's Table & SQL API</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2017/03/23/release-1.1.5.html">Apache Flink 1.1.5 Released</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/content/blog/page2/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page2/index.html b/content/blog/page2/index.html
index 0a18f5f..c39e2ba 100644
--- a/content/blog/page2/index.html
+++ b/content/blog/page2/index.html
@@ -142,6 +142,18 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2016/05/24/stream-sql.html">Stream Processing for Everyone with SQL and Apache Flink</a></h2>
+      <p>24 May 2016 by Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
+
+      <p><p>About six months ago, the Apache Flink community started an effort to add a SQL interface for stream data analysis. SQL is <i>the</i> standard language to access and process data. Everybody who occasionally analyzes data is familiar with SQL. Consequently, a SQL interface for stream data processing will make this technology accessible to a much wider audience. Moreover, SQL support for streaming data will also enable new use cases such as interactive and ad-hoc stream analysis and significantly simplify many applications including stream ingestion and simple transformations.</p>
+<p>In this blog post, we report on the current status, architectural design, and future plans of the Apache Flink community to implement support for SQL as a language for analyzing data streams.</p></p>
+
+      <p><a href="/news/2016/05/24/stream-sql.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2016/05/11/release-1.0.3.html">Flink 1.0.3 Released</a></h2>
       <p>11 May 2016</p>
 
@@ -252,18 +264,6 @@
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2015/12/04/Introducing-windows.html">Introducing Stream Windows in Apache Flink</a></h2>
-      <p>04 Dec 2015 by Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
-
-      <p><p>The data analysis space is witnessing an evolution from batch to stream processing for many use cases. Although batch can be handled as a special case of stream processing, analyzing never-ending streaming data often requires a shift in the mindset and comes with its own terminology (for example, “windowing” and “at-least-once”/“exactly-once” processing). This shift and the new terminology can be quite confusing for people being new to the space of stream processing. Apache Flink is a production-ready stream processor with an easy-to-use yet very expressive API to define advanced stream analysis programs. Flink's API features very flexible window definitions on data streams which let it stand out among other open source stream processors.</p>
-<p>In this blog post, we discuss the concept of windows for stream processing, present Flink's built-in windows, and explain its support for custom windowing semantics.</p></p>
-
-      <p><a href="/news/2015/12/04/Introducing-windows.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -296,6 +296,16 @@
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2017/03/29/table-sql-api-update.html">From Streams to Tables and Back Again: An Update on Flink's Table & SQL API</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2017/03/23/release-1.1.5.html">Apache Flink 1.1.5 Released</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/content/blog/page3/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page3/index.html b/content/blog/page3/index.html
index cce9003..75ac4b3 100644
--- a/content/blog/page3/index.html
+++ b/content/blog/page3/index.html
@@ -142,6 +142,18 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/12/04/Introducing-windows.html">Introducing Stream Windows in Apache Flink</a></h2>
+      <p>04 Dec 2015 by Fabian Hueske (<a href="https://twitter.com/fhueske">@fhueske</a>)</p>
+
+      <p><p>The data analysis space is witnessing an evolution from batch to stream processing for many use cases. Although batch can be handled as a special case of stream processing, analyzing never-ending streaming data often requires a shift in the mindset and comes with its own terminology (for example, “windowing” and “at-least-once”/“exactly-once” processing). This shift and the new terminology can be quite confusing for people being new to the space of stream processing. Apache Flink is a production-ready stream processor with an easy-to-use yet very expressive API to define advanced stream analysis programs. Flink's API features very flexible window definitions on data streams which let it stand out among other open source stream processors.</p>
+<p>In this blog post, we discuss the concept of windows for stream processing, present Flink's built-in windows, and explain its support for custom windowing semantics.</p></p>
+
+      <p><a href="/news/2015/12/04/Introducing-windows.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/11/27/release-0.10.1.html">Flink 0.10.1 released</a></h2>
       <p>27 Nov 2015</p>
 
@@ -261,24 +273,6 @@ vertex-centric or gather-sum-apply to Flink dataflows.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2015/04/13/release-0.9.0-milestone1.html">Announcing Flink 0.9.0-milestone1 preview release</a></h2>
-      <p>13 Apr 2015</p>
-
-      <p><p>The Apache Flink community is pleased to announce the availability of
-the 0.9.0-milestone-1 release. The release is a preview of the
-upcoming 0.9.0 release. It contains many new features which will be
-available in the upcoming 0.9 release. Interested users are encouraged
-to try it out and give feedback. As the version number indicates, this
-release is a preview release that contains known issues.</p>
-
-</p>
-
-      <p><a href="/news/2015/04/13/release-0.9.0-milestone1.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -311,6 +305,16 @@ release is a preview release that contains known issues.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2017/03/29/table-sql-api-update.html">From Streams to Tables and Back Again: An Update on Flink's Table & SQL API</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2017/03/23/release-1.1.5.html">Apache Flink 1.1.5 Released</a></li>
       
       

http://git-wip-us.apache.org/repos/asf/flink-web/blob/661f7648/content/blog/page4/index.html
----------------------------------------------------------------------
diff --git a/content/blog/page4/index.html b/content/blog/page4/index.html
index b967101..7d66840 100644
--- a/content/blog/page4/index.html
+++ b/content/blog/page4/index.html
@@ -142,6 +142,24 @@
     <!-- Blog posts -->
     
     <article>
+      <h2 class="blog-title"><a href="/news/2015/04/13/release-0.9.0-milestone1.html">Announcing Flink 0.9.0-milestone1 preview release</a></h2>
+      <p>13 Apr 2015</p>
+
+      <p><p>The Apache Flink community is pleased to announce the availability of
+the 0.9.0-milestone-1 release. The release is a preview of the
+upcoming 0.9.0 release. It contains many new features which will be
+available in the upcoming 0.9 release. Interested users are encouraged
+to try it out and give feedback. As the version number indicates, this
+release is a preview release that contains known issues.</p>
+
+</p>
+
+      <p><a href="/news/2015/04/13/release-0.9.0-milestone1.html">Continue reading &raquo;</a></p>
+    </article>
+
+    <hr>
+    
+    <article>
       <h2 class="blog-title"><a href="/news/2015/04/07/march-in-flink.html">March 2015 in the Flink community</a></h2>
       <p>07 Apr 2015</p>
 
@@ -263,19 +281,6 @@ and offers a new API including definition of flexible windows.</p>
 
     <hr>
     
-    <article>
-      <h2 class="blog-title"><a href="/news/2014/10/03/upcoming_events.html">Upcoming Events</a></h2>
-      <p>03 Oct 2014</p>
-
-      <p><p>We are happy to announce several upcoming Flink events both in Europe and the US. Starting with a <strong>Flink hackathon in Stockholm</strong> (Oct 8-9) and a talk about Flink at the <strong>Stockholm Hadoop User Group</strong> (Oct 8). This is followed by the very first <strong>Flink Meetup in Berlin</strong> (Oct 15). In the US, there will be two Flink Meetup talks: the first one at the <strong>Pasadena Big Data User Group</strong> (Oct 29) and the second one at <strong>Silicon Valley Hands On Programming Events</strong> (Nov 4).</p>
-
-</p>
-
-      <p><a href="/news/2014/10/03/upcoming_events.html">Continue reading &raquo;</a></p>
-    </article>
-
-    <hr>
-    
 
     <!-- Pagination links -->
     
@@ -308,6 +313,16 @@ and offers a new API including definition of flexible windows.</p>
 
     <ul id="markdown-toc">
       
+      <li><a href="/news/2017/03/29/table-sql-api-update.html">From Streams to Tables and Back Again: An Update on Flink's Table & SQL API</a></li>
+      
+      
+        
+      
+    
+      
+      
+
+      
       <li><a href="/news/2017/03/23/release-1.1.5.html">Apache Flink 1.1.5 Released</a></li>