Posted to commits@flink.apache.org by ja...@apache.org on 2019/08/11 05:53:49 UTC

[flink] branch master updated: [docs-sync] Synchronize the latest documentation changes (commits to 56c6b487) into Chinese documents

This is an automated email from the ASF dual-hosted git repository.

jark pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/master by this push:
     new edc6826  [docs-sync] Synchronize the latest documentation changes (commits to 56c6b487) into Chinese documents
edc6826 is described below

commit edc682631f49d371598390b4d4d8caba32bf7741
Author: Jark Wu <im...@gmail.com>
AuthorDate: Sun Aug 11 13:51:51 2019 +0800

    [docs-sync] Synchronize the latest documentation changes (commits to 56c6b487) into Chinese documents
---
 docs/dev/api_concepts.zh.md                  |   2 +-
 docs/dev/connectors/filesystem_sink.zh.md    |   5 ++
 docs/dev/connectors/guarantees.zh.md         |   7 +-
 docs/dev/connectors/index.zh.md              |   1 +
 docs/dev/connectors/pubsub.zh.md             |  10 +--
 docs/dev/custom_serializers.zh.md            |  22 ++++--
 docs/dev/stream/state/schema_evolution.zh.md |   8 +++
 docs/dev/table/catalog.zh.md                 |  69 +++++++++---------
 docs/dev/table/config.zh.md                  | 103 +++++++++++++++++++++++++++
 docs/dev/table/index.zh.md                   |  46 +++++++++---
 docs/dev/table/sql.zh.md                     |  86 ++++++++++++++--------
 docs/dev/table/sqlClient.zh.md               |  68 ++++++++++++++++--
 docs/dev/table/tableApi.zh.md                |  65 ++++++++++-------
 docs/flinkDev/ide_setup.zh.md                |   4 +-
 docs/monitoring/metrics.zh.md                |  25 +++----
 docs/ops/config.zh.md                        |   2 +-
 docs/ops/state/large_state_tuning.zh.md      |   2 +-
 docs/ops/upgrading.zh.md                     |  38 ++++++++--
 18 files changed, 428 insertions(+), 135 deletions(-)

diff --git a/docs/dev/api_concepts.zh.md b/docs/dev/api_concepts.zh.md
index 8c2b746..a743774 100644
--- a/docs/dev/api_concepts.zh.md
+++ b/docs/dev/api_concepts.zh.md
@@ -758,7 +758,7 @@ Restrictions apply to classes containing fields that cannot be serialized, like
 resources. Classes that follow the Java Beans conventions work well in general.
 
 All classes that are not identified as POJO types (see POJO requirements above) are handled by Flink as general class types.
-Flink treats these data types as black boxes and is not able to access their content (i.e., for efficient sorting). General types are de/serialized using the serialization framework [Kryo](https://github.com/EsotericSoftware/kryo).
+Flink treats these data types as black boxes and is not able to access their content (e.g., for efficient sorting). General types are de/serialized using the serialization framework [Kryo](https://github.com/EsotericSoftware/kryo).
 
 #### Values
 
diff --git a/docs/dev/connectors/filesystem_sink.zh.md b/docs/dev/connectors/filesystem_sink.zh.md
index 6939fd7..2c2585d 100644
--- a/docs/dev/connectors/filesystem_sink.zh.md
+++ b/docs/dev/connectors/filesystem_sink.zh.md
@@ -23,6 +23,11 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+<div class="alert alert-info" markdown="span">
+The `BucketingSink` has been **deprecated since Flink 1.9** and will be removed in subsequent releases.
+Please use the [__StreamingFileSink__]({{site.baseurl}}/dev/connectors/streamfile_sink.html) instead.
+</div>
+
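For orientation, a minimal `StreamingFileSink` usage could look like the following sketch (the row-format builder and the HDFS output path are assumptions for illustration):

{% highlight java %}
DataStream<String> input = (...);

// write each record as a row into the given base path, using the default bucketing behaviour
StreamingFileSink<String> sink = StreamingFileSink
    .forRowFormat(new Path("hdfs://namenode:9000/flink/output"), new SimpleStringEncoder<String>("UTF-8"))
    .build();

input.addSink(sink);
{% endhighlight %}
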
 这个连接器可以向所有 [Hadoop FileSystem](http://hadoop.apache.org) 支持的文件系统写入分区文件。
 使用前,需要在工程里添加下面的依赖:
 
diff --git a/docs/dev/connectors/guarantees.zh.md b/docs/dev/connectors/guarantees.zh.md
index f5c1aec..ef8a72b 100644
--- a/docs/dev/connectors/guarantees.zh.md
+++ b/docs/dev/connectors/guarantees.zh.md
@@ -62,6 +62,11 @@ Please read the documentation of each connector to understand the details of the
             <td></td>
         </tr>
         <tr>
+            <td>Google PubSub</td>
+            <td>at least once</td>
+            <td></td>
+        </tr>
+        <tr>
             <td>Collections</td>
             <td>exactly once</td>
             <td></td>
@@ -104,7 +109,7 @@ state updates) of Flink coupled with bundled sinks:
     </tr>
     <tr>
         <td>Kafka producer</td>
-        <td>at least once/ exactly once</td>
+        <td>at least once / exactly once</td>
         <td>exactly once with transactional producers (v 0.11+)</td>
     </tr>
     <tr>
diff --git a/docs/dev/connectors/index.zh.md b/docs/dev/connectors/index.zh.md
index aabcdbd..e5896ab 100644
--- a/docs/dev/connectors/index.zh.md
+++ b/docs/dev/connectors/index.zh.md
@@ -46,6 +46,7 @@ under the License.
  * [RabbitMQ](rabbitmq.html) (source/sink)
  * [Apache NiFi](nifi.html) (source/sink)
  * [Twitter Streaming API](twitter.html) (source)
+ * [Google PubSub](pubsub.html) (source/sink)
 
 请记住,在使用一种连接器时,通常需要额外的第三方组件,比如:数据存储服务器或者消息队列。
 要注意这些列举的连接器是 Flink 工程的一部分,包含在发布的源码中,但是不包含在二进制发行版中。
diff --git a/docs/dev/connectors/pubsub.zh.md b/docs/dev/connectors/pubsub.zh.md
index 0ee8187..59b3e00 100644
--- a/docs/dev/connectors/pubsub.zh.md
+++ b/docs/dev/connectors/pubsub.zh.md
@@ -94,7 +94,7 @@ DataStream<SomeObject> dataStream = (...);
 
 SerializationSchema<SomeObject> serializationSchema = (...);
 SinkFunction<SomeObject> pubsubSink = PubSubSink.newBuilder()
-                                                .withDeserializationSchema(deserializer)
+                                                .withSerializationSchema(serializationSchema)
                                                 .withProjectName("project")
                                                 .withSubscriptionName("subscription")
                                                 .build()
@@ -120,18 +120,20 @@ The following example shows how you would create a source to read messages from
 <div class="codetabs" markdown="1">
 <div data-lang="java" markdown="1">
 {% highlight java %}
+String hostAndPort = "localhost:1234";
 DeserializationSchema<SomeObject> deserializationSchema = (...);
 SourceFunction<SomeObject> pubsubSource = PubSubSource.newBuilder()
                                                       .withDeserializationSchema(deserializationSchema)
                                                       .withProjectName("my-fake-project")
                                                       .withSubscriptionName("subscription")
-                                                      .withPubSubSubscriberFactory(new PubSubSubscriberFactoryForEmulator("localhost:1234", "my-fake-project", "subscription", 10, Duration.ofSeconds(15), 100))
+                                                      .withPubSubSubscriberFactory(new PubSubSubscriberFactoryForEmulator(hostAndPort, "my-fake-project", "subscription", 10, Duration.ofSeconds(15), 100))
                                                       .build();
+SerializationSchema<SomeObject> serializationSchema = (...);
 SinkFunction<SomeObject> pubsubSink = PubSubSink.newBuilder()
-                                                .withDeserializationSchema(deserializationSchema)
+                                                .withSerializationSchema(serializationSchema)
                                                 .withProjectName("my-fake-project")
                                                 .withSubscriptionName("subscription")
-                                                .withHostAndPortForEmulator(getPubSubHostPort())
+                                                .withHostAndPortForEmulator(hostAndPort)
                                                 .build()
 
 StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
diff --git a/docs/dev/custom_serializers.zh.md b/docs/dev/custom_serializers.zh.md
index d294019..4b75998 100644
--- a/docs/dev/custom_serializers.zh.md
+++ b/docs/dev/custom_serializers.zh.md
@@ -62,13 +62,20 @@ env.getConfig().addDefaultKryoSerializer(MyCustomType.class, TBaseSerializer.cla
 <dependency>
 	<groupId>com.twitter</groupId>
 	<artifactId>chill-thrift</artifactId>
-	<version>0.5.2</version>
+	<version>0.7.6</version>
+	<!-- exclusions for dependency conversion -->
+	<exclusions>
+		<exclusion>
+			<groupId>com.esotericsoftware.kryo</groupId>
+			<artifactId>kryo</artifactId>
+		</exclusion>
+	</exclusions>
 </dependency>
 <!-- libthrift is required by chill-thrift -->
 <dependency>
 	<groupId>org.apache.thrift</groupId>
 	<artifactId>libthrift</artifactId>
-	<version>0.6.1</version>
+	<version>0.11.0</version>
 	<exclusions>
 		<exclusion>
 			<groupId>javax.servlet</groupId>
@@ -90,13 +97,20 @@ env.getConfig().addDefaultKryoSerializer(MyCustomType.class, TBaseSerializer.cla
 <dependency>
 	<groupId>com.twitter</groupId>
 	<artifactId>chill-protobuf</artifactId>
-	<version>0.5.2</version>
+	<version>0.7.6</version>
+	<!-- exclusions for dependency conversion -->
+	<exclusions>
+		<exclusion>
+			<groupId>com.esotericsoftware.kryo</groupId>
+			<artifactId>kryo</artifactId>
+		</exclusion>
+	</exclusions>
 </dependency>
 <!-- We need protobuf for chill-protobuf -->
 <dependency>
 	<groupId>com.google.protobuf</groupId>
 	<artifactId>protobuf-java</artifactId>
-	<version>2.5.0</version>
+	<version>3.7.0</version>
 </dependency>
 
 {% endhighlight %}
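
For reference, registering the Protobuf serializer with Kryo follows the same pattern as the Thrift snippet above; a minimal sketch, assuming Chill's `ProtobufSerializer` class and the `registerTypeWithKryoSerializer` hook:

{% highlight java %}
ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

// register the Chill Protobuf serializer for the generated message type
env.getConfig().registerTypeWithKryoSerializer(MyCustomType.class, ProtobufSerializer.class);
{% endhighlight %}
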
diff --git a/docs/dev/stream/state/schema_evolution.zh.md b/docs/dev/stream/state/schema_evolution.zh.md
index d02cc7a..71f959f 100644
--- a/docs/dev/stream/state/schema_evolution.zh.md
+++ b/docs/dev/stream/state/schema_evolution.zh.md
@@ -95,4 +95,12 @@ Flink 完全支持 Avro 状态类型的升级,只要数据结构的修改是
 
 一个例外是如果新的 Avro 数据 schema 生成的类无法被重定位或者使用了不同的命名空间,在作业恢复时状态数据会被认为是不兼容的。
 
+{% warn Attention %} Schema evolution of keys is not supported.
+
+Example: the RocksDB state backend relies on binary object identity rather than the `hashCode` method implementation. Any change to the key object's structure could lead to non-deterministic behaviour.
+
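As a concrete illustration of what "key" means here, consider a keyed stream whose key is a user-defined class (all class and method names below are hypothetical):

{% highlight java %}
// With the RocksDB state backend, state entries are looked up by the serialized
// bytes of this key object, not by its hashCode()/equals() implementation.
public class DeviceKey {
    public String deviceId;
    public int shardId;
}

DataStream<Event> events = (...);
events.keyBy(event -> event.getDeviceKey())
      .process(new MyKeyedProcessFunction());

// Adding, removing, or renaming a field of DeviceKey changes its serialized form,
// so previously written keyed state can no longer be found after restoring a savepoint.
{% endhighlight %}
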
+{% warn Attention %} **Kryo** cannot be used for schema evolution.
+
+When Kryo is used, the framework has no way to verify whether any incompatible changes have been made.
+
 {% top %}
diff --git a/docs/dev/table/catalog.zh.md b/docs/dev/table/catalog.zh.md
index c4a29cc..a1920d8 100644
--- a/docs/dev/table/catalog.zh.md
+++ b/docs/dev/table/catalog.zh.md
@@ -63,7 +63,9 @@ select * from mydb2.myTable2
 
 `CatalogManager` always has a built-in `GenericInMemoryCatalog` named `default_catalog`, which has a built-in default database named `default_database`. If no other catalog and database are explicitly set, they will be the current catalog and current database by default. All temp meta-objects, such as those defined by `TableEnvironment#registerTable`  are registered to this catalog. 
 
-Users can set current catalog and database via `TableEnvironment.useCatalog(...)` and `TableEnvironment.useDatabase(...)` in Table API, or `USE CATALOG ...` and `USE DATABASE ...` in Flink SQL.
+Users can set current catalog and database via `TableEnvironment.useCatalog(...)` and
+`TableEnvironment.useDatabase(...)` in Table API, or `USE CATALOG ...` and `USE ...` in Flink SQL
+ Client.
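
In the Table API this looks like the following minimal sketch (the catalog and database names are illustrative):

{% highlight java %}
TableEnvironment tableEnv = (...);

// make "myCatalog" and its database "myDb" current for this session
tableEnv.useCatalog("myCatalog");
tableEnv.useDatabase("myDb");

// equivalent statements in the SQL Client:
//   USE CATALOG myCatalog;
//   USE myDb;
{% endhighlight %}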
 
 
 Catalog Types
@@ -95,25 +97,17 @@ The ultimate goal for integrating Flink with Hive metadata is that:
 
 2. Meta-objects created by `HiveCatalog` can be written back to Hive metastore such that Hive and other Hive-compatible applications can consume.
 
-## User-configured Catalog
-
-Catalogs are pluggable. Users can develop custom catalogs by implementing the `Catalog` interface, which defines a set of APIs for reading and writing catalog meta-objects such as database, tables, partitions, views, and functions.
-
-
-HiveCatalog
------------
-
-## Supported Hive Versions
+### Supported Hive Versions
 
 Flink's `HiveCatalog` officially supports Hive 2.3.4 and 1.2.1.
 
 The Hive version is explicitly specified as a String, either by passing it to the constructor when creating `HiveCatalog` instances directly in the Table API or by specifying it in the yaml config file in SQL CLI. The supported Hive version strings are `2.3.4` and `1.2.1`.
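
A sketch of the Table API variant; the exact constructor arguments (catalog name, default database, Hive conf directory, version string) are assumptions here:

{% highlight java %}
String name            = "myhive";
String defaultDatabase = "default";
String hiveConfDir     = "/opt/hive-conf";
String version         = "2.3.4"; // or "1.2.1"

Catalog hiveCatalog = new HiveCatalog(name, defaultDatabase, hiveConfDir, version);
tableEnv.registerCatalog(name, hiveCatalog);
{% endhighlight %}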
 
-## Case Insensitive to Meta-Object Names
+### Case Insensitive to Meta-Object Names
 
 Note that Hive Metastore stores meta-object names in lower cases. Thus, unlike `GenericInMemoryCatalog`, `HiveCatalog` is case-insensitive to meta-object names, and users need to be cautious on that.
 
-## Dependencies
+### Dependencies
 
 To use `HiveCatalog`, users need to include the following dependency jars.
 
@@ -138,6 +132,7 @@ For Hive 1.2.1, users need:
 
 - hive-metastore-1.2.1.jar
 - hive-exec-1.2.1.jar
+- libfb303-0.9.3.jar
 
 
 // Hadoop dependencies
@@ -154,7 +149,7 @@ If you don't have Hive dependencies at hand, they can be found at [mvnrepostory.
 Note that users need to make sure their Hive and Hadoop versions are compatible. Otherwise, there may be potential problems; for example, Hive 2.3.4 is compiled against Hadoop 2.7.2, so you may run into problems when using Hive 2.3.4 with Hadoop 2.4.
 
 
-## Data Type Mapping
+### Data Type Mapping
 
 For both Flink and Hive tables, `HiveCatalog` stores table schemas by mapping them to Hive table schemas with Hive data types. Types are dynamically mapped back on read.
 
@@ -162,27 +157,28 @@ Currently `HiveCatalog` supports most Flink data types with the following mappin
 
 |  Flink Data Type  |  Hive Data Type  |
 |---|---|
-| CHAR(p)       |  char(p)* |
-| VARCHAR(p)    |  varchar(p)** |
-| STRING        |  string |
-| BOOLEAN       |  boolean |
-| BYTE          |  tinyint |
-| SHORT         |  smallint |
-| INT           |  int |
-| BIGINT        |  long |
-| FLOAT         |  float |
-| DOUBLE        |  double |
-| DECIMAL(p, s) |  decimal(p, s) |
-| DATE          |  date |
-| TIMESTAMP_WITHOUT_TIME_ZONE |  Timestamp |
+| CHAR(p)       |  CHAR(p)* |
+| VARCHAR(p)    |  VARCHAR(p)** |
+| STRING        |  STRING |
+| BOOLEAN       |  BOOLEAN |
+| TINYINT       |  TINYINT |
+| SMALLINT      |  SMALLINT |
+| INT           |  INT |
+| BIGINT        |  LONG |
+| FLOAT         |  FLOAT |
+| DOUBLE        |  DOUBLE |
+| DECIMAL(p, s) |  DECIMAL(p, s) |
+| DATE          |  DATE |
+| TIMESTAMP_WITHOUT_TIME_ZONE |  TIMESTAMP |
 | TIMESTAMP_WITH_TIME_ZONE |  N/A |
 | TIMESTAMP_WITH_LOCAL_TIME_ZONE |  N/A |
-| INTERVAL |  N/A |
-| BINARY        |  binary |
-| VARBINARY(p)  |  binary |
-| ARRAY\<E>     |  list\<E> |
-| MAP<K, V>     |  map<K, V> |
-| ROW           |  struct |
+| INTERVAL      |   N/A*** |
+| BINARY        |   N/A |
+| VARBINARY(p)  |   N/A |
+| BYTES         |   BINARY |
+| ARRAY\<E>     |  ARRAY\<E> |
+| MAP<K, V>     |  MAP<K, V> ****|
+| ROW           |  STRUCT |
 | MULTISET      |  N/A |
 
 
@@ -194,6 +190,13 @@ The following limitations in Hive's data types impact the mapping between Flink
 
 \** maximum length is 65535
 
+\*** Flink's `INTERVAL` type cannot be mapped to Hive's `INTERVAL` type for now.
+
+\**** Hive's map key type only allows primitive types, while Flink's map key can be any data type.
+
+## User-configured Catalog
+
+Catalogs are pluggable. Users can develop custom catalogs by implementing the `Catalog` interface, which defines a set of APIs for reading and writing catalog meta-objects such as database, tables, partitions, views, and functions.
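
A minimal sketch of such a custom catalog; to keep it short, it extends the built-in `GenericInMemoryCatalog` instead of implementing every method of `Catalog` from scratch (the class name, constructor, and chosen override are illustrative):

{% highlight java %}
public class MyCustomCatalog extends GenericInMemoryCatalog {

    public MyCustomCatalog(String name, String defaultDatabase) {
        super(name, defaultDatabase);
    }

    @Override
    public List<String> listDatabases() {
        // e.g. merge databases discovered from an external metadata service
        return super.listDatabases();
    }
}
{% endhighlight %}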
 
 Catalog Registration
 --------------------
@@ -333,7 +336,7 @@ default_catalog
 # ------ Set default catalog and database ------
 
 Flink SQL> use catalog myHive1;
-Flink SQL> use database myDb;
+Flink SQL> use myDb;
 
 # ------ Access Hive metadata ------
 
diff --git a/docs/dev/table/config.zh.md b/docs/dev/table/config.zh.md
new file mode 100644
index 0000000..fa1f849
--- /dev/null
+++ b/docs/dev/table/config.zh.md
@@ -0,0 +1,103 @@
+---
+title: "配置"
+nav-parent_id: tableapi
+nav-pos: 150
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+By default, the Table & SQL API is preconfigured for producing accurate results with acceptable
+performance.
+
+Depending on the requirements of a table program, it might be necessary to adjust
+certain parameters for optimization. For example, unbounded streaming programs may need to ensure
+that the required state size is capped (see [streaming concepts](./streaming/query_configuration.html)).
+
+* This will be replaced by the TOC
+{:toc}
+
+### Overview
+
+In every table environment, the `TableConfig` offers options for configuring the current session.
+
+For common or important configuration options, the `TableConfig` provides getters and setters methods
+with detailed inline documentation.
+
+For more advanced configuration, users can directly access the underlying key-value map. The following
+sections list all available options that can be used to adjust Flink Table & SQL API programs.
+
+<span class="label label-danger">Attention</span> Because options are read at different points in time
+when performing operations, it is recommended to set configuration options as early as possible after
+instantiating a table environment.
+
+<div class="codetabs" markdown="1">
+<div data-lang="java" markdown="1">
+{% highlight java %}
+// instantiate table environment
+TableEnvironment tEnv = ...
+
+tEnv.getConfig()        // access high-level configuration
+  .getConfiguration()   // set low-level key-value options
+  .setString("table.exec.mini-batch.enabled", "true")
+  .setString("table.exec.mini-batch.allow-latency", "5 s")
+  .setString("table.exec.mini-batch.size", "5000");
+{% endhighlight %}
+</div>
+
+<div data-lang="scala" markdown="1">
+{% highlight scala %}
+// instantiate table environment
+val tEnv: TableEnvironment = ...
+
+tEnv.getConfig         // access high-level configuration
+  .getConfiguration    // set low-level key-value options
+  .setString("table.exec.mini-batch.enabled", "true")
+  .setString("table.exec.mini-batch.allow-latency", "5 s")
+  .setString("table.exec.mini-batch.size", "5000")
+{% endhighlight %}
+</div>
+
+<div data-lang="python" markdown="1">
+{% highlight python %}
+# instantiate table environment
+t_env = ...
+
+# access high-level configuration and set low-level key-value options
+configuration = t_env.get_config().get_configuration()
+configuration.set_string("table.exec.mini-batch.enabled", "true")
+configuration.set_string("table.exec.mini-batch.allow-latency", "5 s")
+configuration.set_string("table.exec.mini-batch.size", "5000")
+{% endhighlight %}
+</div>
+</div>
+
+<span class="label label-danger">Attention</span> Currently, key-value options are only supported
+for the Blink planner.
+
+### Execution Options
+
+The following options can be used to tune the performance of the query execution.
+
+{% include generated/execution_config_configuration.html %}
+
+### Optimizer Options
+
+The following options can be used to adjust the behavior of the query optimizer to get a better execution plan.
+
+{% include generated/optimizer_config_configuration.html %}
diff --git a/docs/dev/table/index.zh.md b/docs/dev/table/index.zh.md
index 43ea5e2..3ef2a7c 100644
--- a/docs/dev/table/index.zh.md
+++ b/docs/dev/table/index.zh.md
@@ -34,7 +34,13 @@ The Table API and the SQL interfaces are tightly integrated with each other as w
 Dependency Structure
 --------------------
 
-All Table API and SQL components are bundled in the `flink-table` Maven artifact.
+Starting from Flink 1.9, Flink provides two different planner implementations for evaluating Table & SQL API programs: the Blink planner and the old planner that was available before Flink 1.9. Planners are responsible for
+translating relational operators into an executable, optimized Flink job. Both of the planners come with different optimization rules and runtime classes.
+They may also differ in the set of supported features.
+
+<span class="label label-danger">Attention</span> For production use cases, we recommend the old planner that was present before Flink 1.9 for now.
+
+All Table API and SQL components are bundled in the `flink-table` or `flink-table-blink` Maven artifacts.
 
 The following dependencies are relevant for most projects:
 
@@ -43,35 +49,52 @@ The following dependencies are relevant for most projects:
 * `flink-table-api-scala`: The Table & SQL API for pure table programs using the Scala programming language (in early development stage, not recommended!).
 * `flink-table-api-java-bridge`: The Table & SQL API with DataStream/DataSet API support using the Java programming language.
 * `flink-table-api-scala-bridge`: The Table & SQL API with DataStream/DataSet API support using the Scala programming language.
-* `flink-table-planner`: The table program planner and runtime.
-* `flink-table-uber`: Packages the modules above into a distribution for most Table & SQL API use cases. The uber JAR file `flink-table*.jar` is located in the `/opt` directory of a Flink release and can be moved to `/lib` if desired.
+* `flink-table-planner`: The table program planner and runtime. This was the only planner of Flink before the 1.9 release. It is still the recommended one.
+* `flink-table-planner-blink`: The new Blink planner.
+* `flink-table-runtime-blink`: The new Blink runtime.
+* `flink-table-uber`: Packages the API modules above plus the old planner into a distribution for most Table & SQL API use cases. The uber JAR file `flink-table-*.jar` is located in the `/lib` directory of a Flink release by default.
+* `flink-table-uber-blink`: Packages the API modules above plus the Blink specific modules into a distribution for most Table & SQL API use cases. The uber JAR file `flink-table-blink-*.jar` is located in the `/lib` directory of a Flink release by default.
+
+See the [common API](common.html) page for more information about how to switch between the old and new Blink planner in table programs.
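
Switching planners happens when the table environment is created; a minimal sketch, assuming the `EnvironmentSettings` builder API:

{% highlight java %}
// Blink planner in streaming mode
EnvironmentSettings settings = EnvironmentSettings.newInstance()
    .useBlinkPlanner()        // or .useOldPlanner() for the planner available before Flink 1.9
    .inStreamingMode()
    .build();

TableEnvironment tableEnv = TableEnvironment.create(settings);
{% endhighlight %}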
 
 ### Table Program Dependencies
 
-The following dependencies must be added to a project in order to use the Table API & SQL for defining pipelines:
+Depending on the target programming language, you need to add the Java or Scala API to a project in order to use the Table API & SQL for defining pipelines:
 
 {% highlight xml %}
+<!-- Either... -->
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-planner{{ site.scala_version_suffix }}</artifactId>
+  <artifactId>flink-table-api-java-bridge{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
+  <scope>provided</scope>
+</dependency>
+<!-- or... -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-table-api-scala-bridge{{ site.scala_version_suffix }}</artifactId>
+  <version>{{site.version}}</version>
+  <scope>provided</scope>
 </dependency>
 {% endhighlight %}
 
-Additionally, depending on the target programming language, you need to add the Java or Scala API.
+Additionally, if you want to run Table API & SQL programs locally within your IDE, you must add one of the
+following sets of modules, depending on which planner you want to use:
 
 {% highlight xml %}
-<!-- Either... -->
+<!-- Either... (for the old planner that was available before Flink 1.9) -->
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-api-java-bridge{{ site.scala_version_suffix }}</artifactId>
+  <artifactId>flink-table-planner{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
+  <scope>provided</scope>
 </dependency>
-<!-- or... -->
+<!-- or... (for the new Blink planner) -->
 <dependency>
   <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-api-scala-bridge{{ site.scala_version_suffix }}</artifactId>
+  <artifactId>flink-table-planner-blink{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
+  <scope>provided</scope>
 </dependency>
 {% endhighlight %}
 
@@ -82,6 +105,7 @@ Internally, parts of the table ecosystem are implemented in Scala. Therefore, pl
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-streaming-scala{{ site.scala_version_suffix }}</artifactId>
   <version>{{site.version}}</version>
+  <scope>provided</scope>
 </dependency>
 {% endhighlight %}
 
@@ -94,6 +118,7 @@ If you want to implement a [custom format]({{ site.baseurl }}/dev/table/sourceSi
   <groupId>org.apache.flink</groupId>
   <artifactId>flink-table-common</artifactId>
   <version>{{site.version}}</version>
+  <scope>provided</scope>
 </dependency>
 {% endhighlight %}
 
@@ -110,6 +135,7 @@ Where to go next?
 -----------------
 
 * [Concepts & Common API]({{ site.baseurl }}/dev/table/common.html): Shared concepts and APIs of the Table API and SQL.
+* [Data Types]({{ site.baseurl }}/dev/table/types.html): Lists pre-defined data types and their properties.
 * [Streaming Concepts]({{ site.baseurl }}/dev/table/streaming): Streaming-specific documentation for the Table API or SQL such as configuration of time attributes and handling of updating results.
 * [Connect to External Systems]({{ site.baseurl }}/dev/table/functions.html): Available connectors and formats for reading and writing data to external systems.
 * [Table API]({{ site.baseurl }}/dev/table/tableApi.html): Supported operations and API for the Table API.
diff --git a/docs/dev/table/sql.zh.md b/docs/dev/table/sql.zh.md
index 60c4571..4efddfb 100644
--- a/docs/dev/table/sql.zh.md
+++ b/docs/dev/table/sql.zh.md
@@ -103,7 +103,6 @@ tableEnv.sqlUpdate(
 {% endhighlight %}
 </div>
 
-
 <div data-lang="python" markdown="1">
 {% highlight python %}
 env = StreamExecutionEnvironment.get_execution_environment()
@@ -143,7 +142,7 @@ The following BNF-grammar describes the superset of supported SQL features in ba
 insert:
   INSERT INTO tableReference
   query
-  
+
 query:
   values
   | {
@@ -169,7 +168,7 @@ select:
   [ GROUP BY { groupItem [, groupItem ]* } ]
   [ HAVING booleanExpression ]
   [ WINDOW windowName AS windowSpec [, windowName AS windowSpec ]* ]
-  
+
 selectWithoutFrom:
   SELECT [ ALL | DISTINCT ]
   { * | projectItem [, projectItem ]* }
@@ -280,6 +279,57 @@ String literals must be enclosed in single quotes (e.g., `SELECT 'Hello World'`)
 Operations
 --------------------
 
+### Show and Use
+
+<div markdown="1">
+<table class="table table-bordered">
+  <thead>
+    <tr>
+      <th class="text-left" style="width: 20%">Operation</th>
+      <th class="text-center">Description</th>
+    </tr>
+  </thead>
+  <tbody>
+    <tr>
+      <td>
+        <strong>Show</strong><br>
+        <span class="label label-primary">Batch</span> <span class="label label-primary">Streaming</span>
+      </td>
+      <td>
+        <p>Show all catalogs</p>
+{% highlight sql %}
+SHOW CATALOGS;
+{% endhighlight %}
+		<p>Show all databases in the current catalog</p>
+{% highlight sql %}
+SHOW DATABASES;
+{% endhighlight %}
+		<p>Show all tables in the current database in the current catalog</p>
+{% highlight sql %}
+SHOW TABLES;
+{% endhighlight %}
+      </td>
+    </tr>
+    <tr>
+      <td>
+        <strong>Use</strong><br>
+        <span class="label label-primary">Batch</span> <span class="label label-primary">Streaming</span>
+      </td>
+      <td>
+			<p>Set current catalog for the session </p>
+{% highlight sql %}
+USE CATALOG mycatalog;
+{% endhighlight %}
+            <p>Set current database of the current catalog for the session</p>
+{% highlight sql %}
+USE mydatabase;
+{% endhighlight %}
+      </td>
+    </tr>
+  </tbody>
+</table>
+</div>
+
 ### Scan, Projection, and Filter
 
 <div markdown="1">
@@ -391,11 +441,11 @@ SELECT COUNT(amount) OVER (
 FROM Orders
 
 SELECT COUNT(amount) OVER w, SUM(amount) OVER w
-FROM Orders 
+FROM Orders
 WINDOW w AS (
   PARTITION BY user
   ORDER BY proctime
-  ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)  
+  ROWS BETWEEN 2 PRECEDING AND CURRENT ROW)
 {% endhighlight %}
       </td>
     </tr>
@@ -1018,29 +1068,7 @@ MATCH_RECOGNIZE (
 Data Types
 ----------
 
-The SQL runtime is built on top of Flink's DataSet and DataStream APIs. Internally, it also uses Flink's `TypeInformation` to define data types. Fully supported types are listed in `org.apache.flink.table.api.Types`. The following table summarizes the relation between SQL Types, Table API types, and the resulting Java class.
-
-| Table API              | SQL                         | Java type              |
-| :--------------------- | :-------------------------- | :--------------------- |
-| `Types.STRING`         | `VARCHAR`                   | `java.lang.String`     |
-| `Types.BOOLEAN`        | `BOOLEAN`                   | `java.lang.Boolean`    |
-| `Types.BYTE`           | `TINYINT`                   | `java.lang.Byte`       |
-| `Types.SHORT`          | `SMALLINT`                  | `java.lang.Short`      |
-| `Types.INT`            | `INTEGER, INT`              | `java.lang.Integer`    |
-| `Types.LONG`           | `BIGINT`                    | `java.lang.Long`       |
-| `Types.FLOAT`          | `REAL, FLOAT`               | `java.lang.Float`      |
-| `Types.DOUBLE`         | `DOUBLE`                    | `java.lang.Double`     |
-| `Types.DECIMAL`        | `DECIMAL`                   | `java.math.BigDecimal` |
-| `Types.SQL_DATE`       | `DATE`                      | `java.sql.Date`        |
-| `Types.SQL_TIME`       | `TIME`                      | `java.sql.Time`        |
-| `Types.SQL_TIMESTAMP`  | `TIMESTAMP(3)`              | `java.sql.Timestamp`   |
-| `Types.INTERVAL_MONTHS`| `INTERVAL YEAR TO MONTH`    | `java.lang.Integer`    |
-| `Types.INTERVAL_MILLIS`| `INTERVAL DAY TO SECOND(3)` | `java.lang.Long`       |
-| `Types.PRIMITIVE_ARRAY`| `ARRAY`                     | e.g. `int[]`           |
-| `Types.OBJECT_ARRAY`   | `ARRAY`                     | e.g. `java.lang.Byte[]`|
-| `Types.MAP`            | `MAP`                       | `java.util.HashMap`    |
-| `Types.MULTISET`       | `MULTISET`                  | e.g. `java.util.HashMap<String, Integer>` for a multiset of `String` |
-| `Types.ROW`            | `ROW`                       | `org.apache.flink.types.Row` |
+Please see the dedicated page about [data types](types.html).
 
 Generic types and (nested) composite types (e.g., POJOs, tuples, rows, Scala case classes) can be fields of a row as well.
 
@@ -1057,7 +1085,7 @@ Although not every SQL feature is implemented yet, some string combinations are
 
 {% highlight sql %}
 
-A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICS, CHARACTERS, CHARACTER_LENGTH, CHA [...]
+A, ABS, ABSOLUTE, ACTION, ADA, ADD, ADMIN, AFTER, ALL, ALLOCATE, ALLOW, ALTER, ALWAYS, AND, ANY, ARE, ARRAY, AS, ASC, ASENSITIVE, ASSERTION, ASSIGNMENT, ASYMMETRIC, AT, ATOMIC, ATTRIBUTE, ATTRIBUTES, AUTHORIZATION, AVG, BEFORE, BEGIN, BERNOULLI, BETWEEN, BIGINT, BINARY, BIT, BLOB, BOOLEAN, BOTH, BREADTH, BY, BYTES, C, CALL, CALLED, CARDINALITY, CASCADE, CASCADED, CASE, CAST, CATALOG, CATALOG_NAME, CEIL, CEILING, CENTURY, CHAIN, CHAR, CHARACTER, CHARACTERISTICS, CHARACTERS, CHARACTER_LENG [...]
 
 {% endhighlight %}
 
diff --git a/docs/dev/table/sqlClient.zh.md b/docs/dev/table/sqlClient.zh.md
index b1d8f4c..b942bf7 100644
--- a/docs/dev/table/sqlClient.zh.md
+++ b/docs/dev/table/sqlClient.zh.md
@@ -157,7 +157,7 @@ Mode "embedded" submits Flink jobs from the local machine.
 
 ### Environment Files
 
-A SQL query needs a configuration environment in which it is executed. The so-called *environment files* define available table sources and sinks, external catalogs, user-defined functions, and other properties required for execution and deployment.
+A SQL query needs a configuration environment in which it is executed. The so-called *environment files* define available catalogs, table sources and sinks, user-defined functions, and other properties required for execution and deployment.
 
 Every environment file is a regular [YAML file](http://yaml.org/). An example of such a file is presented below.
 
@@ -199,9 +199,24 @@ functions:
       - 7.6
       - false
 
-# Execution properties allow for changing the behavior of a table program.
+# Define available catalogs
+
+catalogs:
+   - name: catalog_1
+     type: hive
+     property-version: 1
+     hive-conf-dir: ...
+   - name: catalog_2
+     type: hive
+     property-version: 1
+     default-database: mydb2
+     hive-conf-dir: ...
+     hive-version: 1.2.1
+
+# Properties that change the fundamental execution behavior of a table program.
 
 execution:
+  planner: old                      # optional: either 'old' (default) or 'blink'
   type: streaming                   # required: execution mode either 'batch' or 'streaming'
   result-mode: table                # required: either 'table' or 'changelog'
   max-table-result-rows: 1000000    # optional: maximum number of maintained rows in
@@ -212,10 +227,22 @@ execution:
   max-parallelism: 16               # optional: Flink's maximum parallelism (128 by default)
   min-idle-state-retention: 0       # optional: table program's minimum idle state time
   max-idle-state-retention: 0       # optional: table program's maximum idle state time
+  current-catalog: catalog_1        # optional: name of the current catalog of the session ('default_catalog' by default)
+  current-database: mydb1           # optional: name of the current database of the current catalog
+                                    #   (default database of the current catalog by default)
   restart-strategy:                 # optional: restart strategy
     type: fallback                  #   "fallback" to global restart strategy by default
 
-# Deployment properties allow for describing the cluster to which table programs are submitted to.
+# Configuration options for adjusting and tuning table programs.
+
+# A full list of options and their default values can be found
+# on the dedicated "Configuration" page.
+configuration:
+  table.optimizer.join-reorder-enabled: true
+  table.exec.spill-compression.enabled: true
+  table.exec.spill-compression.block-size: 128kb
+
+# Properties that describe the cluster to which table programs are submitted.
 
 deployment:
   response-timeout: 5000
@@ -226,9 +253,10 @@ This configuration:
 - defines an environment with a table source `MyTableSource` that reads from a CSV file,
 - defines a view `MyCustomView` that declares a virtual table using a SQL query,
 - defines a user-defined function `myUDF` that can be instantiated using the class name and two constructor parameters,
-- specifies a parallelism of 1 for queries executed in this streaming environment,
-- specifies an event-time characteristic, and
-- runs queries in the `table` result mode.
+- connects to two Hive catalogs and uses `catalog_1` as the current catalog with `mydb1` as the current database of the catalog,
+- uses the old planner in streaming mode for running statements with event-time characteristic and a parallelism of 1,
+- runs exploratory queries in the `table` result mode,
+- and makes some planner adjustments around join reordering and spilling via configuration options.
 
 Depending on the use case, a configuration can be split into multiple files. Therefore, environment files can be created for general purposes (*defaults environment file* using `--defaults`) as well as on a per-session basis (*session environment file* using `--environment`). Every CLI session is initialized with the default properties followed by the session properties. For example, the defaults environment file could specify all table sources that should be available for querying in ev [...]
 
@@ -410,6 +438,34 @@ This process can be recursively performed until all the constructor parameters a
 
 {% top %}
 
+Catalogs
+--------
+
+Catalogs can be defined as a set of YAML properties and are automatically registered to the environment upon starting SQL Client.
+
+Users can specify which catalog they want to use as the current catalog in SQL CLI, and which database of the catalog they want to use as the current database.
+
+{% highlight yaml %}
+catalogs:
+   - name: catalog_1
+     type: hive
+     property-version: 1
+     default-database: mydb2
+     hive-version: 1.2.1
+     hive-conf-dir: <path of Hive conf directory>
+   - name: catalog_2
+     type: hive
+     property-version: 1
+     hive-conf-dir: <path of Hive conf directory>
+
+execution:
+   ...
+   current-catalog: catalog_1
+   current-database: mydb1
+{% endhighlight %}
+
+For more information about catalogs, see [Catalogs]({{ site.baseurl }}/dev/table/catalog.html).
+
 Detached SQL Queries
 --------------------
 
diff --git a/docs/dev/table/tableApi.zh.md b/docs/dev/table/tableApi.zh.md
index 28802a6..d99523b 100644
--- a/docs/dev/table/tableApi.zh.md
+++ b/docs/dev/table/tableApi.zh.md
@@ -2660,6 +2660,26 @@ Table table = input
 
     <tr>
       <td>
+        <strong>Group Window Aggregate</strong><br>
+        <span class="label label-primary">Batch</span> <span class="label label-primary">Streaming</span>
+      </td>
+      <td>
+        <p>Groups and aggregates a table on a <a href="#group-windows">group window</a> and possibly one or more grouping keys. You have to close the "aggregate" with a select statement, and the select statement does not support "*" or aggregate functions.</p>
+{% highlight java %}
+AggregateFunction myAggFunc = new MyMinMax();
+tableEnv.registerFunction("myAggFunc", myAggFunc);
+
+Table table = input
+    .window(Tumble.over("5.minutes").on("rowtime").as("w")) // define window
+    .groupBy("key, w") // group by key and window
+    .aggregate("myAggFunc(a) as (x, y)")
+    .select("key, x, y, w.start, w.end"); // access window properties and aggregate results
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td>
         <strong>FlatAggregate</strong><br>
         <span class="label label-primary">Streaming</span><br>
         <span class="label label-info">Result Updating</span>
@@ -2854,7 +2874,7 @@ class MyMinMax extends AggregateFunction[Row, MyMinMaxAcc] {
   }
 }
 
-val myAggFunc: AggregateFunction = new MyMinMax
+val myAggFunc = new MyMinMax
 val table = input
   .groupBy('key)
   .aggregate(myAggFunc('a) as ('x, 'y))
@@ -2865,6 +2885,25 @@ val table = input
 
     <tr>
       <td>
+        <strong>Group Window Aggregate</strong><br>
+        <span class="label label-primary">Batch</span> <span class="label label-primary">Streaming</span>
+      </td>
+      <td>
+        <p>Groups and aggregates a table on a <a href="#group-windows">group window</a> and possibly one or more grouping keys. You have to close the "aggregate" with a select statement, and the select statement does not support "*" or aggregate functions.</p>
+{% highlight scala %}
+val myAggFunc = new MyMinMax
+val table = input
+    .window(Tumble over 5.minutes on 'rowtime as 'w) // define window
+    .groupBy('key, 'w) // group by key and window
+    .aggregate(myAggFunc('a) as ('x, 'y))
+    .select('key, 'x, 'y, 'w.start, 'w.end) // access window properties and aggregate results
+
+{% endhighlight %}
+      </td>
+    </tr>
+
+    <tr>
+      <td>
         <strong>FlatAggregate</strong><br>
         <span class="label label-primary">Streaming</span><br>
         <span class="label label-info">Result Updating</span>
@@ -2966,29 +3005,7 @@ val result = orders
 Data Types
 ----------
 
-The Table API is built on top of Flink's DataSet and DataStream APIs. Internally, it also uses Flink's `TypeInformation` to define data types. Fully supported types are listed in `org.apache.flink.table.api.Types`. The following table summarizes the relation between Table API types, SQL types, and the resulting Java class.
-
-| Table API              | SQL                         | Java type              |
-| :--------------------- | :-------------------------- | :--------------------- |
-| `Types.STRING`         | `VARCHAR`                   | `java.lang.String`     |
-| `Types.BOOLEAN`        | `BOOLEAN`                   | `java.lang.Boolean`    |
-| `Types.BYTE`           | `TINYINT`                   | `java.lang.Byte`       |
-| `Types.SHORT`          | `SMALLINT`                  | `java.lang.Short`      |
-| `Types.INT`            | `INTEGER, INT`              | `java.lang.Integer`    |
-| `Types.LONG`           | `BIGINT`                    | `java.lang.Long`       |
-| `Types.FLOAT`          | `REAL, FLOAT`               | `java.lang.Float`      |
-| `Types.DOUBLE`         | `DOUBLE`                    | `java.lang.Double`     |
-| `Types.DECIMAL`        | `DECIMAL`                   | `java.math.BigDecimal` |
-| `Types.SQL_DATE`       | `DATE`                      | `java.sql.Date`        |
-| `Types.SQL_TIME`       | `TIME`                      | `java.sql.Time`        |
-| `Types.SQL_TIMESTAMP`  | `TIMESTAMP(3)`              | `java.sql.Timestamp`   |
-| `Types.INTERVAL_MONTHS`| `INTERVAL YEAR TO MONTH`    | `java.lang.Integer`    |
-| `Types.INTERVAL_MILLIS`| `INTERVAL DAY TO SECOND(3)` | `java.lang.Long`       |
-| `Types.PRIMITIVE_ARRAY`| `ARRAY`                     | e.g. `int[]`           |
-| `Types.OBJECT_ARRAY`   | `ARRAY`                     | e.g. `java.lang.Byte[]`|
-| `Types.MAP`            | `MAP`                       | `java.util.HashMap`    |
-| `Types.MULTISET`       | `MULTISET`                  | e.g. `java.util.HashMap<String, Integer>` for a multiset of `String` |
-| `Types.ROW`            | `ROW`                       | `org.apache.flink.types.Row` |
+Please see the dedicated page about [data types](types.html).
 
 Generic types and (nested) composite types (e.g., POJOs, tuples, rows, Scala case classes) can be fields of a row as well.
 
diff --git a/docs/flinkDev/ide_setup.zh.md b/docs/flinkDev/ide_setup.zh.md
index 101a858..0b82b2d 100644
--- a/docs/flinkDev/ide_setup.zh.md
+++ b/docs/flinkDev/ide_setup.zh.md
@@ -72,7 +72,7 @@ to enable support for Scala projects and files:
 4. Leave the default options and successively click "Next" until you reach the SDK section.
 5. If there is no SDK listed, create one using the "+" sign on the top left.
    Select "JDK", choose the JDK home directory and click "OK".
-   Select the most suiting JDK version. NOTE: A good rule of thumb is to select 
+   Select the most suitable JDK version. NOTE: A good rule of thumb is to select
    the JDK version matching the active Maven profile.
 6. Continue by clicking "Next" until finishing the import.
 7. Right-click on the imported Flink project -> Maven -> Generate Sources and Update Folders.
@@ -88,7 +88,7 @@ IntelliJ supports checkstyle within the IDE using the Checkstyle-IDEA plugin.
 1. Install the "Checkstyle-IDEA" plugin from the IntelliJ plugin repository.
 2. Configure the plugin by going to Settings -> Other Settings -> Checkstyle.
 3. Set the "Scan Scope" to "Only Java sources (including tests)".
-4. Select _8.12_ in the "Checkstyle Version" dropdown and click apply. **This step is important,
+4. Select _8.14_ in the "Checkstyle Version" dropdown and click apply. **This step is important,
    don't skip it!**
 5. In the "Configuration File" pane, add a new configuration using the plus icon:
     1. Set the "Description" to "Flink".
diff --git a/docs/monitoring/metrics.zh.md b/docs/monitoring/metrics.zh.md
index d262c44..95fb2b4 100644
--- a/docs/monitoring/metrics.zh.md
+++ b/docs/monitoring/metrics.zh.md
@@ -1002,7 +1002,8 @@ Thus, in order to infer the metric identifier:
   </tbody>
 </table>
 
-### Network (Deprecated: use [Default shuffle service metrics]({{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service))
+
+### Network (Deprecated: use [Default shuffle service metrics]({{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service))
 <table class="table table-bordered">
   <thead>
     <tr>
@@ -1358,49 +1359,49 @@ Certain RocksDB native metrics are available but disabled by default, you can fi
   <tbody>
     <tr>
       <th rowspan="1"><strong>Job (only available on TaskManager)</strong></th>
-      <td>&lt;source_id&gt;.&lt;source_subtask_index&gt;.&lt;operator_id&gt;.&lt;operator_subtask_index&gt;.latency</td>
-      <td>The latency distributions from a given source subtask to an operator subtask (in milliseconds).</td>
+      <td>[&lt;source_id&gt;.[&lt;source_subtask_index&gt;.]]&lt;operator_id&gt;.&lt;operator_subtask_index&gt;.latency</td>
+      <td>The latency distributions from a given source (subtask) to an operator subtask (in milliseconds), depending on the [latency granularity]({{ site.baseurl }}/ops/config.html#metrics-latency-granularity).</td>
       <td>Histogram</td>
     </tr>
     <tr>
       <th rowspan="12"><strong>Task</strong></th>
       <td>numBytesInLocal</td>
-      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
+      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
       <td>Counter</td>
     </tr>
     <tr>
       <td>numBytesInLocalPerSecond</td>
-      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
+      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
       <td>Meter</td>
     </tr>
     <tr>
       <td>numBytesInRemote</td>
-      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
+      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
       <td>Counter</td>
     </tr>
     <tr>
       <td>numBytesInRemotePerSecond</td>
-      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
+      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
       <td>Meter</td>
     </tr>
     <tr>
       <td>numBuffersInLocal</td>
-      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
+      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
       <td>Counter</td>
     </tr>
     <tr>
       <td>numBuffersInLocalPerSecond</td>
-      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
+      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
       <td>Meter</td>
     </tr>
     <tr>
       <td>numBuffersInRemote</td>
-      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
+      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
       <td>Counter</td>
     </tr>
     <tr>
       <td>numBuffersInRemotePerSecond</td>
-      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/zh/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
+      <td><span class="label label-danger">Attention:</span> deprecated, use <a href="{{ site.baseurl }}/monitoring/metrics.html#default-shuffle-service">Default shuffle service metrics</a>.</td>
       <td>Meter</td>
     </tr>
     <tr>
@@ -1778,7 +1779,7 @@ logged by `SystemResourcesMetricsInitializer` during the startup.
 
 ## Latency tracking
 
-Flink allows to track the latency of records traveling through the system. This feature is disabled by default.
+Flink allows tracking the latency of records travelling through the system. This feature is disabled by default.
 To enable the latency tracking you must set the `latencyTrackingInterval` to a positive number in either the
 [Flink configuration]({{ site.baseurl }}/ops/config.html#metrics-latency-interval) or `ExecutionConfig`.
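
A sketch of the `ExecutionConfig` variant (the setter name and the millisecond unit are assumptions):

{% highlight java %}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// emit latency markers every 1000 ms
env.getConfig().setLatencyTrackingInterval(1000L);
{% endhighlight %}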
 
diff --git a/docs/ops/config.zh.md b/docs/ops/config.zh.md
index ecdb54e..ec080f3 100644
--- a/docs/ops/config.zh.md
+++ b/docs/ops/config.zh.md
@@ -70,7 +70,7 @@ These parameters configure the default HDFS used by Flink. Setups that do not sp
 
 {% include generated/task_manager_configuration.html %}
 
-For *batch* jobs (or if `taskmanager.memoy.preallocate` is enabled) Flink allocates a fraction of 0.7 of the free memory (total memory configured via taskmanager.heap.mb minus memory used for network buffers) for its managed memory. Managed memory helps Flink to run the batch operators efficiently. It prevents OutOfMemoryExceptions because Flink knows how much memory it can use to execute operations. If Flink runs out of managed memory, it utilizes disk space. Using managed memory, some  [...]
+For *batch* jobs (or if `taskmanager.memory.preallocate` is enabled) Flink allocates a fraction of 0.7 of the free memory (total memory configured via taskmanager.heap.size minus memory used for network buffers) for its managed memory. Managed memory helps Flink to run the batch operators efficiently. It prevents OutOfMemoryExceptions because Flink knows how much memory it can use to execute operations. If Flink runs out of managed memory, it utilizes disk space. Using managed memory, som [...]
 
 The default fraction for managed memory can be adjusted using the taskmanager.memory.fraction parameter. An absolute value may be set using taskmanager.memory.size (overrides the fraction parameter). If desired, the managed memory may be allocated outside the JVM heap. This may improve performance in setups with large memory sizes.
 
diff --git a/docs/ops/state/large_state_tuning.zh.md b/docs/ops/state/large_state_tuning.zh.md
index e6b0498..47b1f2c 100644
--- a/docs/ops/state/large_state_tuning.zh.md
+++ b/docs/ops/state/large_state_tuning.zh.md
@@ -108,7 +108,7 @@ impact.
 
 For state to be snapshotted asynchronsously, you need to use a state backend which supports asynchronous snapshotting.
 Starting from Flink 1.3, both RocksDB-based as well as heap-based state backends (`filesystem`) support asynchronous
-snapshotting and use it by default. This applies to to both managed operator state as well as managed keyed state (incl. timers state).
+snapshotting and use it by default. This applies to both managed operator state as well as managed keyed state (incl. timers state).
 
 <span class="label label-info">Note</span> *The combination RocksDB state backend with heap-based timers currently does NOT support asynchronous snapshots for the timers state.
 Other state like keyed state is still snapshotted asynchronously. Please note that this is not a regression from previous versions and will be resolved with `FLINK-10026`.*
diff --git a/docs/ops/upgrading.zh.md b/docs/ops/upgrading.zh.md
index 5259ca4..e36163f 100644
--- a/docs/ops/upgrading.zh.md
+++ b/docs/ops/upgrading.zh.md
@@ -214,6 +214,7 @@ Savepoints are compatible across Flink versions as indicated by the table below:
       <th class="text-center">1.6.x</th>
       <th class="text-center">1.7.x</th>
       <th class="text-center">1.8.x</th>
+      <th class="text-center">1.9.x</th>
       <th class="text-center">Limitations</th>
     </tr>
   </thead>
@@ -228,6 +229,7 @@ Savepoints are compatible across Flink versions as indicated by the table below:
           <td class="text-center"></td>
           <td class="text-center"></td>
           <td class="text-center"></td>
+          <td class="text-center"></td>
           <td class="text-left">The maximum parallelism of a job that was migrated from Flink 1.1.x to 1.2.x+ is
           currently fixed as the parallelism of the job. This means that the parallelism can not be increased after
           migration. This limitation might be removed in a future bugfix release.</td>
@@ -242,9 +244,16 @@ Savepoints are compatible across Flink versions as indicated by the table below:
           <td class="text-center">O</td>
           <td class="text-center">O</td>
           <td class="text-center">O</td>
-          <td class="text-left">When migrating from Flink 1.2.x to Flink 1.3.x+, changing parallelism at the same
+          <td class="text-center">O</td>
+          <td class="text-left">
+          When migrating from Flink 1.2.x to Flink 1.3.x+, changing parallelism at the same
           time is not supported. Users have to first take a savepoint after migrating to Flink 1.3.x+, and then change
-          parallelism. Savepoints created for CEP applications cannot be restored in 1.4.x+.</td>
+          parallelism.
+          <br/><br/>Savepoints created for CEP applications cannot be restored in 1.4.x+.
+          <br/><br/>Savepoints from Flink 1.2 that contain a Scala TraversableSerializer are not compatible with Flink 1.8 anymore
+          because of an update in this serializer. You can get around this restriction by first upgrading to a version between Flink 1.3 and
+          Flink 1.7 and then updating to Flink 1.8.
+          </td>
     </tr>
     <tr>
           <td class="text-center"><strong>1.3.x</strong></td>
@@ -256,6 +265,7 @@ Savepoints are compatible across Flink versions as indicated by the table below:
           <td class="text-center">O</td>
           <td class="text-center">O</td>
           <td class="text-center">O</td>
+          <td class="text-center">O</td>
           <td class="text-left">Migrating from Flink 1.3.0 to Flink 1.4.[0,1] will fail if the savepoint contains Scala case classes. Users have to directly migrate to 1.4.2+ instead.</td>
     </tr>
     <tr>
@@ -268,6 +278,7 @@ Savepoints are compatible across Flink versions as indicated by the table below:
           <td class="text-center">O</td>
           <td class="text-center">O</td>
           <td class="text-center">O</td>
+          <td class="text-center">O</td>
           <td class="text-left"></td>
     </tr>
     <tr>
@@ -280,6 +291,7 @@ Savepoints are compatible across Flink versions as indicated by the table below:
           <td class="text-center">O</td>
           <td class="text-center">O</td>
           <td class="text-center">O</td>
+          <td class="text-center">O</td>
           <td class="text-left">There is a known issue with resuming broadcast state created with 1.5.x in versions
           1.6.x up to 1.6.2, and 1.7.0: <a href="https://issues.apache.org/jira/browse/FLINK-11087">FLINK-11087</a>. Users
           upgrading to 1.6.x or 1.7.x series need to directly migrate to minor versions higher than 1.6.2 and 1.7.0,
@@ -295,6 +307,7 @@ Savepoints are compatible across Flink versions as indicated by the table below:
           <td class="text-center">O</td>
           <td class="text-center">O</td>
           <td class="text-center">O</td>
+          <td class="text-center">O</td>
           <td class="text-left"></td>
     </tr>
     <tr>
@@ -307,6 +320,7 @@ Savepoints are compatible across Flink versions as indicated by the table below:
           <td class="text-center"></td>
           <td class="text-center">O</td>
           <td class="text-center">O</td>
+          <td class="text-center">O</td>
           <td class="text-left"></td>
     </tr>
     <tr>
@@ -319,11 +333,21 @@ Savepoints are compatible across Flink versions as indicated by the table below:
           <td class="text-center"></td>
           <td class="text-center"></td>
           <td class="text-center">O</td>
-          <td class="text-left">Savepoints from Flink 1.2 that contain a Scala
-          TraversableSerializer are not compatible with Flink 1.8 anymore
-          because of an update in this serializer. You can get around this
-          restriction by first upgrading to a version between Flink 1.3 and
-          Flink 1.7 and then updating to Flink 1.8.</td>
+          <td class="text-center">O</td>
+          <td class="text-left"></td>
+    </tr>
+    <tr>
+          <td class="text-center"><strong>1.9.x</strong></td>
+          <td class="text-center"></td>
+          <td class="text-center"></td>
+          <td class="text-center"></td>
+          <td class="text-center"></td>
+          <td class="text-center"></td>
+          <td class="text-center"></td>
+          <td class="text-center"></td>
+          <td class="text-center"></td>
+          <td class="text-center">O</td>
+          <td class="text-left"></td>
     </tr>
   </tbody>
 </table>