Posted to commits@flink.apache.org by ma...@apache.org on 2022/03/21 07:46:33 UTC

[flink] branch release-1.15 updated: [FLINK-26578][docs-zh] Translate new Project Configuration section to Chinese. This closes #19162

This is an automated email from the ASF dual-hosted git repository.

martijnvisser pushed a commit to branch release-1.15
in repository https://gitbox.apache.org/repos/asf/flink.git


The following commit(s) were added to refs/heads/release-1.15 by this push:
     new e4b8d92  [FLINK-26578][docs-zh] Translate new Project Configuration section to Chinese. This closes #19162
e4b8d92 is described below

commit e4b8d9285fde28f63fec26be97c4e27742d06c23
Author: Yubin Li <li...@163.com>
AuthorDate: Sun Mar 13 22:35:23 2022 +0800

    [FLINK-26578][docs-zh] Translate new Project Configuration section to Chinese. This closes #19162
    
    (cherry picked from commit 2cb0c7d01f505e7a4d3fa75f2b9da2671f81d65c)
---
 .../docs/connectors/datastream/cassandra.md        |   2 +-
 .../docs/connectors/datastream/elasticsearch.md    |   4 +-
 docs/content.zh/docs/connectors/datastream/jdbc.md |   2 +-
 .../content.zh/docs/connectors/datastream/kafka.md |   2 +-
 docs/content.zh/docs/connectors/datastream/nifi.md |   2 +-
 .../docs/connectors/datastream/pubsub.md           |   2 +-
 .../docs/connectors/datastream/pulsar.md           |   2 +-
 .../docs/connectors/datastream/rabbitmq.md         |   2 +-
 .../docs/connectors/table/elasticsearch.md         |   2 +
 docs/content.zh/docs/connectors/table/hbase.md     |   1 +
 docs/content.zh/docs/connectors/table/jdbc.md      |   4 +-
 docs/content.zh/docs/connectors/table/kafka.md     |   2 +-
 docs/content.zh/docs/connectors/table/kinesis.md   |   2 +
 docs/content.zh/docs/connectors/table/overview.md  |   2 +
 .../docs/connectors/table/upsert-kafka.md          |   2 +
 .../dev/{datastream => configuration}/_index.md    |   4 +-
 docs/content.zh/docs/dev/configuration/advanced.md | 106 ++++
 .../content.zh/docs/dev/configuration/connector.md |  56 ++
 docs/content.zh/docs/dev/configuration/gradle.md   |  92 ++++
 docs/content.zh/docs/dev/configuration/maven.md    | 142 +++++
 docs/content.zh/docs/dev/configuration/overview.md | 206 ++++++++
 docs/content.zh/docs/dev/configuration/testing.md  |  49 ++
 docs/content.zh/docs/dev/datastream/_index.md      |   4 +-
 .../datastream/fault-tolerance/queryable_state.md  |   2 +-
 .../docs/dev/datastream/project-configuration.md   | 570 ---------------------
 docs/content.zh/docs/dev/datastream/testing.md     |   6 +-
 docs/content.zh/docs/dev/table/data_stream_api.md  |   2 +
 docs/content.zh/docs/dev/table/overview.md         |  69 +--
 docs/content.zh/docs/dev/table/sourcesSinks.md     |  27 +
 .../docs/dev/table/sql/queries/match_recognize.md  |   2 +-
 docs/content.zh/docs/dev/table/sqlClient.md        |  11 +-
 docs/content.zh/docs/flinkDev/ide_setup.md         |   2 +-
 docs/content.zh/docs/libs/cep.md                   |   6 +-
 docs/content.zh/docs/libs/gelly/overview.md        |   2 +-
 34 files changed, 723 insertions(+), 668 deletions(-)

diff --git a/docs/content.zh/docs/connectors/datastream/cassandra.md b/docs/content.zh/docs/connectors/datastream/cassandra.md
index bddb9f2..b13bfad 100644
--- a/docs/content.zh/docs/connectors/datastream/cassandra.md
+++ b/docs/content.zh/docs/connectors/datastream/cassandra.md
@@ -37,7 +37,7 @@ To use this connector, add the following dependency to your project:
 
 {{< artifact flink-connector-cassandra withScalaVersion >}}
 
-Note that the streaming connectors are currently __NOT__ part of the binary distribution. See how to link with them for cluster execution [here]({{< ref "docs/dev/datastream/project-configuration" >}}).
+Note that the streaming connectors are currently __NOT__ part of the binary distribution. See how to link with them for cluster execution [here]({{< ref "docs/dev/configuration/overview" >}}).
 
 ## Installing Apache Cassandra
 There are multiple ways to bring up a Cassandra instance on local machine:
diff --git a/docs/content.zh/docs/connectors/datastream/elasticsearch.md b/docs/content.zh/docs/connectors/datastream/elasticsearch.md
index c97e852..28d2f8f 100644
--- a/docs/content.zh/docs/connectors/datastream/elasticsearch.md
+++ b/docs/content.zh/docs/connectors/datastream/elasticsearch.md
@@ -52,7 +52,7 @@ under the License.
 </table>
 
 请注意,流连接器目前不是二进制发行版的一部分。
-有关如何将程序和用于集群执行的库一起打包,参考[此文档]({{< ref "docs/dev/datastream/project-configuration" >}})
+有关如何将程序和用于集群执行的库一起打包,参考[此文档]({{< ref "docs/dev/configuration/overview" >}})。
 
 ## 安装 Elasticsearch
 
@@ -373,7 +373,7 @@ checkpoint 会进行等待,直到 Elasticsearch 节点队列有足够的容量
 ## 将 Elasticsearch 连接器打包到 Uber-Jar 中
 
 建议构建一个包含所有依赖的 uber-jar (可执行的 jar),以便更好地执行你的 Flink 程序。
-(更多信息参见[此文档]({{< ref "docs/dev/datastream/project-configuration" >}}))。
+(更多信息参见[此文档]({{< ref "docs/dev/configuration/overview" >}}))。
 
 或者,你可以将连接器的 jar 文件放入 Flink 的 `lib/` 目录下,使其在全局范围内可用,即可用于所有的作业。
 
diff --git a/docs/content.zh/docs/connectors/datastream/jdbc.md b/docs/content.zh/docs/connectors/datastream/jdbc.md
index 50fde2e..31e4f18 100644
--- a/docs/content.zh/docs/connectors/datastream/jdbc.md
+++ b/docs/content.zh/docs/connectors/datastream/jdbc.md
@@ -32,7 +32,7 @@ under the License.
 
 {{< artifact flink-connector-jdbc >}}
 
-注意该连接器目前还 __不是__ 二进制发行版的一部分,如何在集群中运行请参考 [这里]({{< ref "docs/dev/datastream/project-configuration" >}})。
+注意该连接器目前还 __不是__ 二进制发行版的一部分,如何在集群中运行请参考 [这里]({{< ref "docs/dev/configuration/overview" >}})。
 
 已创建的 JDBC Sink 能够保证至少一次的语义。
 更有效的精确执行一次可以通过 upsert 语句或幂等更新实现。
diff --git a/docs/content.zh/docs/connectors/datastream/kafka.md b/docs/content.zh/docs/connectors/datastream/kafka.md
index 3908f19..d064d60 100644
--- a/docs/content.zh/docs/connectors/datastream/kafka.md
+++ b/docs/content.zh/docs/connectors/datastream/kafka.md
@@ -43,7 +43,7 @@ Apache Flink 集成了通用的 Kafka 连接器,它会尽力与 Kafka client 
 {{< artifact flink-connector-base >}}
 
 Flink 目前的流连接器还不是二进制发行版的一部分。
-[在此处]({{< ref "docs/dev/datastream/project-configuration" >}})可以了解到如何链接它们,从而在集群中运行。
+[在此处]({{< ref "docs/dev/configuration/overview" >}})可以了解到如何链接它们,从而在集群中运行。
 
 ## Kafka Source
 {{< hint info >}}
diff --git a/docs/content.zh/docs/connectors/datastream/nifi.md b/docs/content.zh/docs/connectors/datastream/nifi.md
index d8e53e6..a67b9cb 100644
--- a/docs/content.zh/docs/connectors/datastream/nifi.md
+++ b/docs/content.zh/docs/connectors/datastream/nifi.md
@@ -35,7 +35,7 @@ The NiFi connector is deprecated and will be removed with Flink 1.16.
 
 {{< artifact flink-connector-nifi >}}
 
-注意这些连接器目前还没有包含在二进制发行版中。添加依赖、打包配置以及集群运行的相关信息请参考 [这里]({{< ref "docs/dev/datastream/project-configuration" >}})。
+注意这些连接器目前还没有包含在二进制发行版中。添加依赖、打包配置以及集群运行的相关信息请参考 [这里]({{< ref "docs/dev/configuration/overview" >}})。
 
 #### 安装 Apache NiFi
 
diff --git a/docs/content.zh/docs/connectors/datastream/pubsub.md b/docs/content.zh/docs/connectors/datastream/pubsub.md
index 18225ca..04e2a01 100644
--- a/docs/content.zh/docs/connectors/datastream/pubsub.md
+++ b/docs/content.zh/docs/connectors/datastream/pubsub.md
@@ -34,7 +34,7 @@ under the License.
 <b>注意</b>:此连接器最近才加到 Flink 里,还未接受广泛测试。
 </p>
 
-注意连接器目前还不是二进制发行版的一部分,添加依赖、打包配置以及集群运行信息请参考[这里]({{< ref "docs/dev/datastream/project-configuration" >}})
+注意连接器目前还不是二进制发行版的一部分,添加依赖、打包配置以及集群运行信息请参考[这里]({{< ref "docs/dev/configuration/overview" >}})。
 
 ## Consuming or Producing PubSubMessages
 
diff --git a/docs/content.zh/docs/connectors/datastream/pulsar.md b/docs/content.zh/docs/connectors/datastream/pulsar.md
index 1779c72..301f242 100644
--- a/docs/content.zh/docs/connectors/datastream/pulsar.md
+++ b/docs/content.zh/docs/connectors/datastream/pulsar.md
@@ -35,7 +35,7 @@ Flink 当前只提供 [Apache Pulsar](https://pulsar.apache.org) 数据源,用
 
 {{< artifact flink-connector-pulsar >}}
 
-Flink 的流连接器并不会放到发行文件里面一同发布,阅读[此文档]({{< ref "docs/dev/datastream/project-configuration" >}}),了解如何将连接器添加到集群实例内。
+Flink 的流连接器并不会放到发行文件里面一同发布,阅读[此文档]({{< ref "docs/dev/configuration/overview" >}}),了解如何将连接器添加到集群实例内。
 
 ## Pulsar 数据源
 
diff --git a/docs/content.zh/docs/connectors/datastream/rabbitmq.md b/docs/content.zh/docs/connectors/datastream/rabbitmq.md
index 7c3a2a1..668987f 100644
--- a/docs/content.zh/docs/connectors/datastream/rabbitmq.md
+++ b/docs/content.zh/docs/connectors/datastream/rabbitmq.md
@@ -40,7 +40,7 @@ Flink 自身既没有复用 "RabbitMQ AMQP Java Client" 的代码,也没有将
 
 {{< artifact flink-connector-rabbitmq >}}
 
-注意连接器现在没有包含在二进制发行版中。集群执行的相关信息请参考 [这里]({{< ref "docs/dev/datastream/project-configuration" >}}).
+注意连接器现在没有包含在二进制发行版中。集群执行的相关信息请参考 [这里]({{< ref "docs/dev/configuration/overview" >}})。
 
 ### 安装 RabbitMQ
 安装 RabbitMQ 请参考 [RabbitMQ 下载页面](http://www.rabbitmq.com/download.html)。安装完成之后,服务会自动拉起,应用程序就可以尝试连接到 RabbitMQ 了。
diff --git a/docs/content.zh/docs/connectors/table/elasticsearch.md b/docs/content.zh/docs/connectors/table/elasticsearch.md
index 339d9d5..bb2ca45 100644
--- a/docs/content.zh/docs/connectors/table/elasticsearch.md
+++ b/docs/content.zh/docs/connectors/table/elasticsearch.md
@@ -40,6 +40,8 @@ Elasticsearch 连接器允许将数据写入到 Elasticsearch 引擎的索引中
 
 {{< sql_download_table "elastic" >}}
 
+Elasticsearch 连接器不是二进制发行版的一部分,请查阅[这里]({{< ref "docs/dev/configuration/overview" >}})了解如何在集群运行中引用 Elasticsearch 连接器。
+
 如何创建 Elasticsearch 表
 ----------------
 
diff --git a/docs/content.zh/docs/connectors/table/hbase.md b/docs/content.zh/docs/connectors/table/hbase.md
index 85b376f..2ff3fa49 100644
--- a/docs/content.zh/docs/connectors/table/hbase.md
+++ b/docs/content.zh/docs/connectors/table/hbase.md
@@ -40,6 +40,7 @@ HBase 连接器在 upsert 模式下运行,可以使用 DDL 中定义的主键
 
 {{< sql_download_table "hbase" >}}
 
+HBase 连接器不是二进制发行版的一部分,请查阅[这里]({{< ref "docs/dev/configuration/overview" >}})了解如何在集群运行中引用 HBase 连接器。
 
 如何使用 HBase 表
 ----------------
diff --git a/docs/content.zh/docs/connectors/table/jdbc.md b/docs/content.zh/docs/connectors/table/jdbc.md
index ec8adaa..9a812fc 100644
--- a/docs/content.zh/docs/connectors/table/jdbc.md
+++ b/docs/content.zh/docs/connectors/table/jdbc.md
@@ -44,6 +44,8 @@ JDBC 连接器允许使用 JDBC 驱动向任意类型的关系型数据库读取
 
 {{< sql_download_table "jdbc" >}}
 
+JDBC 连接器不是二进制发行版的一部分,请查阅[这里]({{< ref "docs/dev/configuration/overview" >}})了解如何在集群运行中引用 JDBC 连接器。
+
 在连接到具体数据库时,也需要对应的驱动依赖,目前支持的驱动如下:
 
 | Driver      |      Group Id      |      Artifact Id       |      JAR         |
@@ -53,7 +55,7 @@ JDBC 连接器允许使用 JDBC 驱动向任意类型的关系型数据库读取
 | PostgreSQL  |  `org.postgresql`  |      `postgresql`      | [下载](https://jdbc.postgresql.org/download.html) |
 | Derby       | `org.apache.derby` |        `derby`         | [下载](http://db.apache.org/derby/derby_downloads.html) | |
 
-当前,JDBC 连接器和驱动不在 Flink 二进制发布包中,请参阅[这里]({{< ref "docs/dev/datastream/project-configuration" >}})了解在集群上执行时何连接它们。
+当前,JDBC 连接器和驱动不在 Flink 二进制发布包中,请参阅[这里]({{< ref "docs/dev/configuration" >}})了解在集群上执行时如何连接它们。
 
 
 <a name="how-to-create-a-jdbc-table"></a>
diff --git a/docs/content.zh/docs/connectors/table/kafka.md b/docs/content.zh/docs/connectors/table/kafka.md
index 6e3d3f5..d02d1cf 100644
--- a/docs/content.zh/docs/connectors/table/kafka.md
+++ b/docs/content.zh/docs/connectors/table/kafka.md
@@ -36,7 +36,7 @@ Kafka 连接器提供从 Kafka topic 中消费和写入数据的能力。
 
 {{< sql_download_table "kafka" >}}
 
-Kafka 连接器目前并不包含在 Flink 的二进制发行版中,请查阅 [这里]({{< ref "docs/dev/datastream/project-configuration" >}}) 了解如何在集群运行中引用 Kafka 连接器。
+Kafka 连接器目前并不包含在 Flink 的二进制发行版中,请查阅[这里]({{< ref "docs/dev/configuration/overview" >}})了解如何在集群运行中引用 Kafka 连接器。
 
 如何创建 Kafka 表
 ----------------
diff --git a/docs/content.zh/docs/connectors/table/kinesis.md b/docs/content.zh/docs/connectors/table/kinesis.md
index 93d7657..e706c32 100644
--- a/docs/content.zh/docs/connectors/table/kinesis.md
+++ b/docs/content.zh/docs/connectors/table/kinesis.md
@@ -36,6 +36,8 @@ Dependencies
 
 {{< sql_download_table "kinesis" >}}
 
+Kinesis 连接器目前并不包含在 Flink 的二进制发行版中,请查阅[这里]({{< ref "docs/dev/configuration/overview" >}})了解如何在集群运行中引用 Kinesis 连接器。
+
 How to create a Kinesis data stream table
 -----------------------------------------
 
diff --git a/docs/content.zh/docs/connectors/table/overview.md b/docs/content.zh/docs/connectors/table/overview.md
index 01fa0f8..03fb840 100644
--- a/docs/content.zh/docs/connectors/table/overview.md
+++ b/docs/content.zh/docs/connectors/table/overview.md
@@ -95,6 +95,8 @@ Flink natively support various connectors. The following tables list all availab
 
 {{< top >}}
 
+请查阅[配置]({{< ref "docs/dev/configuration/connector" >}})小节了解如何添加连接器依赖。
+
 How to use connectors
 --------
 
diff --git a/docs/content.zh/docs/connectors/table/upsert-kafka.md b/docs/content.zh/docs/connectors/table/upsert-kafka.md
index 298a5f9..40df1fa 100644
--- a/docs/content.zh/docs/connectors/table/upsert-kafka.md
+++ b/docs/content.zh/docs/connectors/table/upsert-kafka.md
@@ -40,6 +40,8 @@ Upsert Kafka 连接器支持以 upsert 方式从 Kafka topic 中读取数据并
 
 {{< sql_download_table "upsert-kafka" >}}
 
+Upsert Kafka 连接器不是二进制发行版的一部分,请查阅[这里]({{< ref "docs/dev/configuration/overview" >}})了解如何在集群运行中引用 Upsert Kafka 连接器。
+
 完整示例
 ----------------
 
diff --git a/docs/content.zh/docs/dev/datastream/_index.md b/docs/content.zh/docs/dev/configuration/_index.md
similarity index 96%
copy from docs/content.zh/docs/dev/datastream/_index.md
copy to docs/content.zh/docs/dev/configuration/_index.md
index 1a32813..0ad3d6b 100644
--- a/docs/content.zh/docs/dev/datastream/_index.md
+++ b/docs/content.zh/docs/dev/configuration/_index.md
@@ -1,5 +1,5 @@
 ---
-title: DataStream API
+title: "项目配置"
 bookCollapseSection: true
 weight: 1
 ---
@@ -20,4 +20,4 @@ software distributed under the License is distributed on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
\ No newline at end of file
+-->
diff --git a/docs/content.zh/docs/dev/configuration/advanced.md b/docs/content.zh/docs/dev/configuration/advanced.md
new file mode 100644
index 0000000..b940947
--- /dev/null
+++ b/docs/content.zh/docs/dev/configuration/advanced.md
@@ -0,0 +1,106 @@
+---
+title: "高级配置"
+weight: 10
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 高级配置主题
+
+## Flink 依赖剖析
+
+Flink 自身由一组类和依赖项组成,这些共同构成了 Flink 运行时的核心,在 Flink 应用程序启动时必须存在,会提供诸如通信协调、网络管理、检查点、容错、API、算子(如窗口)、资源管理等领域的服务。
+
+这些核心类和依赖项都打包在 `flink-dist.jar`,可以在下载的发行版 `/lib` 文件夹中找到,也是 Flink 容器镜像的基础部分。您可以将其近似地看作是包含 `String` 和 `List` 等公用类的 Java 核心库。
+
+为了保持核心依赖项尽可能小并避免依赖冲突,Flink Core Dependencies 不包含任何连接器或库(如 CEP、SQL、ML),以避免在类路径中有过多的类和依赖项。
+
+Flink 发行版的 `/lib` 目录里还包含一些常用模块的 JAR 文件,例如[执行 Table 作业的必需模块](#Table-依赖剖析)、一组连接器和格式。它们默认会被自动加载,若要禁止加载,只需将对应的 JAR 从 classpath 的 `/lib` 目录中删除即可。
+
+Flink 还在 `/opt` 文件夹下提供了额外的可选依赖项,可以通过移动这些 JAR 文件到 `/lib` 目录来启用这些依赖项。
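+
+例如(文件名仅为示意,请以发行版 `/opt` 目录中的实际文件为准):
+
+```bash
+# 将可选组件从 /opt 复制到 /lib 以启用它
+cp ./opt/flink-some-optional-module-1.15.0.jar ./lib/
+```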
+
+有关类加载的更多细节,请查阅 [Flink 类加载]({{< ref "docs/ops/debugging/debugging_classloading" >}})。
+
+## Scala 版本
+
+不同的 Scala 版本二进制不兼容,所有(传递地)依赖于 Scala 的 Flink 依赖项都以它们构建的 Scala 版本为后缀(如 `flink-streaming-scala_2.12`)。
+
+如果您只使用 Flink 的 Java API,您可以使用任何 Scala 版本。如果您使用 Flink 的 Scala API,则需要选择与应用程序的 Scala 匹配的 Scala 版本。
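+
+例如,在 Maven 中声明一个带 Scala 版本后缀的依赖(仅作示意):
+
+```xml
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-scala_2.12</artifactId>
+  <version>{{< version >}}</version>
+</dependency>
+```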
+
+有关如何为特定 Scala 版本构建 Flink 的细节,请查阅[构建指南]({{< ref "docs/flinkDev/building" >}}#scala-versions)。
+
+2.12.8 之后的 Scala 版本与之前的 2.12.x 版本二进制不兼容,使 Flink 项目无法将其 2.12.x 版本直接升级到 2.12.8 以上。您可以按照[构建指南]({{< ref "docs/flinkDev/building" >}}#scala-versions)在本地为更高版本的 Scala 构建 Flink。为此,您需要在构建时添加 `-Djapicmp.skip` 以跳过二进制兼容性检查。
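+
+例如,在本地构建时跳过该检查(完整的构建参数请参考上述构建指南):
+
+```bash
+mvn clean install -DskipTests -Djapicmp.skip
+```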
+
+有关更多细节,请查阅 [Scala 2.12.8 版本说明](https://github.com/scala/scala/releases/tag/v2.12.8)。相关部分指出:
+
+> 第二项修改是二进制不兼容的:2.12.8 编译器省略了由更早的 2.12 编译器生成的某些方法。然而我们相信这些方法从未被使用,现有的已编译代码将继续正常工作。更多细节请查阅 [pull request 描述](https://github.com/scala/scala/pull/7469)。
+
+## Table 依赖剖析
+
+Flink 发行版默认包含执行 Flink SQL 任务的必要 JAR 文件(位于 `/lib` 目录),主要有:
+
+- `flink-table-api-java-uber-{{< version >}}.jar` &#8594; 包含所有的 Java API;
+- `flink-table-runtime-{{< version >}}.jar` &#8594; 包含 Table 运行时;
+- `flink-table-planner-loader-{{< version >}}.jar` &#8594; 包含查询计划器。
+
+{{< hint warning >}}
+以前,这些 JAR 都打包在 `flink-table.jar` 中。从 Flink 1.15 开始,它已被拆分成三个 JAR,以便用户可以在 `flink-table-planner-loader-{{< version >}}.jar` 和 `flink-table-planner{{< scala_version >}}-{{< version >}}.jar` 之间进行替换。
+{{< /hint >}}
+
+虽然 Table Java API 内置于发行版中,但默认情况下不包含 Table Scala API。在 Flink Scala API 中使用格式和连接器时,您需要手动下载这些 JAR 包并将其放到发行版的 `/lib` 文件夹中(推荐),或者将它们打包为 Flink SQL 作业的 uber/fat JAR 包中的依赖项。
+
+有关更多细节,请查阅如何[连接外部系统]({{< ref "docs/connectors/table/overview" >}})。
+
+### Table Planner 和 Table Planner 加载器
+
+从 Flink 1.15 开始,发行版包含两个 planner:
+
+- `flink-table-planner{{< scala_version >}}-{{< version >}}.jar`,位于 `/opt` 目录,包含查询计划器;
+- `flink-table-planner-loader-{{< version >}}.jar`,位于 `/lib` 目录,默认被加载,包含隐藏在单独 classpath 里的查询计划器(您无法直接使用 `org.apache.flink.table.planner` 包)。
+
+这两个 planner JAR 文件的代码功能相同,但打包方式不同。若使用第一个文件,您必须使用与其相同版本的 Scala;若使用第二个,由于 Scala 已经被打包进该文件里,您不需要考虑 Scala 版本问题。
+
+默认情况下,发行版使用 `flink-table-planner-loader`。如果想使用内部查询计划器,您可以换掉这两个 JAR 包(将 `flink-table-planner{{< scala_version >}}.jar` 从 `/opt` 目录拷贝到发行版的 `/lib` 目录,并移除 `flink-table-planner-loader`,如下所示)。请注意,这样您将被限定于该 Flink 发行版构建时所使用的 Scala 版本。
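+
+一个替换操作的示意(版本号仅为占位,注意两个 planner 不能同时位于 `/lib` 目录):
+
+```bash
+# 移除默认的 planner loader,并启用内部 planner
+mv ./lib/flink-table-planner-loader-1.15.0.jar ./opt/
+cp ./opt/flink-table-planner_2.12-1.15.0.jar ./lib/
+```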
+
+{{< hint danger >}}
+这两个 planner 无法在 classpath 中共存,如果您在 `/lib` 目录中同时加载它们,Table 作业将会失败。
+{{< /hint >}}
+
+{{< hint warning >}}
+在即将发布的 Flink 版本中,我们将停止在 Flink 发行版中发布 `flink-table-planner{{< scala_version >}}` 组件。我们强烈建议迁移您的作业/自定义连接器/格式以使用前述 API 模块,而不依赖此内部 planner。如果您需要 planner 中尚未被 API 模块暴露的一些功能,请与社区讨论。
+{{< /hint >}}
+
+## Hadoop 依赖
+
+**一般规则:** 没有必要直接将 Hadoop 依赖添加到您的应用程序里,唯一的例外是通过 [Hadoop 兼容](https://nightlies.apache.org/flink/flink-docs-master/docs/dev/dataset/hadoop_compatibility/) 使用已有的 Hadoop 输入/输出 format。
+
+如果您想将 Flink 与 Hadoop 一起使用,您需要有一个包含 Hadoop 依赖项的 Flink 系统,而不是添加 Hadoop 作为应用程序依赖项。换句话说,Hadoop 必须是 Flink 系统本身的依赖,而不是用户代码的依赖。Flink 将使用 `HADOOP_CLASSPATH` 环境变量指定 Hadoop 依赖项,可以这样设置:
+
+```bash
+export HADOOP_CLASSPATH=`hadoop classpath`
+```
+
+这样设计有两个主要原因:
+
+- 一些与 Hadoop 的交互发生在 Flink 核心中,可能在用户应用程序启动之前,例如为检查点配置 HDFS、通过 Hadoop 的 Kerberos 令牌进行身份验证,或者在 YARN 上部署;
+
+- Flink 的反向类加载方式在核心依赖项中隐藏了许多传递依赖项。这不仅适用于 Flink 自己的核心依赖项,也适用于已有的 Hadoop 依赖项。这样,应用程序可以使用相同依赖项的不同版本,而不会遇到依赖项冲突。当依赖树变得非常大时,这非常有用。
+
+如果您在 IDE 内开发或测试期间需要 Hadoop 依赖项(比如用于 HDFS 访问),应该限定这些依赖项的使用范围(如 *test* 或 *provided*)。
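+
+例如,在 Maven 中将 Hadoop 客户端依赖限定为 *provided*(artifact 与版本属性仅为示意):
+
+```xml
+<dependency>
+  <groupId>org.apache.hadoop</groupId>
+  <artifactId>hadoop-client</artifactId>
+  <version>${hadoop.version}</version>
+  <scope>provided</scope>
+</dependency>
+```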
diff --git a/docs/content.zh/docs/dev/configuration/connector.md b/docs/content.zh/docs/dev/configuration/connector.md
new file mode 100644
index 0000000..95f131a
--- /dev/null
+++ b/docs/content.zh/docs/dev/configuration/connector.md
@@ -0,0 +1,56 @@
+---
+title: "连接器和格式"
+weight: 5
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 连接器和格式
+
+Flink 应用程序可以通过连接器读取和写入各种外部系统。它支持多种格式,以便对数据进行编码和解码以匹配 Flink 的数据结构。
+
+[DataStream]({{< ref "docs/connectors/datastream/overview" >}}) 和 [Table API/SQL]({{< ref "docs/connectors/table/overview" >}}) 都提供了连接器和格式的概述。
+
+## 可用的组件
+
+为了使用连接器和格式,您需要确保 Flink 可以访问实现了这些功能的组件。对于 Flink 社区支持的每个连接器,我们在 [Maven Central](https://search.maven.org) 发布了两类组件:
+
+* `flink-connector-<NAME>`:精简 JAR,仅包含连接器代码,不包含其所需的第三方依赖项;
+* `flink-sql-connector-<NAME>`:uber JAR,包含连接器及其全部第三方依赖项。
+
+这同样适用于格式。请注意,某些连接器可能没有相应的 `flink-sql-connector-<NAME>` 组件,因为它们不需要第三方依赖项。
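+
+以 Kafka 连接器为例(以下坐标仅作示意,版本请与您使用的 Flink 版本保持一致):
+
+```xml
+<!-- 精简 JAR:仅包含连接器代码 -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-kafka</artifactId>
+  <version>{{< version >}}</version>
+</dependency>
+
+<!-- uber JAR:同时包含连接器的第三方依赖项 -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-sql-connector-kafka</artifactId>
+  <version>{{< version >}}</version>
+</dependency>
+```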
+
+{{< hint info >}}
+uber/fat JAR 主要与[SQL 客户端]({{< ref "docs/dev/table/sqlClient" >}})一起使用,但您也可以在任何 DataStream/Table 应用程序中使用它们。
+{{< /hint >}}
+
+## 使用组件
+
+为了使用连接器/格式模块,您可以:
+
+* 把精简 JAR 及其传递依赖项打包进您的作业 JAR;
+* 把 uber JAR 打包进您的作业 JAR;
+* 把 uber JAR 直接复制到 Flink 发行版的 `/lib` 文件夹内;
+
+关于打包依赖项,请查看 [Maven]({{< ref "docs/dev/configuration/maven" >}}) 和 [Gradle]({{< ref "docs/dev/configuration/gradle" >}}) 指南。有关 Flink 发行版的内容,请查看 [Flink 依赖剖析]({{< ref "docs/dev/configuration/advanced" >}}#Flink-依赖剖析)。
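+
+例如,采用上述第三种方式时,只需将 uber JAR 放入发行版的 `/lib` 目录即可(文件名仅为示意,假设 `FLINK_HOME` 指向 Flink 发行版目录):
+
+```bash
+cp flink-sql-connector-kafka-1.15.0.jar ${FLINK_HOME}/lib/
+```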
+
+{{< hint info >}}
+是打成 uber JAR、精简 JAR,还是直接把依赖放进发行版,取决于您和您的使用场景。如果您使用 uber JAR,您将对作业里的依赖项版本有更多的控制权;如果您使用精简 JAR,您将对传递依赖项有更多的控制权,因为您可以在不更改连接器版本的情况下更改传递依赖项的版本(前提是二进制兼容);如果您直接将连接器的 uber JAR 放入 Flink 发行版的 `/lib` 目录,您将能够在一处集中控制所有作业的连接器版本。
+{{< /hint >}}
diff --git a/docs/content.zh/docs/dev/configuration/gradle.md b/docs/content.zh/docs/dev/configuration/gradle.md
new file mode 100644
index 0000000..a745506
--- /dev/null
+++ b/docs/content.zh/docs/dev/configuration/gradle.md
@@ -0,0 +1,92 @@
+---
+title: "使用 Gradle"
+weight: 3
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 如何使用 Gradle 配置您的项目
+
+您可能需要一个构建工具来配置您的 Flink 项目,本指南将向您展示如何使用 [Gradle](https://gradle.org) 执行此操作。Gradle 是一个开源的通用构建工具,可用于在开发过程中自动化执行任务。
+
+## 要求
+
+- Gradle 7.x 
+- Java 11
+
+## 将项目导入 IDE
+
+创建[项目目录和文件]({{< ref "docs/dev/configuration/overview#getting-started" >}})后,我们建议您将此项目导入到 IDE 进行开发和测试。
+
+IntelliJ IDEA 通过 `Gradle` 插件支持 Gradle 项目。
+
+Eclipse 通过 [Eclipse Buildship](https://projects.eclipse.org/projects/tools.buildship) 插件执行此操作(确保在导入向导的最后一步中指定 Gradle 版本 >= 3.0,`shadow` 插件会用到它)。您还可以使用 [Gradle 的 IDE 集成](https://docs.gradle.org/current/userguide/userguide.html#ide-integration) 来使用 Gradle 创建项目文件。
+
+**注意:** Java 的默认 JVM 堆大小对于 Flink 来说可能太小,您应该手动增加它。在 Eclipse 中,选中 `Run Configurations -> Arguments` 并在 `VM Arguments` 框里填上:`-Xmx800m`。在 IntelliJ IDEA 中,推荐选中 `Help | Edit Custom VM Options` 菜单修改 JVM 属性。详情请查阅[本文](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties)。
+
+**关于 IntelliJ 的注意事项:** 要使应用程序在 IntelliJ IDEA 中运行,需要在运行配置中的 `Include dependencies with "Provided" scope` 打勾。如果此选项不可用(可能是由于使用了较旧的 IntelliJ IDEA 版本),可创建一个调用应用程序 `main()` 方法的测试用例。
+
+## 构建项目
+
+如果您想 __构建/打包__ 您的项目,请转到您的项目目录并运行 '`gradle clean shadowJar`' 命令。您将 __找到一个 JAR 文件__,其中包含您的应用程序,还有已作为依赖项添加到应用程序的连接器和库:`build/libs/<project-name>-<version>-all.jar`。
+
+__注意:__ 如果您使用不同于 *StreamingJob* 的类作为应用程序的主类/入口点,我们建议您对 `build.gradle` 文件里的 `mainClassName` 配置进行相应的修改。这样,Flink 可以通过 JAR 文件运行应用程序,而无需额外指定主类。
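+
+例如,假设您的入口类为 `org.example.MyJob`(类名仅为示意),可在 `build.gradle` 中这样设置:
+
+```gradle
+// 指定作业的主类/入口点
+mainClassName = 'org.example.MyJob'
+```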
+
+## 向项目添加依赖项
+
+在 `build.gradle` 文件的 dependencies 块中配置依赖项。
+
+例如,如果您使用我们的 Gradle 构建脚本或快速启动脚本创建了项目,可以像下面这样将 Kafka 连接器添加为依赖项:
+
+**build.gradle**
+
+```gradle
+...
+dependencies {
+    ...  
+    flinkShadowJar "org.apache.flink:flink-connector-kafka:${flinkVersion}"
+    ...
+}
+...
+```
+
+**重要提示:** 请注意,应将所有这些(核心)依赖项的生效范围置为 [*provided*](https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#dependency-scope)。这意味着需要对它们进行编译,但不应将它们打包进项目生成的应用程序 JAR 文件中。如果不设置为 *provided*,最好的情况是生成的 JAR 变得过大,因为它还包含所有 Flink 核心依赖项。最坏的情况是添加到应用程序 JAR 文件中的 Flink 核心依赖项与您自己的一些依赖项的版本冲突(通常通过反向类加载来避免)。
+
+要将依赖项正确地打包进应用程序 JAR 中,必须把应用程序依赖项的生效范围设置为 *compile*。
+
+## 打包应用程序
+
+在部署应用到 Flink 环境之前,您需要根据使用场景用不同的方式打包 Flink 应用程序。
+
+如果您想为 Flink 作业创建 JAR 并且只使用 Flink 依赖而不使用任何第三方依赖(比如使用 JSON 格式的文件系统连接器),您不需要创建一个 uber/fat JAR 或将任何依赖打进包。
+
+您可以使用 `gradle clean installDist` 命令,如果您使用的是 [Gradle Wrapper](https://docs.gradle.org/current/userguide/gradle_wrapper.html) ,则用 `./gradlew clean installDist`。
+
+如果您想为 Flink 作业创建 JAR 并使用未内置在 Flink 发行版中的外部依赖项,您可以将它们添加到发行版的类路径中,或者将它们打包进您的 uber/fat 应用程序 JAR 中。
+
+您可以使用 `gradle clean installShadowDist` 命令,它将在 `/build/install/yourProject/lib` 目录下生成一个 fat JAR。如果您使用的是 [Gradle Wrapper](https://docs.gradle.org/current/userguide/gradle_wrapper.html),则用 `./gradlew clean installShadowDist`。
+
+您可以将生成的 uber/fat JAR 提交到本地或远程集群:
+
+```sh
+bin/flink run -c org.example.MyJob myFatJar.jar
+```
+
+要了解有关如何部署 Flink 作业的更多信息,请查看[部署指南]({{< ref "docs/deployment/cli" >}})。
diff --git a/docs/content.zh/docs/dev/configuration/maven.md b/docs/content.zh/docs/dev/configuration/maven.md
new file mode 100644
index 0000000..bc1a715
--- /dev/null
+++ b/docs/content.zh/docs/dev/configuration/maven.md
@@ -0,0 +1,142 @@
+---
+title: "使用 Maven"
+weight: 2
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 如何使用 Maven 配置您的项目
+
+本指南将向您展示如何使用 [Maven](https://maven.apache.org) 配置 Flink 作业项目。Maven 是由 Apache Software Foundation 开源的自动化构建工具,可用于构建、发布和部署项目,您可以使用它来管理软件项目的整个生命周期。
+
+## 要求
+
+- Maven 3.0.4(或更高版本)
+- Java 11
+
+## 将项目导入 IDE
+
+创建[项目目录和文件]({{< ref "docs/dev/configuration/overview#getting-started" >}})后,我们建议您将此项目导入到 IDE 进行开发和测试。
+
+IntelliJ IDEA 支持开箱即用的 Maven 项目。Eclipse 提供了 [m2e 插件](http://www.eclipse.org/m2e/) 来[导入 Maven 项目](http://books.sonatype.com/m2eclipse-book/reference/creating-sect-importing-projects.html#fig-creating-import)。
+
+**注意:** Java 的默认 JVM 堆大小对于 Flink 来说可能太小,您应该手动增加它。在 Eclipse 中,选中 `Run Configurations -> Arguments` 并在 `VM Arguments` 框里填上:`-Xmx800m`。在 IntelliJ IDEA 中,推荐选中 `Help | Edit Custom VM Options` 菜单修改 JVM 属性。详情请查阅[本文](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties)。
+
+**关于 IntelliJ 的注意事项:** 要使应用程序在 IntelliJ IDEA 中运行,需要在运行配置中的 `Include dependencies with "Provided" scope` 打勾。如果此选项不可用(可能是由于使用了较旧的 IntelliJ IDEA 版本),可创建一个调用应用程序 `main()` 方法的测试用例。
+
+## 构建项目
+
+如果您想 __构建/打包__ 您的项目,请转到您的项目目录并运行 '`mvn clean package`' 命令。您将 __找到一个 JAR 文件__,其中包含您的应用程序(还有已作为依赖项添加到应用程序的连接器和库):`target/<artifact-id>-<version>.jar`。
+
+__注意:__ 如果您使用不同于 `DataStreamJob` 的类作为应用程序的主类/入口点,我们建议您对 `pom.xml` 文件里的 `mainClass` 配置进行相应的修改。这样,Flink 可以通过 JAR 文件运行应用程序,而无需额外指定主类。
+
+## 向项目添加依赖项
+
+打开您项目目录的 `pom.xml`,在 `dependencies` 标签内添加依赖项。
+
+例如,您可以用如下方式添加 Kafka 连接器依赖:
+
+```xml
+<dependencies>
+    
+    <dependency>
+        <groupId>org.apache.flink</groupId>
+        <artifactId>flink-connector-kafka</artifactId>
+        <version>{{< version >}}</version>
+    </dependency>
+    
+</dependencies>
+```
+
+然后在命令行执行 `mvn install`。
+
+如果您的项目是基于 `Java Project Template`、`Scala Project Template` 或 Gradle 创建的,运行 `mvn clean package` 时会自动将应用程序依赖项打包进应用程序 JAR。对于不是通过这些模板创建的项目,我们建议使用 Maven Shade 插件将所有必需的依赖项打包进应用程序 JAR。
+
+**重要提示:** 请注意,应将所有这些(核心)依赖项的生效范围置为 [*provided*](https://maven.apache.org/guides/introduction/introduction-to-dependency-mechanism.html#dependency-scope)。这意味着需要对它们进行编译,但不应将它们打包进项目生成的应用程序 JAR 文件中。如果不设置为 *provided*,最好的情况是生成的 JAR 变得过大,因为它还包含所有 Flink 核心依赖项。最坏的情况是添加到应用程序 JAR 文件中的 Flink 核心依赖项与您自己的一些依赖项的版本冲突(通常通过反向类加载来避免)。
+
+要将依赖项正确地打包进应用程序 JAR 中,必须把应用程序依赖项的生效范围设置为 *compile*。
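+
+例如(仅作示意,核心依赖设为 *provided*,应用依赖保持默认的 *compile*):
+
+```xml
+<!-- Flink 核心依赖:provided,不会被打进应用程序 JAR -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java</artifactId>
+  <version>{{< version >}}</version>
+  <scope>provided</scope>
+</dependency>
+
+<!-- 应用依赖(如连接器):默认 compile,会被打进 uber JAR -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-connector-kafka</artifactId>
+  <version>{{< version >}}</version>
+</dependency>
+```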
+
+## 打包应用程序
+
+在部署应用到 Flink 环境之前,您需要根据使用场景用不同的方式打包 Flink 应用程序。
+
+如果您想为 Flink 作业创建 JAR 并且只使用 Flink 依赖而不使用任何第三方依赖(比如使用 JSON 格式的文件系统连接器),您不需要创建一个 uber/fat JAR 或将任何依赖打进包。
+
+如果您想为 Flink 作业创建 JAR 并使用未内置在 Flink 发行版中的外部依赖项,您可以将它们添加到发行版的类路径中,或者将它们打包进您的 uber/fat 应用程序 JAR 中。
+
+您可以将生成的 uber/fat JAR 提交到本地或远程集群:
+
+```sh
+bin/flink run -c org.example.MyJob myFatJar.jar
+```
+
+要了解有关如何部署 Flink 作业的更多信息,请查看[部署指南]({{< ref "docs/deployment/cli" >}})。
+
+## 创建包含依赖项的 uber/fat JAR 的模板
+
+为构建一个包含所有必需的连接器、类库依赖项的应用程序 JAR,您可以使用如下 shade 插件定义:
+
+```xml
+<build>
+    <plugins>
+        <plugin>
+            <groupId>org.apache.maven.plugins</groupId>
+            <artifactId>maven-shade-plugin</artifactId>
+            <version>3.1.1</version>
+            <executions>
+                <execution>
+                    <phase>package</phase>
+                    <goals>
+                        <goal>shade</goal>
+                    </goals>
+                    <configuration>
+                        <artifactSet>
+                            <excludes>
+                                <exclude>com.google.code.findbugs:jsr305</exclude>
+                            </excludes>
+                        </artifactSet>
+                        <filters>
+                            <filter>
+                                <!-- Do not copy the signatures in the META-INF folder.
+                                Otherwise, this might cause SecurityExceptions when using the JAR. -->
+                                <artifact>*:*</artifact>
+                                <excludes>
+                                    <exclude>META-INF/*.SF</exclude>
+                                    <exclude>META-INF/*.DSA</exclude>
+                                    <exclude>META-INF/*.RSA</exclude>
+                                </excludes>
+                            </filter>
+                        </filters>
+                        <transformers>
+                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
+                                <!-- Replace this with the main class of your job -->
+                                <mainClass>my.programs.main.clazz</mainClass>
+                            </transformer>
+                            <transformer implementation="org.apache.maven.plugins.shade.resource.ServicesResourceTransformer"/>
+                        </transformers>
+                    </configuration>
+                </execution>
+            </executions>
+        </plugin>
+    </plugins>
+</build>
+```
+
+[Maven shade 插件](https://maven.apache.org/plugins/maven-shade-plugin/index.html) 默认会包含所有的生效范围是 "runtime" 或 "compile" 的依赖项。
diff --git a/docs/content.zh/docs/dev/configuration/overview.md b/docs/content.zh/docs/dev/configuration/overview.md
new file mode 100644
index 0000000..84f20ab
--- /dev/null
+++ b/docs/content.zh/docs/dev/configuration/overview.md
@@ -0,0 +1,206 @@
+---
+title: "概览"
+weight: 1
+type: docs
+aliases:
+- /dev/project-configuration.html
+- /start/dependencies.html
+- /getting-started/project-setup/dependencies.html
+- /quickstart/java_api_quickstart.html
+- /dev/projectsetup/java_api_quickstart.html
+- /dev/linking_with_flink.html
+- /dev/linking.html
+- /dev/projectsetup/dependencies.html
+- /dev/projectsetup/java_api_quickstart.html
+- /getting-started/project-setup/java_api_quickstart.html
+- /dev/getting-started/project-setup/scala_api_quickstart.html
+- /getting-started/project-setup/scala_api_quickstart.html
+- /quickstart/scala_api_quickstart.html
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 项目配置
+
+本节将向您展示如何通过流行的构建工具([Maven]({{< ref "docs/dev/configuration/maven" >}})、[Gradle]({{< ref "docs/dev/configuration/gradle" >}}))配置您的项目,如何添加必要的依赖项(比如[连接器和格式]({{< ref "docs/dev/configuration/connector" >}})),并涵盖一些[高级]({{< ref "docs/dev/configuration/advanced" >}})配置主题。
+
+每个 Flink 应用程序都依赖于一组 Flink 库。应用程序至少依赖于 Flink API,此外还依赖于某些连接器库(比如 Kafka、Cassandra),以及用户开发的自定义的数据处理逻辑所需要的第三方依赖项。
+
+## 开始
+
+要开始使用 Flink 应用程序,请使用以下命令、脚本和模板来创建 Flink 项目。
+
+{{< tabs "creating project" >}}
+{{< tab "Maven" >}}
+
+您可以使用如下的 Maven 命令或快速启动脚本,基于[原型](https://maven.apache.org/guides/introduction/introduction-to-archetypes.html)创建一个项目。
+
+### Maven 命令
+```bash
+$ mvn archetype:generate                \
+  -DarchetypeGroupId=org.apache.flink   \
+  -DarchetypeArtifactId=flink-quickstart-java \
+  -DarchetypeVersion={{< version >}}
+```
+这允许您命名新建的项目,而且会交互式地询问 groupId、artifactId、package 的名字。
+
+### 快速启动脚本
+```bash
+$ curl https://flink.apache.org/q/quickstart.sh | bash -s {{< version >}}
+```
+
+{{< /tab >}}
+{{< tab "Gradle" >}}
+您可以使用如下的 Gradle 构建脚本或快速启动脚本创建一个项目。
+
+### Gradle 构建脚本
+
+要使用这些构建配置脚本,请在脚本所在目录执行 `gradle` 命令。
+
+**build.gradle**
+
+```gradle
+plugins {
+    id 'java'
+    id 'application'
+    // shadow plugin to produce fat JARs
+    id 'com.github.johnrengelman.shadow' version '7.1.2'
+}
+// artifact properties
+group = 'org.quickstart'
+version = '0.1-SNAPSHOT'
+mainClassName = 'org.quickstart.StreamingJob'
+description = """Flink Quickstart Job"""
+ext {
+    javaVersion = '1.8'
+    flinkVersion = '{{< version >}}'
+    slf4jVersion = '1.7.32'
+    log4jVersion = '2.17.1'
+}
+sourceCompatibility = javaVersion
+targetCompatibility = javaVersion
+tasks.withType(JavaCompile) {
+	options.encoding = 'UTF-8'
+}
+applicationDefaultJvmArgs = ["-Dlog4j.configurationFile=log4j2.properties"]
+
+// declare where to find the dependencies of your project
+repositories {
+    mavenCentral()
+}
+// NOTE: We cannot use "compileOnly" or "shadow" configurations since then we could not run code
+// in the IDE or with "gradle run". We also cannot exclude transitive dependencies from the
+// shadowJar yet (see https://github.com/johnrengelman/shadow/issues/159).
+// -> Explicitly define the libraries we want to be included in the "flinkShadowJar" configuration!
+configurations {
+    flinkShadowJar // dependencies which go into the shadowJar
+    // always exclude these (also from transitive dependencies) since they are provided by Flink
+    flinkShadowJar.exclude group: 'org.apache.flink', module: 'force-shading'
+    flinkShadowJar.exclude group: 'com.google.code.findbugs', module: 'jsr305'
+    flinkShadowJar.exclude group: 'org.slf4j'
+    flinkShadowJar.exclude group: 'org.apache.logging.log4j'
+}
+// declare the dependencies for your production and test code
+dependencies {
+    // --------------------------------------------------------------
+    // Compile-time dependencies that should NOT be part of the
+    // shadow (uber) jar and are provided in the lib folder of Flink
+    // --------------------------------------------------------------
+    implementation "org.apache.flink:flink-streaming-java:${flinkVersion}"
+    implementation "org.apache.flink:flink-clients:${flinkVersion}"
+    // --------------------------------------------------------------
+    // Dependencies that should be part of the shadow jar, e.g.
+    // connectors. These must be in the flinkShadowJar configuration!
+    // --------------------------------------------------------------
+    //flinkShadowJar "org.apache.flink:flink-connector-kafka:${flinkVersion}"
+    runtimeOnly "org.apache.logging.log4j:log4j-api:${log4jVersion}"
+    runtimeOnly "org.apache.logging.log4j:log4j-core:${log4jVersion}"
+    runtimeOnly "org.apache.logging.log4j:log4j-slf4j-impl:${log4jVersion}"
+    runtimeOnly "org.slf4j:slf4j-log4j12:${slf4jVersion}"
+    // Add test dependencies here.
+    // testCompile "junit:junit:4.12"
+}
+// make compileOnly dependencies available for tests:
+sourceSets {
+    main.compileClasspath += configurations.flinkShadowJar
+    main.runtimeClasspath += configurations.flinkShadowJar
+    test.compileClasspath += configurations.flinkShadowJar
+    test.runtimeClasspath += configurations.flinkShadowJar
+    javadoc.classpath += configurations.flinkShadowJar
+}
+run.classpath = sourceSets.main.runtimeClasspath
+
+shadowJar {
+    configurations = [project.configurations.flinkShadowJar]
+}
+```
+
+**settings.gradle**
+
+```gradle
+rootProject.name = 'quickstart'
+```
+
+### 快速启动脚本
+
+```bash
+bash -c "$(curl https://flink.apache.org/q/gradle-quickstart.sh)" -- {{< version >}} {{< scala_version >}}
+```
+{{< /tab >}}
+{{< /tabs >}}
+
+## 需要哪些依赖项?
+
+要开始一个 Flink 作业,您通常需要如下依赖项:
+
+* Flink API, 用来开发您的作业
+* [连接器和格式]({{< ref "docs/dev/configuration/connector" >}}), 以将您的作业与外部系统集成
+* [测试实用程序]({{< ref "docs/dev/configuration/testing" >}}), 以测试您的作业
+
+除此之外,若要开发自定义功能,您还要添加必要的第三方依赖项。
+
+### Flink API
+
+Flink 提供了两大 API:[DataStream API]({{< ref "docs/dev/datastream/overview" >}}) 和 [Table API & SQL]({{< ref "docs/dev/table/overview" >}}),它们可以单独使用,也可以混合使用,具体取决于您的使用场景:
+
+| 您要使用的 API                                                                      | 您需要添加的依赖项                                     |
+|-----------------------------------------------------------------------------------|-----------------------------------------------------|
+| [DataStream]({{< ref "docs/dev/datastream/overview" >}})                          | `flink-streaming-java`                              |  
+| [DataStream Scala 版]({{< ref "docs/dev/datastream/scala_api_extensions" >}})     | `flink-streaming-scala{{< scala_version >}}`        |   
+| [Table API]({{< ref "docs/dev/table/common" >}})                                  | `flink-table-api-java`                              |   
+| [Table API Scala 版]({{< ref "docs/dev/table/common" >}})                         | `flink-table-api-scala{{< scala_version >}}`        |
+| [Table API + DataStream]({{< ref "docs/dev/table/data_stream_api" >}})            | `flink-table-api-java-bridge`                       |
+| [Table API + DataStream Scala 版]({{< ref "docs/dev/table/data_stream_api" >}})   | `flink-table-api-scala-bridge{{< scala_version >}}` |
+
+您只需将它们包含在您的构建工具脚本/描述符中,就可以开发您的作业了!
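+
+例如,若使用 DataStream API,可以在 Maven 中这样声明依赖(仅作示意,Gradle 写法请参考相应指南):
+
+```xml
+<!-- 核心 API 依赖通常设为 provided,详见 Maven/Gradle 指南 -->
+<dependency>
+  <groupId>org.apache.flink</groupId>
+  <artifactId>flink-streaming-java</artifactId>
+  <version>{{< version >}}</version>
+  <scope>provided</scope>
+</dependency>
+```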
+
+## 运行和打包
+
+如果您想通过直接执行主类来运行您的作业,您需要在 classpath 里有 `flink-runtime`。对于 Table API 程序,您还需要 `flink-table-runtime` 和 `flink-table-planner-loader`。
+
+根据经验,我们**建议**将应用程序代码及其所有必需的依赖项打包进一个 fat/uber JAR 中。这包括打包您作业用到的连接器、格式和第三方依赖项。此规则**不适用于** Java API、DataStream Scala API 以及前面提到的运行时模块,它们已经由 Flink 本身提供,**不应**包含在作业的 uber JAR 中。您可以把该作业 JAR 提交到已经运行的 Flink 集群,也可以轻松将其添加到 Flink 应用程序容器镜像中,而无需修改发行版。
+
+## 下一步是什么?
+
+* 要开发您的作业,请查阅 [DataStream API]({{< ref "docs/dev/datastream/overview" >}}) 和 [Table API & SQL]({{< ref "docs/dev/table/overview" >}});
+* 关于如何使用特定的构建工具打包您的作业的更多细节,请查阅如下指南:
+  * [Maven]({{< ref "docs/dev/configuration/maven" >}})
+  * [Gradle]({{< ref "docs/dev/configuration/gradle" >}})
+* 关于项目配置的高级内容,请查阅[高级主题]({{< ref "docs/dev/configuration/advanced" >}})部分。
diff --git a/docs/content.zh/docs/dev/configuration/testing.md b/docs/content.zh/docs/dev/configuration/testing.md
new file mode 100644
index 0000000..33e0e9e
--- /dev/null
+++ b/docs/content.zh/docs/dev/configuration/testing.md
@@ -0,0 +1,49 @@
+---
+title: "测试的依赖项"
+weight: 6
+type: docs
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+# 用于测试的依赖项
+
+Flink 提供了用于测试作业的实用程序,您可以将其添加为依赖项。
+
+## DataStream API 测试
+
+如果要为使用 DataStream API 构建的作业开发测试用例,则需要添加以下依赖项:
+
+{{< artifact_tabs flink-test-utils withTestScope >}}
+
+在各种测试实用程序中,该模块提供了 `MiniCluster`(一个可配置的轻量级 Flink 集群,能在 JUnit 测试中运行),可以用来直接执行作业。
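+
+一个基于 JUnit 4 的最小示意(假设已按上文添加 `flink-test-utils` 测试依赖):
+
+```java
+import org.apache.flink.runtime.testutils.MiniClusterResourceConfiguration;
+import org.apache.flink.test.util.MiniClusterWithClientResource;
+import org.junit.ClassRule;
+
+public class MyJobITCase {
+
+    // 在同一个测试类的所有用例间共享一个本地 MiniCluster
+    @ClassRule
+    public static MiniClusterWithClientResource flinkCluster =
+        new MiniClusterWithClientResource(
+            new MiniClusterResourceConfiguration.Builder()
+                .setNumberSlotsPerTaskManager(2)
+                .setNumberTaskManagers(1)
+                .build());
+
+    // 测试方法中可直接构建 StreamExecutionEnvironment 并执行作业
+}
+```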
+
+有关如何使用这些实用程序的更多细节,请查看 [DataStream API 测试]({{< ref "docs/dev/datastream/testing" >}})。
+
+## Table API 测试
+
+如果您想在您的 IDE 中本地测试 Table API 和 SQL 程序,除了前述提到的 `flink-test-utils` 之外,您还要添加以下依赖项:
+
+{{< artifact_tabs flink-table-test-utils withTestScope >}}
+
+这将自动引入查询计划器和运行时,分别用于计划和执行查询。
+
+{{< hint info >}}
+`flink-table-test-utils` 模块已在 Flink 1.15 中引入,目前被认为是实验性的。
+{{< /hint >}}
diff --git a/docs/content.zh/docs/dev/datastream/_index.md b/docs/content.zh/docs/dev/datastream/_index.md
index 1a32813..5e035cc 100644
--- a/docs/content.zh/docs/dev/datastream/_index.md
+++ b/docs/content.zh/docs/dev/datastream/_index.md
@@ -1,7 +1,7 @@
 ---
 title: DataStream API
 bookCollapseSection: true
-weight: 1
+weight: 2
 ---
 <!--
 Licensed to the Apache Software Foundation (ASF) under one
@@ -20,4 +20,4 @@ software distributed under the License is distributed on an
 KIND, either express or implied.  See the License for the
 specific language governing permissions and limitations
 under the License.
--->
\ No newline at end of file
+-->
diff --git a/docs/content.zh/docs/dev/datastream/fault-tolerance/queryable_state.md b/docs/content.zh/docs/dev/datastream/fault-tolerance/queryable_state.md
index 982854e..3c2c7e5 100644
--- a/docs/content.zh/docs/dev/datastream/fault-tolerance/queryable_state.md
+++ b/docs/content.zh/docs/dev/datastream/fault-tolerance/queryable_state.md
@@ -143,7 +143,7 @@ descriptor.setQueryable("query-name"); // queryable state name
 </dependency>
 ```
 
-关于依赖的更多信息, 可以参考如何 [配置 Flink 项目]({{< ref "docs/dev/datastream/project-configuration" >}}).
+关于依赖的更多信息, 可以参考如何[配置 Flink 项目]({{< ref "docs/dev/configuration/overview" >}})。
 
 `QueryableStateClient` 将提交你的请求到内部代理,代理会处理请求并返回结果。客户端的初始化只需要提供一个有效的 `TaskManager` 主机名
 (每个 task manager 上都运行着一个 queryable state 代理),以及代理监听的端口号。关于如何配置代理以及端口号可以参考 [Configuration Section](#configuration).
diff --git a/docs/content.zh/docs/dev/datastream/project-configuration.md b/docs/content.zh/docs/dev/datastream/project-configuration.md
deleted file mode 100644
index 24c8bf8..0000000
--- a/docs/content.zh/docs/dev/datastream/project-configuration.md
+++ /dev/null
@@ -1,570 +0,0 @@
----
-title: "Project Configuration"
-weight: 302
-type: docs
-aliases:
-  - /zh/dev/project-configuration.html
-  - /zh/start/dependencies.html
-  - /zh/getting-started/project-setup/dependencies.html
-  - /zh/quickstart/java_api_quickstart.html
-  - /zh/dev/projectsetup/java_api_quickstart.html
-  - /zh/dev/linking_with_flink.html
-  - /zh/dev/linking.html
-  - /zh/dev/projectsetup/dependencies.html
-  - /zh/dev/projectsetup/java_api_quickstart.html
-  - /zh/getting-started/project-setup/java_api_quickstart.html
-  - /zh/dev/projectsetup/scala_api_quickstart.html
-  - /zh/getting-started/project-setup/scala_api_quickstart.html
-  - /zh/quickstart/scala_api_quickstart.html
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-# Project Configuration
-
-Every Flink application depends on a set of Flink libraries. At the bare minimum, the application depends
-on the Flink APIs. Many applications depend in addition on certain connector libraries (like Kafka, Cassandra, etc.).
-When running Flink applications (either in a distributed deployment, or in the IDE for testing), the Flink
-runtime library must be available as well.
-
-## Flink Core and Application Dependencies
-
-As with most systems that run user-defined applications, there are two broad categories of dependencies and libraries in Flink:
-
-  - **Flink Core Dependencies**: Flink itself consists of a set of classes and dependencies that are needed to run the system, for example
-    coordination, networking, checkpoints, failover, APIs, operations (such as windowing), resource management, etc.
-    The set of all these classes and dependencies forms the core of Flink's runtime and must be present when a Flink
-    application is started.
-
-    These core classes and dependencies are packaged in the `flink-dist` jar. They are part of Flink's `lib` folder and
-    part of the basic Flink container images. Think of these dependencies as similar to Java's core library (`rt.jar`, `charsets.jar`, etc.),
-    which contains the classes like `String` and `List`.
-
-    The Flink Core Dependencies do not contain any connectors or libraries (CEP, SQL, ML, etc.) in order to avoid having an excessive
-    number of dependencies and classes in the classpath by default. In fact, we try to keep the core dependencies as slim as possible
-    to keep the default classpath small and avoid dependency clashes.
-
-  - The **User Application Dependencies** are all connectors, formats, or libraries that a specific user application needs.
-
-    The user application is typically packaged into an *application jar*, which contains the application code and the required
-    connector and library dependencies.
-
-    The user application dependencies explicitly do not include the Flink DataStream APIs and runtime dependencies,
-    because those are already part of Flink's Core Dependencies.
-
-
-## Setting up a Project: Basic Dependencies
-
-Every Flink application needs as the bare minimum the API dependencies, to develop against.
-
-When setting up a project manually, you need to add the following dependencies for the Java/Scala API
-(here presented in Maven syntax, but the same dependencies apply to other build tools (Gradle, SBT, etc.) as well.
-
-{{< tabs "a49d57a4-27ee-4dd3-a2b8-a673b99b011e" >}}
-{{< tab "Java" >}}
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-java</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-```
-{{< /tab >}}
-{{< tab "Scala" >}}
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-scala{{< scala_version >}}</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-```
-{{< /tab >}}
-{{< /tabs >}}
-
-**Important:** Please note that all these dependencies have their scope set to *provided*.
-That means that they are needed to compile against, but that they should not be packaged into the
-project's resulting application jar file - these dependencies are Flink Core Dependencies,
-which are already available in any setup.
-
-It is highly recommended keeping the dependencies in scope *provided*. If they are not set to *provided*,
-the best case is that the resulting JAR becomes excessively large, because it also contains all Flink core
-dependencies. The worst case is that the Flink core dependencies that are added to the application's jar file
-clash with some of your own dependency versions (which is normally avoided through inverted classloading).
-
-**Note on IntelliJ:** To make the applications run within IntelliJ IDEA it is necessary to tick the
-`Include dependencies with "Provided" scope` box in the run configuration.
-If this option is not available (possibly due to using an older IntelliJ IDEA version), then a simple workaround
-is to create a test that calls the applications `main()` method.
-
-
-## Adding Connector and Library Dependencies
-
-Most applications need specific connectors or libraries to run, for example a connector to Kafka, Cassandra, etc.
-These connectors are not part of Flink's core dependencies and must be added as dependencies to the application.
-
-Below is an example adding the connector for Kafka as a dependency (Maven syntax):
-```xml
-<dependency>
-    <groupId>org.apache.flink</groupId>
-    <artifactId>flink-connector-kafka</artifactId>
-    <version>{{< version >}}</version>
-</dependency>
-```
-
-We recommend packaging the application code and all its required dependencies into one *jar-with-dependencies* which
-we refer to as the *application jar*. The application jar can be submitted to an already running Flink cluster,
-or added to a Flink application container image.
-
-Projects created from the [Java Project Template]({{< ref "docs/dev/datastream/project-configuration" >}}) or
-[Scala Project Template]({{< ref "docs/dev/datastream/project-configuration" >}}) are configured to automatically include
-the application dependencies into the application jar when running `mvn clean package`. For projects that are
-not set up from those templates, we recommend adding the Maven Shade Plugin (as listed in the Appendix below)
-to build the application jar with all required dependencies.
-
-**Important:** For Maven (and other build tools) to correctly package the dependencies into the application jar,
-these application dependencies must be specified in scope *compile* (unlike the core dependencies, which
-must be specified in scope *provided*).
-
-
-## Scala Versions
-
-Scala versions (2.11, 2.12, etc.) are not binary compatible with one another.
-For that reason, Flink for Scala 2.11 cannot be used with an application that uses
-Scala 2.12.
-
-All Flink dependencies that (transitively) depend on Scala are suffixed with the
-Scala version that they are built for, for example `flink-streaming-scala_2.12`.
-
-Developers that only use Java can pick any Scala version, Scala developers need to
-pick the Scala version that matches their application's Scala version.
-
-Please refer to the [build guide]({{< ref "docs/flinkDev/building" >}}#scala-versions)
-for details on how to build Flink for a specific Scala version.
-
-Scala versions after 2.12.8 are not binary compatible with previous 2.12.x
-versions, preventing the Flink project from upgrading its 2.12.x builds beyond
-2.12.8.  Users can build Flink locally for latter Scala versions by following
-the above mentioned [build guide]({{< ref "docs/flinkDev/building" >}}#scala-versions).
-For this to work, users need to add `-Djapicmp.skip` to
-skip binary compatibility checks when building.
-
-See the [Scala 2.12.8 release notes](https://github.com/scala/scala/releases/tag/v2.12.8) for more details,
-the relevant quote is this:
-
-> The second fix is not binary compatible: the 2.12.8 compiler omits certain
-> methods that are generated by earlier 2.12 compilers. However, we believe
-> that these methods are never used and existing compiled code will continue to
-> work.  See the [pull request
-> description](https://github.com/scala/scala/pull/7469) for more details.
-
-## Hadoop Dependencies
-
-**General rule: It should never be necessary to add Hadoop dependencies directly to your application.**
-*(The only exception being when using existing Hadoop input-/output formats with Flink's Hadoop compatibility wrappers)*
-
-If you want to use Flink with Hadoop, you need to have a Flink setup that includes the Hadoop dependencies, rather than
-adding Hadoop as an application dependency. Flink will use the Hadoop dependencies specified by the `HADOOP_CLASSPATH`
-environment variable, which can be set in the following way:
-
-```bash
-export HADOOP_CLASSPATH=`hadoop classpath`
-```
-
-There are two main reasons for that design:
-
-  - Some Hadoop interaction happens in Flink's core, possibly before the user application is started, for example
-    setting up HDFS for checkpoints, authenticating via Hadoop's Kerberos tokens, or deployment on YARN.
-
-  - Flink's inverted classloading approach hides many transitive dependencies from the core dependencies. That applies not only
-    to Flink's own core dependencies, but also to Hadoop's dependencies when present in the setup.
-    That way, applications can use different versions of the same dependencies without running into dependency conflicts (and
-    trust us, that's a big deal, because Hadoops dependency tree is huge.)
-
-If you need Hadoop dependencies during testing or development inside the IDE (for example for HDFS access), please configure
-these dependencies similar to the scope of the dependencies to *test* or to *provided*.
-
-## Maven Quickstart
-
-#### Requirements
-
-The only requirements are working __Maven 3.0.4__ (or higher) and __Java 11__ installations.
-
-#### Create Project
-
-Use one of the following commands to __create a project__:
-
-{{< tabs "maven" >}}
-{{< tab "Maven Archetypes" >}}
-```bash
-$ mvn archetype:generate                \
-  -DarchetypeGroupId=org.apache.flink   \
-  -DarchetypeArtifactId=flink-quickstart-java \
-  -DarchetypeVersion={{< version >}}
-```
-This allows you to **name your newly created project**. 
-It will interactively ask you for the groupId, artifactId, and package name.
-{{< /tab >}}
-{{< tab "Quickstart Script" >}}
-{{< stable >}}
-```bash
-$ curl https://flink.apache.org/q/quickstart.sh | bash -s {{< version >}}
-```
-{{< /stable >}}
-{{< unstable >}}
-```bash
-$ curl https://flink.apache.org/q/quickstart-SNAPSHOT.sh | bash -s {{< version >}}
-
-```
-{{< /unstable >}}
-{{< /tab >}}
-{{< /tabs >}}
-
-{{< unstable >}}
-{{< hint info >}}
-For Maven 3.0 or higher, it is no longer possible to specify the repository (-DarchetypeCatalog) via the command line. For details about this change, please refer to <a href="http://maven.apache.org/archetype/maven-archetype-plugin/archetype-repository.html">Maven official document</a>
-If you wish to use the snapshot repository, you need to add a repository entry to your settings.xml. For example:
-
-```xml
-<settings>
-  <activeProfiles>
-    <activeProfile>apache</activeProfile>
-  </activeProfiles>
-  <profiles>
-    <profile>
-      <id>apache</id>
-      <repositories>
-        <repository>
-          <id>apache-snapshots</id>
-          <url>https://repository.apache.org/content/repositories/snapshots/</url>
-        </repository>
-      </repositories>
-    </profile>
-  </profiles>
-</settings>
-```
-
-{{< /hint >}}
-{{< /unstable >}}
-
-We recommend you __import this project into your IDE__ to develop and
-test it. IntelliJ IDEA supports Maven projects out of the box.
-If you use Eclipse, the [m2e plugin](http://www.eclipse.org/m2e/)
-allows to [import Maven projects](http://books.sonatype.com/m2eclipse-book/reference/creating-sect-importing-projects.html#fig-creating-import).
-Some Eclipse bundles include that plugin by default, others require you
-to install it manually. 
-
-*Please note*: The default JVM heapsize for Java may be too
-small for Flink. You have to manually increase it.
-In Eclipse, choose `Run Configurations -> Arguments` and write into the `VM Arguments` box: `-Xmx800m`.
-In IntelliJ IDEA recommended way to change JVM options is from the `Help | Edit Custom VM Options` menu. See [this article](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties) for details. 
-
-
-#### Build Project
-
-If you want to __build/package your project__, go to your project directory and
-run the '`mvn clean package`' command.
-You will __find a JAR file__ that contains your application, plus connectors and libraries
-that you may have added as dependencies to the application: `target/<artifact-id>-<version>.jar`.
-
-__Note:__ If you use a different class than *StreamingJob* as the application's main class / entry point,
-we recommend you change the `mainClass` setting in the `pom.xml` file accordingly. That way, Flink
-can run the application from the JAR file without additionally specifying the main class.
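-
-As a rough sketch (mirroring the shade-plugin template in the appendix below; the class name is just a placeholder), the entry point is declared in `pom.xml` like this:
-
-```xml
-<transformers>
-    <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
-        <!-- placeholder: replace with your application's main class -->
-        <mainClass>org.myorg.quickstart.StreamingJob</mainClass>
-    </transformer>
-</transformers>
-```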
-
-## Gradle
-
-#### Requirements
-
-The only requirements are working __Gradle 3.x__ (or higher) and __Java 11__ installations.
-
-#### Create Project
-
-Use one of the following commands to __create a project__:
-
-{{< tabs gradle >}}
-{{< tab "Gradle Example" >}}
-**build.gradle**
-
-```gradle
-buildscript {
-    repositories {
-        jcenter() // this applies only to the Gradle 'Shadow' plugin
-    }
-    dependencies {
-        classpath 'com.github.jengelman.gradle.plugins:shadow:2.0.4'
-    }
-}
-
-plugins {
-    id 'java'
-    id 'application'
-    // shadow plugin to produce fat JARs
-    id 'com.github.johnrengelman.shadow' version '2.0.4'
-}
-
-
-// artifact properties
-group = 'org.myorg.quickstart'
-version = '0.1-SNAPSHOT'
-mainClassName = 'org.myorg.quickstart.StreamingJob'
-description = """Flink Quickstart Job"""
-
-ext {
-    javaVersion = '1.8'
-    flinkVersion = '1.13-SNAPSHOT'
-    scalaBinaryVersion = '2.11'
-    slf4jVersion = '1.7.32'
-    log4jVersion = '2.17.1'
-}
-
-
-sourceCompatibility = javaVersion
-targetCompatibility = javaVersion
-tasks.withType(JavaCompile) {
-	options.encoding = 'UTF-8'
-}
-
-applicationDefaultJvmArgs = ["-Dlog4j.configurationFile=log4j2.properties"]
-
-task wrapper(type: Wrapper) {
-    gradleVersion = '3.1'
-}
-
-// declare where to find the dependencies of your project
-repositories {
-    mavenCentral()
-    maven { url "https://repository.apache.org/content/repositories/snapshots/" }
-}
-
-// NOTE: We cannot use "compileOnly" or "shadow" configurations since then we could not run code
-// in the IDE or with "gradle run". We also cannot exclude transitive dependencies from the
-// shadowJar yet (see https://github.com/johnrengelman/shadow/issues/159).
-// -> Explicitly define the libraries we want to be included in the "flinkShadowJar" configuration!
-configurations {
-    flinkShadowJar // dependencies which go into the shadowJar
-
-    // always exclude these (also from transitive dependencies) since they are provided by Flink
-    flinkShadowJar.exclude group: 'org.apache.flink', module: 'force-shading'
-    flinkShadowJar.exclude group: 'com.google.code.findbugs', module: 'jsr305'
-    flinkShadowJar.exclude group: 'org.slf4j'
-    flinkShadowJar.exclude group: 'org.apache.logging.log4j'
-}
-
-// declare the dependencies for your production and test code
-dependencies {
-    // --------------------------------------------------------------
-    // Compile-time dependencies that should NOT be part of the
-    // shadow jar and are provided in the lib folder of Flink
-    // --------------------------------------------------------------
-    compile "org.apache.flink:flink-streaming-java:${flinkVersion}"
-
-    // --------------------------------------------------------------
-    // Dependencies that should be part of the shadow jar, e.g.
-    // connectors. These must be in the flinkShadowJar configuration!
-    // --------------------------------------------------------------
-    //flinkShadowJar "org.apache.flink:flink-connector-kafka:${flinkVersion}"
-
-    compile "org.apache.logging.log4j:log4j-api:${log4jVersion}"
-    compile "org.apache.logging.log4j:log4j-core:${log4jVersion}"
-    compile "org.apache.logging.log4j:log4j-slf4j-impl:${log4jVersion}"
-    compile "org.slf4j:slf4j-log4j12:${slf4jVersion}"
-
-    // Add test dependencies here.
-    // testCompile "junit:junit:4.12"
-}
-
-// make flinkShadowJar dependencies available on the main, test, and javadoc classpaths:
-sourceSets {
-    main.compileClasspath += configurations.flinkShadowJar
-    main.runtimeClasspath += configurations.flinkShadowJar
-
-    test.compileClasspath += configurations.flinkShadowJar
-    test.runtimeClasspath += configurations.flinkShadowJar
-
-    javadoc.classpath += configurations.flinkShadowJar
-}
-
-run.classpath = sourceSets.main.runtimeClasspath
-
-jar {
-    manifest {
-        attributes 'Built-By': System.getProperty('user.name'),
-                'Build-Jdk': System.getProperty('java.version')
-    }
-}
-
-shadowJar {
-    configurations = [project.configurations.flinkShadowJar]
-}
-```
-
-**settings.gradle**
-```gradle
-rootProject.name = 'quickstart'
-```
-{{< /tab >}}
-{{< tab "Quickstart Script">}}
-```bash
-bash -c "$(curl https://flink.apache.org/q/gradle-quickstart.sh)" -- {{< version >}} {{< scala_version >}}
-```
-{{< /tab >}}
-{{< /tabs >}}
-
-We recommend you __import this project into your IDE__ to develop and
-test it. IntelliJ IDEA supports Gradle projects after installing the `Gradle` plugin.
-Eclipse does so via the [Eclipse Buildship](https://projects.eclipse.org/projects/tools.buildship) plugin
-(make sure to specify a Gradle version >= 3.0 in the last step of the import wizard; the `shadow` plugin requires it).
-You may also use [Gradle's IDE integration](https://docs.gradle.org/current/userguide/userguide.html#ide-integration)
-to create project files from Gradle.
-
-
-*Please note*: The default JVM heap size for Java may be too
-small for Flink. You have to increase it manually.
-In Eclipse, choose `Run Configurations -> Arguments` and write into the `VM Arguments` box: `-Xmx800m`.
-In IntelliJ IDEA, the recommended way to change JVM options is via the `Help | Edit Custom VM Options` menu. See [this article](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties) for details.
-
-#### Build Project
-
-If you want to __build/package your project__, go to your project directory and
-run the '`gradle clean shadowJar`' command.
-You will __find a JAR file__ that contains your application, plus connectors and libraries
-that you may have added as dependencies to the application: `build/libs/<project-name>-<version>-all.jar`.
-
-__Note:__ If you use a different class than *DataStreamJob* as the application's main class / entry point,
-we recommend you change the `mainClassName` setting in the `build.gradle` file accordingly. That way, Flink
-can run the application from the JAR file without additionally specifying the main class.
-
-## SBT
-
-#### Create Project
-
-You can scaffold a new project via either of the following two methods:
-
-{{< tabs sbt >}}
-{{< tab "SBT Template" >}}
-```bash
-$ sbt new tillrohrmann/flink-project.g8
-```
-{{< /tab >}}
-{{< tab "Quickstart Script" >}}
-```bash
-$ bash <(curl https://flink.apache.org/q/sbt-quickstart.sh)
-```
-{{< /tab >}}
-{{< /tabs >}}
-
-#### Build Project
-
-In order to build your project, you simply have to issue the `sbt clean assembly` command.
-This will create the fat JAR __your-project-name-assembly-0.1-SNAPSHOT.jar__ in the directory __target/scala_your-major-scala-version/__.
-
-#### Run Project
-
-In order to run your project, you have to issue the `sbt run` command.
-
-By default, this will run your job in the same JVM that `sbt` is running in.
-In order to run your job in a distinct JVM, add the following line to `build.sbt`:
-
-```scala
-fork in run := true
-```
-
-#### IntelliJ
-
-We recommend using [IntelliJ](https://www.jetbrains.com/idea/) for your Flink job development.
-In order to get started, you have to import your newly created project into IntelliJ.
-You can do this via `File -> New -> Project from Existing Sources...`, then choose your project's directory.
-IntelliJ will then automatically detect the `build.sbt` file and set everything up.
-
-In order to run your Flink job, it is recommended to choose the `mainRunner` module as the classpath of your __Run/Debug Configuration__.
-This will ensure that all dependencies which are set to _provided_ will be available upon execution.
-You can configure the __Run/Debug Configurations__ via `Run -> Edit Configurations...` and then choose `mainRunner` from the _Use classpath of module_ dropdown.
-
-#### Eclipse
-
-In order to import the newly created project into [Eclipse](https://eclipse.org/), you first have to create Eclipse project files for it.
-These project files can be created via the [sbteclipse](https://github.com/typesafehub/sbteclipse) plugin.
-Add the following line to your `PROJECT_DIR/project/plugins.sbt` file:
-
-```bash
-addSbtPlugin("com.typesafe.sbteclipse" % "sbteclipse-plugin" % "4.0.0")
-```
-
-In `sbt`, use the following command to create the Eclipse project files:
-
-```bash
-> eclipse
-```
-
-Now you can import the project into Eclipse via `File -> Import... -> Existing Projects into Workspace` and then select the project directory.
-
-
-## Appendix: Template for Building a JAR with Dependencies
-
-To build an application JAR that contains all dependencies required for declared connectors and libraries,
-you can use the following shade plugin definition:
-
-```xml
-<build>
-    <plugins>
-        <plugin>
-            <groupId>org.apache.maven.plugins</groupId>
-            <artifactId>maven-shade-plugin</artifactId>
-            <version>3.1.1</version>
-            <executions>
-                <execution>
-                    <phase>package</phase>
-                    <goals>
-                        <goal>shade</goal>
-                    </goals>
-                    <configuration>
-                        <artifactSet>
-                            <excludes>
-                                <exclude>com.google.code.findbugs:jsr305</exclude>
-                                <exclude>org.slf4j:*</exclude>
-                                <exclude>log4j:*</exclude>
-                            </excludes>
-                        </artifactSet>
-                        <filters>
-                            <filter>
-                                <!-- Do not copy the signatures in the META-INF folder.
-                                Otherwise, this might cause SecurityExceptions when using the JAR. -->
-                                <artifact>*:*</artifact>
-                                <excludes>
-                                    <exclude>META-INF/*.SF</exclude>
-                                    <exclude>META-INF/*.DSA</exclude>
-                                    <exclude>META-INF/*.RSA</exclude>
-                                </excludes>
-                            </filter>
-                        </filters>
-                        <transformers>
-                            <transformer implementation="org.apache.maven.plugins.shade.resource.ManifestResourceTransformer">
-                                <mainClass>my.programs.main.clazz</mainClass>
-                            </transformer>
-                        </transformers>
-                    </configuration>
-                </execution>
-            </executions>
-        </plugin>
-    </plugins>
-</build>
-```
-
-{{< top >}}
diff --git a/docs/content.zh/docs/dev/datastream/testing.md b/docs/content.zh/docs/dev/datastream/testing.md
index c8e7093..5fdd52c 100644
--- a/docs/content.zh/docs/dev/datastream/testing.md
+++ b/docs/content.zh/docs/dev/datastream/testing.md
@@ -151,11 +151,7 @@ class IncrementFlatMapFunctionTest extends FlatSpec with MockFactory {
 * `TwoInputStreamOperatorTestHarness` (适用于两个 `DataStream` 的 `ConnectedStreams` 算子)
 * `KeyedTwoInputStreamOperatorTestHarness` (适用于两个 `KeyedStream` 上的 `ConnectedStreams` 算子)
 
-要使用测试工具,还需要一组其他的依赖项(测试范围)。
-
-{{< artifact flink-test-utils withTestScope >}}
-{{< artifact flink-runtime withTestScope >}}
-{{< artifact flink-streaming-java withTestScope withTestClassifier >}}
+要使用测试工具,还需要一组其他的依赖项,请查阅[配置]({{< ref "docs/dev/configuration/testing" >}})小节了解更多细节。
 
 现在,可以使用测试工具将记录和 watermark 推送到用户自定义函数或自定义算子中,控制处理时间,最后对算子的输出(包括旁路输出)进行校验。
 
diff --git a/docs/content.zh/docs/dev/table/data_stream_api.md b/docs/content.zh/docs/dev/table/data_stream_api.md
index ff12448..3c91213 100644
--- a/docs/content.zh/docs/dev/table/data_stream_api.md
+++ b/docs/content.zh/docs/dev/table/data_stream_api.md
@@ -467,6 +467,8 @@ import org.apache.flink.table.api.bridge.scala._
 {{< /tab >}}
 {{< /tabs >}}
 
+请查阅[配置]({{< ref "docs/dev/configuration/overview" >}})小节了解更多细节。
+
 ### Configuration
 
 The `TableEnvironment` will adopt all configuration options from the passed `StreamExecutionEnvironment`.
diff --git a/docs/content.zh/docs/dev/table/overview.md b/docs/content.zh/docs/dev/table/overview.md
index b3038cb..3365f0d 100644
--- a/docs/content.zh/docs/dev/table/overview.md
+++ b/docs/content.zh/docs/dev/table/overview.md
@@ -33,74 +33,11 @@ Table API 和 SQL 两种 API 是紧密集成的,以及 DataStream API。你可
 
 ## Table 程序依赖
 
-取决于你使用的编程语言,选择 Java 或者 Scala API 来构建你的 Table API 和 SQL 程序:
+您需要将 Table API 作为依赖项添加到项目中,以便用 Table API 和 SQL 定义数据管道。
 
-{{< tabs "94f8aceb-507f-4c8f-977e-df00fe903203" >}}
-{{< tab "Java" >}}
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-api-java-bridge{{< scala_version >}}</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-```
-{{< /tab >}}
-{{< tab "Scala" >}}
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-api-scala-bridge{{< scala_version >}}</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-```
-{{< /tab >}}
-{{< tab "Python" >}}
-{{< stable >}}
-```bash
-$ python -m pip install apache-flink {{< version >}}
-```
-{{< /stable >}}
-{{< unstable >}}
-```bash
-$ python -m pip install apache-flink
-```
-{{< /unstable >}}
-{{< /tab >}}
-{{< /tabs >}}
+有关如何为 Java 和 Scala 配置这些依赖项的更多细节,请查阅[项目配置]({{< ref "docs/dev/configuration/overview" >}})小节。
 
-除此之外,如果你想在 IDE 本地运行你的程序,你需要添加下面的模块,具体用哪个取决于你使用哪个 Planner:
-
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-planner{{< scala_version >}}</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-streaming-scala{{< scala_version >}}</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-```
-
-### 扩展依赖
-
-如果你想实现[自定义格式或连接器]({{< ref "docs/dev/table/sourcesSinks" >}}) 用于(反)序列化行或一组[用户定义的函数]({{< ref "docs/dev/table/functions/udfs" >}}),下面的依赖就足够了,编译出来的 jar 文件可以直接给 SQL Client 使用:
-
-```xml
-<dependency>
-  <groupId>org.apache.flink</groupId>
-  <artifactId>flink-table-common</artifactId>
-  <version>{{< version >}}</version>
-  <scope>provided</scope>
-</dependency>
-```
-
-{{< top >}}
+如果您使用 Python,请查阅 [Python API]({{< ref "docs/dev/python/overview" >}}) 文档。
 
 接下来?
 -----------------
diff --git a/docs/content.zh/docs/dev/table/sourcesSinks.md b/docs/content.zh/docs/dev/table/sourcesSinks.md
index 4f1c508..d1940ba 100644
--- a/docs/content.zh/docs/dev/table/sourcesSinks.md
+++ b/docs/content.zh/docs/dev/table/sourcesSinks.md
@@ -106,6 +106,33 @@ that the planner can handle.
 
 {{< top >}}
 
+
+Project Configuration
+---------------------
+
+If you want to implement a custom connector or a custom format, the following dependency is usually
+sufficient:
+
+{{< artifact_tabs flink-table-common withProvidedScope >}}
+
+If you want to develop a connector that needs to bridge with DataStream APIs (i.e. if you want to adapt
+a DataStream connector to the Table API), you need to add this dependency:
+
+{{< artifact_tabs flink-table-api-java-bridge withProvidedScope >}}
+
+When developing the connector/format, we suggest shipping both a thin JAR and an uber JAR, so users
+can easily load the uber JAR in the SQL client or in the Flink distribution and start using it.
+The uber JAR should include all the third-party dependencies of the connector,
+excluding the table dependencies listed above.
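+
+A minimal sketch of such an uber JAR build with the `maven-shade-plugin` (the excluded coordinates are illustrative; dependencies in *provided* scope are left out of the shaded JAR anyway, the excludes just make the intent explicit):
+
+```xml
+<plugin>
+    <groupId>org.apache.maven.plugins</groupId>
+    <artifactId>maven-shade-plugin</artifactId>
+    <executions>
+        <execution>
+            <phase>package</phase>
+            <goals>
+                <goal>shade</goal>
+            </goals>
+            <configuration>
+                <artifactSet>
+                    <excludes>
+                        <!-- keep the table dependencies listed above out of the uber JAR -->
+                        <exclude>org.apache.flink:flink-table-common</exclude>
+                        <exclude>org.apache.flink:flink-table-api-java-bridge</exclude>
+                    </excludes>
+                </artifactSet>
+            </configuration>
+        </execution>
+    </executions>
+</plugin>
+```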
+
+{{< hint warning >}}
+You should not depend on `flink-table-planner{{< scala_version >}}` in production code.
+With the new module `flink-table-planner-loader` introduced in Flink 1.15, the
+application's classpath will not have direct access to `org.apache.flink.table.planner` classes anymore.
+If you need a feature available only internally within the `org.apache.flink.table.planner` package and subpackages, please open an issue.
+To learn more, check out [Anatomy of Table Dependencies]({{< ref "docs/dev/configuration/advanced" >}}#anatomy-of-table-dependencies).
+{{< /hint >}}
+
 Extension Points
 ----------------
 
diff --git a/docs/content.zh/docs/dev/table/sql/queries/match_recognize.md b/docs/content.zh/docs/dev/table/sql/queries/match_recognize.md
index e6b11fc..9f3e15b 100644
--- a/docs/content.zh/docs/dev/table/sql/queries/match_recognize.md
+++ b/docs/content.zh/docs/dev/table/sql/queries/match_recognize.md
@@ -84,7 +84,7 @@ Flink 的 `MATCH_RECOGNIZE` 子句实现是一个完整标准子集。仅支持
 </dependency>
 ```
 
-或者,也可以将依赖项添加到集群的 classpath(查看 [dependency section]({{< ref "docs/dev/datastream/project-configuration" >}}) 获取更多相关依赖信息)。
+或者,也可以将依赖项添加到集群的 classpath(查看 [dependency section]({{< ref "docs/dev/configuration/overview" >}}) 获取更多相关依赖信息)。
 
 如果你想在 [SQL Client]({{< ref "docs/dev/table/sqlClient" >}}) 中使用 `MATCH_RECOGNIZE` 子句,你无需执行任何操作,因为默认情况下包含所有依赖项。
 
diff --git a/docs/content.zh/docs/dev/table/sqlClient.md b/docs/content.zh/docs/dev/table/sqlClient.md
index 2239121..6f60670 100644
--- a/docs/content.zh/docs/dev/table/sqlClient.md
+++ b/docs/content.zh/docs/dev/table/sqlClient.md
@@ -368,16 +368,17 @@ When execute queries or insert statements, please enter the interactive mode or
 
 ### Dependencies
 
-The SQL Client does not require to setup a Java project using Maven or SBT. Instead, you can pass the
-dependencies as regular JAR files that get submitted to the cluster. You can either specify each JAR
-file separately (using `--jar`) or define entire library directories (using `--library`). For
+The SQL Client does not require setting up a Java project using Maven, Gradle, or sbt. Instead, you
+can pass the dependencies as regular JAR files that get submitted to the cluster. You can either specify
+each JAR file separately (using `--jar`) or define entire library directories (using `--library`). For
 connectors to external systems (such as Apache Kafka) and corresponding data formats (such as JSON),
 Flink provides **ready-to-use JAR bundles**. These JAR files can be downloaded for each release from
 the Maven central repository.
 
-The full list of offered SQL JARs and documentation about how to use them can be found on the [connection to external systems page]({{< ref "docs/connectors/table/overview" >}}).
+The full list of offered SQL JARs can be found on the [connection to external systems page]({{< ref "docs/connectors/table/overview" >}}).
 
-{{< top >}}
+You can refer to the [configuration]({{< ref "docs/dev/configuration/connector" >}}) section for
+information on how to configure connector and format dependencies.
 
 Use SQL Client to submit job
 ----------------------------
diff --git a/docs/content.zh/docs/flinkDev/ide_setup.md b/docs/content.zh/docs/flinkDev/ide_setup.md
index f17b32d..a4c9859 100644
--- a/docs/content.zh/docs/flinkDev/ide_setup.md
+++ b/docs/content.zh/docs/flinkDev/ide_setup.md
@@ -28,7 +28,7 @@ under the License.
 
 # 导入 Flink 到 IDE 中
 
-以下章节描述了如何将 Flink 项目导入到 IDE 中以进行 Flink 本身的源码开发。有关 Flink 程序编写的信息,请参阅 [Java API]({{< ref "docs/dev/datastream/project-configuration" >}}) 和 [Scala API]({{< ref "docs/dev/datastream/project-configuration" >}}) 快速入门指南。
+以下章节描述了如何将 Flink 项目导入到 IDE 中以进行 Flink 本身的源码开发。有关 Flink 程序编写的信息,请参阅 [Java API]({{< ref "docs/dev/configuration/overview" >}}) 和 [Scala API]({{< ref "docs/dev/configuration/overview" >}}) 快速入门指南。
 
 {{< hint info >}}
 每当你的 IDE 无法正常工作时,请优先尝试使用 Maven 命令行(`mvn clean package -DskipTests`),因为它可能是由于你的 IDE 中存在错误或未正确设置。
diff --git a/docs/content.zh/docs/libs/cep.md b/docs/content.zh/docs/libs/cep.md
index a8f53ed..f2b6775 100644
--- a/docs/content.zh/docs/libs/cep.md
+++ b/docs/content.zh/docs/libs/cep.md
@@ -38,8 +38,8 @@ FlinkCEP是在Flink上层实现的复杂事件处理库。
 
 ## 开始
 
-如果你想现在开始尝试,[创建一个Flink程序]({{< ref "docs/dev/datastream/project-configuration" >}}),
-添加FlinkCEP的依赖到项目的`pom.xml`文件中。
+如果你想现在开始尝试,[创建一个 Flink 程序]({{< ref "docs/dev/configuration/overview" >}}),
+添加 FlinkCEP 的依赖到项目的 `pom.xml` 文件中。
 
 {{< tabs "722d55a5-7f12-4bcc-b080-b28d5e8860ac" >}}
 {{< tab "Java" >}}
@@ -51,7 +51,7 @@ FlinkCEP是在Flink上层实现的复杂事件处理库。
 {{< /tabs >}}
 
 {{< hint info >}}
-FlinkCEP不是二进制发布包的一部分。在集群上执行如何链接它可以看[这里]({{< ref "docs/dev/datastream/project-configuration" >}})。
+FlinkCEP 不是二进制发布包的一部分。关于在集群上执行时如何链接它,请参阅[这里]({{< ref "docs/dev/configuration/overview" >}})。
 {{< /hint >}}
 
 现在可以开始使用Pattern API写你的第一个CEP程序了。
diff --git a/docs/content.zh/docs/libs/gelly/overview.md b/docs/content.zh/docs/libs/gelly/overview.md
index 1c8248c..3abe4e5 100644
--- a/docs/content.zh/docs/libs/gelly/overview.md
+++ b/docs/content.zh/docs/libs/gelly/overview.md
@@ -52,7 +52,7 @@ Add the following dependency to your `pom.xml` to use Gelly.
 {{< /tab >}}
 {{< /tabs >}}
 
-Note that Gelly is not part of the binary distribution. See [linking]({{< ref "docs/dev/datastream/project-configuration" >}}) for
+Note that Gelly is not part of the binary distribution. See [linking]({{< ref "docs/dev/configuration/overview" >}}) for
 instructions on packaging Gelly libraries into Flink user programs.
 
 The remaining sections provide a description of available methods and present several examples of how to use Gelly and how to mix it with the Flink DataSet API.