Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2022/07/21 10:49:52 UTC

[GitHub] [flink] ChengkaiYang2022 opened a new pull request, #20331: [FLINK-28121]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

ChengkaiYang2022 opened a new pull request, #20331:
URL: https://github.com/apache/flink/pull/20331

   
   
   <!--
   *Thank you very much for contributing to Apache Flink - we are happy that you want to help us improve Flink. To help the community review your contribution in the best possible way, please go through the checklist below, which will get the contribution into a shape in which it can be best reviewed.*
   
   *Please understand that we do not do this to make contributions to Flink a hassle. In order to uphold a high standard of quality for code contributions, while at the same time managing a large number of contributions, we need contributors to prepare the contributions well, and give reviewers enough contextual information for the review. Please also understand that contributions that do not follow this guide will take longer to review and thus typically be picked up with lower priority by the community.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [JIRA issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are made for typos in JavaDoc or documentation files, which need no JIRA issue.
     
     - Name the pull request in the form "[FLINK-XXXX] [component] Title of the pull request", where *FLINK-XXXX* should be replaced by the actual issue number. Skip *component* if you are unsure about which is the best component.
      Typo fixes that have no associated JIRA issue should be named following this pattern: `[hotfix] [docs] Fix typo in event time introduction` or `[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.
   
     - Fill out the template below to describe the changes contributed by the pull request. That will give reviewers the context they need to do the review.
     
     - Make sure that the change passes the automated tests, i.e., `mvn clean verify` passes. You can set up Azure Pipelines CI to do that following [this guide](https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository).
   
     - Each pull request should address only one issue, not mix up code from multiple issues.
     
     - Each commit in the pull request has a meaningful commit message (including the JIRA id)
   
     - Once all items of the checklist are addressed, remove the above text and this checklist, leaving only the filled out template below.
   
   
   **(The sections below can be removed for hotfixes of typos)**
   -->
   
   ## What is the purpose of the change
   
[FLINK-28121] Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page
   
   ## Brief change log
   
   
   ## Verifying this change
   
Please make sure both new and modified tests in this PR follow the conventions defined in our code quality guide: https://flink.apache.org/contributing/code-style-and-quality-common.html#testing
   
   *(Please pick either of the following options)*
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   *(or)*
   
   This change is already covered by existing tests, such as *(please describe tests)*.
   
   *(or)*
   
   This change added tests and can be verified as follows:
   
   *(example:)*
     - *Added integration tests for end-to-end deployment with large payloads (100MB)*
     - *Extended integration test for recovery after master (JobManager) failure*
     - *Added test that validates that TaskInfo is transferred only once across recoveries*
     - *Manually verified the change by running a 4 node cluster with 2 JobManagers and 4 TaskManagers, a stateful streaming program, and killing one JobManager and two TaskManagers during the execution, verifying that recovery happens correctly.*
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / no) no
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no) no
     - The serializers: (yes / no / don't know) no
     - The runtime per-record code paths (performance sensitive): (yes / no / don't know) no
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn, ZooKeeper: (yes / no / don't know)
     - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / no) no
  - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) not applicable
   


-- 
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

To unsubscribe, e-mail: issues-unsubscribe@flink.apache.org

For queries about this service, please contact Infrastructure at:
users@infra.apache.org


[GitHub] [flink] ChengkaiYang2022 commented on pull request #20331: [FLINK-28121]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

Posted by GitBox <gi...@apache.org>.
ChengkaiYang2022 commented on PR #20331:
URL: https://github.com/apache/flink/pull/20331#issuecomment-1191337214

   ![flink](https://user-images.githubusercontent.com/8577744/180196685-94b0dbf6-b025-4f73-867e-ef0e1c776268.jpeg)
   




[GitHub] [flink] ChengkaiYang2022 commented on pull request #20331: [FLINK-28121][docs-zh]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

Posted by GitBox <gi...@apache.org>.
ChengkaiYang2022 commented on PR #20331:
URL: https://github.com/apache/flink/pull/20331#issuecomment-1221256587

   @coder-zjh Thank you for your feedback! I will solve the problems ASAP!




[GitHub] [flink] coder-zjh commented on a diff in pull request #20331: [FLINK-28121][docs-zh]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

Posted by GitBox <gi...@apache.org>.
coder-zjh commented on code in PR #20331:
URL: https://github.com/apache/flink/pull/20331#discussion_r950003553


##########
docs/content.zh/docs/dev/table/sourcesSinks.md:
##########
@@ -133,242 +133,209 @@ If you need a feature available only internally within the `org.apache.flink.tab
 To learn more, check out [Anatomy of Table Dependencies]({{< ref "docs/dev/configuration/advanced" >}}#anatomy-of-table-dependencies).
 {{< /hint >}}
 
-Extension Points
+<a name="extension-points"></a>
+
+扩展点
 ----------------
 
-This section explains the available interfaces for extending Flink's table connectors.
+这一部分主要介绍扩展 Flink table connector 时可能用到的接口。
+
+<a name="dynamic-table-factories"></a>
 
-### Dynamic Table Factories
+### 动态表的工厂类
 
-Dynamic table factories are used to configure a dynamic table connector for an external storage system from catalog
-and session information.
+在根据 catalog 与 Flink 运行时上下文信息,为某个外部存储系统配置动态表连接器时,需要用到动态表的工厂类。
 
-`org.apache.flink.table.factories.DynamicTableSourceFactory` can be implemented to construct a `DynamicTableSource`.
+比如,通过实现 `org.apache.flink.table.factories.DynamicTableSourceFactory` 接口完成一个工厂类,来生产 `DynamicTableSource` 类。
 
-`org.apache.flink.table.factories.DynamicTableSinkFactory` can be implemented to construct a `DynamicTableSink`.
+通过实现 `org.apache.flink.table.factories.DynamicTableSinkFactory` 接口完成一个工厂类,来生产 `DynamicTableSink` 类。
 
-By default, the factory is discovered using the value of the `connector` option as the factory identifier
-and Java's Service Provider Interface.
+默认情况下,Java 的 SPI 机制会自动识别这些工厂类,同时将 `connector` 配置项作为工厂类的”标识符“。
 
-In JAR files, references to new implementations can be added to the service file:
+在 JAR 文件中,需要将实现的工厂类路径放入到下面这个配置文件:
 
 `META-INF/services/org.apache.flink.table.factories.Factory`
 
-The framework will check for a single matching factory that is uniquely identified by factory identifier
-and requested base class (e.g. `DynamicTableSourceFactory`).
+Flink 会对工厂类逐个进行检查,确保其”标识符“是全局唯一的,并且按照要求实现了上面提到的接口 (比如 `DynamicTableSourceFactory`)。
+
+如果必要的话,也可以在实现 catalog 时绕过上述 SPI 机制识别工厂类的过程。即在实现 catalog 接口时,在`org.apache.flink.table.catalog.Catalog#getFactory` 方法中直接返回工厂类的实例。
+
+<a name="dynamic-table-source"></a>
 
-The factory discovery process can be bypassed by the catalog implementation if necessary. For this, a
-catalog needs to return an instance that implements the requested base class in `org.apache.flink.table.catalog.Catalog#getFactory`.
+### 动态表的 source 端
 
-### Dynamic Table Source
+按照定义,动态表是随时间变化的。
 
-By definition, a dynamic table can change over time.
+在读取动态表时,表中数据可以是以下情况之一:
+- changelog 流(支持有界或无界),在 changelog 流结束前,所有的改变都会被源源不断地消费,由 `ScanTableSource` 接口表示。
+- 处于一直变换或数据量很大的外部表,其中的数据一般不会被全量读取,除非是在查询某个值时,由 `LookupTableSource` 接口表示。
 
-When reading a dynamic table, the content can either be considered as:
-- A changelog (finite or infinite) for which all changes are consumed continuously until the changelog
-  is exhausted. This is represented by the `ScanTableSource` interface.
-- A continuously changing or very large external table whose content is usually never read entirely
-  but queried for individual values when necessary. This is represented by the `LookupTableSource`
-  interface.
+一个类可以同时实现这两个接口,Planner 会根据查询的 Query 选择相应接口中的方法。
 
-A class can implement both of these interfaces at the same time. The planner decides about their usage depending
-on the specified query.
+<a name= "scan-table-source"></a>
 
 #### Scan Table Source
 
-A `ScanTableSource` scans all rows from an external storage system during runtime.
+在运行期间,`ScanTableSource` 接口会按行扫描外部存储系统中所有数据。
 
-The scanned rows don't have to contain only insertions but can also contain updates and deletions. Thus,
-the table source can be used to read a (finite or infinite) changelog. The returned _changelog mode_ indicates
-the set of changes that the planner can expect during runtime.
+被扫描的数据可以是 insert、update、delete 三种操作类型,因此数据源可以用作读取 changelog (支持有界或无界)。在运行时,返回的 **_changelog mode_** 表示 Planner 要处理的操作类型。
 
-For regular batch scenarios, the source can emit a bounded stream of insert-only rows.
+在常规批处理的场景下,数据源可以处理 insert-only 操作类型的有界数据流。
 
-For regular streaming scenarios, the source can emit an unbounded stream of insert-only rows.
+在常规流处理的场景下,数据源可以处理 insert-only 操作类型的无界数据流。
 
-For change data capture (CDC) scenarios, the source can emit bounded or unbounded streams with insert,
-update, and delete rows.
+在变更日志数据捕获(即 CDC)场景下,数据源可以处理 insert、update、delete 操作类型的有界或无界数据流。
 
-A table source can implement further ability interfaces such as `SupportsProjectionPushDown` that might
-mutate an instance during planning. All abilities can be found in the `org.apache.flink.table.connector.source.abilities`
-package and are listed in the [source abilities table](#source-abilities).
+可以实现更多的功能接口来优化数据源,比如实现 `SupportsProjectionPushDown` 接口,这样在运行时在 source 端就处理数据。在 `org.apache.flink.table.connector.source.abilities` 包下可以找到各种功能接口,更多内容可查看 [source abilities table](#source-abilities)。
 
-The runtime implementation of a `ScanTableSource` must produce internal data structures. Thus, records
-must be emitted as `org.apache.flink.table.data.RowData`. The framework provides runtime converters such
-that a source can still work on common data structures and perform a conversion at the end.
+实现 `ScanTableSource` 接口的类必须能够生产 Flink 内部数据结构,因此每条记录都会按照`org.apache.flink.table.data.RowData` 的方式进行处理。Flink 运行时提供了转换机制保证 source 端可以处理常见的数据结构,并且在最后进行转换。
+
+<a name="lookup-table-source"></a>
 
 #### Lookup Table Source
 
-A `LookupTableSource` looks up rows of an external storage system by one or more keys during runtime.
+在运行期间,`LookupTableSource` 接口会在外部存储系统中按照 key 进行查找。
+
+相比于`ScanTableSource`,`LookupTableSource` 接口不会全量读取表中数据,只会在需要时向外部存储(其中的数据有可能会一直变化)发起查询请求,惰性地获取数据。
 
-Compared to `ScanTableSource`, the source does not have to read the entire table and can lazily fetch individual
-values from a (possibly continuously changing) external table when necessary.
+同时相较于`ScanTableSource`,`LookupTableSource` 接口目前只支持处理 insert-only 数据流。
 
-Compared to `ScanTableSource`, a `LookupTableSource` does only support emitting insert-only changes currently.
+暂时不支持扩展功能接口,可查看 `org.apache.flink.table.connector.source.LookupTableSource` 中的文档了解更多。
 
-Further abilities are not supported. See the documentation of `org.apache.flink.table.connector.source.LookupTableSource`
-for more information.
+`LookupTableSource` 的实现方法可以是 `TableFunction` 或者 `AsyncTableFunction`,Flink运行时会根据要查询的 key 值,调用这个实现方法进行查询。
 
-The runtime implementation of a `LookupTableSource` is a `TableFunction` or `AsyncTableFunction`. The function
-will be called with values for the given lookup keys during runtime.
+<a name="source-abilities"></a>
 
-#### Source Abilities
+#### source 端的功能接口
 
 <table class="table table-bordered">
     <thead>
         <tr>
-        <th class="text-left" style="width: 25%">Interface</th>
-        <th class="text-center" style="width: 75%">Description</th>
+        <th class="text-left" style="width: 25%">接口</th>
+        <th class="text-center" style="width: 75%">描述</th>
         </tr>
     </thead>
     <tbody>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsFilterPushDown.java'>SupportsFilterPushDown</a></td>
-        <td>Enables to push down the filter into the <code>DynamicTableSource</code>. For efficiency, a source can
-        push filters further down in order to be close to the actual data generation.</td>
+        <td>支持将过滤条件下推到 <code>DynamicTableSource</code>。为了更高效处理数据,source 端会将过滤条件下推,以便在数据产生时就处理。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsLimitPushDown.java'>SupportsLimitPushDown</a></td>
-        <td>Enables to push down a limit (the expected maximum number of produced records) into a <code>DynamicTableSource</code>.</td>
+        <td>支持将 limit 下推到(期望生产的最大数据条数)<code>DynamicTableSource</code>。</td>

Review Comment:
   "支持将 limit 下推到(期望生产的最大数据条数)" 括号中内容靠近limit本身可能好点:
   "支持将 limit(期望生产的最大数据条数)下推到"



##########
docs/content.zh/docs/dev/table/sourcesSinks.md:
##########
@@ -133,242 +133,209 @@ If you need a feature available only internally within the `org.apache.flink.tab
 To learn more, check out [Anatomy of Table Dependencies]({{< ref "docs/dev/configuration/advanced" >}}#anatomy-of-table-dependencies).
 {{< /hint >}}
 
-Extension Points
+<a name="extension-points"></a>
+
+扩展点
 ----------------
 
-This section explains the available interfaces for extending Flink's table connectors.
+这一部分主要介绍扩展 Flink table connector 时可能用到的接口。
+
+<a name="dynamic-table-factories"></a>
 
-### Dynamic Table Factories
+### 动态表的工厂类
 
-Dynamic table factories are used to configure a dynamic table connector for an external storage system from catalog
-and session information.
+在根据 catalog 与 Flink 运行时上下文信息,为某个外部存储系统配置动态表连接器时,需要用到动态表的工厂类。
 
-`org.apache.flink.table.factories.DynamicTableSourceFactory` can be implemented to construct a `DynamicTableSource`.
+比如,通过实现 `org.apache.flink.table.factories.DynamicTableSourceFactory` 接口完成一个工厂类,来生产 `DynamicTableSource` 类。
 
-`org.apache.flink.table.factories.DynamicTableSinkFactory` can be implemented to construct a `DynamicTableSink`.
+通过实现 `org.apache.flink.table.factories.DynamicTableSinkFactory` 接口完成一个工厂类,来生产 `DynamicTableSink` 类。
 
-By default, the factory is discovered using the value of the `connector` option as the factory identifier
-and Java's Service Provider Interface.
+默认情况下,Java 的 SPI 机制会自动识别这些工厂类,同时将 `connector` 配置项作为工厂类的”标识符“。
 
-In JAR files, references to new implementations can be added to the service file:
+在 JAR 文件中,需要将实现的工厂类路径放入到下面这个配置文件:
 
 `META-INF/services/org.apache.flink.table.factories.Factory`
 
-The framework will check for a single matching factory that is uniquely identified by factory identifier
-and requested base class (e.g. `DynamicTableSourceFactory`).
+Flink 会对工厂类逐个进行检查,确保其”标识符“是全局唯一的,并且按照要求实现了上面提到的接口 (比如 `DynamicTableSourceFactory`)。
+
+如果必要的话,也可以在实现 catalog 时绕过上述 SPI 机制识别工厂类的过程。即在实现 catalog 接口时,在`org.apache.flink.table.catalog.Catalog#getFactory` 方法中直接返回工厂类的实例。
+
+<a name="dynamic-table-source"></a>
 
-The factory discovery process can be bypassed by the catalog implementation if necessary. For this, a
-catalog needs to return an instance that implements the requested base class in `org.apache.flink.table.catalog.Catalog#getFactory`.
+### 动态表的 source 端
 
-### Dynamic Table Source
+按照定义,动态表是随时间变化的。
 
-By definition, a dynamic table can change over time.
+在读取动态表时,表中数据可以是以下情况之一:
+- changelog 流(支持有界或无界),在 changelog 流结束前,所有的改变都会被源源不断地消费,由 `ScanTableSource` 接口表示。
+- 处于一直变换或数据量很大的外部表,其中的数据一般不会被全量读取,除非是在查询某个值时,由 `LookupTableSource` 接口表示。
 
-When reading a dynamic table, the content can either be considered as:
-- A changelog (finite or infinite) for which all changes are consumed continuously until the changelog
-  is exhausted. This is represented by the `ScanTableSource` interface.
-- A continuously changing or very large external table whose content is usually never read entirely
-  but queried for individual values when necessary. This is represented by the `LookupTableSource`
-  interface.
+一个类可以同时实现这两个接口,Planner 会根据查询的 Query 选择相应接口中的方法。
 
-A class can implement both of these interfaces at the same time. The planner decides about their usage depending
-on the specified query.
+<a name= "scan-table-source"></a>
 
 #### Scan Table Source
 
-A `ScanTableSource` scans all rows from an external storage system during runtime.
+在运行期间,`ScanTableSource` 接口会按行扫描外部存储系统中所有数据。
 
-The scanned rows don't have to contain only insertions but can also contain updates and deletions. Thus,
-the table source can be used to read a (finite or infinite) changelog. The returned _changelog mode_ indicates
-the set of changes that the planner can expect during runtime.
+被扫描的数据可以是 insert、update、delete 三种操作类型,因此数据源可以用作读取 changelog (支持有界或无界)。在运行时,返回的 **_changelog mode_** 表示 Planner 要处理的操作类型。
 
-For regular batch scenarios, the source can emit a bounded stream of insert-only rows.
+在常规批处理的场景下,数据源可以处理 insert-only 操作类型的有界数据流。
 
-For regular streaming scenarios, the source can emit an unbounded stream of insert-only rows.
+在常规流处理的场景下,数据源可以处理 insert-only 操作类型的无界数据流。
 
-For change data capture (CDC) scenarios, the source can emit bounded or unbounded streams with insert,
-update, and delete rows.
+在变更日志数据捕获(即 CDC)场景下,数据源可以处理 insert、update、delete 操作类型的有界或无界数据流。
 
-A table source can implement further ability interfaces such as `SupportsProjectionPushDown` that might
-mutate an instance during planning. All abilities can be found in the `org.apache.flink.table.connector.source.abilities`
-package and are listed in the [source abilities table](#source-abilities).
+可以实现更多的功能接口来优化数据源,比如实现 `SupportsProjectionPushDown` 接口,这样在运行时在 source 端就处理数据。在 `org.apache.flink.table.connector.source.abilities` 包下可以找到各种功能接口,更多内容可查看 [source abilities table](#source-abilities)。
 
-The runtime implementation of a `ScanTableSource` must produce internal data structures. Thus, records
-must be emitted as `org.apache.flink.table.data.RowData`. The framework provides runtime converters such
-that a source can still work on common data structures and perform a conversion at the end.
+实现 `ScanTableSource` 接口的类必须能够生产 Flink 内部数据结构,因此每条记录都会按照`org.apache.flink.table.data.RowData` 的方式进行处理。Flink 运行时提供了转换机制保证 source 端可以处理常见的数据结构,并且在最后进行转换。
+
+<a name="lookup-table-source"></a>
 
 #### Lookup Table Source
 
-A `LookupTableSource` looks up rows of an external storage system by one or more keys during runtime.
+在运行期间,`LookupTableSource` 接口会在外部存储系统中按照 key 进行查找。
+
+相比于`ScanTableSource`,`LookupTableSource` 接口不会全量读取表中数据,只会在需要时向外部存储(其中的数据有可能会一直变化)发起查询请求,惰性地获取数据。
 
-Compared to `ScanTableSource`, the source does not have to read the entire table and can lazily fetch individual
-values from a (possibly continuously changing) external table when necessary.
+同时相较于`ScanTableSource`,`LookupTableSource` 接口目前只支持处理 insert-only 数据流。
 
-Compared to `ScanTableSource`, a `LookupTableSource` does only support emitting insert-only changes currently.
+暂时不支持扩展功能接口,可查看 `org.apache.flink.table.connector.source.LookupTableSource` 中的文档了解更多。
 
-Further abilities are not supported. See the documentation of `org.apache.flink.table.connector.source.LookupTableSource`
-for more information.
+`LookupTableSource` 的实现方法可以是 `TableFunction` 或者 `AsyncTableFunction`,Flink运行时会根据要查询的 key 值,调用这个实现方法进行查询。
 
-The runtime implementation of a `LookupTableSource` is a `TableFunction` or `AsyncTableFunction`. The function
-will be called with values for the given lookup keys during runtime.
+<a name="source-abilities"></a>
 
-#### Source Abilities
+#### source 端的功能接口
 
 <table class="table table-bordered">
     <thead>
         <tr>
-        <th class="text-left" style="width: 25%">Interface</th>
-        <th class="text-center" style="width: 75%">Description</th>
+        <th class="text-left" style="width: 25%">接口</th>
+        <th class="text-center" style="width: 75%">描述</th>
         </tr>
     </thead>
     <tbody>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsFilterPushDown.java'>SupportsFilterPushDown</a></td>
-        <td>Enables to push down the filter into the <code>DynamicTableSource</code>. For efficiency, a source can
-        push filters further down in order to be close to the actual data generation.</td>
+        <td>支持将过滤条件下推到 <code>DynamicTableSource</code>。为了更高效处理数据,source 端会将过滤条件下推,以便在数据产生时就处理。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsLimitPushDown.java'>SupportsLimitPushDown</a></td>
-        <td>Enables to push down a limit (the expected maximum number of produced records) into a <code>DynamicTableSource</code>.</td>
+        <td>支持将 limit 下推到(期望生产的最大数据条数)<code>DynamicTableSource</code>。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsPartitionPushDown.java'>SupportsPartitionPushDown</a></td>
-        <td>Enables to pass available partitions to the planner and push down partitions into a <code>DynamicTableSource</code>.
-        During the runtime, the source will only read data from the passed partition list for efficiency.</td>
+        <td>支持将可用的分区信息提供给 planner 并且将分区信息下推到 <code>DynamicTableSource</code>。在运行时为了更高效处理数据,source 端会只从提供的分区列表中读取数据。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsProjectionPushDown.java'>SupportsProjectionPushDown</a> </td>
-        <td>Enables to push down a (possibly nested) projection into a <code>DynamicTableSource</code>. For efficiency,
-        a source can push a projection further down in order to be close to the actual data generation. If the source
-        also implements <code>SupportsReadingMetadata</code>, the source will also read the required metadata only.
+        <td>支持将查询列(可嵌套)下推到 <code>DynamicTableSource</code>。为了更高效处理数据,source 端会将查询列下推,以便在数据产生时就处理。如果 source 端同时实现了 <code>SupportsReadingMetadata</code>,那么 source 端也会读取相对应列的元数据信息。
         </td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsReadingMetadata.java'>SupportsReadingMetadata</a></td>
-        <td>Enables to read metadata columns from a <code>DynamicTableSource</code>. The source
-        is responsible to add the required metadata at the end of the produced rows. This includes
-        potentially forwarding metadata column from contained formats.</td>
+        <td>支持通过 <code>DynamicTableSource</code> 读取列的元数据信息。source 端会在生产数据行时,在最后添加相应的元数据信息,其中包括元数据的格式信息。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsWatermarkPushDown.java'>SupportsWatermarkPushDown</a></td>
-        <td>Enables to push down a watermark strategy into a <code>DynamicTableSource</code>. The watermark
-        strategy is a builder/factory for timestamp extraction and watermark generation. During the runtime, the
-        watermark generator is located inside the source and is able to generate per-partition watermarks.</td>
+        <td>支持将水印策略下推到 <code>DynamicTableSource</code>。水印策略可以通过工厂模式或 Builder 模式来构建,用于抽取时间戳以及水印的生成。在运行时,source 端内部的水印生成器会为每个分区生产水印。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsSourceWatermark.java'>SupportsSourceWatermark</a></td>
-        <td>Enables to fully rely on the watermark strategy provided by the <code>ScanTableSource</code>
-        itself. Thus, a <code>CREATE TABLE</code> DDL is able to use <code>SOURCE_WATERMARK()</code> which
-        is a built-in marker function that will be detected by the planner and translated into a call
-        to this interface if available.</td>
+        <td>支持使用 <code>ScanTableSource</code> 中提供的水印策略。当使用 <code>CREATE TABLE</code> DDL 时,<可以使用></可以使用> <code>SOURCE_WATERMARK()</code> 来告诉 planner 调用这个接口中的水印策略方法。</td>
     </tr>
     </tbody>
 </table>
 
-<span class="label label-danger">Attention</span> The interfaces above are currently only available for
-`ScanTableSource`, not for `LookupTableSource`.
+<span class="label label-danger">注意</span>上述接口当前只适用于 `ScanTableSource`,不适用于`LookupTableSource`。
+
+<a name="dynamic-table-sink"></a>
 
-### Dynamic Table Sink
+### 动态表的 sink 端
 
-By definition, a dynamic table can change over time.
+按照定义,动态表是随时间变化的。
 
-When writing a dynamic table, the content can always be considered as a changelog (finite or infinite)
-for which all changes are written out continuously until the changelog is exhausted. The returned _changelog mode_
-indicates the set of changes that the sink accepts during runtime.
+当写入一个动态表时,数据流可以被看作是 changelog (有界或无界都可),在 changelog 结束前,所有的变更都会被持续写入。在运行时,返回的 **_changelog mode_** 会显示 sink 端支持的数据操作类型。
 
-For regular batch scenarios, the sink can solely accept insert-only rows and write out bounded streams.
+在常规批处理的场景下,sink 端可以持续接收 insert-only 操作类型的数据,并写入到有界数据流中。
 
-For regular streaming scenarios, the sink can solely accept insert-only rows and can write out unbounded streams.
+在常规流处理的场景下,sink 端可以持续接收 insert-only 操作类型的数据,并写入到无界数据流中。
 
-For change data capture (CDC) scenarios, the sink can write out bounded or unbounded streams with insert,
-update, and delete rows.
+在变更日志数据捕获(即 CDC)场景下,sink 端可以将 insert、update、delete 操作类型的数据写入有界或无界数据流。
 
-A table sink can implement further ability interfaces such as `SupportsOverwrite` that might mutate an
-instance during planning. All abilities can be found in the `org.apache.flink.table.connector.sink.abilities`
-package and are listed in the [sink abilities table](#sink-abilities).
+可以实现 `SupportsOverwrite` 等功能接口,在 sink 端处理数据。可以在 `org.apache.flink.table.connector.sink.abilities` 包下找到各种功能接口,更多内容可查看[sink abilities table](#sink-abilities)。
 
-The runtime implementation of a `DynamicTableSink` must consume internal data structures. Thus, records
-must be accepted as `org.apache.flink.table.data.RowData`. The framework provides runtime converters such
-that a sink can still work on common data structures and perform a conversion at the beginning.
+实现 `DynamicTableSink` 接口的类必须能够处理 Flink 内部数据结构,因此每条记录都会按照 `org.apache.flink.table.data.RowData` 的方式进行处理。Flink 运行时提供了转换机制来保证在最开始进行数据类型转换,以便 sink 端可以处理常见的数据结构。
 
-#### Sink Abilities
+<a name="sink-abilities"></a>
+
+#### sink 端的功能接口
 
 <table class="table table-bordered">
     <thead>
         <tr>
-        <th class="text-left" style="width: 25%">Interface</th>
-        <th class="text-center" style="width: 75%">Description</th>
+        <th class="text-left" style="width: 25%">接口</th>
+        <th class="text-center" style="width: 75%">描述</th>
         </tr>
     </thead>
     <tbody>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/sink/abilities/SupportsOverwrite.java'>SupportsOverwrite</a></td>
-        <td>Enables to overwrite existing data in a <code>DynamicTableSink</code>. By default, if
-        this interface is not implemented, existing tables or partitions cannot be overwritten using
-        e.g. the SQL <code>INSERT OVERWRITE</code> clause.</td>
+        <td>支持 <code>DynamicTableSink</code> 覆盖写入已存在的数据。默认情况下,如果不实现这个接口,在使用 <code>INSERT OVERWRITE</code> SQL 语法时,已存在的表或分区不会被覆盖写入</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/sink/abilities/SupportsPartitioning.java'>SupportsPartitioning</a></td>
-        <td>Enables to write partitioned data in a <code>DynamicTableSink</code>.</td>
+        <td>支持 <code>DynamicTableSink</code> 写入分区数据。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/sink/abilities/SupportsWritingMetadata.java'>SupportsWritingMetadata</a></td>
-        <td>Enables to write metadata columns into a <code>DynamicTableSource</code>. A table sink is
-        responsible for accepting requested metadata columns at the end of consumed rows and persist
-        them. This includes potentially forwarding metadata columns to contained formats.</td>
+        <td>支持 <code>DynamicTableSource</code> 写入分区数据。sink 端会在消费数据行时,在最后接受相应的元数据信息并进行持久化,其中包括元数据的格式信息。</td>

Review Comment:
   Enables to write metadata columns into a <code>DynamicTableSource</code>.
   "写入分区数据" 应是"写入元数据列"。



##########
docs/content.zh/docs/dev/table/sourcesSinks.md:
##########
@@ -133,242 +133,209 @@ If you need a feature available only internally within the `org.apache.flink.tab
 To learn more, check out [Anatomy of Table Dependencies]({{< ref "docs/dev/configuration/advanced" >}}#anatomy-of-table-dependencies).
 {{< /hint >}}
 
-Extension Points
+<a name="extension-points"></a>
+
+扩展点
 ----------------
 
-This section explains the available interfaces for extending Flink's table connectors.
+这一部分主要介绍扩展 Flink table connector 时可能用到的接口。
+
+<a name="dynamic-table-factories"></a>
 
-### Dynamic Table Factories
+### 动态表的工厂类
 
-Dynamic table factories are used to configure a dynamic table connector for an external storage system from catalog
-and session information.
+在根据 catalog 与 Flink 运行时上下文信息,为某个外部存储系统配置动态表连接器时,需要用到动态表的工厂类。
 
-`org.apache.flink.table.factories.DynamicTableSourceFactory` can be implemented to construct a `DynamicTableSource`.
+比如,通过实现 `org.apache.flink.table.factories.DynamicTableSourceFactory` 接口完成一个工厂类,来生产 `DynamicTableSource` 类。
 
-`org.apache.flink.table.factories.DynamicTableSinkFactory` can be implemented to construct a `DynamicTableSink`.
+通过实现 `org.apache.flink.table.factories.DynamicTableSinkFactory` 接口完成一个工厂类,来生产 `DynamicTableSink` 类。
 
-By default, the factory is discovered using the value of the `connector` option as the factory identifier
-and Java's Service Provider Interface.
+默认情况下,Java 的 SPI 机制会自动识别这些工厂类,同时将 `connector` 配置项作为工厂类的”标识符“。
 
-In JAR files, references to new implementations can be added to the service file:
+在 JAR 文件中,需要将实现的工厂类路径放入到下面这个配置文件:
 
 `META-INF/services/org.apache.flink.table.factories.Factory`
 
-The framework will check for a single matching factory that is uniquely identified by factory identifier
-and requested base class (e.g. `DynamicTableSourceFactory`).
+Flink 会对工厂类逐个进行检查,确保其”标识符“是全局唯一的,并且按照要求实现了上面提到的接口 (比如 `DynamicTableSourceFactory`)。
+
+如果必要的话,也可以在实现 catalog 时绕过上述 SPI 机制识别工厂类的过程。即在实现 catalog 接口时,在`org.apache.flink.table.catalog.Catalog#getFactory` 方法中直接返回工厂类的实例。
+
+<a name="dynamic-table-source"></a>
 
-The factory discovery process can be bypassed by the catalog implementation if necessary. For this, a
-catalog needs to return an instance that implements the requested base class in `org.apache.flink.table.catalog.Catalog#getFactory`.
+### 动态表的 source 端
 
-### Dynamic Table Source
+按照定义,动态表是随时间变化的。
 
-By definition, a dynamic table can change over time.
+在读取动态表时,表中数据可以是以下情况之一:
+- changelog 流(支持有界或无界),在 changelog 流结束前,所有的改变都会被源源不断地消费,由 `ScanTableSource` 接口表示。
+- 处于一直变换或数据量很大的外部表,其中的数据一般不会被全量读取,除非是在查询某个值时,由 `LookupTableSource` 接口表示。
 
-When reading a dynamic table, the content can either be considered as:
-- A changelog (finite or infinite) for which all changes are consumed continuously until the changelog
-  is exhausted. This is represented by the `ScanTableSource` interface.
-- A continuously changing or very large external table whose content is usually never read entirely
-  but queried for individual values when necessary. This is represented by the `LookupTableSource`
-  interface.
+一个类可以同时实现这两个接口,Planner 会根据查询的 Query 选择相应接口中的方法。
 
-A class can implement both of these interfaces at the same time. The planner decides about their usage depending
-on the specified query.
+<a name= "scan-table-source"></a>
 
 #### Scan Table Source
 
-A `ScanTableSource` scans all rows from an external storage system during runtime.
+在运行期间,`ScanTableSource` 接口会按行扫描外部存储系统中所有数据。
 
-The scanned rows don't have to contain only insertions but can also contain updates and deletions. Thus,
-the table source can be used to read a (finite or infinite) changelog. The returned _changelog mode_ indicates
-the set of changes that the planner can expect during runtime.
+被扫描的数据可以是 insert、update、delete 三种操作类型,因此数据源可以用作读取 changelog (支持有界或无界)。在运行时,返回的 **_changelog mode_** 表示 Planner 要处理的操作类型。
 
-For regular batch scenarios, the source can emit a bounded stream of insert-only rows.
+在常规批处理的场景下,数据源可以处理 insert-only 操作类型的有界数据流。
 
-For regular streaming scenarios, the source can emit an unbounded stream of insert-only rows.
+在常规流处理的场景下,数据源可以处理 insert-only 操作类型的无界数据流。
 
-For change data capture (CDC) scenarios, the source can emit bounded or unbounded streams with insert,
-update, and delete rows.
+在变更日志数据捕获(即 CDC)场景下,数据源可以处理 insert、update、delete 操作类型的有界或无界数据流。
 
-A table source can implement further ability interfaces such as `SupportsProjectionPushDown` that might
-mutate an instance during planning. All abilities can be found in the `org.apache.flink.table.connector.source.abilities`
-package and are listed in the [source abilities table](#source-abilities).
+可以实现更多的功能接口来优化数据源,比如实现 `SupportsProjectionPushDown` 接口,这样在运行时在 source 端就处理数据。在 `org.apache.flink.table.connector.source.abilities` 包下可以找到各种功能接口,更多内容可查看 [source abilities table](#source-abilities)。
 
-The runtime implementation of a `ScanTableSource` must produce internal data structures. Thus, records
-must be emitted as `org.apache.flink.table.data.RowData`. The framework provides runtime converters such
-that a source can still work on common data structures and perform a conversion at the end.
+实现 `ScanTableSource` 接口的类必须能够生产 Flink 内部数据结构,因此每条记录都会按照`org.apache.flink.table.data.RowData` 的方式进行处理。Flink 运行时提供了转换机制保证 source 端可以处理常见的数据结构,并且在最后进行转换。
+
+<a name="lookup-table-source"></a>
 
 #### Lookup Table Source
 
-A `LookupTableSource` looks up rows of an external storage system by one or more keys during runtime.
+在运行期间,`LookupTableSource` 接口会在外部存储系统中按照 key 进行查找。
+
+相比于`ScanTableSource`,`LookupTableSource` 接口不会全量读取表中数据,只会在需要时向外部存储(其中的数据有可能会一直变化)发起查询请求,惰性地获取数据。
 
-Compared to `ScanTableSource`, the source does not have to read the entire table and can lazily fetch individual
-values from a (possibly continuously changing) external table when necessary.
+同时相较于`ScanTableSource`,`LookupTableSource` 接口目前只支持处理 insert-only 数据流。
 
-Compared to `ScanTableSource`, a `LookupTableSource` does only support emitting insert-only changes currently.
+暂时不支持扩展功能接口,可查看 `org.apache.flink.table.connector.source.LookupTableSource` 中的文档了解更多。
 
-Further abilities are not supported. See the documentation of `org.apache.flink.table.connector.source.LookupTableSource`
-for more information.
+`LookupTableSource` 的实现方法可以是 `TableFunction` 或者 `AsyncTableFunction`,Flink运行时会根据要查询的 key 值,调用这个实现方法进行查询。
 
-The runtime implementation of a `LookupTableSource` is a `TableFunction` or `AsyncTableFunction`. The function
-will be called with values for the given lookup keys during runtime.
+<a name="source-abilities"></a>
 
-#### Source Abilities
+#### source 端的功能接口
 
 <table class="table table-bordered">
     <thead>
         <tr>
-        <th class="text-left" style="width: 25%">Interface</th>
-        <th class="text-center" style="width: 75%">Description</th>
+        <th class="text-left" style="width: 25%">接口</th>
+        <th class="text-center" style="width: 75%">描述</th>
         </tr>
     </thead>
     <tbody>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsFilterPushDown.java'>SupportsFilterPushDown</a></td>
-        <td>Enables to push down the filter into the <code>DynamicTableSource</code>. For efficiency, a source can
-        push filters further down in order to be close to the actual data generation.</td>
+        <td>支持将过滤条件下推到 <code>DynamicTableSource</code>。为了更高效处理数据,source 端会将过滤条件下推,以便在数据产生时就处理。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsLimitPushDown.java'>SupportsLimitPushDown</a></td>
-        <td>Enables to push down a limit (the expected maximum number of produced records) into a <code>DynamicTableSource</code>.</td>
+        <td>支持将 limit 下推到(期望生产的最大数据条数)<code>DynamicTableSource</code>。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsPartitionPushDown.java'>SupportsPartitionPushDown</a></td>
-        <td>Enables to pass available partitions to the planner and push down partitions into a <code>DynamicTableSource</code>.
-        During the runtime, the source will only read data from the passed partition list for efficiency.</td>
+        <td>支持将可用的分区信息提供给 planner 并且将分区信息下推到 <code>DynamicTableSource</code>。在运行时为了更高效处理数据,source 端会只从提供的分区列表中读取数据。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsProjectionPushDown.java'>SupportsProjectionPushDown</a> </td>
-        <td>Enables to push down a (possibly nested) projection into a <code>DynamicTableSource</code>. For efficiency,
-        a source can push a projection further down in order to be close to the actual data generation. If the source
-        also implements <code>SupportsReadingMetadata</code>, the source will also read the required metadata only.
+        <td>支持将查询列(可嵌套)下推到 <code>DynamicTableSource</code>。为了更高效处理数据,source 端会将查询列下推,以便在数据产生时就处理。如果 source 端同时实现了 <code>SupportsReadingMetadata</code>,那么 source 端也会读取相对应列的元数据信息。
         </td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsReadingMetadata.java'>SupportsReadingMetadata</a></td>
-        <td>Enables to read metadata columns from a <code>DynamicTableSource</code>. The source
-        is responsible to add the required metadata at the end of the produced rows. This includes
-        potentially forwarding metadata column from contained formats.</td>
+        <td>支持通过 <code>DynamicTableSource</code> 读取列的元数据信息。source 端会在生产数据行时,在最后添加相应的元数据信息,其中包括元数据的格式信息。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsWatermarkPushDown.java'>SupportsWatermarkPushDown</a></td>
-        <td>Enables to push down a watermark strategy into a <code>DynamicTableSource</code>. The watermark
-        strategy is a builder/factory for timestamp extraction and watermark generation. During the runtime, the
-        watermark generator is located inside the source and is able to generate per-partition watermarks.</td>
+        <td>支持将水印策略下推到 <code>DynamicTableSource</code>。水印策略可以通过工厂模式或 Builder 模式来构建,用于抽取时间戳以及水印的生成。在运行时,source 端内部的水印生成器会为每个分区生产水印。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/source/abilities/SupportsSourceWatermark.java'>SupportsSourceWatermark</a></td>
-        <td>Enables to fully rely on the watermark strategy provided by the <code>ScanTableSource</code>
-        itself. Thus, a <code>CREATE TABLE</code> DDL is able to use <code>SOURCE_WATERMARK()</code> which
-        is a built-in marker function that will be detected by the planner and translated into a call
-        to this interface if available.</td>
+        <td>支持使用 <code>ScanTableSource</code> 中提供的水印策略。当使用 <code>CREATE TABLE</code> DDL 时,<可以使用></可以使用> <code>SOURCE_WATERMARK()</code> 来告诉 planner 调用这个接口中的水印策略方法。</td>
     </tr>
     </tbody>
 </table>
 
-<span class="label label-danger">Attention</span> The interfaces above are currently only available for
-`ScanTableSource`, not for `LookupTableSource`.
+<span class="label label-danger">注意</span>上述接口当前只适用于 `ScanTableSource`,不适用于`LookupTableSource`。
+
+<a name="dynamic-table-sink"></a>
 
-### Dynamic Table Sink
+### 动态表的 sink 端
 
-By definition, a dynamic table can change over time.
+按照定义,动态表是随时间变化的。
 
-When writing a dynamic table, the content can always be considered as a changelog (finite or infinite)
-for which all changes are written out continuously until the changelog is exhausted. The returned _changelog mode_
-indicates the set of changes that the sink accepts during runtime.
+当写入一个动态表时,数据流可以被看作是 changelog (有界或无界都可),在 changelog 结束前,所有的变更都会被持续写入。在运行时,返回的 **_changelog mode_** 会显示 sink 端支持的数据操作类型。
 
-For regular batch scenarios, the sink can solely accept insert-only rows and write out bounded streams.
+在常规批处理的场景下,sink 端可以持续接收 insert-only 操作类型的数据,并写入到有界数据流中。
 
-For regular streaming scenarios, the sink can solely accept insert-only rows and can write out unbounded streams.
+在常规流处理的场景下,sink 端可以持续接收 insert-only 操作类型的数据,并写入到无界数据流中。
 
-For change data capture (CDC) scenarios, the sink can write out bounded or unbounded streams with insert,
-update, and delete rows.
+在变更日志数据捕获(即 CDC)场景下,sink 端可以将 insert、update、delete 操作类型的数据写入有界或无界数据流。
 
-A table sink can implement further ability interfaces such as `SupportsOverwrite` that might mutate an
-instance during planning. All abilities can be found in the `org.apache.flink.table.connector.sink.abilities`
-package and are listed in the [sink abilities table](#sink-abilities).
+可以实现 `SupportsOverwrite` 等功能接口,在 sink 端处理数据。可以在 `org.apache.flink.table.connector.sink.abilities` 包下找到各种功能接口,更多内容可查看[sink abilities table](#sink-abilities)。
 
-The runtime implementation of a `DynamicTableSink` must consume internal data structures. Thus, records
-must be accepted as `org.apache.flink.table.data.RowData`. The framework provides runtime converters such
-that a sink can still work on common data structures and perform a conversion at the beginning.
+实现 `DynamicTableSink` 接口的类必须能够处理 Flink 内部数据结构,因此每条记录都会按照 `org.apache.flink.table.data.RowData` 的方式进行处理。Flink 运行时提供了转换机制来保证在最开始进行数据类型转换,以便 sink 端可以处理常见的数据结构。
 
-#### Sink Abilities
+<a name="sink-abilities"></a>
+
+#### sink 端的功能接口
 
 <table class="table table-bordered">
     <thead>
         <tr>
-        <th class="text-left" style="width: 25%">Interface</th>
-        <th class="text-center" style="width: 75%">Description</th>
+        <th class="text-left" style="width: 25%">接口</th>
+        <th class="text-center" style="width: 75%">描述</th>
         </tr>
     </thead>
     <tbody>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/sink/abilities/SupportsOverwrite.java'>SupportsOverwrite</a></td>
-        <td>Enables to overwrite existing data in a <code>DynamicTableSink</code>. By default, if
-        this interface is not implemented, existing tables or partitions cannot be overwritten using
-        e.g. the SQL <code>INSERT OVERWRITE</code> clause.</td>
+        <td>支持 <code>DynamicTableSink</code> 覆盖写入已存在的数据。默认情况下,如果不实现这个接口,在使用 <code>INSERT OVERWRITE</code> SQL 语法时,已存在的表或分区不会被覆盖写入</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/sink/abilities/SupportsPartitioning.java'>SupportsPartitioning</a></td>
-        <td>Enables to write partitioned data in a <code>DynamicTableSink</code>.</td>
+        <td>支持 <code>DynamicTableSink</code> 写入分区数据。</td>
     </tr>
     <tr>
         <td><a href='https://github.com/apache/flink/blob/master/flink-table/flink-table-common/src/main/java/org/apache/flink/table/connector/sink/abilities/SupportsWritingMetadata.java'>SupportsWritingMetadata</a></td>
-        <td>Enables to write metadata columns into a <code>DynamicTableSource</code>. A table sink is
-        responsible for accepting requested metadata columns at the end of consumed rows and persist
-        them. This includes potentially forwarding metadata columns to contained formats.</td>
+        <td>支持 <code>DynamicTableSource</code> 写入分区数据。sink 端会在消费数据行时,在最后接受相应的元数据信息并进行持久化,其中包括元数据的格式信息。</td>
     </tr>
     </tbody>
 </table>
 
-### Encoding / Decoding Formats
+<a name="encoding--decoding-formats"></a>
+
+### 编码与解码
 
-Some table connectors accept different formats that encode and decode keys and/or values.
+有的表连接器支持 K/V 型数据的各类编码与解码方式。
 
-Formats work similar to the pattern `DynamicTableSourceFactory -> DynamicTableSource -> ScanRuntimeProvider`,
-where the factory is responsible for translating options and the source is responsible for creating runtime logic.
+编码与解码格式器的工作原理类似于 `DynamicTableSourceFactory -> DynamicTableSource -> ScanRuntimeProvider`,其中工厂类负责传参,source 负责提供处理逻辑。
 
-Because formats might be located in different modules, they are discovered using Java's Service Provider
-Interface similar to [table factories](#dynamic-table-factories). In order to discover a format factory,
-the dynamic table factory searches for a factory that corresponds to a factory identifier and connector-specific
-base class.
+由于编码与解码格式器处于不同的代码模块,类似于[table factories](#dynamic-table-factories),它们都需要通过 Java 的 SPI 机制自动识别为了找到格式器的工厂类,动态表工厂类会根据该格式器工厂类的”标识符“来搜索,并确认其实现了连接器相关的基类。

Review Comment:
   "它们都需要通过 Java 的 SPI 机制自动识别为了找到格式器的工厂类"
   A comma is missing here; it should read: "它们都需要通过 Java 的 SPI 机制自动识别。为了找到格式器的工厂类,"
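
To make the format-discovery sentence discussed here more concrete, a minimal sketch of a table factory that looks up a decoding format through the `format` option; the `format-aware` identifier is hypothetical, and only the discovery step is shown:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

import org.apache.flink.api.common.serialization.DeserializationSchema;
import org.apache.flink.configuration.ConfigOption;
import org.apache.flink.table.connector.format.DecodingFormat;
import org.apache.flink.table.connector.source.DynamicTableSource;
import org.apache.flink.table.data.RowData;
import org.apache.flink.table.factories.DeserializationFormatFactory;
import org.apache.flink.table.factories.DynamicTableSourceFactory;
import org.apache.flink.table.factories.FactoryUtil;

// Hypothetical factory that only demonstrates how a nested format factory is discovered.
public class FormatAwareSourceFactory implements DynamicTableSourceFactory {

    @Override
    public String factoryIdentifier() {
        return "format-aware"; // hypothetical 'connector' option value
    }

    @Override
    public Set<ConfigOption<?>> requiredOptions() {
        final Set<ConfigOption<?>> options = new HashSet<>();
        // 'format' = 'json', 'csv', ... selects the nested format factory.
        options.add(FactoryUtil.FORMAT);
        return options;
    }

    @Override
    public Set<ConfigOption<?>> optionalOptions() {
        return Collections.emptySet();
    }

    @Override
    public DynamicTableSource createDynamicTableSource(Context context) {
        final FactoryUtil.TableFactoryHelper helper =
                FactoryUtil.createTableFactoryHelper(this, context);

        // Looks up a DeserializationFormatFactory via SPI, using the value of the 'format'
        // option as its identifier, and returns a decoding format that produces RowData.
        final DecodingFormat<DeserializationSchema<RowData>> decodingFormat =
                helper.discoverDecodingFormat(DeserializationFormatFactory.class, FactoryUtil.FORMAT);

        // Validate after discovery so that the format's own options are consumed as well.
        helper.validate();

        // A real factory would hand decodingFormat to its ScanTableSource here.
        throw new UnsupportedOperationException("source construction omitted in this sketch");
    }
}
```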





[GitHub] [flink] ChengkaiYang2022 commented on pull request #20331: [FLINK-28121][docs-zh]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

Posted by GitBox <gi...@apache.org>.
ChengkaiYang2022 commented on PR #20331:
URL: https://github.com/apache/flink/pull/20331#issuecomment-1222362837

   @fsk119 Would you help to take a look at this? ^_^




[GitHub] [flink] flinkbot commented on pull request #20331: [FLINK-28121]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

Posted by GitBox <gi...@apache.org>.
flinkbot commented on PR #20331:
URL: https://github.com/apache/flink/pull/20331#issuecomment-1191359398

   ## CI report:
   
   * 90b07d33d12d70928620851c4e1225fda0caa447 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run azure` re-run the last Azure build
   </details>




[GitHub] [flink] ChengkaiYang2022 commented on pull request #20331: [FLINK-28121][docs-zh]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

Posted by GitBox <gi...@apache.org>.
ChengkaiYang2022 commented on PR #20331:
URL: https://github.com/apache/flink/pull/20331#issuecomment-1216470360

   Hi, @gaoyunhaii, could you help to review this PR? 




[GitHub] [flink] MartijnVisser merged pull request #20331: [FLINK-28121][docs-zh]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

Posted by GitBox <gi...@apache.org>.
MartijnVisser merged PR #20331:
URL: https://github.com/apache/flink/pull/20331




[GitHub] [flink] ChengkaiYang2022 commented on pull request #20331: [FLINK-28121][docs-zh]Translate "Extension Points" and "Full Stack Example" in "User-defined Sources & Sinks" page

Posted by GitBox <gi...@apache.org>.
ChengkaiYang2022 commented on PR #20331:
URL: https://github.com/apache/flink/pull/20331#issuecomment-1221293450

   @coder-zjh Thanks for the review! @MartijnVisser  would you help to take a look and merge it?

