Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/05/31 17:34:08 UTC

[GitHub] [flink] authuir opened a new pull request #12420: [FLINK-16198][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

authuir opened a new pull request #12420:
URL: https://github.com/apache/flink/pull/12420


   <!--
   *Thank you very much for contributing to Apache Flink - we are happy that you want to help us improve Flink. To help the community review your contribution in the best possible way, please go through the checklist below, which will get the contribution into a shape in which it can be best reviewed.*
   
   *Please understand that we do not do this to make contributions to Flink a hassle. In order to uphold a high standard of quality for code contributions, while at the same time managing a large number of contributions, we need contributors to prepare the contributions well, and give reviewers enough contextual information for the review. Please also understand that contributions that do not follow this guide will take longer to review and thus typically be picked up with lower priority by the community.*
   
   ## Contribution Checklist
   
     - Make sure that the pull request corresponds to a [JIRA issue](https://issues.apache.org/jira/projects/FLINK/issues). Exceptions are made for typos in JavaDoc or documentation files, which need no JIRA issue.
     
     - Name the pull request in the form "[FLINK-XXXX] [component] Title of the pull request", where *FLINK-XXXX* should be replaced by the actual issue number. Skip *component* if you are unsure about which is the best component.
      Typo fixes that have no associated JIRA issue should be named following this pattern: `[hotfix] [docs] Fix typo in event time introduction` or `[hotfix] [javadocs] Expand JavaDoc for PunctuatedWatermarkGenerator`.
   
     - Fill out the template below to describe the changes contributed by the pull request. That will give reviewers the context they need to do the review.
     
     - Make sure that the change passes the automated tests, i.e., `mvn clean verify` passes. You can set up Azure Pipelines CI to do that following [this guide](https://cwiki.apache.org/confluence/display/FLINK/Azure+Pipelines#AzurePipelines-Tutorial:SettingupAzurePipelinesforaforkoftheFlinkrepository).
   
     - Each pull request should address only one issue, not mix up code from multiple issues.
     
     - Each commit in the pull request has a meaningful commit message (including the JIRA id)
   
     - Once all items of the checklist are addressed, remove the above text and this checklist, leaving only the filled out template below.
   
   
   **(The sections below can be removed for hotfixes of typos)**
   -->
   
   ## What is the purpose of the change
   
*Provide a Chinese translation of the "Joins in Continuous Queries" page of the "Streaming Concepts" documentation.*
   
   
   ## Brief change log
   
     - Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (yes / no) no
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (yes / no) no
     - The serializers: (yes / no / don't know) no
     - The runtime per-record code paths (performance sensitive): (yes / no / don't know) no
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (yes / no / don't know) no
     - The S3 file system connector: (yes / no / don't know) no
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (yes / no) no
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented) not applicable
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436320268



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。

Review comment:
       Fixed.







[GitHub] [flink] klion26 commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
klion26 commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r458534394



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +138,21 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是处理时间,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins) 相反,时态表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [interval joins](#interval-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间区间 Join](#interval-joins) 相比,时态表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的和不再需要的给定 primary key 所对应的记录将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让时态表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法

Review comment:
       Please add an `<a>` tag here as well.
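    For context on the probe-side/build-side wording in the hunk above, here is a minimal sketch of the temporal table function join query that prose refers to, assuming the `Orders`/`Rates` tables and the `Rates(o.rowtime)` temporal table function used elsewhere on this docs page (the projected columns are illustrative):

    {% highlight sql %}
    SELECT
      o.amount * r.rate AS amount              -- illustrative projection
    FROM
      Orders AS o,
      LATERAL TABLE (Rates(o.rowtime)) AS r    -- Rates evaluated as of o.rowtime
    WHERE r.currency = o.currency              -- matches the predicate shown in the hunk
    {% endhighlight %}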

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表]({%link dev/table/streaming/dynamic_tables.zh.md %})中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API]({%link dev/table/tableApi.zh.md %}#joins) 和 [SQL]({%link dev/table/sql/queries.zh.md %}#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+<a name="regular-joins"></a>
+
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录进行 Join 运算。
 
 {% highlight sql %}
 SELECT * FROM Orders
 INNER JOIN Product
 ON Orders.productId = Product.id
 {% endhighlight %}
 
-These semantics allow for any kind of updating (insert, update, delete) input tables.
+上述语意允许对输入表进行任意类型的更新操作(insert, update, delete)。
+
+然而,常规 Join 隐含了一个重要的前提:即它需要在 Flink 的状态中永久保存 Join 两侧的数据。因而,如果 Join 操作中的一方或双方输入表持续增长的话,资源消耗也将会随之无限增长。
 
-However, this operation has an important implication: it requires to keep both sides of the join input in Flink's state forever.
-Thus, the resource usage will grow indefinitely as well, if one or both input tables are continuously growing.
+<a name="interval-joins"></a>
 
-Interval Joins
+时间区间 Join

Review comment:
       An `<a>` tag was added before `常规 Join` above; please add one here as well.
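    The `时间区间 Join` (interval join) heading introduced here covers joins that bound both inputs with a time window. A minimal sketch of such a query, assuming the `Orders`/`Shipments` example used on the English version of this page (table and column names are taken from that page, not from this diff):

    {% highlight sql %}
    SELECT *
    FROM
      Orders o,
      Shipments s
    WHERE o.id = s.orderId AND
          o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
    {% endhighlight %}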

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +182,43 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 时态 Join 中的 State 保留(在[查询配置]({%link dev/table/streaming/query_configuration.zh.md %})中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于处理时间的时态 Join
 
-### Processing-time Temporal Joins
+如果将处理时间作为时间属性,将无法将 _过去_ 时间属性作为参数传递给时态表函数。
+根据定义,处理时间总会是当前时间戳。因此,基于处理时间的时态表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+可以将处理时间的时态 Join 视作简单的 `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+### 基于事件时间的时态 Join

Review comment:
       Please add an `<a>` tag here as well.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -300,25 +285,26 @@ FROM
   ON r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the current version of the build side table. In our example, the query is using the processing-time notion, so a newly appended order would always be joined with the most recent version of `LatestRates` when executing the operation. Note that the result is not deterministic for processing-time.
+探针侧表中的每个记录都将与构建侧表的当前版本所关联。 在此示例中,查询使用`处理时间`作为处理时间,因而新增订单将始终与表 `LatestRates` 的最新汇率执行 Join 操作。 注意,结果对于处理时间来说不是确定的。
+
+与[常规 Join](#regular-joins) 相比,尽管构建侧表的数据发生了变化,但时态表 Join 的变化前结果不会随之变化。而且时态表 Join 运算非常轻量级且不会保留任何状态。
 
-In contrast to [regular joins](#regular-joins), the previous results of the temporal table join will not be affected despite the changes on the build side. Also, the temporal table join operator is very lightweight and does not keep any state.
+与[时间区间 Join](#interval-joins) 相比,时态表 Join 没有定义决定哪些记录将被 Join 的时间窗口。
+探针侧的记录将总是与构建侧在对应`处理时间`的最新数据执行 Join。因而构建侧的数据可能是任意旧的。
 
-Compared to [interval joins](#interval-joins), temporal table joins do not define a time window within which the records will be joined.
-Records from the probe side are always joined with the build side's latest version at processing time. Thus, records on the build side might be arbitrarily old.
+[时态表函数 Join](#join-with-a-temporal-table-function) 和时态表 Join 都有类似的功能,但是有不同的 SQL 语法和 runtime 实现:
 
-Both [temporal table function join](#join-with-a-temporal-table-function) and temporal table join come from the same motivation but have different SQL syntax and runtime implementations:
-* The SQL syntax of the temporal table function join is a join UDTF, while the temporal table join uses the standard temporal table syntax introduced in SQL:2011.
-* The implementation of temporal table function joins actually joins two streams and keeps them in state, while temporal table joins just receive the only input stream and look up the external database according to the key in the record.
-* The temporal table function join is usually used to join a changelog stream, while the temporal table join is usually used to join an external table (i.e. dimension table).
+* 时态表函数 Join 的 SQL 语法是一种 Join 用户定义生成表函数(UDTF,User-Defined Table-Generating Functions),而时态表 Join 使用了 SQL:2011 标准引入的标准时态表语法。
+* 时态表函数 Join 的实现实际上是 Join 两个流并保存在 state 中,而时态表 Join 只接受唯一的输入流,并根据记录的键值查找外部数据库。
+* 时态表函数 Join 通常用于与变更日志流执行 Join,而时态表 Join 通常与外部表(例如维度表)执行 Join 操作。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让时态表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-In the future, the temporal table join will support the features of temporal table function joins, i.e. support to temporal join a changelog stream.
+将来,时态表 Join 将支持时态表函数 Join 的功能,即支持时态 Join 变更日志流。
 
-### Usage
+### 用法

Review comment:
       Please add an `<a>` tag here as well.
   Note that there is already a `Usage` heading earlier on this page, so this one probably needs `<a name="usage-1"></a>`; see how the anchors are linked on the English version of the page.
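    A minimal sketch of the anchor convention being requested, following the `<a name="regular-joins"></a>` pattern already added earlier in this diff; the `usage`/`usage-1` ids assume the anchors that the two `Usage` headings get on the English page:

    ```
    <!-- first "Usage" section (temporal table function join) -->
    <a name="usage"></a>

    ### 用法

    <!-- second "Usage" section (temporal table join); the English anchor is #usage-1 -->
    <a name="usage-1"></a>

    ### 用法
    ```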

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +182,43 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 时态 Join 中的 State 保留(在[查询配置]({%link dev/table/streaming/query_configuration.zh.md %})中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于处理时间的时态 Join

Review comment:
       Please add an `<a>` tag here as well.







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444589792



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。

Review comment:
       Fixed







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865",
       "triggerID" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864",
       "triggerID" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874",
       "triggerID" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4dbc7b88a9fdf589c0c339378576cdda755fd77c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986",
       "triggerID" : "4dbc7b88a9fdf589c0c339378576cdda755fd77c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc4b8b49834d751271c7f0976f62f91923217420",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4349",
       "triggerID" : "bc4b8b49834d751271c7f0976f62f91923217420",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50f2204b9ec3efcdd2619d201b8c8a21cda1341f",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4694",
       "triggerID" : "50f2204b9ec3efcdd2619d201b8c8a21cda1341f",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * bc4b8b49834d751271c7f0976f62f91923217420 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4349) 
   * 50f2204b9ec3efcdd2619d201b8c8a21cda1341f Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4694) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865",
       "triggerID" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864",
       "triggerID" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874",
       "triggerID" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4dbc7b88a9fdf589c0c339378576cdda755fd77c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986",
       "triggerID" : "4dbc7b88a9fdf589c0c339378576cdda755fd77c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc4b8b49834d751271c7f0976f62f91923217420",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4349",
       "triggerID" : "bc4b8b49834d751271c7f0976f62f91923217420",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * bc4b8b49834d751271c7f0976f62f91923217420 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4349) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r458831384



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -300,25 +285,26 @@ FROM
   ON r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the current version of the build side table. In our example, the query is using the processing-time notion, so a newly appended order would always be joined with the most recent version of `LatestRates` when executing the operation. Note that the result is not deterministic for processing-time.
+探针侧表中的每个记录都将与构建侧表的当前版本所关联。 在此示例中,查询使用`处理时间`作为处理时间,因而新增订单将始终与表 `LatestRates` 的最新汇率执行 Join 操作。 注意,结果对于处理时间来说不是确定的。
+
+与[常规 Join](#regular-joins) 相比,尽管构建侧表的数据发生了变化,但时态表 Join 的变化前结果不会随之变化。而且时态表 Join 运算非常轻量级且不会保留任何状态。
 
-In contrast to [regular joins](#regular-joins), the previous results of the temporal table join will not be affected despite the changes on the build side. Also, the temporal table join operator is very lightweight and does not keep any state.
+与[时间区间 Join](#interval-joins) 相比,时态表 Join 没有定义决定哪些记录将被 Join 的时间窗口。
+探针侧的记录将总是与构建侧在对应`处理时间`的最新数据执行 Join。因而构建侧的数据可能是任意旧的。
 
-Compared to [interval joins](#interval-joins), temporal table joins do not define a time window within which the records will be joined.
-Records from the probe side are always joined with the build side's latest version at processing time. Thus, records on the build side might be arbitrarily old.
+[时态表函数 Join](#join-with-a-temporal-table-function) 和时态表 Join 都有类似的功能,但是有不同的 SQL 语法和 runtime 实现:
 
-Both [temporal table function join](#join-with-a-temporal-table-function) and temporal table join come from the same motivation but have different SQL syntax and runtime implementations:
-* The SQL syntax of the temporal table function join is a join UDTF, while the temporal table join uses the standard temporal table syntax introduced in SQL:2011.
-* The implementation of temporal table function joins actually joins two streams and keeps them in state, while temporal table joins just receive the only input stream and look up the external database according to the key in the record.
-* The temporal table function join is usually used to join a changelog stream, while the temporal table join is usually used to join an external table (i.e. dimension table).
+* 时态表函数 Join 的 SQL 语法是一种 Join 用户定义生成表函数(UDTF,User-Defined Table-Generating Functions),而时态表 Join 使用了 SQL:2011 标准引入的标准时态表语法。
+* 时态表函数 Join 的实现实际上是 Join 两个流并保存在 state 中,而时态表 Join 只接受唯一的输入流,并根据记录的键值查找外部数据库。
+* 时态表函数 Join 通常用于与变更日志流执行 Join,而时态表 Join 通常与外部表(例如维度表)执行 Join 操作。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让时态表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-In the future, the temporal table join will support the features of temporal table function joins, i.e. support to temporal join a changelog stream.
+将来,时态表 Join 将支持时态表函数 Join 的功能,即支持时态 Join 变更日志流。
 
-### Usage
+### 用法

Review comment:
       Done







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865",
       "triggerID" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864",
       "triggerID" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 6c369fc8eb4709738b70d9fe065c1ac088e181d5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495) 
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 19b463b5eac22be3a724e9437fd0be0d9b3d5d3a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865) 
   * 2b6feb97e452779487c38f13c260aeb0a6e3f5c7 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436320945



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值仅会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-### Event-time Temporal Joins
+### 基于 Event-time 临时 Join
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给临时表函数。
+这允许对两个表中在相同时间点的记录执行 Join 操作。
 
-Compared to processing-time temporal joins, the temporal table does not only keep the latest version (with respect to the defined primary key) of the build side records in the state
-but stores all versions (identified by time) since the last watermark.
+与基于 processing-time 的临时 Join 相比,临时表不仅将构建侧记录的最新版本(是否最新由所定义的主键所决定)保存在 state 中,同时也会存储自上一个水印以来的所有版本(按时间区分)。
 
-For example, an incoming row with an event-time timestamp of `12:30:00` that is appended to the probe side table
-is joined with the version of the build side table at time `12:30:00` according to the [concept of temporal tables](temporal_tables.html).
-Thus, the incoming row is only joined with rows that have a timestamp lower or equal to `12:30:00` with
-applied updates according to the primary key until this point in time.
+例如,在探针侧表新插入一条 event-time 时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 的版本根据[临时表的概念](temporal_tables.html)进行 Join 运算。
+因此,新插入的记录仅与时间戳小于等于 `12:30:00` 的记录进行 Join 计算(由主键决定哪些时间点的数据将参与计算)。
 
-By definition of event time, [watermarks]({{ site.baseurl }}/dev/event_time.html) allow the join operation to move
-forward in time and discard versions of the build table that are no longer necessary because no incoming row with
-lower or equal timestamp is expected.
+通过定义事件时间(event time),[watermarks]({{ site.baseurl }}/dev/event_time.html) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。

Review comment:
       Fixed.







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865",
       "triggerID" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864",
       "triggerID" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 19b463b5eac22be3a724e9437fd0be0d9b3d5d3a Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865) 
   * 2b6feb97e452779487c38f13c260aeb0a6e3f5c7 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] klion26 commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
klion26 commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r443307501



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +138,22 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,时态表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。

Review comment:
       ```suggestion
   与[常规 Join](#regular-joins) 相反,时态表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
   ```

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/zh/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+<a name="regular-joins"></a>
+
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录进行 Join 运算。

Review comment:
       Would `两侧表的` read better than `两侧的表的` here?

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,43 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 时态 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于 Processing-time 时态 Join
 
-### Processing-time Temporal Joins
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给时态表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的时态表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+可以将 processing-time 的时态 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+### 基于 Event-time 时态 Join
 
-### Event-time Temporal Joins
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给时态表函数。这允许对两个表中在相同时间点的记录执行 Join 操作。
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+与基于 processing-time 的时态 Join 相比,时态表不仅将构建侧记录的最新版本(是否最新由所定义的主键所决定)保存在 state 中,同时也会存储自上一个 watermarks 以来的所有版本(按时间区分)。
 
-Compared to processing-time temporal joins, the temporal table does not only keep the latest version (with respect to the defined primary key) of the build side records in the state
-but stores all versions (identified by time) since the last watermark.
+例如,在探针侧表新插入一条 event-time 时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 的版本根据[时态表的概念](temporal_tables.html)进行 Join 运算。
+因此,新插入的记录仅与时间戳小于等于 `12:30:00` 的记录进行 Join 计算(由主键决定哪些时间点的数据将参与计算)。
 
-For example, an incoming row with an event-time timestamp of `12:30:00` that is appended to the probe side table
-is joined with the version of the build side table at time `12:30:00` according to the [concept of temporal tables](temporal_tables.html).
-Thus, the incoming row is only joined with rows that have a timestamp lower or equal to `12:30:00` with
-applied updates according to the primary key until this point in time.
+通过定义事件时间(event time),[watermarks]({{ site.baseurl }}/zh/dev/event_time.html) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。
 
-By definition of event time, [watermarks]({{ site.baseurl }}/dev/event_time.html) allow the join operation to move
-forward in time and discard versions of the build table that are no longer necessary because no incoming row with
-lower or equal timestamp is expected.
+<a name="join-with-a-temporal-table"></a>
 
-Join with a Temporal Table
+时态表 Join
 --------------------------
 
-A join with a temporal table joins an arbitrary table (left input/probe side) with a temporal table (right input/build side),
-i.e., an external dimension table that changes over time. Please check the corresponding page for more information about [temporal tables](temporal_tables.html#temporal-table).
+时态表 Join 意味着对任意表(左输入/探针侧)和一个时态表(右输入/构建侧)执行的 Join 操作,即随时间变化的的扩展表。请参考相应的页面以获取更多有关[时态表](temporal_tables.html#temporal-table)的信息。
 
-<span class="label label-danger">Attention</span> Users can not use arbitrary tables as a temporal table, but need to use a table backed by a `LookupableTableSource`. A `LookupableTableSource` can only be used for temporal join as a temporal table. See the page for more details about [how to define LookupableTableSource](../sourceSinks.html#defining-a-tablesource-with-lookupable).
+<span class="label label-danger">注意</span> 不是任何表都能用作时态表,能作为时态表的表必须实现接口 `LookupableTableSource`。接口 `LookupableTableSource` 的实例只能作为时态表用于时态 Join 。查看此页面获取更多关于[如何实现接口 `LookupableTableSource`](../sourceSinks.html#defining-a-tablesource-with-lookupable) 的详细内容。

Review comment:
       The link here should now be `(../sourceSinks.html#defining-a-tablesource-for-lookupable)`.
   You could also open a follow-up hotfix PR to fix this link in the English version.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,43 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 时态 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于 Processing-time 时态 Join
 
-### Processing-time Temporal Joins
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给时态表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的时态表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+可以将 processing-time 的时态 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+### 基于 Event-time 时态 Join
 
-### Event-time Temporal Joins
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给时态表函数。这允许对两个表中在相同时间点的记录执行 Join 操作。

Review comment:
       Would it read better to translate `_past_` here as `过去` or something similar?

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/zh/dev/table/sql/queries.html#joins) 中的 Join 章节。

Review comment:
       For link targets, a recent [mailing list](http://apache-flink-mailing-list-archive.1008284.n3.nabble.com/Reminder-Prefer-link-tag-in-documentation-td42362.html) discussion recommends writing links in the form `{%link dev/table/sql/queries.zh.md %}#joins`; please update this one accordingly.
   
   Please update the other occurrences as well.
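    A minimal sketch of the requested change, showing the older `site.baseurl` form from this hunk next to the `{% link %}` form recommended on the mailing list (both paths already appear in this diff):

    ```
    <!-- before: URL assembled from site.baseurl -->
    [SQL]({{ site.baseurl }}/zh/dev/table/sql/queries.html#joins)

    <!-- after: resolved by Jekyll's link tag at build time -->
    [SQL]({% link dev/table/sql/queries.zh.md %}#joins)
    ```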







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865",
       "triggerID" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864",
       "triggerID" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874",
       "triggerID" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 62a088d537479bbf72b6ee8d2c1852d720eac913 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436320979



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -327,10 +313,10 @@ FROM table1 [AS <alias1>]
 ON table1.column-name1 = table2.column-name1
 {% endhighlight %}
 
-Currently, only support INNER JOIN and LEFT JOIN. The `FOR SYSTEM_TIME AS OF table1.proctime` should be followed after temporal table. `proctime` is a [processing time attribute](time_attributes.html#processing-time) of `table1`.
-This means that it takes a snapshot of the temporal table at processing time when joining every record from left table.
+目前只支持 INNER JOIN 和 LEFT JOIN,`FOR SYSTEM_TIME AS OF table1.proctime` 应位于临时表之后. `proctime` 是 `table1` 的 [processing time 属性](time_attributes.html#processing-time).

Review comment:
       Fixed
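    For reference, the syntax template in the hunk above is used roughly as in the following minimal sketch, assuming the `Orders`/`LatestRates` processing-time example from earlier on this page (the projected columns are illustrative):

    {% highlight sql %}
    SELECT
      o.amount * r.rate AS amount                             -- illustrative projection
    FROM
      Orders AS o
      JOIN LatestRates FOR SYSTEM_TIME AS OF o.proctime AS r  -- snapshot of LatestRates at o.proctime
      ON r.currency = o.currency
    {% endhighlight %}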







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865",
       "triggerID" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864",
       "triggerID" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874",
       "triggerID" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4dbc7b88a9fdf589c0c339378576cdda755fd77c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986",
       "triggerID" : "4dbc7b88a9fdf589c0c339378576cdda755fd77c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc4b8b49834d751271c7f0976f62f91923217420",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4349",
       "triggerID" : "bc4b8b49834d751271c7f0976f62f91923217420",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50f2204b9ec3efcdd2619d201b8c8a21cda1341f",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4694",
       "triggerID" : "50f2204b9ec3efcdd2619d201b8c8a21cda1341f",
       "triggerType" : "PUSH"
     }, {
       "hash" : "1b66bcc9096c203f1dc6942ee03e3f8c83943b80",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4736",
       "triggerID" : "1b66bcc9096c203f1dc6942ee03e3f8c83943b80",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 1b66bcc9096c203f1dc6942ee03e3f8c83943b80 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4736) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865",
       "triggerID" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864",
       "triggerID" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874",
       "triggerID" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 2b6feb97e452779487c38f13c260aeb0a6e3f5c7 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864) 
   * 62a088d537479bbf72b6ee8d2c1852d720eac913 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436322054



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值仅会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-### Event-time Temporal Joins
+### 基于 Event-time 临时 Join
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给临时表函数。
+这允许对两个表中在相同时间点的记录执行 Join 操作。
 
-Compared to processing-time temporal joins, the temporal table does not only keep the latest version (with respect to the defined primary key) of the build side records in the state
-but stores all versions (identified by time) since the last watermark.
+与基于 processing-time 的临时 Join 相比,临时表不仅将构建侧记录的最新版本(是否最新由所定义的主键所决定)保存在 state 中,同时也会存储自上一个水印以来的所有版本(按时间区分)。
 
-For example, an incoming row with an event-time timestamp of `12:30:00` that is appended to the probe side table
-is joined with the version of the build side table at time `12:30:00` according to the [concept of temporal tables](temporal_tables.html).
-Thus, the incoming row is only joined with rows that have a timestamp lower or equal to `12:30:00` with
-applied updates according to the primary key until this point in time.
+例如,在探针侧表新插入一条 event-time 时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 的版本根据[临时表的概念](temporal_tables.html)进行 Join 运算。
+因此,新插入的记录仅与时间戳小于等于 `12:30:00` 的记录进行 Join 计算(由主键决定哪些时间点的数据将参与计算)。
 
-By definition of event time, [watermarks]({{ site.baseurl }}/dev/event_time.html) allow the join operation to move
-forward in time and discard versions of the build table that are no longer necessary because no incoming row with
-lower or equal timestamp is expected.
+通过定义事件时间(event time),[watermarks]({{ site.baseurl }}/dev/event_time.html) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。
 
-Join with a Temporal Table
+临时表 Join
 --------------------------
 
-A join with a temporal table joins an arbitrary table (left input/probe side) with a temporal table (right input/build side),
-i.e., an external dimension table that changes over time. Please check the corresponding page for more information about [temporal tables](temporal_tables.html#temporal-table).
+临时表 Join 意味着对任意表(左输入/探针侧)和一个临时表(右输入/构建侧)执行的 Join 操作,即随时间变化的的扩展表。请参考相应的页面以获取更多有关[临时表](temporal_tables.html#temporal-table)的信息。
 
-<span class="label label-danger">Attention</span> Users can not use arbitrary tables as a temporal table, but need to use a table backed by a `LookupableTableSource`. A `LookupableTableSource` can only be used for temporal join as a temporal table. See the page for more details about [how to define LookupableTableSource](../sourceSinks.html#defining-a-tablesource-with-lookupable).
+<span class="label label-danger">注意</span> 不是任何表都能用作临时表,用户必须使用来自接口 `LookupableTableSource` 的表。接口 `LookupableTableSource` 的实例只能作为临时表用于临时 Join 。查看此页面获取更多关于[如何实现接口 `LookupableTableSource`](../sourceSinks.html#defining-a-tablesource-with-lookupable) 的详细内容。

Review comment:
       Fixed
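    As a companion to the event-time discussion in the hunk above, an event-time temporal table function join has roughly the following shape. This is a hedged sketch: it assumes a temporal table function `Rates` defined over the `RatesHistory` table from the page being translated (primary key `currency`, versioned by `rowtime`) and an `Orders` table carrying a rowtime attribute.

    ```sql
    -- Minimal sketch of an event-time temporal table function join.
    -- `Rates` is assumed to be a temporal table function over RatesHistory,
    -- keyed by `currency` and versioned by the rowtime attribute;
    -- `Orders` is append-only with a rowtime attribute `rowtime`.
    SELECT
      o.currency,
      o.amount * r.rate AS yen_amount
    FROM
      Orders AS o,
      LATERAL TABLE (Rates(o.rowtime)) AS r
    WHERE o.currency = r.currency
    ```

    Passing `o.rowtime` is what the hunk means by handing a past time attribute to the temporal table function: each order is joined against the version of `Rates` that was valid at that timestamp, and watermarks let the operator discard older versions from state.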




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 6c369fc8eb4709738b70d9fe065c1ac088e181d5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495",
       "triggerID" : "6c369fc8eb4709738b70d9fe065c1ac088e181d5",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d0f0b15cc5289803cdbde65b26bc66f0542da5f1",
       "triggerType" : "PUSH"
     }, {
       "hash" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2865",
       "triggerID" : "19b463b5eac22be3a724e9437fd0be0d9b3d5d3a",
       "triggerType" : "PUSH"
     }, {
       "hash" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864",
       "triggerID" : "2b6feb97e452779487c38f13c260aeb0a6e3f5c7",
       "triggerType" : "PUSH"
     }, {
       "hash" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874",
       "triggerID" : "62a088d537479bbf72b6ee8d2c1852d720eac913",
       "triggerType" : "PUSH"
     }, {
       "hash" : "4dbc7b88a9fdf589c0c339378576cdda755fd77c",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986",
       "triggerID" : "4dbc7b88a9fdf589c0c339378576cdda755fd77c",
       "triggerType" : "PUSH"
     }, {
       "hash" : "bc4b8b49834d751271c7f0976f62f91923217420",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4349",
       "triggerID" : "bc4b8b49834d751271c7f0976f62f91923217420",
       "triggerType" : "PUSH"
     }, {
       "hash" : "50f2204b9ec3efcdd2619d201b8c8a21cda1341f",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "50f2204b9ec3efcdd2619d201b8c8a21cda1341f",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * bc4b8b49834d751271c7f0976f62f91923217420 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4349) 
   * 50f2204b9ec3efcdd2619d201b8c8a21cda1341f UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436321416



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -1,5 +1,5 @@
 ---
-title: "Joins in Continuous Queries"
+title: "流上的 Join"

Review comment:
       I saw the title translated this way here, so I followed that translation: https://github.com/apache/flink/blob/master/docs/dev/table/streaming/index.zh.md




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444586022



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,43 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 时态 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于 Processing-time 时态 Join
 
-### Processing-time Temporal Joins
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给时态表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的时态表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+可以将 processing-time 的时态 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+### 基于 Event-time 时态 Join
 
-### Event-time Temporal Joins
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给时态表函数。这允许对两个表中在相同时间点的记录执行 Join 操作。

Review comment:
       Fixed




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] klion26 commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
klion26 commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r434999966



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录一起合并查询。

Review comment:
       Personally I feel the wording `一起合并查询` could be improved; it should convey the sense of "join" more directly.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。

Review comment:
       ```suggestion
   欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/zh/dev/table/sql/queries.html#joins) 中的 Join 章节。
   ```

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -1,5 +1,5 @@
 ---
-title: "Joins in Continuous Queries"
+title: "流上的 Join"

Review comment:
       Not sure whether translating this as "流上的 Join" is appropriate; someone else should check.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录一起合并查询。
 
 {% highlight sql %}
 SELECT * FROM Orders
 INNER JOIN Product
 ON Orders.productId = Product.id
 {% endhighlight %}
 
-These semantics allow for any kind of updating (insert, update, delete) input tables.
+上述语意允许对输入表进行任意类型的更新操作(insert, update, delete)。
 
-However, this operation has an important implication: it requires to keep both sides of the join input in Flink's state forever.
-Thus, the resource usage will grow indefinitely as well, if one or both input tables are continuously growing.
+然而,常规 Join 隐含了一个重要的前提:即它需要在 Flink 的状态中永久保存 Join 两侧的数据。
+因而,如果 Join 操作中的一方或双方输入表持续增长的话,资源消耗也将会随之无限增长。
 
-Time-windowed Joins
+时间窗口 Join

Review comment:
       Does "时间窗口 Join" feel as if something is missing? Should it be changed to something like "基于时间窗口的 Join" (or another wording)?
    If we change it, the whole article needs to be updated consistently.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join

Review comment:
       If this heading is translated, an anchor tag needs to be added above it, otherwise the in-page links will break. The same applies to the other headings.
    See the [wiki](https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications) for details.
    After finishing the translation, you can run `./docs/build.sh -p` locally and then open the translated page to verify.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值仅会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-### Event-time Temporal Joins
+### 基于 Event-time 临时 Join
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给临时表函数。
+这允许对两个表中在相同时间点的记录执行 Join 操作。

Review comment:
       Suggest merging this with the previous line.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -88,8 +85,8 @@ rowtime amount currency
 11:04        5 US Dollar
 {% endhighlight %}
 
-`RatesHistory` represents an ever changing append-only table of currency exchange rates with respect to `Yen` (which has a rate of `1`).
-For example, the exchange rate for the period from `09:00` to `10:45` of `Euro` to `Yen` was `114`. From `10:45` to `11:15` it was `116`.
+字段 `RatesHistory` 表示不断变化的汇率信息。汇率以日元为基准(即 `Yen` 永远为 1)。
+例如,`09:00` 到 `10:45` 间欧元对日元的汇率是 `114`,`10:45` 到 `11:15` 间为 `116`。

Review comment:
       Suggest merging this with the previous line.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值仅会被覆盖。

Review comment:
       Would `旧值会被覆盖` read better than `旧值仅会被覆盖`?

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值仅会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-### Event-time Temporal Joins
+### 基于 Event-time 临时 Join
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给临时表函数。
+这允许对两个表中在相同时间点的记录执行 Join 操作。
 
-Compared to processing-time temporal joins, the temporal table does not only keep the latest version (with respect to the defined primary key) of the build side records in the state
-but stores all versions (identified by time) since the last watermark.
+与基于 processing-time 的临时 Join 相比,临时表不仅将构建侧记录的最新版本(是否最新由所定义的主键所决定)保存在 state 中,同时也会存储自上一个水印以来的所有版本(按时间区分)。
 
-For example, an incoming row with an event-time timestamp of `12:30:00` that is appended to the probe side table
-is joined with the version of the build side table at time `12:30:00` according to the [concept of temporal tables](temporal_tables.html).
-Thus, the incoming row is only joined with rows that have a timestamp lower or equal to `12:30:00` with
-applied updates according to the primary key until this point in time.
+例如,在探针侧表新插入一条 event-time 时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 的版本根据[临时表的概念](temporal_tables.html)进行 Join 运算。
+因此,新插入的记录仅与时间戳小于等于 `12:30:00` 的记录进行 Join 计算(由主键决定哪些时间点的数据将参与计算)。
 
-By definition of event time, [watermarks]({{ site.baseurl }}/dev/event_time.html) allow the join operation to move
-forward in time and discard versions of the build table that are no longer necessary because no incoming row with
-lower or equal timestamp is expected.
+通过定义事件时间(event time),[watermarks]({{ site.baseurl }}/dev/event_time.html) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。

Review comment:
       ```suggestion
   通过定义事件时间(event time),[watermarks]({{ site.baseurl }}/zh/dev/event_time.html) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。
   ```
    I suggest leaving "watermark" untranslated throughout the article; see the [wiki](https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications) for details.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录一起合并查询。
 
 {% highlight sql %}
 SELECT * FROM Orders
 INNER JOIN Product
 ON Orders.productId = Product.id
 {% endhighlight %}
 
-These semantics allow for any kind of updating (insert, update, delete) input tables.
+上述语意允许对输入表进行任意类型的更新操作(insert, update, delete)。
 
-However, this operation has an important implication: it requires to keep both sides of the join input in Flink's state forever.
-Thus, the resource usage will grow indefinitely as well, if one or both input tables are continuously growing.
+然而,常规 Join 隐含了一个重要的前提:即它需要在 Flink 的状态中永久保存 Join 两侧的数据。
+因而,如果 Join 操作中的一方或双方输入表持续增长的话,资源消耗也将会随之无限增长。

Review comment:
       Suggest merging lines 48 and 47, otherwise there will be an extra space between "数据。" and "因而".

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。

Review comment:
       Would `会难以` be better than `会更难以`? The motivation is that there is no comparison here, and "更" usually implies an object of comparison mentioned earlier.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -63,18 +61,17 @@ WHERE o.id = s.orderId AND
       o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-Compared to a regular join operation, this kind of join only supports append-only tables with time attributes. Since time attributes are quasi-monotonic increasing, Flink can remove old values from its state without affecting the correctness of the result.
+与常规 Join 操作相比,时间窗口 Join 只支持带有时间属性的递增表。由于时间属性是单调递增的,Flink 可以从状态中移除过期的数据,而不会影响结果的正确性。
 
-Join with a Temporal Table Function
+临时表函数 Join
 --------------------------
 
-A join with a temporal table function joins an append-only table (left input/probe side) with a temporal table (right input/build side),
-i.e., a table that changes over time and tracks its changes. Please check the corresponding page for more information about [temporal tables](temporal_tables.html).
+临时表函数 Join 连接了一个递增表(左输入/探针侧)和一个临时表(右输入/构建侧),即一个随时间变化且不断追踪其改动的表。请参考[临时表](temporal_tables.html)的相关章节查看更多细节。
 
-The following example shows an append-only table `Orders` that should be joined with the continuously changing currency rates table `RatesHistory`.
+下方示例展示了一个递增表 `Orders` 与一个不断改变的汇率表 `RatesHistory` 的 Join 操作。
 
-`Orders` is an append-only table that represents payments for the given `amount` and the given `currency`.
-For example at `10:15` there was an order for an amount of `2 Euro`.
+`Orders` 表示了包含支付数据(数量字段 `amount` 和货币字段 `currency`)的递增表。
+例如 `10:15` 对应行的记录代表了一笔 2 欧元支付记录。

Review comment:
       Suggest merging this line with the previous one, otherwise there will be an extra space.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -63,18 +61,17 @@ WHERE o.id = s.orderId AND
       o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-Compared to a regular join operation, this kind of join only supports append-only tables with time attributes. Since time attributes are quasi-monotonic increasing, Flink can remove old values from its state without affecting the correctness of the result.
+与常规 Join 操作相比,时间窗口 Join 只支持带有时间属性的递增表。由于时间属性是单调递增的,Flink 可以从状态中移除过期的数据,而不会影响结果的正确性。

Review comment:
       Personal opinion: could "append-only tables" be rendered with something better than "递增表"?
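    For reference, the time-windowed join that this hunk's sentence describes looks like the following when written out in full. A minimal sketch, assuming the `Orders` and `Shipments` tables used on the page, both append-only and each with a time attribute:

    ```sql
    -- Minimal sketch of a time-windowed (interval) join.
    -- Both inputs are assumed append-only, with time attributes
    -- `ordertime` and `shiptime`.
    SELECT *
    FROM
      Orders o,
      Shipments s
    WHERE o.id = s.orderId
      AND o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
    ```

    Because the join condition bounds `ordertime` to a fixed interval around `shiptime`, Flink can drop rows that fall outside that window, which is why this kind of join does not accumulate state indefinitely; that is the property the translated sentence contrasts with regular joins.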

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。

Review comment:
       ```suggestion
   可以将 processing-time 的临时 Join 视作简单的哈希 Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
   ```
    Would "简单的 `HashMap<K, V>`" be better?

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -63,18 +61,17 @@ WHERE o.id = s.orderId AND
       o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-Compared to a regular join operation, this kind of join only supports append-only tables with time attributes. Since time attributes are quasi-monotonic increasing, Flink can remove old values from its state without affecting the correctness of the result.
+与常规 Join 操作相比,时间窗口 Join 只支持带有时间属性的递增表。由于时间属性是单调递增的,Flink 可以从状态中移除过期的数据,而不会影响结果的正确性。
 
-Join with a Temporal Table Function
+临时表函数 Join
 --------------------------
 
-A join with a temporal table function joins an append-only table (left input/probe side) with a temporal table (right input/build side),
-i.e., a table that changes over time and tracks its changes. Please check the corresponding page for more information about [temporal tables](temporal_tables.html).
+临时表函数 Join 连接了一个递增表(左输入/探针侧)和一个临时表(右输入/构建侧),即一个随时间变化且不断追踪其改动的表。请参考[临时表](temporal_tables.html)的相关章节查看更多细节。

Review comment:
       Not sure whether translating `build side` as `构建侧` is appropriate; this needs confirmation from others.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。

Review comment:
       ```suggestion
   与[常规 Join](#regular-joins) 相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
   ```

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -104,17 +101,17 @@ rowtime currency   rate
 11:49   Pounds      108
 {% endhighlight %}
 
-Given that we would like to calculate the amount of all `Orders` converted to a common currency (`Yen`).
+基于上述信息,欲计算 `Orders` 表中所有交易量并全部转换成日元。
 
-For example, we would like to convert the following order using the appropriate conversion rate for the given `rowtime` (`114`).
+例如,若要转换下表中的交易,需要使用对应时间区间内的汇率(即 `114`)。

Review comment:
       Merge with the previous line.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。

Review comment:
       Suggest merging with the previous line.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让临时表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法
 
-After [defining temporal table function](temporal_tables.html#defining-temporal-table-function), we can start using it.
-Temporal table functions can be used in the same way as normal table functions would be used.
+在 [定义临时表函数](temporal_tables.html#defining-temporal-table-function) 之后就可以使用了。
+临时表函数可以和普通表函数一样使用。

Review comment:
       Suggest merging this line with the previous one.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让临时表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法
 
-After [defining temporal table function](temporal_tables.html#defining-temporal-table-function), we can start using it.
-Temporal table functions can be used in the same way as normal table functions would be used.
+在 [定义临时表函数](temporal_tables.html#defining-temporal-table-function) 之后就可以使用了。
+临时表函数可以和普通表函数一样使用。
 
-The following code snippet solves our motivating problem of converting currencies from the `Orders` table:
+接下来这段代码解决了我们一开始提出的问题,即从计算 `Orders` 表中交易量之和并转换为对应货币:

Review comment:
       The sentence "即从计算 `Orders` 表中交易量之和并转换为对应货币:" does not read smoothly.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让临时表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法
 
-After [defining temporal table function](temporal_tables.html#defining-temporal-table-function), we can start using it.
-Temporal table functions can be used in the same way as normal table functions would be used.
+在 [定义临时表函数](temporal_tables.html#defining-temporal-table-function) 之后就可以使用了。

Review comment:
       ```suggestion
   在[定义临时表函数](temporal_tables.html#defining-temporal-table-function)之后就可以使用了。
   ```

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。

Review comment:
       Does `for the given primary key` mean the records corresponding to the given primary key (`给定 primary key 对应的记录`)?

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。

Review comment:
      Does `_past_ time` refer to a time in the past?

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join

Review comment:
      This sentence reads a bit awkwardly; should a "的" be added?

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。

Review comment:
       ```suggestion
   **注意**: 临时 Join 中的 State 保留(在[查询配置](query_configuration.html) 中定义还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
   ```
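
As background for the note being reviewed here: for other continuous queries, the retention window it refers to is set on the table configuration, roughly as sketched below. The method name is as I recall it for the Flink versions around this PR and the retention values are purely illustrative; per the note, temporal joins do not honor this setting yet.

```java
import org.apache.flink.api.common.time.Time;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;

public class RetentionConfigSketch {

    public static void configure(StreamTableEnvironment tEnv) {
        // Keep idle per-key state for at least 12 hours and clean it up
        // after 24 hours of inactivity (illustrative values only).
        tEnv.getConfig().setIdleStateRetentionTime(Time.hours(12), Time.hours(24));
    }
}
```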

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值仅会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-### Event-time Temporal Joins
+### 基于 Event-time 临时 Join
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给临时表函数。
+这允许对两个表中在相同时间点的记录执行 Join 操作。
 
-Compared to processing-time temporal joins, the temporal table does not only keep the latest version (with respect to the defined primary key) of the build side records in the state
-but stores all versions (identified by time) since the last watermark.
+与基于 processing-time 的临时 Join 相比,临时表不仅将构建侧记录的最新版本(是否最新由所定义的主键所决定)保存在 state 中,同时也会存储自上一个水印以来的所有版本(按时间区分)。
 
-For example, an incoming row with an event-time timestamp of `12:30:00` that is appended to the probe side table
-is joined with the version of the build side table at time `12:30:00` according to the [concept of temporal tables](temporal_tables.html).
-Thus, the incoming row is only joined with rows that have a timestamp lower or equal to `12:30:00` with
-applied updates according to the primary key until this point in time.
+例如,在探针侧表新插入一条 event-time 时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 的版本根据[临时表的概念](temporal_tables.html)进行 Join 运算。
+因此,新插入的记录仅与时间戳小于等于 `12:30:00` 的记录进行 Join 计算(由主键决定哪些时间点的数据将参与计算)。
 
-By definition of event time, [watermarks]({{ site.baseurl }}/dev/event_time.html) allow the join operation to move
-forward in time and discard versions of the build table that are no longer necessary because no incoming row with
-lower or equal timestamp is expected.
+通过定义事件时间(event time),[watermarks]({{ site.baseurl }}/dev/event_time.html) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。
 
-Join with a Temporal Table
+临时表 Join
 --------------------------
 
-A join with a temporal table joins an arbitrary table (left input/probe side) with a temporal table (right input/build side),
-i.e., an external dimension table that changes over time. Please check the corresponding page for more information about [temporal tables](temporal_tables.html#temporal-table).
+临时表 Join 意味着对任意表(左输入/探针侧)和一个临时表(右输入/构建侧)执行的 Join 操作,即随时间变化的的扩展表。请参考相应的页面以获取更多有关[临时表](temporal_tables.html#temporal-table)的信息。
 
-<span class="label label-danger">Attention</span> Users can not use arbitrary tables as a temporal table, but need to use a table backed by a `LookupableTableSource`. A `LookupableTableSource` can only be used for temporal join as a temporal table. See the page for more details about [how to define LookupableTableSource](../sourceSinks.html#defining-a-tablesource-with-lookupable).
+<span class="label label-danger">注意</span> 不是任何表都能用作临时表,用户必须使用来自接口 `LookupableTableSource` 的表。接口 `LookupableTableSource` 的实例只能作为临时表用于临时 Join 。查看此页面获取更多关于[如何实现接口 `LookupableTableSource`](../sourceSinks.html#defining-a-tablesource-with-lookupable) 的详细内容。

Review comment:
      Could this be polished a bit further? The intended meaning is "不是任何表..., 能做为 xxx 的必须 yyy" (not just any table can be used; whatever serves as xxx must yyy)?
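
For readers who have not seen the interface referenced above, a bare-bones skeleton of a lookup-capable source is sketched below. The method names follow the legacy `LookupableTableSource` interface as I remember it for the Flink versions around this PR; treat the exact signatures as an approximation and check the linked sourceSinks page.

```java
import org.apache.flink.table.functions.AsyncTableFunction;
import org.apache.flink.table.functions.TableFunction;
import org.apache.flink.table.sources.LookupableTableSource;
import org.apache.flink.types.Row;

// Abstract because the remaining TableSource methods (schema, etc.) are out of scope here.
public abstract class DimensionTableSourceSketch implements LookupableTableSource<Row> {

    @Override
    public TableFunction<Row> getLookupFunction(String[] lookupKeys) {
        // Invoked per probe-side record with the join-key values; a real source
        // would query JDBC/HBase/... here and collect(...) the matching rows.
        return new TableFunction<Row>() {
            public void eval(Object... keys) {
                // left empty: the external store is outside the scope of this sketch
            }
        };
    }

    @Override
    public AsyncTableFunction<Row> getAsyncLookupFunction(String[] lookupKeys) {
        throw new UnsupportedOperationException("synchronous lookups only in this sketch");
    }

    @Override
    public boolean isAsyncEnabled() {
        return false;
    }
}
```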

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -327,10 +313,10 @@ FROM table1 [AS <alias1>]
 ON table1.column-name1 = table2.column-name1
 {% endhighlight %}
 
-Currently, only support INNER JOIN and LEFT JOIN. The `FOR SYSTEM_TIME AS OF table1.proctime` should be followed after temporal table. `proctime` is a [processing time attribute](time_attributes.html#processing-time) of `table1`.
-This means that it takes a snapshot of the temporal table at processing time when joining every record from left table.
+目前只支持 INNER JOIN 和 LEFT JOIN,`FOR SYSTEM_TIME AS OF table1.proctime` 应位于临时表之后. `proctime` 是 `table1` 的 [processing time 属性](time_attributes.html#processing-time).

Review comment:
      Suggest using Chinese punctuation marks here.
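
To make the syntax block above concrete, here is roughly what such a query looks like when issued from the Java Table API, using the Orders/LatestRates example quoted elsewhere in this review. The table and column names come from those quoted examples; everything else is illustrative.

```java
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableEnvironment;

public class SystemTimeJoinSketch {

    public static Table joinWithLatestRates(TableEnvironment tEnv) {
        // FOR SYSTEM_TIME AS OF follows the temporal table; each order is joined
        // against the snapshot of LatestRates taken at its processing time.
        // Only INNER JOIN and LEFT JOIN are supported.
        return tEnv.sqlQuery(
                "SELECT o.amount * r.rate AS amount "
                        + "FROM Orders AS o "
                        + "JOIN LatestRates FOR SYSTEM_TIME AS OF o.proctime AS r "
                        + "ON r.currency = o.currency");
    }
}
```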







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444584279



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/zh/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+<a name="regular-joins"></a>
+
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录进行 Join 运算。

Review comment:
       Updated.







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 50f2204b9ec3efcdd2619d201b8c8a21cda1341f Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4694) 
   * 1b66bcc9096c203f1dc6942ee03e3f8c83943b80 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444587512



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -63,18 +61,17 @@ WHERE o.id = s.orderId AND
       o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-Compared to a regular join operation, this kind of join only supports append-only tables with time attributes. Since time attributes are quasi-monotonic increasing, Flink can remove old values from its state without affecting the correctness of the result.
+与常规 Join 操作相比,时间窗口 Join 只支持带有时间属性的递增表。由于时间属性是单调递增的,Flink 可以从状态中移除过期的数据,而不会影响结果的正确性。
 
-Join with a Temporal Table Function
+临时表函数 Join
 --------------------------
 
-A join with a temporal table function joins an append-only table (left input/probe side) with a temporal table (right input/build side),
-i.e., a table that changes over time and tracks its changes. Please check the corresponding page for more information about [temporal tables](temporal_tables.html).
+临时表函数 Join 连接了一个递增表(左输入/探针侧)和一个临时表(右输入/构建侧),即一个随时间变化且不断追踪其改动的表。请参考[临时表](temporal_tables.html)的相关章节查看更多细节。

Review comment:
       I couldn't come up with a better translation either; '构建侧' is indeed not an established term. Could you help think about how best to translate it here?
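
A tiny conceptual model of why the state cleanup mentioned in this hunk is safe, mirroring the four-hour BETWEEN bound in the SQL quoted above (Orders joined with Shipments). This is an illustration only, not Flink's operator code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeMap;

public class IntervalJoinPruningSketch {

    private static final long FOUR_HOURS_MS = 4L * 60 * 60 * 1000;

    // shiptime -> buffered shipment ids (one side of the join state)
    private final TreeMap<Long, List<String>> shipmentsByTime = new TreeMap<>();

    public void onShipment(long shiptime, String orderId) {
        shipmentsByTime.computeIfAbsent(shiptime, t -> new ArrayList<>()).add(orderId);
    }

    /** An order matches shipments with shiptime - 4h <= ordertime <= shiptime. */
    public List<String> onOrder(long ordertime) {
        List<String> matches = new ArrayList<>();
        shipmentsByTime.subMap(ordertime, true, ordertime + FOUR_HOURS_MS, true)
                .values()
                .forEach(matches::addAll);
        return matches;
    }

    /** Time attributes are quasi-monotonic, so old shipments can be dropped safely. */
    public void onOrderWatermark(long watermark) {
        // Future orders have ordertime >= watermark and only match shipments with
        // shiptime >= ordertime, so shipments older than the watermark are expired.
        shipmentsByTime.headMap(watermark, false).clear();
    }
}
```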







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r443218763



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让临时表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法
 
-After [defining temporal table function](temporal_tables.html#defining-temporal-table-function), we can start using it.
-Temporal table functions can be used in the same way as normal table functions would be used.
+在 [定义临时表函数](temporal_tables.html#defining-temporal-table-function) 之后就可以使用了。
+临时表函数可以和普通表函数一样使用。
 
-The following code snippet solves our motivating problem of converting currencies from the `Orders` table:
+接下来这段代码解决了我们一开始提出的问题,即从计算 `Orders` 表中交易量之和并转换为对应货币:

Review comment:
       Updated.







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436320339



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值仅会被覆盖。

Review comment:
       It does read more smoothly that way; fixed.
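
The `HashMap` analogy quoted in this hunk can be written down almost literally; the sketch below is just a mental model of the operator's state, not Flink code.

```java
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;

public class ProcessingTimeTemporalJoinSketch<K, V> {

    // Only the latest build-side version per primary key is kept.
    private final Map<K, V> latestBuildSide = new HashMap<>();

    /** Build side: a newer version with the same key simply overwrites the old one. */
    public void onBuildRecord(K primaryKey, V value) {
        latestBuildSide.put(primaryKey, value);
    }

    /** Probe side: always evaluated against the current state; past results are never revised. */
    public Optional<V> onProbeRecord(K joinKey) {
        return Optional.ofNullable(latestBuildSide.get(joinKey));
    }
}
```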







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444595912



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。

Review comment:
       Updated.







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 4dbc7b88a9fdf589c0c339378576cdda755fd77c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r458828105



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +138,21 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是处理时间,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins) 相反,时态表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [interval joins](#interval-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间区间 Join](#interval-joins) 相比,时态表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的和不再需要的给定 primary key 所对应的记录将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让时态表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法

Review comment:
       Done







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r443218625



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -104,17 +101,17 @@ rowtime currency   rate
 11:49   Pounds      108
 {% endhighlight %}
 
-Given that we would like to calculate the amount of all `Orders` converted to a common currency (`Yen`).
+基于上述信息,欲计算 `Orders` 表中所有交易量并全部转换成日元。
 
-For example, we would like to convert the following order using the appropriate conversion rate for the given `rowtime` (`114`).
+例如,若要转换下表中的交易,需要使用对应时间区间内的汇率(即 `114`)。

Review comment:
       Merged.







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 62a088d537479bbf72b6ee8d2c1852d720eac913 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874) 
   * 4dbc7b88a9fdf589c0c339378576cdda755fd77c UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436322260



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录一起合并查询。

Review comment:
       Changed it to "所有记录进行 Join 运算"; not sure whether that is appropriate.







[GitHub] [flink] rmetzger closed pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
rmetzger closed pull request #12420:
URL: https://github.com/apache/flink/pull/12420


   





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436320317



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录一起合并查询。
 
 {% highlight sql %}
 SELECT * FROM Orders
 INNER JOIN Product
 ON Orders.productId = Product.id
 {% endhighlight %}
 
-These semantics allow for any kind of updating (insert, update, delete) input tables.
+上述语意允许对输入表进行任意类型的更新操作(insert, update, delete)。
 
-However, this operation has an important implication: it requires to keep both sides of the join input in Flink's state forever.
-Thus, the resource usage will grow indefinitely as well, if one or both input tables are continuously growing.
+然而,常规 Join 隐含了一个重要的前提:即它需要在 Flink 的状态中永久保存 Join 两侧的数据。
+因而,如果 Join 操作中的一方或双方输入表持续增长的话,资源消耗也将会随之无限增长。

Review comment:
       Fixed.
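
A conceptual model of the caveat in this hunk: a regular join has to buffer both inputs indefinitely, since any future row on one side may still match any past row on the other. Only the left-input path is shown below and the right-input path is symmetric; this is an illustration, not Flink code.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BiPredicate;

public class RegularJoinStateSketch<L, R> {

    private final List<L> leftRows = new ArrayList<>();   // grows forever
    private final List<R> rightRows = new ArrayList<>();  // grows forever
    private final BiPredicate<L, R> condition;

    public RegularJoinStateSketch(BiPredicate<L, R> condition) {
        this.condition = condition;
    }

    /** A new left row joins with every buffered right row and is then kept for the future. */
    public List<String> onLeft(L left) {
        List<String> results = new ArrayList<>();
        for (R right : rightRows) {
            if (condition.test(left, right)) {
                results.add(left + " JOIN " + right);
            }
        }
        leftRows.add(left);
        return results;
    }
}
```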







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436320273



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。

Review comment:
       Fixed.







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r443218514



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join

Review comment:
       Really sorry, I did indeed miss some of the comments. After setting it to auto-merge last time I stopped checking; I thought it had already been merged.







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * 6c369fc8eb4709738b70d9fe065c1ac088e181d5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495) 
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 19b463b5eac22be3a724e9437fd0be0d9b3d5d3a UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r458274448



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -327,10 +313,10 @@ FROM table1 [AS <alias1>]
 ON table1.column-name1 = table2.column-name1
 {% endhighlight %}
 
-Currently, only support INNER JOIN and LEFT JOIN. The `FOR SYSTEM_TIME AS OF table1.proctime` should be followed after temporal table. `proctime` is a [processing time attribute](time_attributes.html#processing-time) of `table1`.
-This means that it takes a snapshot of the temporal table at processing time when joining every record from left table.
+目前只支持 INNER JOIN 和 LEFT JOIN,`FOR SYSTEM_TIME AS OF table1.proctime` 应位于时态表之后. `proctime` 是 `table1` 的 [processing time 属性]({%link dev/table/streaming/time_attributes.zh.md %}#processing-time)。

Review comment:
       Understood. I'll open that hotfix PR once this PR is merged.







[GitHub] [flink] klion26 commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
klion26 commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r451970665



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -300,25 +285,26 @@ FROM
   ON r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the current version of the build side table. In our example, the query is using the processing-time notion, so a newly appended order would always be joined with the most recent version of `LatestRates` when executing the operation. Note that the result is not deterministic for processing-time.
+探针侧表中的每个记录都将与构建侧表的当前版本所关联。 在此示例中,查询使用 `processing-time` 作为处理时间,因而新增订单将始终与表 `LatestRates` 的最新汇率执行 Join 操作。 注意,结果对于处理时间来说不是确定的。
+
+与[常规 Join](#regular-joins) 相比,尽管构建侧表的数据发生了变化,但时态表 Join 的变化前结果不会随之变化。而且时态表 Join 运算非常轻量级且不会保留任何状态。
 
-In contrast to [regular joins](#regular-joins), the previous results of the temporal table join will not be affected despite the changes on the build side. Also, the temporal table join operator is very lightweight and does not keep any state.
+与[时间区间 Join](#interval-joins) 相比,时态表 Join 没有定义决定哪些记录将被 Join 的时间窗口。
+探针侧的记录将总是与构建侧在对应 `processing time` 时间的最新数据执行 Join。因而构建侧的数据可能是任意旧的。
 
-Compared to [interval joins](#interval-joins), temporal table joins do not define a time window within which the records will be joined.
-Records from the probe side are always joined with the build side's latest version at processing time. Thus, records on the build side might be arbitrarily old.
+[时态表函数 Join](#join-with-a-temporal-table-function) 和时态表 Join都有类似的功能,但是有不同的 SQL 语法和 runtime 实现:

Review comment:
       ```suggestion
   [时态表函数 Join](#join-with-a-temporal-table-function) 和时态表 Join 都有类似的功能,但是有不同的 SQL 语法和 runtime 实现:
   ```
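
To illustrate the "lightweight and does not keep any state" point in this hunk: conceptually, each probe-side record just triggers a point lookup against the current version of the dimension table, and nothing is buffered. `RateLookup` below is a hypothetical stand-in for whatever backs the lookup source; it is not a Flink interface.

```java
import java.util.Optional;

public class StatelessTemporalJoinSketch {

    /** Hypothetical handle to the external dimension table (e.g. JDBC, HBase). */
    public interface RateLookup {
        Optional<Double> currentRate(String currency);
    }

    /** Joins a single order against the current rate; returns null on a LEFT JOIN miss. */
    public static Double joinOne(String currency, double amount, RateLookup latestRates) {
        return latestRates.currentRate(currency)
                .map(rate -> amount * rate)  // matched: convert the amount
                .orElse(null);               // no match: pad with null, LEFT JOIN style
    }
}
```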

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -327,10 +313,10 @@ FROM table1 [AS <alias1>]
 ON table1.column-name1 = table2.column-name1
 {% endhighlight %}
 
-Currently, only support INNER JOIN and LEFT JOIN. The `FOR SYSTEM_TIME AS OF table1.proctime` should be followed after temporal table. `proctime` is a [processing time attribute](time_attributes.html#processing-time) of `table1`.
-This means that it takes a snapshot of the temporal table at processing time when joining every record from left table.
+目前只支持 INNER JOIN 和 LEFT JOIN,`FOR SYSTEM_TIME AS OF table1.proctime` 应位于时态表之后. `proctime` 是 `table1` 的 [processing time 属性]({%link dev/table/streaming/time_attributes.zh.md %}#processing-time)。

Review comment:
       The link `{%link dev/table/streaming/time_attributes.zh.md %}#processing-time` here has a problem. It is not an issue with this document; rather, an anchor needs to be added in front of the corresponding heading in `dev/table/streaming/time_attributes.zh.md` (see the [wiki](https://cwiki.apache.org/confluence/display/FLINK/Flink+Translation+Specifications) for details). That can be submitted as a separate hotfix PR.

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -300,25 +285,26 @@ FROM
   ON r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the current version of the build side table. In our example, the query is using the processing-time notion, so a newly appended order would always be joined with the most recent version of `LatestRates` when executing the operation. Note that the result is not deterministic for processing-time.
+探针侧表中的每个记录都将与构建侧表的当前版本所关联。 在此示例中,查询使用 `processing-time` 作为处理时间,因而新增订单将始终与表 `LatestRates` 的最新汇率执行 Join 操作。 注意,结果对于处理时间来说不是确定的。

Review comment:
       Could processing-time and event-time be translated here as well? If so, please translate them consistently throughout the whole article,
   since they are already translated on the [时间属性](http://localhost:4000/zh/dev/table/streaming/time_attributes.html) page.
   







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444589987



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让临时表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法
 
-After [defining temporal table function](temporal_tables.html#defining-temporal-table-function), we can start using it.
-Temporal table functions can be used in the same way as normal table functions would be used.
+在 [定义临时表函数](temporal_tables.html#defining-temporal-table-function) 之后就可以使用了。

Review comment:
       Fixed







[GitHub] [flink] rmetzger commented on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
rmetzger commented on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-662843164


   Thanks for the review & contribution. I will merge the PR now.





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444610772



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,43 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 时态 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于 Processing-time 时态 Join
 
-### Processing-time Temporal Joins
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给时态表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的时态表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+可以将 processing-time 的时态 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+### 基于 Event-time 时态 Join
 
-### Event-time Temporal Joins
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给时态表函数。这允许对两个表中在相同时间点的记录执行 Join 操作。
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+与基于 processing-time 的时态 Join 相比,时态表不仅将构建侧记录的最新版本(是否最新由所定义的主键所决定)保存在 state 中,同时也会存储自上一个 watermarks 以来的所有版本(按时间区分)。
 
-Compared to processing-time temporal joins, the temporal table does not only keep the latest version (with respect to the defined primary key) of the build side records in the state
-but stores all versions (identified by time) since the last watermark.
+例如,在探针侧表新插入一条 event-time 时间为 `12:30:00` 的记录,它将和构建侧表时间点为 `12:30:00` 的版本根据[时态表的概念](temporal_tables.html)进行 Join 运算。
+因此,新插入的记录仅与时间戳小于等于 `12:30:00` 的记录进行 Join 计算(由主键决定哪些时间点的数据将参与计算)。
 
-For example, an incoming row with an event-time timestamp of `12:30:00` that is appended to the probe side table
-is joined with the version of the build side table at time `12:30:00` according to the [concept of temporal tables](temporal_tables.html).
-Thus, the incoming row is only joined with rows that have a timestamp lower or equal to `12:30:00` with
-applied updates according to the primary key until this point in time.
+通过定义事件时间(event time),[watermarks]({{ site.baseurl }}/zh/dev/event_time.html) 允许 Join 运算不断向前滚动,丢弃不再需要的构建侧快照。因为不再需要时间戳更低或相等的记录。
 
-By definition of event time, [watermarks]({{ site.baseurl }}/dev/event_time.html) allow the join operation to move
-forward in time and discard versions of the build table that are no longer necessary because no incoming row with
-lower or equal timestamp is expected.
+<a name="join-with-a-temporal-table"></a>
 
-Join with a Temporal Table
+时态表 Join
 --------------------------
 
-A join with a temporal table joins an arbitrary table (left input/probe side) with a temporal table (right input/build side),
-i.e., an external dimension table that changes over time. Please check the corresponding page for more information about [temporal tables](temporal_tables.html#temporal-table).
+时态表 Join 意味着对任意表(左输入/探针侧)和一个时态表(右输入/构建侧)执行的 Join 操作,即随时间变化的的扩展表。请参考相应的页面以获取更多有关[时态表](temporal_tables.html#temporal-table)的信息。
 
-<span class="label label-danger">Attention</span> Users can not use arbitrary tables as a temporal table, but need to use a table backed by a `LookupableTableSource`. A `LookupableTableSource` can only be used for temporal join as a temporal table. See the page for more details about [how to define LookupableTableSource](../sourceSinks.html#defining-a-tablesource-with-lookupable).
+<span class="label label-danger">注意</span> 不是任何表都能用作时态表,能作为时态表的表必须实现接口 `LookupableTableSource`。接口 `LookupableTableSource` 的实例只能作为时态表用于时态 Join 。查看此页面获取更多关于[如何实现接口 `LookupableTableSource`](../sourceSinks.html#defining-a-tablesource-with-lookupable) 的详细内容。

Review comment:
       fixed
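
   As a companion to the "时态表 Join" part of this hunk: a sketch of the `FOR SYSTEM_TIME AS OF` syntax that section describes, assuming `LatestRates` is backed by a `LookupableTableSource` and that `Orders` exposes a processing-time attribute (called `o.proctime` here); the projected columns are illustrative only:

   {% highlight sql %}
   -- Sketch: every probe-side order is joined with the version of
   -- LatestRates that is current at the order's processing time.
   SELECT
     o.amount * r.rate AS amount
   FROM
     Orders AS o
     JOIN LatestRates FOR SYSTEM_TIME AS OF o.proctime AS r
     ON r.currency = o.currency
   {% endhighlight %}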







[GitHub] [flink] flinkbot commented on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636503135


   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 6c369fc8eb4709738b70d9fe065c1ac088e181d5 (Sun May 31 17:37:03 UTC 2020)
   
    ✅no warnings
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
    The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r458273723



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -300,25 +285,26 @@ FROM
   ON r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the current version of the build side table. In our example, the query is using the processing-time notion, so a newly appended order would always be joined with the most recent version of `LatestRates` when executing the operation. Note that the result is not deterministic for processing-time.
+探针侧表中的每个记录都将与构建侧表的当前版本所关联。 在此示例中,查询使用 `processing-time` 作为处理时间,因而新增订单将始终与表 `LatestRates` 的最新汇率执行 Join 操作。 注意,结果对于处理时间来说不是确定的。

Review comment:
       Done







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r458270785



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -300,25 +285,26 @@ FROM
   ON r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the current version of the build side table. In our example, the query is using the processing-time notion, so a newly appended order would always be joined with the most recent version of `LatestRates` when executing the operation. Note that the result is not deterministic for processing-time.
+探针侧表中的每个记录都将与构建侧表的当前版本所关联。 在此示例中,查询使用 `processing-time` 作为处理时间,因而新增订单将始终与表 `LatestRates` 的最新汇率执行 Join 操作。 注意,结果对于处理时间来说不是确定的。
+
+与[常规 Join](#regular-joins) 相比,尽管构建侧表的数据发生了变化,但时态表 Join 的变化前结果不会随之变化。而且时态表 Join 运算非常轻量级且不会保留任何状态。
 
-In contrast to [regular joins](#regular-joins), the previous results of the temporal table join will not be affected despite the changes on the build side. Also, the temporal table join operator is very lightweight and does not keep any state.
+与[时间区间 Join](#interval-joins) 相比,时态表 Join 没有定义决定哪些记录将被 Join 的时间窗口。
+探针侧的记录将总是与构建侧在对应 `processing time` 时间的最新数据执行 Join。因而构建侧的数据可能是任意旧的。
 
-Compared to [interval joins](#interval-joins), temporal table joins do not define a time window within which the records will be joined.
-Records from the probe side are always joined with the build side's latest version at processing time. Thus, records on the build side might be arbitrarily old.
+[时态表函数 Join](#join-with-a-temporal-table-function) 和时态表 Join都有类似的功能,但是有不同的 SQL 语法和 runtime 实现:

Review comment:
       Done







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 50f2204b9ec3efcdd2619d201b8c8a21cda1341f Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4694) 
   * 1b66bcc9096c203f1dc6942ee03e3f8c83943b80 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4736) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] klion26 commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
klion26 commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r448117194



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表]({%link dev/table/streaming/dynamic_tables.zh.md %})中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API]({%link dev/table/sql/tableApi.zh.md %}#joins) 和 [SQL]({%link dev/table/sql/queries.zh.md %}#joins) 中的 Join 章节。

Review comment:
       `tableApi.zh.md` has been moved from `dev/table/sql` to `dev/table`







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r436320356



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值仅会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-### Event-time Temporal Joins
+### 基于 Event-time 临时 Join
 
-With an event-time time attribute (i.e., a rowtime attribute), it is possible to pass _past_ time attributes to the temporal table function.
-This allows for joining the two tables at a common point in time.
+将 event-time 作为时间属性时,可将 _past_ 时间属性作为参数传递给临时表函数。
+这允许对两个表中在相同时间点的记录执行 Join 操作。

Review comment:
       Fixed
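
   To illustrate the processing-time case discussed in this hunk, a sketch that passes a processing-time attribute (assumed here to be `o.proctime`) to a `Rates` temporal table function as defined elsewhere on this page; because processing time is always "now", every probe row sees only the latest known rate per currency:

   {% highlight sql %}
   -- Sketch: the function call resolves to the most recent version of
   -- the build side, so earlier rate versions never influence the result.
   SELECT
     o.amount * r.rate AS amount
   FROM
     Orders AS o,
     LATERAL TABLE (Rates(o.proctime)) AS r
   WHERE r.currency = o.currency
   {% endhighlight %}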







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444591256



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧的表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录一起合并查询。
 
 {% highlight sql %}
 SELECT * FROM Orders
 INNER JOIN Product
 ON Orders.productId = Product.id
 {% endhighlight %}
 
-These semantics allow for any kind of updating (insert, update, delete) input tables.
+上述语意允许对输入表进行任意类型的更新操作(insert, update, delete)。
 
-However, this operation has an important implication: it requires to keep both sides of the join input in Flink's state forever.
-Thus, the resource usage will grow indefinitely as well, if one or both input tables are continuously growing.
+然而,常规 Join 隐含了一个重要的前提:即它需要在 Flink 的状态中永久保存 Join 两侧的数据。
+因而,如果 Join 操作中的一方或双方输入表持续增长的话,资源消耗也将会随之无限增长。
 
-Time-windowed Joins
+时间窗口 Join

Review comment:
       https://github.com/apache/flink/blob/b6e2f9fb178649c305eb0881be57a46f9ce9911a/docs/dev/table/sql/queries.zh.md
   
    Since `Time-windowed` has been renamed to `Interval Joins`, this follows the translation “时间区间 Join” used [here](https://github.com/apache/flink/blob/b6e2f9fb178649c305eb0881be57a46f9ce9911a/docs/dev/table/sql/queries.zh.md).







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 2b6feb97e452779487c38f13c260aeb0a6e3f5c7 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864) 
   * 62a088d537479bbf72b6ee8d2c1852d720eac913 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444591702



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。

Review comment:
       fixed







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r443218642



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。

Review comment:
       Merged







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r458830603



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +182,43 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 时态 Join 中的 State 保留(在[查询配置]({%link dev/table/streaming/query_configuration.zh.md %})中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于处理时间的时态 Join

Review comment:
       Done

##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +182,43 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 时态 Join 中的 State 保留(在[查询配置]({%link dev/table/streaming/query_configuration.zh.md %})中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
+
+### 基于处理时间的时态 Join
 
-### Processing-time Temporal Joins
+如果将处理时间作为时间属性,将无法将 _过去_ 时间属性作为参数传递给时态表函数。
+根据定义,处理时间总会是当前时间戳。因此,基于处理时间的时态表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+可以将处理时间的时态 Join 视作简单的 `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
+当来自构建侧的新插入的记录与旧值具有相同的 Key 时,旧值会被覆盖。
+探针侧的每条记录将总会根据 `HashMap` 的最新/当前状态来计算。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+### 基于事件时间的时态 Join

Review comment:
       Done







[GitHub] [flink] klion26 commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
klion26 commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r437141862



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,35 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会更难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+常规 Join

Review comment:
       It looks like this part has not been changed. After running `docs/build.sh -p` locally, you can click the link corresponding to line 144 of this file to check the effect: it jumps to the beginning of the page, which is different from the English version.
   For suggestions like these, please verify them locally once after translating; that will save you time.







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r443218549



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -63,18 +61,17 @@ WHERE o.id = s.orderId AND
       o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-Compared to a regular join operation, this kind of join only supports append-only tables with time attributes. Since time attributes are quasi-monotonic increasing, Flink can remove old values from its state without affecting the correctness of the result.
+与常规 Join 操作相比,时间窗口 Join 只支持带有时间属性的递增表。由于时间属性是单调递增的,Flink 可以从状态中移除过期的数据,而不会影响结果的正确性。
 
-Join with a Temporal Table Function
+临时表函数 Join
 --------------------------
 
-A join with a temporal table function joins an append-only table (left input/probe side) with a temporal table (right input/build side),
-i.e., a table that changes over time and tracks its changes. Please check the corresponding page for more information about [temporal tables](temporal_tables.html).
+临时表函数 Join 连接了一个递增表(左输入/探针侧)和一个临时表(右输入/构建侧),即一个随时间变化且不断追踪其改动的表。请参考[临时表](temporal_tables.html)的相关章节查看更多细节。
 
-The following example shows an append-only table `Orders` that should be joined with the continuously changing currency rates table `RatesHistory`.
+下方示例展示了一个递增表 `Orders` 与一个不断改变的汇率表 `RatesHistory` 的 Join 操作。
 
-`Orders` is an append-only table that represents payments for the given `amount` and the given `currency`.
-For example at `10:15` there was an order for an amount of `2 Euro`.
+`Orders` 表示了包含支付数据(数量字段 `amount` 和货币字段 `currency`)的递增表。
+例如 `10:15` 对应行的记录代表了一笔 2 欧元支付记录。

Review comment:
       Merged







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r451581064



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表]({%link dev/table/streaming/dynamic_tables.zh.md %})中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API]({%link dev/table/sql/tableApi.zh.md %}#joins) 和 [SQL]({%link dev/table/sql/queries.zh.md %}#joins) 中的 Join 章节。

Review comment:
       Done, thanks for reviewing.







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444610746



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表](dynamic_tables.html)中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API](../tableApi.html#joins) 和 [SQL]({{ site.baseurl }}/zh/dev/table/sql/queries.html#joins) 中的 Join 章节。

Review comment:
       Fixed







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 4dbc7b88a9fdf589c0c339378576cdda755fd77c Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986) 
   * bc4b8b49834d751271c7f0976f62f91923217420 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444587139



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -63,18 +61,17 @@ WHERE o.id = s.orderId AND
       o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
 {% endhighlight %}
 
-Compared to a regular join operation, this kind of join only supports append-only tables with time attributes. Since time attributes are quasi-monotonic increasing, Flink can remove old values from its state without affecting the correctness of the result.
+与常规 Join 操作相比,时间窗口 Join 只支持带有时间属性的递增表。由于时间属性是单调递增的,Flink 可以从状态中移除过期的数据,而不会影响结果的正确性。

Review comment:
       Fixed
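
   The sentence under review refers to the time bound that lets Flink expire old rows from state; the interval join discussed on this page has roughly the following shape (a sketch, reusing the `Orders`/`Shipments` names from the page's example):

   {% highlight sql %}
   -- Sketch: the BETWEEN predicate bounds how far apart the two time
   -- attributes may be, so rows outside the bound can be dropped from state.
   SELECT *
   FROM Orders o, Shipments s
   WHERE o.id = s.orderId
     AND o.ordertime BETWEEN s.shiptime - INTERVAL '4' HOUR AND s.shiptime
   {% endhighlight %}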







[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r458825956



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -22,37 +22,38 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Joins are a common and well-understood operation in batch data processing to connect the rows of two relations. However, the semantics of joins on [dynamic tables](dynamic_tables.html) are much less obvious or even confusing.
+Join 在批数据处理中是比较常见且广为人知的运算,一般用于连接两张关系表。然而在[动态表]({%link dev/table/streaming/dynamic_tables.zh.md %})中 Join 的语义会难以理解甚至让人困惑。
 
-Because of that, there are a couple of ways to actually perform a join using either Table API or SQL.
+因而,Flink 提供了几种基于 Table API 和 SQL 的 Join 方法。
 
-For more information regarding the syntax, please check the join sections in [Table API](../tableApi.html#joins) and [SQL]({{ site.baseurl }}/dev/table/sql/queries.html#joins).
+欲获取更多关于 Join 语法的细节,请参考 [Table API]({%link dev/table/tableApi.zh.md %}#joins) 和 [SQL]({%link dev/table/sql/queries.zh.md %}#joins) 中的 Join 章节。
 
 * This will be replaced by the TOC
 {:toc}
 
-Regular Joins
+<a name="regular-joins"></a>
+
+常规 Join
 -------------
 
-Regular joins are the most generic type of join in which any new records or changes to either side of the join input are visible and are affecting the whole join result.
-For example, if there is a new record on the left side, it will be joined with all of the previous and future records on the right side.
+常规 Join 是最常用的 Join 用法。在常规 Join 中,任何新记录或对 Join 两侧表的任何更改都是可见的,并会影响最终整个 Join 的结果。例如,如果 Join 左侧插入了一条新的记录,那么它将会与 Join 右侧过去与将来的所有记录进行 Join 运算。
 
 {% highlight sql %}
 SELECT * FROM Orders
 INNER JOIN Product
 ON Orders.productId = Product.id
 {% endhighlight %}
 
-These semantics allow for any kind of updating (insert, update, delete) input tables.
+上述语意允许对输入表进行任意类型的更新操作(insert, update, delete)。
+
+然而,常规 Join 隐含了一个重要的前提:即它需要在 Flink 的状态中永久保存 Join 两侧的数据。因而,如果 Join 操作中的一方或双方输入表持续增长的话,资源消耗也将会随之无限增长。
 
-However, this operation has an important implication: it requires to keep both sides of the join input in Flink's state forever.
-Thus, the resource usage will grow indefinitely as well, if one or both input tables are continuously growing.
+<a name="interval-joins"></a>
 
-Interval Joins
+时间区间 Join

Review comment:
       An `<a name="interval-joins"></a>` anchor has already been added here.







[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * 6c369fc8eb4709738b70d9fe065c1ac088e181d5 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 50f2204b9ec3efcdd2619d201b8c8a21cda1341f Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=4694) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * 6c369fc8eb4709738b70d9fe065c1ac088e181d5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495) 
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 19b463b5eac22be3a724e9437fd0be0d9b3d5d3a UNKNOWN
   * 2b6feb97e452779487c38f13c260aeb0a6e3f5c7 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * 6c369fc8eb4709738b70d9fe065c1ac088e181d5 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2495) 
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 2b6feb97e452779487c38f13c260aeb0a6e3f5c7 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=2864) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444591375



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -140,26 +137,23 @@ FROM
 WHERE r.currency = o.currency
 {% endhighlight %}
 
-Each record from the probe side will be joined with the version of the build side table at the time of the correlated time attribute of the probe side record.
-In order to support updates (overwrites) of previous values on the build side table, the table must define a primary key.
+探针侧的每条记录都将与构建侧的表执行 Join 运算,构建侧的表中与探针侧对应时间属性的记录将参与运算。
+为了支持更新(包括覆盖)构建侧的表,该表必须定义主键。
 
-In our example, each record from `Orders` will be joined with the version of `Rates` at time `o.rowtime`. The `currency` field has been defined as the primary key of `Rates` before and is used to connect both tables in our example. If the query were using a processing-time notion, a newly appended order would always be joined with the most recent version of `Rates` when executing the operation.
+在示例中,`Orders` 表中的每一条记录都与时间点 `o.rowtime` 的 `Rates` 进行 Join 运算。`currency` 字段已被定义为 `Rates` 表的主键,在示例中该字段也被用于连接两个表。如果该查询采用的是 processing-time,则在执行时新增的订单将始终与最新的 `Rates` 执行 Join。
 
-In contrast to [regular joins](#regular-joins), this means that if there is a new record on the build side, it will not affect the previous results of the join.
-This again allows Flink to limit the number of elements that must be kept in the state.
+与[常规 Join](#regular-joins)相反,临时表函数 Join 意味着如果在构建侧新增一行记录将不会影响之前的结果。这同时使得 Flink 能够限制必须保存在 state 中的元素数量(因为不再需要保存之前的状态)。
 
-Compared to [time-windowed joins](#time-windowed-joins), temporal table joins do not define a time window within which bounds the records will be joined.
-Records from the probe side are always joined with the build side's version at the time specified by the time attribute. Thus, records on the build side might be arbitrarily old.
-As time passes, the previous and no longer needed versions of the record (for the given primary key) will be removed from the state.
+与[时间窗口 Join](#time-windowed-joins) 相比,临时表 Join 没有定义限制了每次参与 Join 运算的元素的时间范围。探针侧的记录总是会和构建侧中对应特定时间属性的数据进行 Join 操作。因而在构建侧的记录可以是任意时间之前的。随着时间流动,之前产生的不再需要的记录(已给定了主键)将从 state 中移除。
 
-Such behaviour makes a temporal table join a good candidate to express stream enrichment in relational terms.
+这种做法让临时表 Join 成为一个很好的用于表达不同流之间关联的方法。
 
-### Usage
+### 用法
 
-After [defining temporal table function](temporal_tables.html#defining-temporal-table-function), we can start using it.
-Temporal table functions can be used in the same way as normal table functions would be used.
+在 [定义临时表函数](temporal_tables.html#defining-temporal-table-function) 之后就可以使用了。
+临时表函数可以和普通表函数一样使用。
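 
 For context, a minimal Scala Table API sketch of the usage this hunk describes — assuming a `StreamTableEnvironment` named `tEnv` and already-registered tables `orders` and `ratesHistory` with the field names shown below (none of these identifiers come from the diff itself; environment setup is omitted):

```scala
// Sketch only: tEnv, orders, ratesHistory and all field names are assumptions,
// not taken from the PR.

// Register a temporal table function over the build-side table, versioned by the
// time attribute `r_rowtime` and keyed by the primary key `r_currency`.
val rates = ratesHistory.createTemporalTableFunction("r_rowtime", "r_currency")
tEnv.registerFunction("Rates", rates)

// Use it like a normal table function: each order is joined with the version of
// the rates table that was valid at `o_rowtime`.
val result = orders
  .joinLateral("Rates(o_rowtime)", "r_currency = o_currency")
  .select("o_amount * r_rate")
```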

Review comment:
       Fixed




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444591826



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
 
-With a processing-time time attribute, it is impossible to pass _past_ time attributes as an argument to the temporal table function.
-By definition, it is always the current timestamp. Thus, invocations of a processing-time temporal table function will always return the latest known versions of the underlying table
-and any updates in the underlying history table will also immediately overwrite the current values.
+如果将 processing-time 作为时间属性,将无法将 _past_ 时间属性作为参数传递给临时表函数。
+根据定义,processing-time 总会是当前时间戳。因此,基于 processing-time 的临时表函数将始终返回基础表的最新已知版本,时态表函数的调用将始终返回基础表的最新已知版本,并且基础历史表中的任何更新也将立即覆盖当前值。
 
-Only the latest versions (with respect to the defined primary key) of the build side records are kept in the state.
-Updates of the build side will have no effect on previously emitted join results.
+只有最新版本的构建侧记录(是否最新由所定义的主键所决定)会被保存在 state 中。
+构建侧的更新不会对之前 Join 的结果产生影响。
 
-One can think about a processing-time temporal join as a simple `HashMap<K, V>` that stores all of the records from the build side.
-When a new record from the build side has the same key as some previous record, the old value is just simply overwritten.
-Every record from the probe side is always evaluated against the most recent/current state of the `HashMap`.
+可以将 processing-time 的临时 Join 视作简单的哈希Map `HashMap <K,V>`,HashMap 中存储来自构建侧的所有记录。
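 
 As a rough illustration of the `HashMap` analogy above (plain Scala, not Flink runtime code; the record type and field names are invented for the example):

```scala
import scala.collection.mutable

final case class Rate(currency: String, rate: Long)

// All build-side records, keyed by the defined primary key (`currency`).
val buildSide = mutable.HashMap.empty[String, Rate]

// A new build-side record with the same key simply overwrites the previous value.
def onRate(update: Rate): Unit = buildSide(update.currency) = update

// Every probe-side record is evaluated against the current content of the map;
// earlier join results are not revisited.
def onOrder(amount: Long, currency: String): Option[Long] =
  buildSide.get(currency).map(r => amount * r.rate)
```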

Review comment:
       Fixed




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r444588046



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -189,50 +183,42 @@ val result = orders
 </div>
 </div>
 
-**Note**: State retention defined in a [query configuration](query_configuration.html) is not yet implemented for temporal joins.
-This means that the required state to compute the query result might grow infinitely depending on the number of distinct primary keys for the history table.
+**注意**: 临时 Join中的 State 保留(在 [查询配置](query_configuration.html) 中定义)还未实现。这意味着计算的查询结果所需的状态可能会无限增长,具体数量取决于历史记录表的不重复主键个数。
 
-### Processing-time Temporal Joins
+### 基于 Processing-time 临时 Join
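 
 For reference on the state retention note above: this is how idle state retention from the query configuration is typically set for other continuous queries — a sketch against the 1.10/1.11-era API, assuming a `StreamTableEnvironment` named `tEnv`; per the note it is not yet honored by temporal joins, whose state can still grow with the number of distinct primary keys:

```scala
import org.apache.flink.api.common.time.Time

// Assumes a StreamTableEnvironment `tEnv` is in scope (not part of the diff).
val tableConfig = tEnv.getConfig
// Keep idle per-key state for at least 12 and at most 24 hours.
tableConfig.setIdleStateRetentionTime(Time.hours(12), Time.hours(24))
```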

Review comment:
       Fixed




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] authuir commented on a change in pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
authuir commented on a change in pull request #12420:
URL: https://github.com/apache/flink/pull/12420#discussion_r443218585



##########
File path: docs/dev/table/streaming/joins.zh.md
##########
@@ -88,8 +85,8 @@ rowtime amount currency
 11:04        5 US Dollar
 {% endhighlight %}
 
-`RatesHistory` represents an ever changing append-only table of currency exchange rates with respect to `Yen` (which has a rate of `1`).
-For example, the exchange rate for the period from `09:00` to `10:45` of `Euro` to `Yen` was `114`. From `10:45` to `11:15` it was `116`.
+字段 `RatesHistory` 表示不断变化的汇率信息。汇率以日元为基准(即 `Yen` 永远为 1)。
+例如,`09:00` 到 `10:45` 间欧元对日元的汇率是 `114`,`10:45` 到 `11:15` 间为 `116`。
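 
 To make the validity periods concrete, a plain-Scala sketch (invented types; only the rates quoted in the prose above are included) of how the rate valid at a given time is derived from an append-only history:

```scala
final case class RateUpdate(rowtime: String, currency: String, rate: Long)

// Append-only history in event-time order (only the values mentioned above).
val ratesHistory = Seq(
  RateUpdate("09:00", "Yen", 1),
  RateUpdate("09:00", "Euro", 114),
  RateUpdate("10:45", "Euro", 116)
)

// The rate valid at time `t` is the latest update at or before `t` for that currency.
// Zero-padded HH:mm strings compare correctly in lexicographic order.
def rateAt(t: String, currency: String): Option[Long] =
  ratesHistory.filter(r => r.currency == currency && r.rowtime <= t).lastOption.map(_.rate)

// rateAt("10:00", "Euro") == Some(114); rateAt("11:00", "Euro") == Some(116)
```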

Review comment:
       Merged.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * d0f0b15cc5289803cdbde65b26bc66f0542da5f1 UNKNOWN
   * 62a088d537479bbf72b6ee8d2c1852d720eac913 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3874) 
   * 4dbc7b88a9fdf589c0c339378576cdda755fd77c Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=3986) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #12420: [FLINK-16085][docs] Translate "Joins in Continuous Queries" page of "Streaming Concepts" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #12420:
URL: https://github.com/apache/flink/pull/12420#issuecomment-636504607


   ## CI report:
   
   * 6c369fc8eb4709738b70d9fe065c1ac088e181d5 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org