Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/02/21 06:08:08 UTC

[GitHub] [flink] JingsongLi commented on a change in pull request #11127: [FLINK-16081][docs] Translate /dev/table/index.zh.md

JingsongLi commented on a change in pull request #11127: [FLINK-16081][docs] Translate /dev/table/index.zh.md
URL: https://github.com/apache/flink/pull/11127#discussion_r382409904
 
 

 ##########
 File path: docs/dev/table/index.zh.md
 ##########
 @@ -25,41 +25,41 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Apache Flink features two relational APIs - the Table API and SQL - for unified stream and batch processing. The Table API is a language-integrated query API for Scala and Java that allows the composition of queries from relational operators such as selection, filter, and join in a very intuitive way. Flink's SQL support is based on [Apache Calcite](https://calcite.apache.org) which implements the SQL standard. Queries specified in either interface have the same semantics and specify the same result regardless whether the input is a batch input (DataSet) or a stream input (DataStream).
+Apache Flink features two relational APIs for unified stream and batch processing: the Table API and SQL. The Table API is a query API integrated in Java and Scala that allows composing queries from relational operators such as selection, filter, and join in a very intuitive way. Flink SQL is standard SQL implemented on top of [Apache Calcite](https://calcite.apache.org). Queries in either API have the same semantics for batch (DataSet) and stream (DataStream) inputs and produce the same results.
 
-The Table API and the SQL interfaces are tightly integrated with each other as well as Flink's DataStream and DataSet APIs. You can easily switch between all APIs and libraries which build upon the APIs. For instance, you can extract patterns from a DataStream using the [CEP library]({{ site.baseurl }}/dev/libs/cep.html) and later use the Table API to analyze the patterns, or you might scan, filter, and aggregate a batch table using a SQL query before running a [Gelly graph algorithm]({{ site.baseurl }}/dev/libs/gelly) on the preprocessed data.
+The Table API and SQL are tightly integrated with each other, as well as with the DataStream and DataSet APIs. You can easily switch between these APIs and the libraries built on top of them. For example, you can first use [CEP]({{ site.baseurl }}/zh/dev/libs/cep.html) to do pattern matching on a DataStream and then use the Table API to analyze the matches; or you can use SQL to scan, filter, and aggregate a batch table and then run a [Gelly graph algorithm]({{ site.baseurl }}/zh/dev/libs/gelly) on the preprocessed data.
 
-**Please note that the Table API and SQL are not yet feature complete and are being actively developed. Not all operations are supported by every combination of \[Table API, SQL\] and \[stream, batch\] input.**
+**Note: the Table API and SQL are still under active development and do not yet implement every feature. Not every combination of \[Table API, SQL\] and \[stream, batch\] input is supported.**
 
-Dependency Structure
+Dependency Graph
 --------------------
 
-Starting from Flink 1.9, Flink provides two different planner implementations for evaluating Table & SQL API programs: the Blink planner and the old planner that was available before Flink 1.9. Planners are responsible for
-translating relational operators into an executable, optimized Flink job. Both of the planners come with different optimization rules and runtime classes.
-They may also differ in the set of supported features.
+Starting from Flink 1.9, Flink provides two table planner implementations for executing Table API and SQL programs: the Blink planner and the old planner, where the old planner already existed before 1.9.
+The planner's main job is to translate relational operations into an executable, optimized Flink job. The two planners differ in the optimization rules they apply and in their runtime classes.
 
 Review comment:
  Capitalize the first letter of "planner".
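
For context on the paragraphs being translated above, here is a minimal, illustrative Java sketch (not part of the file under review) of the two ideas they describe: selecting one of the two planners via EnvironmentSettings, and expressing the same query through the Table API's relational operators and through Calcite-backed SQL. The `Orders` table and its `product`/`amount` columns are hypothetical, and the imports assume the Flink 1.10-era API.

```java
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.EnvironmentSettings;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.java.StreamTableEnvironment;

public class PlannerAndQuerySketch {
    public static void main(String[] args) {
        // Choose a planner explicitly: the Blink planner, or useOldPlanner()
        // for the legacy planner that predates Flink 1.9.
        EnvironmentSettings settings = EnvironmentSettings.newInstance()
                .useBlinkPlanner()
                .inStreamingMode()
                .build();

        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tEnv = StreamTableEnvironment.create(env, settings);

        // Table API: compose relational operators (filter, select) directly.
        // Assumes a table named "Orders" is already registered in the catalog.
        Table byApi = tEnv.from("Orders")
                .filter("amount > 100")
                .select("product, amount");

        // SQL: the equivalent query, parsed and validated via Apache Calcite.
        Table bySql = tEnv.sqlQuery(
                "SELECT product, amount FROM Orders WHERE amount > 100");
    }
}
```

Either form is translated by the configured planner into the same optimized Flink job, which is why the docs can promise identical semantics across the two interfaces.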

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services