Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/10/03 11:57:33 UTC

[GitHub] [flink] wangxlong opened a new pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

wangxlong opened a new pull request #13537:
URL: https://github.com/apache/flink/pull/13537


   
   ## What is the purpose of the change
   
    Translate page "intro_to_table_api" into Chinese
   
   ## Brief change log
   
    Translate page "intro_to_table_api" into Chinese
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): (no)
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: (no)
     - The serializers: (no)
     - The runtime per-record code paths (performance sensitive): (no)
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: (no)
     - The S3 file system connector: (no)
   
   ## Documentation
   
     - Does this pull request introduce a new feature? (no)
     - If yes, how is the feature documented? (not applicable / docs / JavaDocs / not documented)
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

flinkbot edited a comment on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-703094638


   ## CI report:
   
   * fbd56ff6167834a20d8ffcfd801dc2e6f3464657 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7181) 
   * ba413eda354b83280bdf5ce9f231e4e65f3d0216 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] wangxlong commented on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

wangxlong commented on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-706665786


   Hi @dianfu, could you help review this in your free time? Thank you~





[GitHub] [flink] flinkbot edited a comment on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

flinkbot edited a comment on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-703094638


   ## CI report:
   
   * fbd56ff6167834a20d8ffcfd801dc2e6f3464657 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7181) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot commented on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

flinkbot commented on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-703094638


   ## CI report:
   
   * fbd56ff6167834a20d8ffcfd801dc2e6f3464657 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

flinkbot edited a comment on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-703094638


   ## CI report:
   
   * fbd56ff6167834a20d8ffcfd801dc2e6f3464657 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7181) 
   * ba413eda354b83280bdf5ce9f231e4e65f3d0216 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7457) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot commented on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

flinkbot commented on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-703092256


   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit fbd56ff6167834a20d8ffcfd801dc2e6f3464657 (Sat Oct 03 11:59:17 UTC 2020)
   
    ✅no warnings
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
    <summary>Bot commands</summary>
    The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required.
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>





[GitHub] [flink] wangxlong commented on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

wangxlong commented on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-707145382


   @dianfu Updated.





[GitHub] [flink] flinkbot edited a comment on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

flinkbot edited a comment on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-703094638


   ## CI report:
   
   * fbd56ff6167834a20d8ffcfd801dc2e6f3464657 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7181) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] dianfu commented on a change in pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

dianfu commented on a change in pull request #13537:
URL: https://github.com/apache/flink/pull/13537#discussion_r503050214



##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -64,84 +64,84 @@ table_env.execute_sql("""
     )
 """)
 
-# 4. query from source table and perform caculations
-# create a Table from a Table API query:
+# 4. 从 source 表中查询同时执行计算
+# 使用 Table API 查询语句创建一张表:
 source_table = table_env.from_path("datagen")
-# or create a Table from a SQL query:
+# 或者使用 SQL 查询语句创建一张表:
 source_table = table_env.sql_query("SELECT * FROM datagen")
 
 result_table = source_table.select(source_table.id + 1, source_table.data)
 
-# 5. emit query result to sink table
-# emit a Table API result Table to a sink table:
+# 5. 将查询结果发送给 sink 表
+# 将 Table API 结果表数据发送给 sink 表:
 result_table.execute_insert("print").wait()
-# or emit results via SQL query:
+# 或者通过 SQL 查询语句来发送:
 table_env.execute_sql("INSERT INTO print SELECT * FROM datagen").wait()
 
 {% endhighlight %}
 
 {% top %}
 
-Create a TableEnvironment
+创建 TableEnvironment
 ---------------------------
 
-The `TableEnvironment` is a central concept of the Table API and SQL integration. The following code example shows how to create a TableEnvironment:
+`TableEnvironment` 是 Table API 和 SQL 集成的核心概念。下面代码示例展示了如何创建一个 TableEnvironment:
 
 {% highlight python %}
 
 from pyflink.table import EnvironmentSettings, StreamTableEnvironment, BatchTableEnvironment
 
-# create a blink streaming TableEnvironment
+# 创建 blink 流 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a blink batch TableEnvironment
+# 创建 blink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink streaming TableEnvironment
+# 创建 flink 流式 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_old_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink batch TableEnvironment
+# 创建 flink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_old_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
 {% endhighlight %}
 
-For more details about the different ways to create a `TableEnvironment`, please refer to the [TableEnvironment Documentation]({% link dev/python/table-api-users-guide/table_environment.zh.md %}#create-a-tableenvironment).
+关于创建 `TableEnvironment` 的更多细节,请查阅 [TableEnvironment 文档]({% link dev/python/table-api-users-guide/table_environment.zh.md %}#create-a-tableenvironment)。
 
-The `TableEnvironment` is responsible for:
+`TableEnvironment` 可以用来:
 
-* Creating `Table`s
-* Registering `Table`s as a temporary view
-* Executing SQL queries, see [SQL]({% link dev/table/sql/index.zh.md %}) for more details
-* Registering user-defined (scalar, table, or aggregation) functions, see [General User-defined Functions]({% link dev/python/table-api-users-guide/udfs/python_udfs.zh.md %}) and [Vectorized User-defined Functions]({% link dev/python/table-api-users-guide/udfs/vectorized_python_udfs.zh.md %}) for more details
-* Configuring the job, see [Python Configuration]({% link dev/python/table-api-users-guide/python_config.zh.md %}) for more details
-* Managing Python dependencies, see [Dependency Management]({% link dev/python/table-api-users-guide/dependency_management.zh.md %}) for more details
-* Submitting the jobs for execution
+* 创建 `Table`
+* 将 `Table` 注册成临时表
+* 执行 SQL 查询,更多细节可查阅 [SQL]({% link dev/table/sql/index.zh.md %})
+* 注册用户自定义的 (标量,表值,或者聚合) 函数, 更多细节可查阅 [普通的用户自定义函数]({% link dev/python/table-api-users-guide/udfs/python_udfs.zh.md %}) 和 [向量化的用户自定义函数]({% link dev/python/table-api-users-guide/udfs/vectorized_python_udfs.zh.md %})
+* 配置作业,更多细节可查阅 [Python 配置]({% link dev/python/table-api-users-guide/python_config.zh.md %})
+* 管理 Python 依赖,更多细节可查阅 [依赖管理]({% link dev/python/table-api-users-guide/dependency_management.zh.md %})
+* 提交 jobs 执行
 
-Currently there are 2 planners available: flink planner and blink planner.
+目前有2个可用的执行器 : flink 执行器 和 blink 执行器。
 
-You should explicitly set which planner to use in the current program.
-We recommend using the blink planner as much as possible. 
+你应该在当前程序中显式地设置使用哪个执行器。
+我们建议尽可能多的使用 blink 执行器。 

Review comment:
       ```suggestion
   我们建议尽可能使用 blink 执行器。 
   ```
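For readers following this hunk without a PyFlink installation handy: the `EnvironmentSettings.new_instance()...build()` chain in the diff is a fluent builder. A minimal stand-in class sketching that call pattern (an illustrative toy, not the real `pyflink.table.EnvironmentSettings`):

```python
class EnvSettings:
    """Toy fluent builder mirroring the EnvironmentSettings call chain
    shown in the diff. Illustrative only -- not the pyflink class."""

    def __init__(self):
        self.mode = None
        self.planner = None

    @staticmethod
    def new_instance():
        return EnvSettings()

    def in_streaming_mode(self):
        self.mode = "streaming"
        return self  # returning self enables method chaining

    def in_batch_mode(self):
        self.mode = "batch"
        return self

    def use_blink_planner(self):
        self.planner = "blink"
        return self

    def use_old_planner(self):
        self.planner = "flink"
        return self

    def build(self):
        # The real build() produces an EnvironmentSettings object;
        # here we just surface the two chosen options.
        return (self.planner, self.mode)
```

Each setter returns `self`, which is why the documented one-liner `new_instance().in_streaming_mode().use_blink_planner().build()` works.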

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -64,84 +64,84 @@ table_env.execute_sql("""
     )
 """)
 
-# 4. query from source table and perform caculations
-# create a Table from a Table API query:
+# 4. 从 source 表中查询同时执行计算
+# 使用 Table API 查询语句创建一张表:
 source_table = table_env.from_path("datagen")
-# or create a Table from a SQL query:
+# 或者使用 SQL 查询语句创建一张表:
 source_table = table_env.sql_query("SELECT * FROM datagen")
 
 result_table = source_table.select(source_table.id + 1, source_table.data)
 
-# 5. emit query result to sink table
-# emit a Table API result Table to a sink table:
+# 5. 将查询结果发送给 sink 表
+# 将 Table API 结果表数据发送给 sink 表:
 result_table.execute_insert("print").wait()
-# or emit results via SQL query:
+# 或者通过 SQL 查询语句来发送:
 table_env.execute_sql("INSERT INTO print SELECT * FROM datagen").wait()
 
 {% endhighlight %}
 
 {% top %}
 
-Create a TableEnvironment
+创建 TableEnvironment
 ---------------------------
 
-The `TableEnvironment` is a central concept of the Table API and SQL integration. The following code example shows how to create a TableEnvironment:
+`TableEnvironment` 是 Table API 和 SQL 集成的核心概念。下面代码示例展示了如何创建一个 TableEnvironment:
 
 {% highlight python %}
 
 from pyflink.table import EnvironmentSettings, StreamTableEnvironment, BatchTableEnvironment
 
-# create a blink streaming TableEnvironment
+# 创建 blink 流 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a blink batch TableEnvironment
+# 创建 blink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink streaming TableEnvironment
+# 创建 flink 流式 TableEnvironment

Review comment:
       keep consistent with the other places?

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -384,32 +384,32 @@ The result is:
 8> +U(3,19)
 {% endhighlight %}
 
-In fact, this shows the change logs received by the print sink.
-The output format of a change log is:
+实际上,结果展示了打印接收器所接收到的变更日志。
+变更日志输出的格式为:
 {% highlight text %}
-{subtask id}> {message type}{string format of the value}
+{subtask id}> {消息类型}{值的字符串格式}
 {% endhighlight %}
-For example, "2> +I(4,11)" means this message comes from the 2nd subtask, and "+I" means it is an insert message. "(4, 11)" is the content of the message.
-In addition, "-U" means a retract record (i.e. update-before), which means this message should be deleted or retracted from the sink. 
-"+U" means this is an update record (i.e. update-after), which means this message should be updated or inserted by the sink.
+例如,"2> +I(4,11)" 表示这条消息来自第二个 subtask,同时 "+I" 表示这是一条插入的消息。"(4, 11)" 是这条消息的内容。
+另外,"-U" 表示这是一条回撤的消息 (即更新前),这意味着应该在 sink 中删除或回撤该消息。 

Review comment:
       ```suggestion
   另外,"-U" 表示这是一条撤回消息 (即更新前),这意味着应该从 sink 中删除或撤回该消息。 
   ```
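The changelog format under discussion, `{subtask id}> {message type}{value}`, can be parsed mechanically; a small sketch that decodes lines like `"2> +I(4,11)"` (helper name is ours, not from the docs):

```python
import re

def parse_changelog(line):
    """Split a print-sink changelog line such as '2> +I(4,11)' into
    (subtask_id, message_type, value). Message types: +I insert,
    -U update-before (retract), +U update-after, -D delete."""
    m = re.match(r"(\d+)> ([+-][IUD])\((.*)\)", line)
    if m is None:
        raise ValueError("not a changelog line: %r" % line)
    return int(m.group(1)), m.group(2), m.group(3)
```

For example, `parse_changelog("8> +U(3,19)")` yields subtask 8, an update-after message, and the value `3,19` — matching the reading given in the translated paragraph.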

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -455,18 +455,18 @@ table_env.execute_sql("""
     )
 """)
 
-# convert the sql table to Table API table
+# 将 sql 表转化成 Table API 表

Review comment:
       ```suggestion
   # 将 sql 表转换成 Table API 表
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -539,13 +539,13 @@ table_env.execute_sql("INSERT INTO sink_table SELECT * FROM table_source").wait(
 
 {% endhighlight %}
 
-### Emit Results to Multiple Sink Tables
+### 将数据发送到多张的 Sink 表中

Review comment:
       ```suggestion
   ### 将数据发送到多张 Sink 表中
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -565,21 +565,21 @@ table_env.execute_sql("""
     )
 """)
 
-# create a statement set
+# 创建 statement set
 statement_set = table_env.create_statement_set()
 
-# emit the "table" object to the "first_sink_table"
+# 发送 "table" 对象到 "first_sink_table"
 statement_set.add_insert("first_sink_table", table)
 
-# emit the "simple_source" to the "second_sink_table" via a insert sql query
+# 通过一条 sql 插入查询语句将数据从 "simple_source" 发送到 "second_sink_table"

Review comment:
       ```suggestion
   # 通过一条 sql 插入语句将数据从 "simple_source" 发送到 "second_sink_table"
   ```
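The statement-set pattern in this hunk — buffer several INSERTs, then submit them as one job — can be pictured with a toy model (hypothetical sketch; the real `table_env.create_statement_set()` returns a PyFlink `StatementSet`, not this class):

```python
class ToyStatementSet:
    """Toy model of Flink's statement set: collect multiple INSERT
    statements and emit them together. Not the pyflink API."""

    def __init__(self):
        self._statements = []

    def add_insert(self, sink, source):
        # Buffer the insert instead of executing it immediately.
        self._statements.append((sink, source))
        return self

    def execute(self):
        # The real execute() submits one job covering all inserts;
        # here we just render the buffered SQL.
        return ["INSERT INTO %s SELECT * FROM %s" % (sink, src)
                for sink, src in self._statements]
```

The point the docs make is that both sinks are fed by a single submitted job, rather than one job per `execute_insert` call.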

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -244,32 +244,32 @@ The result is:
 2   3     6
 {% endhighlight %}
 
-### Create using a Catalog
+### 使用 Catalog 创建
 
-A `TableEnvironment` maintains a map of catalogs of tables which are created with an identifier.
+`TableEnvironment` 维护了一个使用标识符创建的表的 catalogs 映射。
 
-The tables in a catalog may either be temporary, and tied to the lifecycle of a single Flink session, or permanent, and visible across multiple Flink sessions.
+Catalog 中的表可以是临时的,并与单个 Flink 会话生命周期相关联,也可以是永久的,跨多个 Flink 会话可见。

Review comment:
       ```suggestion
   Catalog 中的表既可以是临时的,并与单个 Flink 会话生命周期相关联,也可以是永久的,跨多个 Flink 会话可见。
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -169,48 +169,48 @@ table.to_pandas()
 
 {% endhighlight %}
 
-The result is:
+结果为:
 
 {% highlight text %}
    id   data
 0   1     Hi
 1   2  Hello
 {% endhighlight %}
 
-By default the table schema is extracted from the data automatically. 
+默认情况下,表结构是从数据中自动提取的。
 
-If the automatically generated table schema isn't satisfactory, you can specify it manually:
+如果自动生成的表模式不符合你的要求,你也可以手动指定:
 
 {% highlight python %}
 
 table_without_schema = table_env.from_elements([(1, 'Hi'), (2, 'Hello')], ['id', 'data'])
-# by default the type of the "id" column is 64 bit int
+# 默认情况下,“id” 列的类型是 64 位整型
 default_type = table_without_schema.to_pandas()["id"].dtype
 print('By default the type of the "id" column is %s.' % default_type)
 
 from pyflink.table import DataTypes
 table = table_env.from_elements([(1, 'Hi'), (2, 'Hello')],
                                 DataTypes.ROW([DataTypes.FIELD("id", DataTypes.TINYINT()),
                                                DataTypes.FIELD("data", DataTypes.STRING())]))
-# now the type of the "id" column is 8 bit int
+# 现在 “id” 列的类型是 8 位整型
 type = table.to_pandas()["id"].dtype
 print('Now the type of the "id" column is %s.' % type)
 
 {% endhighlight %}
 
-The result is:
+结果为:
 
 {% highlight text %}
-By default the type of the "id" column is int64.
-Now the type of the "id" column is int8.
+默认情况下,“id” 列的类型是 64 位整型。
+现在 “id” 列的类型是 8 位整型。
 {% endhighlight %}
 
-### Create using a Connector
+### 使用连接器创建
 
-You can create a Table using connector DDL:
+你可以使用连接器 DDL 创建一张表:
 
 {% highlight python %}
-# create a blink stream TableEnvironment
+# 创建 blink 流式环境

Review comment:
       ```suggestion
   # 创建 blink 流 TableEnvironment
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -244,32 +244,32 @@ The result is:
 2   3     6
 {% endhighlight %}
 
-### Create using a Catalog
+### 使用 Catalog 创建
 
-A `TableEnvironment` maintains a map of catalogs of tables which are created with an identifier.
+`TableEnvironment` 维护了一个使用标识符创建的表的 catalogs 映射。
 
-The tables in a catalog may either be temporary, and tied to the lifecycle of a single Flink session, or permanent, and visible across multiple Flink sessions.
+Catalog 中的表可以是临时的,并与单个 Flink 会话生命周期相关联,也可以是永久的,跨多个 Flink 会话可见。
 
-The tables and views created via SQL DDL, e.g. "create table ..." and "create view ..." are also stored in a catalog.
+通过 SQL DDL 创建的表和视图, 例如 "create table ..." 和 "create view ..." 都是被存储在 catalog。
 
-You can directly access the tables in a catalog via SQL.
+你可以通过 SQL 直接访问 catalog 中的表。
 
-If you want to use tables from a catalog with the Table API, you can use the "from_path" method to create the Table API objects:
+如果你要用 Table API 来使用 catalog 中的表,你可以使用 "from_path" 方法来创建 Table API 对象:
 
 {% highlight python %}
 
-# prepare the catalog
-# register Table API tables in the catalog
+# 准备 catalog
+# 在 catalog 中注册 Table API 表

Review comment:
       ```suggestion
   # 将 Table API 表注册到 catalog 中
   ```
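The catalog semantics this hunk translates — identifiers mapping to tables, temporary tables scoped to one session, permanent tables surviving across sessions — can be modeled with a toy class (illustrative only; not the pyflink catalog API):

```python
class ToyCatalog:
    """Toy catalog: identifiers map to table definitions. Temporary
    tables vanish when the session closes; permanent ones persist.
    Illustrative sketch, not the real Flink Catalog."""

    def __init__(self):
        self._permanent = {}
        self._temporary = {}

    def create_temporary_view(self, identifier, table):
        self._temporary[identifier] = table

    def create_table(self, identifier, table):
        self._permanent[identifier] = table

    def from_path(self, identifier):
        # Look up a registered table by identifier, as the docs'
        # "from_path" method does.
        if identifier in self._temporary:
            return self._temporary[identifier]
        return self._permanent[identifier]

    def close_session(self):
        # Only session-scoped temporary tables are dropped.
        self._temporary.clear()
```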

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -64,84 +64,84 @@ table_env.execute_sql("""
     )
 """)
 
-# 4. query from source table and perform caculations
-# create a Table from a Table API query:
+# 4. 从 source 表中查询同时执行计算
+# 使用 Table API 查询语句创建一张表:
 source_table = table_env.from_path("datagen")
-# or create a Table from a SQL query:
+# 或者使用 SQL 查询语句创建一张表:
 source_table = table_env.sql_query("SELECT * FROM datagen")
 
 result_table = source_table.select(source_table.id + 1, source_table.data)
 
-# 5. emit query result to sink table
-# emit a Table API result Table to a sink table:
+# 5. 将查询结果发送给 sink 表
+# 将 Table API 结果表数据发送给 sink 表:
 result_table.execute_insert("print").wait()
-# or emit results via SQL query:
+# 或者通过 SQL 查询语句来发送:
 table_env.execute_sql("INSERT INTO print SELECT * FROM datagen").wait()
 
 {% endhighlight %}
 
 {% top %}
 
-Create a TableEnvironment
+创建 TableEnvironment
 ---------------------------
 
-The `TableEnvironment` is a central concept of the Table API and SQL integration. The following code example shows how to create a TableEnvironment:
+`TableEnvironment` 是 Table API 和 SQL 集成的核心概念。下面代码示例展示了如何创建一个 TableEnvironment:
 
 {% highlight python %}
 
 from pyflink.table import EnvironmentSettings, StreamTableEnvironment, BatchTableEnvironment
 
-# create a blink streaming TableEnvironment
+# 创建 blink 流 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a blink batch TableEnvironment
+# 创建 blink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink streaming TableEnvironment
+# 创建 flink 流式 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_old_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink batch TableEnvironment
+# 创建 flink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_old_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
 {% endhighlight %}
 
-For more details about the different ways to create a `TableEnvironment`, please refer to the [TableEnvironment Documentation]({% link dev/python/table-api-users-guide/table_environment.zh.md %}#create-a-tableenvironment).
+关于创建 `TableEnvironment` 的更多细节,请查阅 [TableEnvironment 文档]({% link dev/python/table-api-users-guide/table_environment.zh.md %}#create-a-tableenvironment)。
 
-The `TableEnvironment` is responsible for:
+`TableEnvironment` 可以用来:
 
-* Creating `Table`s
-* Registering `Table`s as a temporary view
-* Executing SQL queries, see [SQL]({% link dev/table/sql/index.zh.md %}) for more details
-* Registering user-defined (scalar, table, or aggregation) functions, see [General User-defined Functions]({% link dev/python/table-api-users-guide/udfs/python_udfs.zh.md %}) and [Vectorized User-defined Functions]({% link dev/python/table-api-users-guide/udfs/vectorized_python_udfs.zh.md %}) for more details
-* Configuring the job, see [Python Configuration]({% link dev/python/table-api-users-guide/python_config.zh.md %}) for more details
-* Managing Python dependencies, see [Dependency Management]({% link dev/python/table-api-users-guide/dependency_management.zh.md %}) for more details
-* Submitting the jobs for execution
+* 创建 `Table`
+* 将 `Table` 注册成临时表
+* 执行 SQL 查询,更多细节可查阅 [SQL]({% link dev/table/sql/index.zh.md %})
+* 注册用户自定义的 (标量,表值,或者聚合) 函数, 更多细节可查阅 [普通的用户自定义函数]({% link dev/python/table-api-users-guide/udfs/python_udfs.zh.md %}) 和 [向量化的用户自定义函数]({% link dev/python/table-api-users-guide/udfs/vectorized_python_udfs.zh.md %})
+* 配置作业,更多细节可查阅 [Python 配置]({% link dev/python/table-api-users-guide/python_config.zh.md %})
+* 管理 Python 依赖,更多细节可查阅 [依赖管理]({% link dev/python/table-api-users-guide/dependency_management.zh.md %})
+* 提交 jobs 执行
 
-Currently there are 2 planners available: flink planner and blink planner.
+目前有2个可用的执行器 : flink 执行器 和 blink 执行器。
 
-You should explicitly set which planner to use in the current program.
-We recommend using the blink planner as much as possible. 
+你应该在当前程序中显式地设置使用哪个执行器。
+我们建议尽可能多的使用 blink 执行器。 
 
 {% top %}
 
-Create Tables
+创建表
 ---------------
 
-`Table` is a core component of the Python Table API. A `Table` is a logical representation of the intermediate result of a Table API Job.
+`Table` 是 Python Table API 的核心组件。一张 `Table` 是 Table API 作业中间结果的逻辑表示。
 
-A `Table` is always bound to a specific `TableEnvironment`. It is not possible to combine tables from different TableEnvironments in same query, e.g., to join or union them.
+一张 `Table` 总是绑定到特定的 `TableEnvironment`。不可能在同一个查询中合并来自不同 TableEnvironments 的表,例如 join 或者 union 它们。
 
-### Create using a List Object
+### 使用列表对象创建

Review comment:
       ```suggestion
   ### 通过列表对象创建
   ```
   Would it read more smoothly translated as "通过"?

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -64,84 +64,84 @@ table_env.execute_sql("""
     )
 """)
 
-# 4. query from source table and perform caculations
-# create a Table from a Table API query:
+# 4. 从 source 表中查询同时执行计算
+# 使用 Table API 查询语句创建一张表:
 source_table = table_env.from_path("datagen")
-# or create a Table from a SQL query:
+# 或者使用 SQL 查询语句创建一张表:
 source_table = table_env.sql_query("SELECT * FROM datagen")
 
 result_table = source_table.select(source_table.id + 1, source_table.data)
 
-# 5. emit query result to sink table
-# emit a Table API result Table to a sink table:
+# 5. 将查询结果发送给 sink 表
+# 将 Table API 结果表数据发送给 sink 表:
 result_table.execute_insert("print").wait()
-# or emit results via SQL query:
+# 或者通过 SQL 查询语句来发送:
 table_env.execute_sql("INSERT INTO print SELECT * FROM datagen").wait()
 
 {% endhighlight %}
 
 {% top %}
 
-Create a TableEnvironment
+创建 TableEnvironment
 ---------------------------
 
-The `TableEnvironment` is a central concept of the Table API and SQL integration. The following code example shows how to create a TableEnvironment:
+`TableEnvironment` 是 Table API 和 SQL 集成的核心概念。下面代码示例展示了如何创建一个 TableEnvironment:
 
 {% highlight python %}
 
 from pyflink.table import EnvironmentSettings, StreamTableEnvironment, BatchTableEnvironment
 
-# create a blink streaming TableEnvironment
+# 创建 blink 流 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a blink batch TableEnvironment
+# 创建 blink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink streaming TableEnvironment
+# 创建 flink 流式 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_old_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink batch TableEnvironment
+# 创建 flink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_old_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
 {% endhighlight %}
 
-For more details about the different ways to create a `TableEnvironment`, please refer to the [TableEnvironment Documentation]({% link dev/python/table-api-users-guide/table_environment.zh.md %}#create-a-tableenvironment).
+关于创建 `TableEnvironment` 的更多细节,请查阅 [TableEnvironment 文档]({% link dev/python/table-api-users-guide/table_environment.zh.md %}#create-a-tableenvironment)。
 
-The `TableEnvironment` is responsible for:
+`TableEnvironment` 可以用来:
 
-* Creating `Table`s
-* Registering `Table`s as a temporary view
-* Executing SQL queries, see [SQL]({% link dev/table/sql/index.zh.md %}) for more details
-* Registering user-defined (scalar, table, or aggregation) functions, see [General User-defined Functions]({% link dev/python/table-api-users-guide/udfs/python_udfs.zh.md %}) and [Vectorized User-defined Functions]({% link dev/python/table-api-users-guide/udfs/vectorized_python_udfs.zh.md %}) for more details
-* Configuring the job, see [Python Configuration]({% link dev/python/table-api-users-guide/python_config.zh.md %}) for more details
-* Managing Python dependencies, see [Dependency Management]({% link dev/python/table-api-users-guide/dependency_management.zh.md %}) for more details
-* Submitting the jobs for execution
+* 创建 `Table`
+* 将 `Table` 注册成临时表
+* 执行 SQL 查询,更多细节可查阅 [SQL]({% link dev/table/sql/index.zh.md %})
+* 注册用户自定义的 (标量,表值,或者聚合) 函数, 更多细节可查阅 [普通的用户自定义函数]({% link dev/python/table-api-users-guide/udfs/python_udfs.zh.md %}) 和 [向量化的用户自定义函数]({% link dev/python/table-api-users-guide/udfs/vectorized_python_udfs.zh.md %})
+* 配置作业,更多细节可查阅 [Python 配置]({% link dev/python/table-api-users-guide/python_config.zh.md %})
+* 管理 Python 依赖,更多细节可查阅 [依赖管理]({% link dev/python/table-api-users-guide/dependency_management.zh.md %})
+* 提交 jobs 执行
 
-Currently there are 2 planners available: flink planner and blink planner.
+目前有2个可用的执行器 : flink 执行器 和 blink 执行器。
 
-You should explicitly set which planner to use in the current program.
-We recommend using the blink planner as much as possible. 
+你应该在当前程序中显式地设置使用哪个执行器。
+我们建议尽可能多的使用 blink 执行器。 
 
 {% top %}
 
-Create Tables
+创建表
 ---------------
 
-`Table` is a core component of the Python Table API. A `Table` is a logical representation of the intermediate result of a Table API Job.
+`Table` 是 Python Table API 的核心组件。一张 `Table` 是 Table API 作业中间结果的逻辑表示。
 
-A `Table` is always bound to a specific `TableEnvironment`. It is not possible to combine tables from different TableEnvironments in same query, e.g., to join or union them.
+一张 `Table` 总是绑定到特定的 `TableEnvironment`。不可能在同一个查询中合并来自不同 TableEnvironments 的表,例如 join 或者 union 它们。

Review comment:
       ```suggestion
   一个`Table` 实例总是与一个特定的 `TableEnvironment`相绑定。不支持在同一个查询中合并来自不同 TableEnvironments 的表,例如 join 或者 union 它们。
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -279,22 +279,22 @@ The result is:
 
 {% top %}
 
-Write Queries
+写查询
 ---------------
 
-### Write Table API Queries
+### Table API 查询
 
-The `Table` object offers many methods for applying relational operations. 
-These methods return new `Table` objects representing the result of applying the relational operations on the input `Table`. 
-These relational operations may be composed of multiple method calls, such as `table.group_by(...).select(...)`.
+`Table` 对象为应用关系操作提供了许多方法。

Review comment:
       ```suggestion
   `Table` 对象有许多方法,可以用于进行关系操作。
   ```
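
   As context for reviewers of this hunk: the fluent style it describes (each relational method returns a new `Table`, so calls chain like `table.group_by(...).select(...)`) can be sketched in plain Python. This is a hypothetical toy, not Flink's actual classes; it only mirrors the `revenue` example quoted later in the doc.

   ```python
   # Toy fluent-table sketch: each method returns a new object, keeping the chain fluent.
   class MiniTable:
       def __init__(self, rows):
           self.rows = rows  # list of dicts

       def group_by(self, key):
           groups = {}
           for row in self.rows:
               groups.setdefault(row[key], []).append(row)
           return MiniGrouped(key, groups)

   class MiniGrouped:
       def __init__(self, key, groups):
           self.key = key
           self.groups = groups

       def select_sum(self, field):
           # Aggregate each group and return a new MiniTable so chaining can continue.
           return MiniTable([
               {self.key: k, field + "_sum": sum(r[field] for r in rows)}
               for k, rows in self.groups.items()
           ])

   orders = MiniTable([
       {"name": "Jack", "rev": 10},
       {"name": "Jack", "rev": 20},
   ])
   result = orders.group_by("name").select_sum("rev")
   ```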

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -244,32 +244,32 @@ The result is:
 2   3     6
 {% endhighlight %}
 
-### Create using a Catalog
+### 使用 Catalog 创建
 
-A `TableEnvironment` maintains a map of catalogs of tables which are created with an identifier.
+`TableEnvironment` 维护了一个使用标识符创建的表的 catalogs 映射。
 
-The tables in a catalog may either be temporary, and tied to the lifecycle of a single Flink session, or permanent, and visible across multiple Flink sessions.
+Catalog 中的表可以是临时的,并与单个 Flink 会话生命周期相关联,也可以是永久的,跨多个 Flink 会话可见。
 
-The tables and views created via SQL DDL, e.g. "create table ..." and "create view ..." are also stored in a catalog.
+通过 SQL DDL 创建的表和视图, 例如 "create table ..." 和 "create view ..." 都是被存储在 catalog。

Review comment:
       ```suggestion
   通过 SQL DDL 创建的表和视图, 例如 "create table ..." 和 "create view ..." 都存储在 catalog中。
   ```
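
   A Flink-independent sketch of the catalog behavior this hunk translates: permanent tables live in a catalog map and survive the session, while temporary views are session-scoped and are resolved first. Class and method names mimic the doc, but the implementation is invented for illustration.

   ```python
   # Hypothetical lookup semantics: temporary views shadow permanent catalog tables.
   class MiniTableEnvironment:
       def __init__(self):
           self.catalog = {}          # identifier -> table, survives sessions
           self.temporary_views = {}  # identifier -> table, session-scoped

       def create_table(self, identifier, table):
           self.catalog[identifier] = table

       def create_temporary_view(self, identifier, table):
           self.temporary_views[identifier] = table

       def from_path(self, identifier):
           # Resolve temporary views first, then fall back to the catalog.
           if identifier in self.temporary_views:
               return self.temporary_views[identifier]
           return self.catalog[identifier]

       def end_session(self):
           # Temporary views are tied to the session lifecycle.
           self.temporary_views.clear()

   env = MiniTableEnvironment()
   env.create_table("sink", ["permanent rows"])
   env.create_temporary_view("sink", ["temporary rows"])
   resolved_before = env.from_path("sink")
   env.end_session()
   resolved_after = env.from_path("sink")
   ```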

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -314,24 +314,24 @@ revenue.to_pandas()
 
 {% endhighlight %}
 
-The result is:
+结果为:
 
 {% highlight text %}
    name  rev_sum
 0  Jack       30
 {% endhighlight %}
 
-### Write SQL Queries
+### SQL 查询
 
-Flink's SQL integration is based on [Apache Calcite](https://calcite.apache.org), which implements the SQL standard. SQL queries are specified as Strings.
+Flink's SQL 是使用 [Apache Calcite](https://calcite.apache.org) 集成的,它实现了标准的 SQL。SQL 查询语句使用字符串来表达。

Review comment:
       ```suggestion
   Flink 的 SQL 是使用 [Apache Calcite](https://calcite.apache.org) 集成的,它实现了标准的 SQL。SQL 查询语句使用字符串来表达。
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -64,84 +64,84 @@ table_env.execute_sql("""
     )
 """)
 
-# 4. query from source table and perform caculations
-# create a Table from a Table API query:
+# 4. 从 source 表中查询同时执行计算
+# 使用 Table API 查询语句创建一张表:
 source_table = table_env.from_path("datagen")
-# or create a Table from a SQL query:
+# 或者使用 SQL 查询语句创建一张表:
 source_table = table_env.sql_query("SELECT * FROM datagen")
 
 result_table = source_table.select(source_table.id + 1, source_table.data)
 
-# 5. emit query result to sink table
-# emit a Table API result Table to a sink table:
+# 5. 将查询结果发送给 sink 表
+# 将 Table API 结果表数据发送给 sink 表:
 result_table.execute_insert("print").wait()
-# or emit results via SQL query:
+# 或者通过 SQL 查询语句来发送:
 table_env.execute_sql("INSERT INTO print SELECT * FROM datagen").wait()
 
 {% endhighlight %}
 
 {% top %}
 
-Create a TableEnvironment
+创建 TableEnvironment
 ---------------------------
 
-The `TableEnvironment` is a central concept of the Table API and SQL integration. The following code example shows how to create a TableEnvironment:
+`TableEnvironment` 是 Table API 和 SQL 集成的核心概念。下面代码示例展示了如何创建一个 TableEnvironment:
 
 {% highlight python %}
 
 from pyflink.table import EnvironmentSettings, StreamTableEnvironment, BatchTableEnvironment
 
-# create a blink streaming TableEnvironment
+# 创建 blink 流 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_blink_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a blink batch TableEnvironment
+# 创建 blink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_blink_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink streaming TableEnvironment
+# 创建 flink 流式 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_streaming_mode().use_old_planner().build()
 table_env = StreamTableEnvironment.create(environment_settings=env_settings)
 
-# create a flink batch TableEnvironment
+# 创建 flink 批 TableEnvironment
 env_settings = EnvironmentSettings.new_instance().in_batch_mode().use_old_planner().build()
 table_env = BatchTableEnvironment.create(environment_settings=env_settings)
 
 {% endhighlight %}
 
-For more details about the different ways to create a `TableEnvironment`, please refer to the [TableEnvironment Documentation]({% link dev/python/table-api-users-guide/table_environment.zh.md %}#create-a-tableenvironment).
+关于创建 `TableEnvironment` 的更多细节,请查阅 [TableEnvironment 文档]({% link dev/python/table-api-users-guide/table_environment.zh.md %}#create-a-tableenvironment)。
 
-The `TableEnvironment` is responsible for:
+`TableEnvironment` 可以用来:
 
-* Creating `Table`s
-* Registering `Table`s as a temporary view
-* Executing SQL queries, see [SQL]({% link dev/table/sql/index.zh.md %}) for more details
-* Registering user-defined (scalar, table, or aggregation) functions, see [General User-defined Functions]({% link dev/python/table-api-users-guide/udfs/python_udfs.zh.md %}) and [Vectorized User-defined Functions]({% link dev/python/table-api-users-guide/udfs/vectorized_python_udfs.zh.md %}) for more details
-* Configuring the job, see [Python Configuration]({% link dev/python/table-api-users-guide/python_config.zh.md %}) for more details
-* Managing Python dependencies, see [Dependency Management]({% link dev/python/table-api-users-guide/dependency_management.zh.md %}) for more details
-* Submitting the jobs for execution
+* 创建 `Table`
+* 将 `Table` 注册成临时表
+* 执行 SQL 查询,更多细节可查阅 [SQL]({% link dev/table/sql/index.zh.md %})
+* 注册用户自定义的 (标量,表值,或者聚合) 函数, 更多细节可查阅 [普通的用户自定义函数]({% link dev/python/table-api-users-guide/udfs/python_udfs.zh.md %}) 和 [向量化的用户自定义函数]({% link dev/python/table-api-users-guide/udfs/vectorized_python_udfs.zh.md %})
+* 配置作业,更多细节可查阅 [Python 配置]({% link dev/python/table-api-users-guide/python_config.zh.md %})
+* 管理 Python 依赖,更多细节可查阅 [依赖管理]({% link dev/python/table-api-users-guide/dependency_management.zh.md %})
+* 提交 jobs 执行
 
-Currently there are 2 planners available: flink planner and blink planner.
+目前有2个可用的执行器 : flink 执行器 和 blink 执行器。
 
-You should explicitly set which planner to use in the current program.
-We recommend using the blink planner as much as possible. 
+你应该在当前程序中显式地设置使用哪个执行器。
+我们建议尽可能多的使用 blink 执行器。 
 
 {% top %}
 
-Create Tables
+创建表
 ---------------
 
-`Table` is a core component of the Python Table API. A `Table` is a logical representation of the intermediate result of a Table API Job.
+`Table` 是 Python Table API 的核心组件。一张 `Table` 是 Table API 作业中间结果的逻辑表示。
 
-A `Table` is always bound to a specific `TableEnvironment`. It is not possible to combine tables from different TableEnvironments in same query, e.g., to join or union them.
+一张 `Table` 总是绑定到特定的 `TableEnvironment`。不可能在同一个查询中合并来自不同 TableEnvironments 的表,例如 join 或者 union 它们。
 
-### Create using a List Object
+### 使用列表对象创建
 
-You can create a Table from a list object:
+你可以使用一个列表对象创建一张表:
 
 {% highlight python %}
 
-# create a blink batch TableEnvironment
+# 创建 blink 批处理环境

Review comment:
       ```suggestion
   # 创建 blink 批 TableEnvironment
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -384,32 +384,32 @@ The result is:
 8> +U(3,19)
 {% endhighlight %}
 
-In fact, this shows the change logs received by the print sink.
-The output format of a change log is:
+实际上,结果展示了打印接收器所接收到的变更日志。
+变更日志输出的格式为:
 {% highlight text %}
-{subtask id}> {message type}{string format of the value}
+{subtask id}> {消息类型}{值的字符串格式}
 {% endhighlight %}
-For example, "2> +I(4,11)" means this message comes from the 2nd subtask, and "+I" means it is an insert message. "(4, 11)" is the content of the message.
-In addition, "-U" means a retract record (i.e. update-before), which means this message should be deleted or retracted from the sink. 
-"+U" means this is an update record (i.e. update-after), which means this message should be updated or inserted by the sink.
+例如,"2> +I(4,11)" 表示这条消息来自第二个 subtask,同时 "+I" 表示这是一条插入的消息。"(4, 11)" 是这条消息的内容。
+另外,"-U" 表示这是一条回撤的消息 (即更新前),这意味着应该在 sink 中删除或回撤该消息。 
+"+U" 表示这是一条更新的记录 (即更新后),这意味着应该在 sink 中更新或插入该消息。
 
-So, we get this result from the change logs above:
+所以,我们从上面的变更日志中得到以下结果:
 
 {% highlight text %}
 (4, 11)
 (2, 15) 
 (3, 19)
 {% endhighlight %}
 
-### Mix the Table API and SQL
+### Table API 和 SQL 的混合使用
 
-The `Table` objects used in Table API and the tables used in SQL can be freely converted to each other.
+Table API 中的 `Table` 对象和 SQL 中的 tables 可以自由地相互转换。
 
-The following example shows how to use a `Table` object in SQL:
+下面例子展示了如何在 SQL 中使用 `Table` 对象:
 
 {% highlight python %}
 
-# create a sink table to emit results
+# 创建一张 sink 表来发送数据

Review comment:
       ```suggestion
   # 创建一张 sink 表来接收结果数据
   ```
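
   The changelog semantics quoted earlier in this hunk ("{subtask id}> {message type}{value}", with +I inserting a row, -U retracting the pre-update row, and +U adding the post-update row) can be replayed in plain Python. The log lines below are made up but follow the format the doc quotes, and reproduce its final result set:

   ```python
   import re

   # Replay a print-sink changelog into the materialized result set.
   CHANGELOG = [
       "2> +I(4,11)",
       "6> +I(2,13)",
       "6> -U(2,13)",  # update-before: retract the old row
       "6> +U(2,15)",  # update-after: add the new row
       "8> +I(3,19)",
   ]

   def replay(lines):
       state = []
       pattern = re.compile(r"(\d+)> ([+-][IUD])\((.+)\)")
       for line in lines:
           subtask, kind, row = pattern.match(line).groups()
           values = tuple(int(v) for v in row.split(","))
           if kind in ("+I", "+U"):
               state.append(values)      # insert / update-after
           else:
               state.remove(values)      # -U / -D: retract from the sink
       return state

   final_rows = replay(CHANGELOG)
   ```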

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -455,18 +455,18 @@ table_env.execute_sql("""
     )
 """)
 
-# convert the sql table to Table API table
+# 将 sql 表转化成 Table API 表
 table = table_env.from_path("sql_source")
 
-# or create the table from a sql query
+# 或者从 sql 查询语句中创建表
 table = table_env.sql_query("SELECT * FROM sql_source")
 
-# emit the table
+# 将数据发送到该表中

Review comment:
       ```suggestion
   # 将表中的数据写出
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -419,27 +419,27 @@ table_env.execute_sql("""
     )
 """)
 
-# convert the Table API table to a SQL view
+# 将 Table API 表转换成 SQL 的视图
 table = table_env.from_elements([(1, 'Hi'), (2, 'Hello')], ['id', 'data'])
 table_env.create_temporary_view('table_api_table', table)
 
-# emit the Table API table
+# 将数据写入 Table API 表

Review comment:
       ```suggestion
   # 将 Table API 表的数据写入结果表
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -492,20 +492,20 @@ table.to_pandas()
 
 {% endhighlight %}
 
-The result is:
+结果为:
 
 {% highlight text %}
    id   data
 0   1     Hi
 1   2  Hello
 {% endhighlight %}
 
-<span class="label label-info">Note</span> "to_pandas" will trigger the materialization of the table and collect table content to the memory of the client, it's good practice to limit the number of rows collected via <a href="{{ site.pythondocs_baseurl }}/api/python/pyflink.table.html#pyflink.table.Table.limit">Table.limit</a>.
-<span class="label label-info">Note</span> "to_pandas" is not supported by the flink planner, and not all data types can be emitted to pandas DataFrames.
+<span class="label label-info">Note</span> "to_pandas" 将会触发表的物化同时将表的内容收集到客户端内存中,所以通过 <a href="{{ site.pythondocs_baseurl }}/api/python/pyflink.table.html#pyflink.table.Table.limit">Table.limit</a> 来限制收集数据的条数是一种很好的做法。

Review comment:
       ```suggestion
   <span class="label label-info">Note</span> "to_pandas" 将会触发表的物化,同时将表的内容收集到客户端内存中,所以通过 <a href="{{ site.pythondocs_baseurl }}/api/python/pyflink.table.html#pyflink.table.Table.limit">Table.limit</a> 来限制收集数据的条数是一种很好的做法。
   ```
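
   The note under review warns that "to_pandas" materializes the whole table in client memory, hence the advice to use Table.limit first. A hedged, Flink-free analogy of that advice (the helper names are invented):

   ```python
   from itertools import islice

   # A potentially unbounded source: collecting it without a limit would never finish.
   def unbounded_source():
       i = 0
       while True:
           yield (i, f"data-{i}")
           i += 1

   def collect(rows, limit=None):
       # Pull only the first `limit` rows instead of materializing everything.
       it = rows if limit is None else islice(rows, limit)
       return list(it)

   first_three = collect(unbounded_source(), limit=3)
   ```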

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -565,21 +565,21 @@ table_env.execute_sql("""
     )
 """)
 
-# create a statement set
+# 创建 statement set
 statement_set = table_env.create_statement_set()
 
-# emit the "table" object to the "first_sink_table"
+# 发送 "table" 对象到 "first_sink_table"

Review comment:
       ```suggestion
   # 将 "table" 的数据写入 "first_sink_table"
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -64,84 +64,84 @@ table_env.execute_sql("""
     )
 """)
 
-# 4. query from source table and perform caculations
-# create a Table from a Table API query:
+# 4. 从 source 表中查询同时执行计算
+# 使用 Table API 查询语句创建一张表:

Review comment:
       ```suggestion
   # 通过 Table API 查询语句创建一张表:
   ```
   使用 -> 通过? That might read more smoothly; please also check the other places to see whether the same change applies.

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -588,23 +588,23 @@ The result is:
 7> +I(2,Hello)
 {% endhighlight %}
 
-Explain Tables
+Explain 表
 -----------------
 
-The Table API provides a mechanism to explain the logical and optimized query plans used to compute a `Table`. 
-This is done through the `Table.explain()` or `StatementSet.explain()` methods. `Table.explain()`returns the plan of a `Table`. `StatementSet.explain()` returns the plan for multiple sinks. These methods return a string describing three things:
+Table API 提供了一种机制来翻译 `Table` 的逻辑和优化查询计划。 

Review comment:
       ```suggestion
   Table API 提供了一种机制来解释 `Table` 的逻辑查询计划和优化后的查询计划。 
   ```

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -588,23 +588,23 @@ The result is:
 7> +I(2,Hello)
 {% endhighlight %}
 
-Explain Tables
+Explain 表
 -----------------
 
-The Table API provides a mechanism to explain the logical and optimized query plans used to compute a `Table`. 
-This is done through the `Table.explain()` or `StatementSet.explain()` methods. `Table.explain()`returns the plan of a `Table`. `StatementSet.explain()` returns the plan for multiple sinks. These methods return a string describing three things:
+Table API 提供了一种机制来翻译 `Table` 的逻辑和优化查询计划。 
+这个通过 `Table.explain()` 或者 `StatementSet.explain()` 方法来完成。`Table.explain()` 返回 `Table` 的执行计划。`StatementSet.explain()` 返回多个 sink 的执行计划。这些方法返回一个字符串来描述三个方面:

Review comment:
       ```suggestion
   这是通过 `Table.explain()` 或者 `StatementSet.explain()` 方法来完成的。`Table.explain()` 可以返回一个 `Table` 的执行计划。`StatementSet.explain()` 则可以返回含有多个 sink 的作业的执行计划。这些方法会返回一个字符串,字符串描述了以下三个方面的信息:
   ```
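
   A Flink-independent sketch of what an `explain()` string conveys, for reviewers of this hunk: each operator node renders itself and its input, yielding a readable plan tree. Operator names and layout here are invented, not Flink's actual output.

   ```python
   # Toy plan node: explain() walks the tree and indents each child one level.
   class Op:
       def __init__(self, name, detail, child=None):
           self.name, self.detail, self.child = name, detail, child

       def explain(self, indent=0):
           line = " " * indent + f"{self.name}({self.detail})"
           if self.child is None:
               return line
           return line + "\n" + self.child.explain(indent + 2)

   plan = Op("Project", "id, data",
             Op("Filter", "id > 1",
                Op("Scan", "sql_source")))
   explained = plan.explain()
   ```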

##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -64,84 +64,84 @@ table_env.execute_sql("""
     )
 """)
 
-# 4. query from source table and perform caculations
-# create a Table from a Table API query:
+# 4. 从 source 表中查询同时执行计算

Review comment:
       ```suggestion
   # 4. 查询 source 表,同时执行计算
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wangxlong commented on a change in pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

Posted by GitBox <gi...@apache.org>.
wangxlong commented on a change in pull request #13537:
URL: https://github.com/apache/flink/pull/13537#discussion_r503322986



##########
File path: docs/dev/python/table-api-users-guide/intro_to_table_api.zh.md
##########
@@ -64,84 +64,84 @@ table_env.execute_sql("""
     )
 """)
 
-# 4. query from source table and perform caculations
-# create a Table from a Table API query:
+# 4. 从 source 表中查询同时执行计算
+# 使用 Table API 查询语句创建一张表:

Review comment:
       Changing `使用` to `通过` does read more smoothly. I went through the other places and updated all that could be changed.







[GitHub] [flink] dianfu closed pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

Posted by GitBox <gi...@apache.org>.
dianfu closed pull request #13537:
URL: https://github.com/apache/flink/pull/13537


   





[GitHub] [flink] flinkbot edited a comment on pull request #13537: [FLINK-19478][docs-zh][python] Translate page "intro_to_table_api" into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #13537:
URL: https://github.com/apache/flink/pull/13537#issuecomment-703094638


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "fbd56ff6167834a20d8ffcfd801dc2e6f3464657",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7181",
       "triggerID" : "fbd56ff6167834a20d8ffcfd801dc2e6f3464657",
       "triggerType" : "PUSH"
     }, {
       "hash" : "ba413eda354b83280bdf5ce9f231e4e65f3d0216",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7457",
       "triggerID" : "ba413eda354b83280bdf5ce9f231e4e65f3d0216",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * ba413eda354b83280bdf5ce9f231e4e65f3d0216 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=7457) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

