Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2019/05/05 10:31:24 UTC

[GitHub] [flink] Armstrongya commented on a change in pull request #8342: [FLINK-12360][chinese-translation]Translate Jobs and Scheduling Page …

Armstrongya commented on a change in pull request #8342: [FLINK-12360][chinese-translation]Translate Jobs and Scheduling Page …
URL: https://github.com/apache/flink/pull/8342#discussion_r281015000
 
 

 ##########
 File path: docs/internals/job_scheduling.zh.md
 ##########
 @@ -22,81 +22,52 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This document briefly describes how Flink schedules jobs and
-how it represents and tracks job status on the JobManager.
+This document briefly describes how Flink schedules jobs and how it represents and tracks job status on the JobManager.
 
 * This will be replaced by the TOC
 {:toc}
 
 
-## Scheduling
+## Scheduling
 
-Execution resources in Flink are defined through _Task Slots_. Each TaskManager will have one or more task slots,
-each of which can run one pipeline of parallel tasks. A pipeline consists of multiple successive tasks, such as the
-*n-th* parallel instance of a MapFunction together with the *n-th* parallel instance of a ReduceFunction.
-Note that Flink often executes successive tasks concurrently: For Streaming programs, that happens in any case,
-but also for batch programs, it happens frequently.
+Execution resources in Flink are defined through _Task Slots_. Each TaskManager has one or more task slots, each of which can run one pipeline of parallel tasks.
+Such a pipeline consists of multiple successive tasks, for example the *n-th* parallel instance of a MapFunction together with the *n-th* parallel instance of a ReduceFunction. Note that Flink often executes successive tasks concurrently: for streaming programs this always happens, and it also happens frequently for batch programs.
 
-The figure below illustrates that. Consider a program with a data source, a *MapFunction*, and a *ReduceFunction*.
-The source and MapFunction are executed with a parallelism of 4, while the ReduceFunction is executed with a
-parallelism of 3. A pipeline consists of the sequence Source - Map - Reduce. On a cluster with 2 TaskManagers with
-3 slots each, the program will be executed as described below.
+The figure below illustrates this. Consider a program with a data source, a *MapFunction*, and a *ReduceFunction*. The source and the MapFunction run with a parallelism of 4, while the ReduceFunction runs with a parallelism of 3. A pipeline consists of the sequence Source - Map - Reduce. On a cluster of 2 TaskManagers with 3 slots each, the program is executed as shown below.
 
 <div style="text-align: center;">
 <img src="{{ site.baseurl }}/fig/slots.svg" alt="Assigning Pipelines of Tasks to Slots" height="250px" style="text-align: center;"/>
 </div>
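(Editor's illustration, not part of the patch: a minimal sketch of how the Source(4) - Map(4) - Reduce(3) topology from the figure could be declared with the DataStream API. It assumes a Flink version that has `fromSequence`; class and job names are made up.)

```java
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.functions.ReduceFunction;
import org.apache.flink.api.java.functions.KeySelector;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SourceMapReduceExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromSequence(0L, 1_000_000L)          // source: 4 parallel instances
            .setParallelism(4)
            .map(new MapFunction<Long, Long>() {  // map: same parallelism, so the n-th map
                @Override                         // subtask pipelines with the n-th source
                public Long map(Long value) {
                    return value * 2;
                }
            })
            .setParallelism(4)
            .keyBy(new KeySelector<Long, Long>() {
                @Override
                public Long getKey(Long value) {
                    return value % 3;
                }
            })
            .reduce(new ReduceFunction<Long>() {  // reduce: 3 parallel instances
                @Override
                public Long reduce(Long a, Long b) {
                    return a + b;
                }
            })
            .setParallelism(3)
            .print();

        env.execute("Source-Map-Reduce example");
    }
}
```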
 
-Internally, Flink defines through {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/scheduler/SlotSharingGroup.java "SlotSharingGroup" %}
-and {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/scheduler/CoLocationGroup.java "CoLocationGroup" %}
-which tasks may share a slot (permissive), respectively which tasks must be strictly placed into the same slot.
+Internally, Flink defines through {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/scheduler/SlotSharingGroup.java "SlotSharingGroup" %} and {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobmanager/scheduler/CoLocationGroup.java "CoLocationGroup" %} which tasks may share a slot (permissive), and which tasks must be strictly placed into the same slot.
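(Editor's sketch: these internal groups are typically derived from hints in user code. The snippet below assumes the public DataStream API; the group name `heavy-ops` is hypothetical.)

```java
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class SlotSharingExample {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

        env.fromSequence(0L, 100L)
            .map(v -> v + 1)
            .returns(Types.LONG)           // type hint required for the Java lambda
            // Subtasks of this operator may share slots only with operators in the
            // same group; downstream operators inherit it unless they set their own.
            .slotSharingGroup("heavy-ops")
            .print();

        env.execute("slot sharing example");
    }
}
```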
 
+## JobManager Data Structures
 
-## JobManager Data Structures
+During job execution, the JobManager keeps track of distributed tasks, decides when to schedule the next task (or set of tasks), and reacts to finished tasks or execution failures.
 
-During job execution, the JobManager keeps track of distributed tasks, decides when to schedule the next task (or set of tasks),
-and reacts to finished tasks or execution failures.
+The JobManager receives a {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/ "JobGraph" %}, a representation of the data flow consisting of operator vertices ({% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java "JobVertex" %}) and intermediate results ({% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/IntermediateDataSet.java "IntermediateDataSet" %}). Each operator has properties, such as its parallelism and the code it executes. In addition, the JobGraph carries a set of attached libraries that are necessary to execute the operators' code.
 
-The JobManager receives the {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/ "JobGraph" %},
-which is a representation of the data flow consisting of operators ({% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/JobVertex.java "JobVertex" %})
-and intermediate results ({% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/jobgraph/IntermediateDataSet.java "IntermediateDataSet" %}).
-Each operator has properties, like the parallelism and the code that it executes.
-In addition, the JobGraph has a set of attached libraries, that are necessary to execute the code of the operators.
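(Editor's aside: one hedged way to see these JobVertex properties is to build the JobGraph on the client side. `getStreamGraph()`/`getJobGraph()` are internal and have changed across Flink releases, so treat this strictly as a sketch.)

```java
import org.apache.flink.runtime.jobgraph.JobGraph;
import org.apache.flink.runtime.jobgraph.JobVertex;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class InspectJobGraph {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.fromSequence(0L, 10L).print();

        // Internal API, subject to change between versions.
        JobGraph jobGraph = env.getStreamGraph().getJobGraph();
        for (JobVertex vertex : jobGraph.getVertices()) {
            System.out.println(vertex.getName() + ", parallelism = " + vertex.getParallelism());
        }
    }
}
```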
 
-The JobManager transforms the JobGraph into an {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ "ExecutionGraph" %}.
-The ExecutionGraph is a parallel version of the JobGraph: For each JobVertex, it contains an {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java "ExecutionVertex" %} per parallel subtask. An operator with a parallelism of 100 will have one JobVertex and 100 ExecutionVertices.
-The ExecutionVertex tracks the state of execution of a particular subtask. All ExecutionVertices from one JobVertex are held in an
-{% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionJobVertex.java "ExecutionJobVertex" %},
-which tracks the status of the operator as a whole.
-Besides the vertices, the ExecutionGraph also contains the {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/IntermediateResult.java "IntermediateResult" %} and the {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/IntermediateResultPartition.java "IntermediateResultPartition" %}. The former tracks the state of the *IntermediateDataSet*, the latter the state of each of its partitions.
+The JobManager transforms the JobGraph into an {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ "ExecutionGraph" %}. The ExecutionGraph can be thought of as the parallel version of the JobGraph: for each JobVertex, it contains one {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionVertex.java "ExecutionVertex" %} per parallel subtask. An operator with a parallelism of 100 therefore has one JobVertex and 100 ExecutionVertices. Each ExecutionVertex tracks the execution state of its subtask. All ExecutionVertices of one JobVertex are held in an {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionJobVertex.java "ExecutionJobVertex" %}, which tracks the status of the operator as a whole. Besides the vertices, the ExecutionGraph also contains the {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/IntermediateResult.java "IntermediateResult" %} and the {% gh_link /flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/IntermediateResultPartition.java "IntermediateResultPartition" %}: the former tracks the state of an *IntermediateDataSet*, the latter the state of each of its partitions.
 
 <div style="text-align: center;">
 <img src="{{ site.baseurl }}/fig/job_and_execution_graph.svg" alt="JobGraph and ExecutionGraph" height="400px" style="text-align: center;"/>
 </div>
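(Editor's illustration with made-up miniature types, not Flink's real classes: the fan-out from one JobVertex to parallelism-many ExecutionVertices that the figure shows.)

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical simplified types, only to show the 1-to-parallelism fan-out.
class MiniJobVertex {
    final String name;
    final int parallelism;

    MiniJobVertex(String name, int parallelism) {
        this.name = name;
        this.parallelism = parallelism;
    }
}

class MiniExecutionJobVertex {
    final List<String> executionVertices = new ArrayList<>(); // one per parallel subtask

    MiniExecutionJobVertex(MiniJobVertex jv) {
        for (int i = 0; i < jv.parallelism; i++) {
            executionVertices.add(jv.name + " (" + (i + 1) + "/" + jv.parallelism + ")");
        }
    }
}

public class ExecutionGraphShape {
    public static void main(String[] args) {
        MiniExecutionJobVertex ejv = new MiniExecutionJobVertex(new MiniJobVertex("Map", 100));
        System.out.println(ejv.executionVertices.size()); // 100 ExecutionVertices, 1 JobVertex
    }
}
```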
 
-Each ExecutionGraph has a job status associated with it.
-This job status indicates the current state of the job execution.
+Each ExecutionGraph has a job status associated with it, which describes the current state of the job execution.
 
-A Flink job is first in the *created* state, then switches to *running* and upon completion of all work it switches to *finished*.
-In case of failures, a job switches first to *failing* where it cancels all running tasks.
-If all job vertices have reached a final state and the job is not restartable, then the job transitions to *failed*.
-If the job can be restarted, then it will enter the *restarting* state.
-Once the job has been completely restarted, it will reach the *created* state.
+A Flink job is first in the *created* state, then switches to *running*, and upon completion of all work switches to *finished*. In case of failures, the job first switches to *failing*, where it cancels all running tasks. If all job vertices have reached a final state and the job is not restartable, the job transitions to *failed*. If the job can be restarted, it enters the *restarting* state; once it has been completely restarted, it reaches the *created* state again.
 
-In case that the user cancels the job, it will go into the *cancelling* state.
-This also entails the cancellation of all currently running tasks.
-Once all running tasks have reached a final state, the job transitions to the state *cancelled*.
+If the user cancels the job, it goes into the *cancelling* state, which also entails the cancellation of all currently running tasks. Once all running tasks have reached a final state, the job transitions to *cancelled*.
 
-Unlike the states *finished*, *canceled* and *failed* which denote a globally terminal state and, thus, trigger the clean up of the job, the *suspended* state is only locally terminal.
-Locally terminal means that the execution of the job has been terminated on the respective JobManager but another JobManager of the Flink cluster can retrieve the job from the persistent HA store and restart it.
-Consequently, a job which reaches the *suspended* state won't be completely cleaned up.
+Unlike the states *finished*, *canceled*, and *failed*, which denote a globally terminal state and thus trigger the clean-up of the job, the *suspended* state is only locally terminal.
+Locally terminal means that the execution of the job has been terminated on the respective JobManager, but another JobManager of the Flink cluster can retrieve the job from the persistent HA store and restart it. Consequently, a job that reaches the *suspended* state is not completely cleaned up.
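(Editor's sketch of the state machine described above, using a hypothetical enum; Flink's actual `JobStatus` enum in the runtime is the authority, and the transition table here simply mirrors the prose.)

```java
import java.util.EnumSet;
import java.util.Map;

public class JobStateSketch {
    enum State { CREATED, RUNNING, FINISHED, FAILING, FAILED, RESTARTING, CANCELLING, CANCELLED, SUSPENDED }

    // Transitions as described in the text above.
    static final Map<State, EnumSet<State>> TRANSITIONS = Map.of(
        State.CREATED,    EnumSet.of(State.RUNNING),
        State.RUNNING,    EnumSet.of(State.FINISHED, State.FAILING, State.CANCELLING, State.SUSPENDED),
        State.FAILING,    EnumSet.of(State.FAILED, State.RESTARTING),
        State.RESTARTING, EnumSet.of(State.CREATED),
        State.CANCELLING, EnumSet.of(State.CANCELLED)
    );

    // Globally terminal states trigger job clean-up; SUSPENDED is only locally terminal.
    static final EnumSet<State> GLOBALLY_TERMINAL =
        EnumSet.of(State.FINISHED, State.CANCELLED, State.FAILED);

    public static void main(String[] args) {
        System.out.println(TRANSITIONS.get(State.FAILING));               // [FAILED, RESTARTING]
        System.out.println(GLOBALLY_TERMINAL.contains(State.SUSPENDED));  // false
    }
}
```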
 
 Review comment:
  @wuchong Thanks for your refined translation; it is more reasonable. Does this mean my first pull request will be merged into the Flink codebase?
  PS: How do I move my JIRA issue (https://issues.apache.org/jira/browse/FLINK-12360) so that it becomes a subtask of "Support Chinese Documents for Apache Flink" (https://issues.apache.org/jira/browse/FLINK-11529)?
