Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/08/31 15:34:03 UTC

[GitHub] [flink] RocMarshal commented on a change in pull request #13235: [FLINK-19036][docs-zh] Translate page 'Application Profiling & Debugging' of 'Debugging & Monitoring' into Chinese

RocMarshal commented on a change in pull request #13235:
URL: https://github.com/apache/flink/pull/13235#discussion_r480207566



##########
File path: docs/monitoring/application_profiling.zh.md
##########
@@ -25,58 +25,56 @@ under the License.
 * ToC
 {:toc}
 
-## Overview of Custom Logging with Apache Flink
+<a name="overview-of-custom-logging-with-apache-flink"></a>
 
-Each standalone JobManager, TaskManager, HistoryServer, and ZooKeeper daemon redirects `stdout` and `stderr` to a file
-with a `.out` filename suffix and writes internal logging to a file with a `.log` suffix. Java options configured by the
-user in `env.java.opts`, `env.java.opts.jobmanager`, `env.java.opts.taskmanager`, `env.java.opts.historyserver` and
-`env.java.opts.client` can likewise define log files with
-use of the script variable `FLINK_LOG_PREFIX` and by enclosing the options in double quotes for late evaluation. Log files
-using `FLINK_LOG_PREFIX` are rotated along with the default `.out` and `.log` files.
+## 使用 Apache Flink 自定义日志概述
 
-## Profiling with Java Flight Recorder
+每个独立的 JobManager,TaskManager,HistoryServer,ZooKeeper 后台进程都将 `stdout` 和 `stderr` 重定向到 `.out` 文件名后缀的文件,并将其内部的日志记录写入到 `.log` 后缀的文件。用户可以在 `env.java.opts`,`env.java.opts.jobmanager`,`env.java.opts.taskmanager`,`env.java.opts.historyserver` 和 `env.java.opts.client` 配置项中配置 Java 选项(包括 log 相关的选项),同样也可以使用脚本变量 `FLINK_LOG_PREFIX` 定义日志文件,并将选项括在双引号中以供后期使用。日志文件将使用 `FLINK_LOG_PREFIX` 与默认的 `.out` 和 `.log` 后缀一起滚动。
 
-Java Flight Recorder is a profiling and event collection framework built into the Oracle JDK.
-[Java Mission Control](http://www.oracle.com/technetwork/java/javaseproducts/mission-control/java-mission-control-1998576.html)
-is an advanced set of tools that enables efficient and detailed analysis of the extensive of data collected by Java
-Flight Recorder. Example configuration:
+<a name="profiling-with-java-flight-recorder"></a>
+
+## 使用 Java Flight Recorder 分析
+
+Java Flight Recorder 是 Oracle JDK 内置的分析和事件收集框架。[Java Mission Control](http://www.oracle.com/technetwork/java/javaseproducts/mission-control/java-mission-control-1998576.html) 是一套先进的工具,可以对 Java Flight Recorder 收集的大量数据进行高效和详细的分析。配置示例:
 
 {% highlight yaml %}
 env.java.opts: "-XX:+UnlockCommercialFeatures -XX:+UnlockDiagnosticVMOptions -XX:+FlightRecorder -XX:+DebugNonSafepoints -XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true,dumponexitpath=${FLINK_LOG_PREFIX}.jfr"
 {% endhighlight %}
 
-## Profiling with JITWatch
+<a name="profiling-with-jitwatch"></a>
+
+## 使用 JITWatch 分析
 
-[JITWatch](https://github.com/AdoptOpenJDK/jitwatch/wiki) is a log analyser and visualizer for the Java HotSpot JIT
-compiler used to inspect inlining decisions, hot methods, bytecode, and assembly. Example configuration:
+[JITWatch](https://github.com/AdoptOpenJDK/jitwatch/wiki) Java HotSpot JIT 编译器的日志分析器和可视化工具,用于检查内联决策,热方法,字节码和汇编。配置示例:
 
 {% highlight yaml %}
 env.java.opts: "-XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation -XX:LogFile=${FLINK_LOG_PREFIX}.jit -XX:+PrintAssembly"
 {% endhighlight %}
 
-## Analyzing Out of Memory Problems
+<a name="analyzing-out-of-memory-problems"></a>
 
-If you encounter `OutOfMemoryExceptions` with your Flink application, then it is a good idea to enable heap dumps on out of memory errors.
+## 分析内存不足问题(Out of Memory Problems)

Review comment:
```suggestion
## 分析内存溢出问题(Out of Memory Problems)
```

##########
File path: docs/monitoring/application_profiling.zh.md
##########
@@ -25,58 +25,56 @@ under the License.
 * ToC
 {:toc}
 
-## Overview of Custom Logging with Apache Flink
+<a name="overview-of-custom-logging-with-apache-flink"></a>
 
-Each standalone JobManager, TaskManager, HistoryServer, and ZooKeeper daemon redirects `stdout` and `stderr` to a file
-with a `.out` filename suffix and writes internal logging to a file with a `.log` suffix. Java options configured by the
-user in `env.java.opts`, `env.java.opts.jobmanager`, `env.java.opts.taskmanager`, `env.java.opts.historyserver` and
-`env.java.opts.client` can likewise define log files with
-use of the script variable `FLINK_LOG_PREFIX` and by enclosing the options in double quotes for late evaluation. Log files
-using `FLINK_LOG_PREFIX` are rotated along with the default `.out` and `.log` files.
+## 使用 Apache Flink 自定义日志概述
 
-## Profiling with Java Flight Recorder
+每个独立的 JobManager,TaskManager,HistoryServer,ZooKeeper 后台进程都将 `stdout` 和 `stderr` 重定向到 `.out` 文件名后缀的文件,并将其内部的日志记录写入到 `.log` 后缀的文件。用户可以在 `env.java.opts`,`env.java.opts.jobmanager`,`env.java.opts.taskmanager`,`env.java.opts.historyserver` 和 `env.java.opts.client` 配置项中配置 Java 选项(包括 log 相关的选项),同样也可以使用脚本变量 `FLINK_LOG_PREFIX` 定义日志文件,并将选项括在双引号中以供后期使用。日志文件将使用 `FLINK_LOG_PREFIX` 与默认的 `.out` 和 `.log` 后缀一起滚动。
 
-Java Flight Recorder is a profiling and event collection framework built into the Oracle JDK.
-[Java Mission Control](http://www.oracle.com/technetwork/java/javaseproducts/mission-control/java-mission-control-1998576.html)
-is an advanced set of tools that enables efficient and detailed analysis of the extensive of data collected by Java
-Flight Recorder. Example configuration:
+<a name="profiling-with-java-flight-recorder"></a>
+
+## 使用 Java Flight Recorder 分析
+
+Java Flight Recorder 是 Oracle JDK 内置的分析和事件收集框架。[Java Mission Control](http://www.oracle.com/technetwork/java/javaseproducts/mission-control/java-mission-control-1998576.html) 是一套先进的工具,可以对 Java Flight Recorder 收集的大量数据进行高效和详细的分析。配置示例:
 
 {% highlight yaml %}
 env.java.opts: "-XX:+UnlockCommercialFeatures -XX:+UnlockDiagnosticVMOptions -XX:+FlightRecorder -XX:+DebugNonSafepoints -XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true,dumponexitpath=${FLINK_LOG_PREFIX}.jfr"
 {% endhighlight %}
 
-## Profiling with JITWatch
+<a name="profiling-with-jitwatch"></a>
+
+## 使用 JITWatch 分析
 
-[JITWatch](https://github.com/AdoptOpenJDK/jitwatch/wiki) is a log analyser and visualizer for the Java HotSpot JIT
-compiler used to inspect inlining decisions, hot methods, bytecode, and assembly. Example configuration:
+[JITWatch](https://github.com/AdoptOpenJDK/jitwatch/wiki) Java HotSpot JIT 编译器的日志分析器和可视化工具,用于检查内联决策,热方法,字节码和汇编。配置示例:
 
 {% highlight yaml %}
 env.java.opts: "-XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation -XX:LogFile=${FLINK_LOG_PREFIX}.jit -XX:+PrintAssembly"
 {% endhighlight %}
 
-## Analyzing Out of Memory Problems
+<a name="analyzing-out-of-memory-problems"></a>
 
-If you encounter `OutOfMemoryExceptions` with your Flink application, then it is a good idea to enable heap dumps on out of memory errors.
+## 分析内存不足问题(Out of Memory Problems)
+
+如果你的 Flink 应用程序遇到 `OutOfMemoryExceptions` ,那么启用在内存不足错误时堆转储是一个好主意。
 
 {% highlight yaml %}
 env.java.opts: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${FLINK_LOG_PREFIX}.hprof"
 {% endhighlight %}
 
-The heap dump will allow you to analyze potential memory leaks in your user code.
-If the memory leak should be caused by Flink, then please reach out to the [dev mailing list](mailto:dev@flink.apache.org).
+堆转储将使你能够方便地分析用户代码中潜在的内存泄漏问题。如果内存泄漏是由 Flink 引起的,那么请联系[开发人员邮件列表](mailto:dev@flink.apache.org)。
+
+<a name="analyzing-memory--garbage-collection-behaviour"></a>
 
-## Analyzing Memory & Garbage Collection Behaviour
+## 分析内存和 Garbage Collection
 
-Memory usage and garbage collection can have a profound impact on your application.
-The effects can range from slight performance degradation to a complete cluster failure if the GC pauses are too long.
-If you want to better understand the memory and GC behaviour of your application, then you can enable memory logging on the `TaskManagers`.
+内存使用和 garbage collection 会对你的应用程序产生巨大的影响。如果 GC 停顿时间过长,其影响范围可能从轻微的性能下降到完全的集群故障。如果你想更好地理解应用程序的内存和 GC 行为,那么可以在 `TaskManagers` 上启用内存日志记录。
 
 {% highlight yaml %}
 taskmanager.debug.memory.log: true
 taskmanager.debug.memory.log-interval: 10000 // 10s interval
 {% endhighlight %}
 
-If you are interested in more detailed GC statistics, then you can activate the JVM's GC logging via:
+如果你对更详细的 GC 统计数据感兴趣,则可以通过以下方式激活 JVM 的 GC 日志记录:

Review comment:
```suggestion
如果你想了解更详细的 GC 统计数据,可以通过以下方式激活 JVM 的 GC 日志记录:
```
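
The quoted hunk ends right before the GC-logging options that this sentence introduces. For context, a typical set of options for that spot might look like the following; the flags shown are common JDK 8 GC-logging switches and are illustrative only, not taken from this pull request:

{% highlight yaml %}
# Illustrative JDK 8 GC-logging flags; ${FLINK_LOG_PREFIX} is expanded late thanks to the double quotes.
env.java.opts: "-Xloggc:${FLINK_LOG_PREFIX}.gc.log -XX:+PrintGCDetails -XX:+PrintGCDateStamps -XX:+UseGCLogFileRotation -XX:NumberOfGCLogFiles=10 -XX:GCLogFileSize=10M"
{% endhighlight %}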

##########
File path: docs/monitoring/application_profiling.zh.md
##########
@@ -25,58 +25,56 @@ under the License.
 * ToC
 {:toc}
 
-## Overview of Custom Logging with Apache Flink
+<a name="overview-of-custom-logging-with-apache-flink"></a>
 
-Each standalone JobManager, TaskManager, HistoryServer, and ZooKeeper daemon redirects `stdout` and `stderr` to a file
-with a `.out` filename suffix and writes internal logging to a file with a `.log` suffix. Java options configured by the
-user in `env.java.opts`, `env.java.opts.jobmanager`, `env.java.opts.taskmanager`, `env.java.opts.historyserver` and
-`env.java.opts.client` can likewise define log files with
-use of the script variable `FLINK_LOG_PREFIX` and by enclosing the options in double quotes for late evaluation. Log files
-using `FLINK_LOG_PREFIX` are rotated along with the default `.out` and `.log` files.
+## 使用 Apache Flink 自定义日志概述

Review comment:
```suggestion
## Apache Flink 自定义日志概述
```

##########
File path: docs/monitoring/application_profiling.zh.md
##########
@@ -25,58 +25,56 @@ under the License.
 * ToC
 {:toc}
 
-## Overview of Custom Logging with Apache Flink
+<a name="overview-of-custom-logging-with-apache-flink"></a>
 
-Each standalone JobManager, TaskManager, HistoryServer, and ZooKeeper daemon redirects `stdout` and `stderr` to a file
-with a `.out` filename suffix and writes internal logging to a file with a `.log` suffix. Java options configured by the
-user in `env.java.opts`, `env.java.opts.jobmanager`, `env.java.opts.taskmanager`, `env.java.opts.historyserver` and
-`env.java.opts.client` can likewise define log files with
-use of the script variable `FLINK_LOG_PREFIX` and by enclosing the options in double quotes for late evaluation. Log files
-using `FLINK_LOG_PREFIX` are rotated along with the default `.out` and `.log` files.
+## 使用 Apache Flink 自定义日志概述
 
-## Profiling with Java Flight Recorder
+每个独立的 JobManager,TaskManager,HistoryServer,ZooKeeper 后台进程都将 `stdout` 和 `stderr` 重定向到 `.out` 文件名后缀的文件,并将其内部的日志记录写入到 `.log` 后缀的文件。用户可以在 `env.java.opts`,`env.java.opts.jobmanager`,`env.java.opts.taskmanager`,`env.java.opts.historyserver` 和 `env.java.opts.client` 配置项中配置 Java 选项(包括 log 相关的选项),同样也可以使用脚本变量 `FLINK_LOG_PREFIX` 定义日志文件,并将选项括在双引号中以供后期使用。日志文件将使用 `FLINK_LOG_PREFIX` 与默认的 `.out` 和 `.log` 后缀一起滚动。

Review comment:
```suggestion
每个独立的 JobManager,TaskManager,HistoryServer,ZooKeeper 守护进程都将 `stdout` 和 `stderr` 重定向到名称后缀为 `.out` 的文件,并将其内部的日志记录写入到 `.log` 后缀的文件。用户可以在 `env.java.opts`,`env.java.opts.jobmanager`,`env.java.opts.taskmanager`,`env.java.opts.historyserver` 和 `env.java.opts.client` 配置项中配置 Java 选项(包括 log 相关的选项),同样也可以使用脚本变量 `FLINK_LOG_PREFIX` 定义日志文件,并将选项括在双引号中以供后期使用。日志文件将使用 `FLINK_LOG_PREFIX` 与默认的 `.out` 和 `.log` 后缀一起滚动。
```
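
The paragraph under review describes defining extra log files by referencing `FLINK_LOG_PREFIX` inside a double-quoted `env.java.opts` value so that the variable is evaluated late by the start-up scripts. The page's own heap-dump example further down follows exactly that pattern:

{% highlight yaml %}
# Double quotes defer expansion of ${FLINK_LOG_PREFIX}, so each daemon writes the dump
# next to its default .out and .log files (same example as in the "Out of Memory" section).
env.java.opts: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${FLINK_LOG_PREFIX}.hprof"
{% endhighlight %}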

##########
File path: docs/monitoring/application_profiling.zh.md
##########
@@ -25,58 +25,56 @@ under the License.
 * ToC
 {:toc}
 
-## Overview of Custom Logging with Apache Flink
+<a name="overview-of-custom-logging-with-apache-flink"></a>
 
-Each standalone JobManager, TaskManager, HistoryServer, and ZooKeeper daemon redirects `stdout` and `stderr` to a file
-with a `.out` filename suffix and writes internal logging to a file with a `.log` suffix. Java options configured by the
-user in `env.java.opts`, `env.java.opts.jobmanager`, `env.java.opts.taskmanager`, `env.java.opts.historyserver` and
-`env.java.opts.client` can likewise define log files with
-use of the script variable `FLINK_LOG_PREFIX` and by enclosing the options in double quotes for late evaluation. Log files
-using `FLINK_LOG_PREFIX` are rotated along with the default `.out` and `.log` files.
+## 使用 Apache Flink 自定义日志概述
 
-## Profiling with Java Flight Recorder
+每个独立的 JobManager,TaskManager,HistoryServer,ZooKeeper 后台进程都将 `stdout` 和 `stderr` 重定向到 `.out` 文件名后缀的文件,并将其内部的日志记录写入到 `.log` 后缀的文件。用户可以在 `env.java.opts`,`env.java.opts.jobmanager`,`env.java.opts.taskmanager`,`env.java.opts.historyserver` 和 `env.java.opts.client` 配置项中配置 Java 选项(包括 log 相关的选项),同样也可以使用脚本变量 `FLINK_LOG_PREFIX` 定义日志文件,并将选项括在双引号中以供后期使用。日志文件将使用 `FLINK_LOG_PREFIX` 与默认的 `.out` 和 `.log` 后缀一起滚动。
 
-Java Flight Recorder is a profiling and event collection framework built into the Oracle JDK.
-[Java Mission Control](http://www.oracle.com/technetwork/java/javaseproducts/mission-control/java-mission-control-1998576.html)
-is an advanced set of tools that enables efficient and detailed analysis of the extensive of data collected by Java
-Flight Recorder. Example configuration:
+<a name="profiling-with-java-flight-recorder"></a>
+
+## 使用 Java Flight Recorder 分析
+
+Java Flight Recorder 是 Oracle JDK 内置的分析和事件收集框架。[Java Mission Control](http://www.oracle.com/technetwork/java/javaseproducts/mission-control/java-mission-control-1998576.html) 是一套先进的工具,可以对 Java Flight Recorder 收集的大量数据进行高效和详细的分析。配置示例:
 
 {% highlight yaml %}
 env.java.opts: "-XX:+UnlockCommercialFeatures -XX:+UnlockDiagnosticVMOptions -XX:+FlightRecorder -XX:+DebugNonSafepoints -XX:FlightRecorderOptions=defaultrecording=true,dumponexit=true,dumponexitpath=${FLINK_LOG_PREFIX}.jfr"
 {% endhighlight %}
 
-## Profiling with JITWatch
+<a name="profiling-with-jitwatch"></a>
+
+## 使用 JITWatch 分析
 
-[JITWatch](https://github.com/AdoptOpenJDK/jitwatch/wiki) is a log analyser and visualizer for the Java HotSpot JIT
-compiler used to inspect inlining decisions, hot methods, bytecode, and assembly. Example configuration:
+[JITWatch](https://github.com/AdoptOpenJDK/jitwatch/wiki) Java HotSpot JIT 编译器的日志分析器和可视化工具,用于检查内联决策,热方法,字节码和汇编。配置示例:
 
 {% highlight yaml %}
 env.java.opts: "-XX:+UnlockDiagnosticVMOptions -XX:+TraceClassLoading -XX:+LogCompilation -XX:LogFile=${FLINK_LOG_PREFIX}.jit -XX:+PrintAssembly"
 {% endhighlight %}
 
-## Analyzing Out of Memory Problems
+<a name="analyzing-out-of-memory-problems"></a>
 
-If you encounter `OutOfMemoryExceptions` with your Flink application, then it is a good idea to enable heap dumps on out of memory errors.
+## 分析内存不足问题(Out of Memory Problems)
+
+如果你的 Flink 应用程序遇到 `OutOfMemoryExceptions` ,那么启用在内存不足错误时堆转储是一个好主意。
 
 {% highlight yaml %}
 env.java.opts: "-XX:+HeapDumpOnOutOfMemoryError -XX:HeapDumpPath=${FLINK_LOG_PREFIX}.hprof"
 {% endhighlight %}
 
-The heap dump will allow you to analyze potential memory leaks in your user code.
-If the memory leak should be caused by Flink, then please reach out to the [dev mailing list](mailto:dev@flink.apache.org).
+堆转储将使你能够方便地分析用户代码中潜在的内存泄漏问题。如果内存泄漏是由 Flink 引起的,那么请联系[开发人员邮件列表](mailto:dev@flink.apache.org)。
+
+<a name="analyzing-memory--garbage-collection-behaviour"></a>
 
-## Analyzing Memory & Garbage Collection Behaviour
+## 分析内存和 Garbage Collection
 
-Memory usage and garbage collection can have a profound impact on your application.
-The effects can range from slight performance degradation to a complete cluster failure if the GC pauses are too long.
-If you want to better understand the memory and GC behaviour of your application, then you can enable memory logging on the `TaskManagers`.
+内存使用和 garbage collection 会对你的应用程序产生巨大的影响。如果 GC 停顿时间过长,其影响范围可能从轻微的性能下降到完全的集群故障。如果你想更好地理解应用程序的内存和 GC 行为,那么可以在 `TaskManagers` 上启用内存日志记录。

Review comment:
```suggestion
内存使用和 garbage collection 会对你的应用程序产生巨大的影响。如果 GC 停顿时间过长,其影响力小到性能下降,大到集群全面瘫痪。如果你想更好地理解应用程序的内存和 GC 行为,可以在 `TaskManagers` 上启用内存日志记录。
```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org