Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/02/25 06:49:27 UTC

[GitHub] [flink] PatrickRen opened a new pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

PatrickRen opened a new pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207
 
 
   ## What is the purpose of the change
   
   This pull request translates /opt/filesystems/s3.zh.md into Chinese, and fixes a typo in the original English document.
   
   
   ## Brief change log
   
     - Translate /opt/filesystems/s3.zh.md into Chinese
    - Fix a typo in /opt/filesystems/s3.md
   
   
   ## Verifying this change
   
   This change is a trivial rework / code cleanup without any test coverage.
   
   
   ## Does this pull request potentially affect one of the following parts:
   
     - Dependencies (does it add or upgrade a dependency): no
     - The public API, i.e., is any changed class annotated with `@Public(Evolving)`: no
     - The serializers: no
     - The runtime per-record code paths (performance sensitive): no
     - Anything that affects deployment or recovery: JobManager (and its components), Checkpointing, Kubernetes/Yarn/Mesos, ZooKeeper: no
     - The S3 file system connector: no
   
   ## Documentation
   
     - Does this pull request introduce a new feature?  no
     - If yes, how is the feature documented? not applicable
   

----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.
 
For queries about this service, please contact Infrastructure at:
users@infra.apache.org


With regards,
Apache Git Services

[GitHub] [flink] flinkbot commented on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot commented on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590718863
 
 
   ## CI report:
   
   * 5d4205e058b63f5e320edc6e21384c1be017316a UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------

[GitHub] [flink] tillrohrmann closed pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
tillrohrmann closed pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207
 
 
   

----------------------------------------------------------------

[GitHub] [flink] PatrickRen commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#discussion_r383736051
 
 

 ##########
 File path: docs/ops/filesystems/s3.zh.md
 ##########
 @@ -23,123 +23,113 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl}}/ops/state/state_backends.html).
+[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) 提供用于多种场景的云对象存储。S3 可与 Flink 一起使用以读取、写入数据,并可与 [流的 **State backends**]({{ site.baseurl}}/ops/state/state_backends.html) 相结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use S3 objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,S3 对象可类似于普通文件使用:
 
 {% highlight plain %}
 s3://<your-bucket>/<endpoint>
 {% endhighlight %}
 
-The endpoint can either be a single file or a directory, for example:
+Endpoint 可以是一个文件或目录,例如:
 
 {% highlight java %}
-// Read from S3 bucket
+// 读取 S3 bucket
 env.readTextFile("s3://<bucket>/<endpoint>");
 
-// Write to S3 bucket
+// 写入 S3 bucket
 stream.writeAsText("s3://<bucket>/<endpoint>");
 
-// Use S3 as FsStatebackend
+// 使用 S3 作为 FsStatebackend
 env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 {% endhighlight %}
 
-Note that these examples are *not* exhaustive and you can use S3 in other places as well, including your [high availability setup](../jobmanager_high_availability.html) or the [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend); everywhere that Flink expects a FileSystem URI.
+注意这些例子并*不详尽*,S3 同样可以用在其他场景,包括 [JobManager 高可用配置](../jobmanager_high_availability.html) 或 [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend),以及所有 Flink 需要使用文件系统 URI 的位置。
 
-For most use cases, you may use one of our `flink-s3-fs-hadoop` and `flink-s3-fs-presto` S3 filesystem plugins which are self-contained and easy to set up.
-For some cases, however, e.g., for using S3 as YARN's resource storage dir, it may be necessary to set up a specific Hadoop S3 filesystem implementation.
+在大部分使用场景下,可使用 `flink-s3-fs-hadoop` 和 `flink-s3-fs-presto` 两个独立且易于设置的 S3 文件系统插件。然而在某些情况下,例如使用 S3 作为 YARN 的资源存储目录时,可能需要配置 Hadoop S3 文件系统。
 
-### Hadoop/Presto S3 File Systems plugins
+### Hadoop/Presto S3 文件系统插件
 
-{% panel **Note:** You don't have to configure this manually if you are running [Flink on EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html). %}
+{% panel **注意:** 如果您在使用 [Flink on EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html),您无需手动对此进行配置。 %}
 
-Flink provides two file systems to talk to Amazon S3, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
-Both implementations are self-contained with no dependency footprint, so there is no need to add Hadoop to the classpath to use them.
+Flink 提供两种文件系统用来与 S3 交互:`flink-s3-fs-presto` 和 `flink-s3-fs-hadoop`。两种实现都是独立的且没有依赖项,因此使用时无需将 Hadoop 添加至 classpath。
 
-  - `flink-s3-fs-presto`, registered under the scheme *s3://* and *s3p://*, is based on code from the [Presto project](https://prestodb.io/).
-  You can configure it the same way you can [configure the Presto file system](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration) by placing adding the configurations to your `flink-conf.yaml`. Presto is the recommended file system for checkpointing to S3.
+  - `flink-s3-fs-presto`,通过 *s3://* 和 *s3p://* 两种 scheme 使用,基于 [Presto project](https://prestodb.io/)。
+  可以通过与[配置 Presto 文件系统](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration)相同的方法进行配置,即将配置添加到 `flink-conf.yaml` 文件中。推荐使用 Presto 文件系统来在 S3 中建立 checkpoint。
 
-  - `flink-s3-fs-hadoop`, registered under *s3://* and *s3a://*, based on code from the [Hadoop Project](https://hadoop.apache.org/).
-  The file system can be [configured exactly like Hadoop's s3a](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A) by placing adding the configurations to your `flink-conf.yaml`. It is the only S3 file system with support for the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html).
+  - `flink-s3-fs-hadoop`,通过 *s3://* 和 *s3a://* 两种 scheme 使用, 基于 [Hadoop Project](https://hadoop.apache.org/)。
+  文件系统可以与 [Hadoop S3A 完全相同的配置方法](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A)进行配置,即将配置添加到 `flink-conf.yaml` 文件中。它是唯一一个支持 [StreamingFileSink]({{ site.baseurl}}/zh/dev/connectors/streamfile_sink.html) 的文件系统。
 
 Review comment:
   Fixed in the latest commit. Thanks!
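
For reference, the individual snippets quoted in the hunk above combine into one runnable job along the following lines. This is only a sketch, not code from the PR: the bucket and object paths are placeholders, and it assumes the Flink 1.10-era DataStream API used throughout the page.

{% highlight java %}
// Sketch only: hypothetical bucket and paths, Flink 1.10-era APIs assumed.
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class S3ReadWriteJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // Checkpoint state to S3 through the FsStateBackend, as in the doc.
        env.setStateBackend(new FsStateBackend("s3://my-bucket/checkpoints"));

        // Read an object stored in S3 like a regular file.
        DataStream<String> lines = env.readTextFile("s3://my-bucket/input/data.txt");

        // Write results back to S3.
        lines.writeAsText("s3://my-bucket/output/result.txt");

        env.execute("S3 read/write sketch");
    }
}
{% endhighlight %}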

----------------------------------------------------------------

[GitHub] [flink] flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590718863
 
 
   ## CI report:
   
   * cf7babee05efe8fe3395b3ce3961f2bf293fc645 Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150441360) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5556) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------

[GitHub] [flink] PatrickRen commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#discussion_r383732964
 
 

 ##########
 File path: docs/ops/filesystems/s3.zh.md
 ##########
 @@ -23,123 +23,113 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl}}/ops/state/state_backends.html).
+[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) 提供用于多种场景的云对象存储。S3 可与 Flink 一起使用以读取、写入数据,并可与 [流的 **State backends**]({{ site.baseurl}}/ops/state/state_backends.html) 相结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use S3 objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,S3 对象可类似于普通文件使用:
 
 {% highlight plain %}
 s3://<your-bucket>/<endpoint>
 {% endhighlight %}
 
-The endpoint can either be a single file or a directory, for example:
+Endpoint 可以是一个文件或目录,例如:
 
 {% highlight java %}
-// Read from S3 bucket
+// 读取 S3 bucket
 env.readTextFile("s3://<bucket>/<endpoint>");
 
-// Write to S3 bucket
+// 写入 S3 bucket
 stream.writeAsText("s3://<bucket>/<endpoint>");
 
-// Use S3 as FsStatebackend
+// 使用 S3 作为 FsStatebackend
 env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 {% endhighlight %}
 
-Note that these examples are *not* exhaustive and you can use S3 in other places as well, including your [high availability setup](../jobmanager_high_availability.html) or the [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend); everywhere that Flink expects a FileSystem URI.
+注意这些例子并*不详尽*,S3 同样可以用在其他场景,包括 [JobManager 高可用配置](../jobmanager_high_availability.html) 或 [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend),以及所有 Flink 需要使用文件系统 URI 的位置。
 
-For most use cases, you may use one of our `flink-s3-fs-hadoop` and `flink-s3-fs-presto` S3 filesystem plugins which are self-contained and easy to set up.
-For some cases, however, e.g., for using S3 as YARN's resource storage dir, it may be necessary to set up a specific Hadoop S3 filesystem implementation.
+在大部分使用场景下,可使用 `flink-s3-fs-hadoop` 和 `flink-s3-fs-presto` 两个独立且易于设置的 S3 文件系统插件。然而在某些情况下,例如使用 S3 作为 YARN 的资源存储目录时,可能需要配置 Hadoop S3 文件系统。
 
 Review comment:
   Yes you're right. I didn't see the "one of" in the English documentation. Thanks!

----------------------------------------------------------------

[GitHub] [flink] flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590718863
 
 
   ## CI report:
   
   * cf7babee05efe8fe3395b3ce3961f2bf293fc645 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/150441360) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5556) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------

[GitHub] [flink] flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590718863
 
 
   ## CI report:
   
   * 5d4205e058b63f5e320edc6e21384c1be017316a Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150428435) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5546) 
   * cf7babee05efe8fe3395b3ce3961f2bf293fc645 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------

[GitHub] [flink] flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590718863
 
 
   ## CI report:
   
   * 5d4205e058b63f5e320edc6e21384c1be017316a Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/150428435) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------

[GitHub] [flink] Sxnan commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
Sxnan commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#discussion_r383699395
 
 

 ##########
 File path: docs/ops/filesystems/s3.zh.md
 ##########
 @@ -23,123 +23,113 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl}}/ops/state/state_backends.html).
+[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) 提供用于多种场景的云对象存储。S3 可与 Flink 一起使用以读取、写入数据,并可与 [流的 **State backends**]({{ site.baseurl}}/ops/state/state_backends.html) 相结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use S3 objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,S3 对象可类似于普通文件使用:
 
 {% highlight plain %}
 s3://<your-bucket>/<endpoint>
 {% endhighlight %}
 
-The endpoint can either be a single file or a directory, for example:
+Endpoint 可以是一个文件或目录,例如:
 
 {% highlight java %}
-// Read from S3 bucket
+// 读取 S3 bucket
 env.readTextFile("s3://<bucket>/<endpoint>");
 
-// Write to S3 bucket
+// 写入 S3 bucket
 stream.writeAsText("s3://<bucket>/<endpoint>");
 
-// Use S3 as FsStatebackend
+// 使用 S3 作为 FsStatebackend
 env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 {% endhighlight %}
 
-Note that these examples are *not* exhaustive and you can use S3 in other places as well, including your [high availability setup](../jobmanager_high_availability.html) or the [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend); everywhere that Flink expects a FileSystem URI.
+注意这些例子并*不详尽*,S3 同样可以用在其他场景,包括 [JobManager 高可用配置](../jobmanager_high_availability.html) 或 [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend),以及所有 Flink 需要使用文件系统 URI 的位置。
 
 Review comment:
   URL needs to change to `{{ site.baseurl }}/zh/ops/state/state_backends.html#the-rocksdbstatebackend`

----------------------------------------------------------------

[GitHub] [flink] flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590718863
 
 
   ## CI report:
   
   * 5d4205e058b63f5e320edc6e21384c1be017316a Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/150428435) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5546) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------

[GitHub] [flink] PatrickRen commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#discussion_r383735973
 
 

 ##########
 File path: docs/ops/filesystems/s3.zh.md
 ##########
 @@ -23,123 +23,113 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl}}/ops/state/state_backends.html).
+[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) 提供用于多种场景的云对象存储。S3 可与 Flink 一起使用以读取、写入数据,并可与 [流的 **State backends**]({{ site.baseurl}}/ops/state/state_backends.html) 相结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use S3 objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,S3 对象可类似于普通文件使用:
 
 {% highlight plain %}
 s3://<your-bucket>/<endpoint>
 {% endhighlight %}
 
-The endpoint can either be a single file or a directory, for example:
+Endpoint 可以是一个文件或目录,例如:
 
 {% highlight java %}
-// Read from S3 bucket
+// 读取 S3 bucket
 env.readTextFile("s3://<bucket>/<endpoint>");
 
-// Write to S3 bucket
+// 写入 S3 bucket
 stream.writeAsText("s3://<bucket>/<endpoint>");
 
-// Use S3 as FsStatebackend
+// 使用 S3 作为 FsStatebackend
 env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 {% endhighlight %}
 
-Note that these examples are *not* exhaustive and you can use S3 in other places as well, including your [high availability setup](../jobmanager_high_availability.html) or the [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend); everywhere that Flink expects a FileSystem URI.
+注意这些例子并*不详尽*,S3 同样可以用在其他场景,包括 [JobManager 高可用配置](../jobmanager_high_availability.html) 或 [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend),以及所有 Flink 需要使用文件系统 URI 的位置。
 
-For most use cases, you may use one of our `flink-s3-fs-hadoop` and `flink-s3-fs-presto` S3 filesystem plugins which are self-contained and easy to set up.
-For some cases, however, e.g., for using S3 as YARN's resource storage dir, it may be necessary to set up a specific Hadoop S3 filesystem implementation.
+在大部分使用场景下,可使用 `flink-s3-fs-hadoop` 和 `flink-s3-fs-presto` 两个独立且易于设置的 S3 文件系统插件。然而在某些情况下,例如使用 S3 作为 YARN 的资源存储目录时,可能需要配置 Hadoop S3 文件系统。
 
-### Hadoop/Presto S3 File Systems plugins
+### Hadoop/Presto S3 文件系统插件
 
-{% panel **Note:** You don't have to configure this manually if you are running [Flink on EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html). %}
+{% panel **注意:** 如果您在使用 [Flink on EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html),您无需手动对此进行配置。 %}
 
-Flink provides two file systems to talk to Amazon S3, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
-Both implementations are self-contained with no dependency footprint, so there is no need to add Hadoop to the classpath to use them.
+Flink 提供两种文件系统用来与 S3 交互:`flink-s3-fs-presto` 和 `flink-s3-fs-hadoop`。两种实现都是独立的且没有依赖项,因此使用时无需将 Hadoop 添加至 classpath。
 
-  - `flink-s3-fs-presto`, registered under the scheme *s3://* and *s3p://*, is based on code from the [Presto project](https://prestodb.io/).
-  You can configure it the same way you can [configure the Presto file system](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration) by placing adding the configurations to your `flink-conf.yaml`. Presto is the recommended file system for checkpointing to S3.
+  - `flink-s3-fs-presto`,通过 *s3://* 和 *s3p://* 两种 scheme 使用,基于 [Presto project](https://prestodb.io/)。
+  可以通过与[配置 Presto 文件系统](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration)相同的方法进行配置,即将配置添加到 `flink-conf.yaml` 文件中。推荐使用 Presto 文件系统来在 S3 中建立 checkpoint。
 
-  - `flink-s3-fs-hadoop`, registered under *s3://* and *s3a://*, based on code from the [Hadoop Project](https://hadoop.apache.org/).
-  The file system can be [configured exactly like Hadoop's s3a](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A) by placing adding the configurations to your `flink-conf.yaml`. It is the only S3 file system with support for the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html).
+  - `flink-s3-fs-hadoop`,通过 *s3://* 和 *s3a://* 两种 scheme 使用, 基于 [Hadoop Project](https://hadoop.apache.org/)。
+  文件系统可以与 [Hadoop S3A 完全相同的配置方法](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A)进行配置,即将配置添加到 `flink-conf.yaml` 文件中。它是唯一一个支持 [StreamingFileSink]({{ site.baseurl}}/zh/dev/connectors/streamfile_sink.html) 的文件系统。
 
-Both `flink-s3-fs-hadoop` and `flink-s3-fs-presto` register default FileSystem
-wrappers for URIs with the *s3://* scheme, `flink-s3-fs-hadoop` also registers
-for *s3a://* and `flink-s3-fs-presto` also registers for *s3p://*, so you can
-use this to use both at the same time.
-For example, the job uses the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html) which only supports Hadoop, but uses Presto for checkpointing.
-In this case, it is advised to explicitly use *s3a://* as a scheme for the sink (Hadoop) and *s3p://* for checkpointing (Presto).
+`flink-s3-fs-hadoop` 和 `flink-s3-fs-presto` 都为 *s3://* scheme 注册了默认的文件系统包装器,`flink-s3-fs-hadoop` 另外注册了 *s3a://*,`flink-s3-fs-presto` 注册了 *s3p://*,因此二者可以同时使用。
+例如某作业使用了 [StreamingFileSink]({{ site.baseurl}}/zh/dev/connectors/streamfile_sink.html),它仅支持 Hadoop,但建立 checkpoint 使用 Presto。在这种情况下,建议明确地使用 *s3a://* 作为 sink (Hadoop) 的 scheme,checkpoint (Presto) 使用 *s3p://*。
 
-To use `flink-s3-fs-hadoop` or `flink-s3-fs-presto`, copy the respective JAR file from the `opt` directory to the `plugins` directory of your Flink distribution before starting Flink, e.g.
+在启动 Flink 之前,将对应的 JAR 文件从 `opt` 复制到 Flink 发行版的 `plugins` 目录下,以使用 `flink-s3-fs-hadoop` 或 `flink-s3-fs-presto`。
 
 {% highlight bash %}
 mkdir ./plugins/s3-fs-presto
 cp ./opt/flink-s3-fs-presto-{{ site.version }}.jar ./plugins/s3-fs-presto/
 {% endhighlight %}
 
-#### Configure Access Credentials
+#### 配置访问凭据
 
-After setting up the S3 FileSystem wrapper, you need to make sure that Flink is allowed to access your S3 buckets.
+在设置好 S3 文件系统包装器后,您需要确认 Flink 具有访问 S3 Bucket 的权限。
 
-##### Identity and Access Management (IAM) (Recommended)
+##### Identity and Access Management (IAM)(推荐使用)
 
-The recommended way of setting up credentials on AWS is via [Identity and Access Management (IAM)](http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html). You can use IAM features to securely give Flink instances the credentials that they need to access S3 buckets. Details about how to do this are beyond the scope of this documentation. Please refer to the AWS user guide. What you are looking for are [IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
+建议通过 [Identity and Access Management (IAM)](http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) 来配置 AWS 凭据。可使用 IAM 功能为 Flink 实例安全地提供访问 S3 Bucket 所需的凭据。关于配置的细节超出了本文档的范围,请参考 AWS 用户手册中的 [IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) 部分。
 
-If you set this up correctly, you can manage access to S3 within AWS and don't need to distribute any access keys to Flink.
+如果配置正确,则可在 AWS 中管理对 S3 的访问,而无需为 Flink 分发任何访问密钥(Access Key)。
 
-##### Access Keys (Discouraged)
+##### 访问密钥(Access Key)(不推荐)
 
-Access to S3 can be granted via your **access and secret key pair**. Please note that this is discouraged since the [introduction of IAM roles](https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2).
+可以通过**访问密钥对(Access and secret key)**授予 S3 访问权限。请注意,根据 [Introduction of IAM roles](https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2),不推荐使用该方法。
 
 Review comment:
   Fixed in the latest commit. Thanks!
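
The scheme discussion in the hunk above is easiest to see in code. The sketch below is not part of the PR and uses placeholder bucket and paths: it writes through the Hadoop-only StreamingFileSink via *s3a://* while checkpointing through the Presto implementation via *s3p://*.

{% highlight java %}
import org.apache.flink.api.common.serialization.SimpleStringEncoder;
import org.apache.flink.core.fs.Path;
import org.apache.flink.runtime.state.filesystem.FsStateBackend;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;

public class MixedS3SchemesJob {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(60_000L);

        // Checkpoints go through flink-s3-fs-presto via the explicit s3p:// scheme.
        env.setStateBackend(new FsStateBackend("s3p://my-bucket/checkpoints"));

        DataStream<String> stream = env.readTextFile("s3a://my-bucket/input");

        // The StreamingFileSink only works with the Hadoop implementation,
        // so the sink path uses the explicit s3a:// scheme.
        StreamingFileSink<String> sink = StreamingFileSink
                .forRowFormat(new Path("s3a://my-bucket/output"),
                        new SimpleStringEncoder<String>("UTF-8"))
                .build();
        stream.addSink(sink);

        env.execute("Mixed s3a/s3p schemes sketch");
    }
}
{% endhighlight %}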

----------------------------------------------------------------

[GitHub] [flink] Sxnan commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
Sxnan commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#discussion_r383702964
 
 

 ##########
 File path: docs/ops/filesystems/s3.zh.md
 ##########
 @@ -23,123 +23,113 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl}}/ops/state/state_backends.html).
+[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) 提供用于多种场景的云对象存储。S3 可与 Flink 一起使用以读取、写入数据,并可与 [流的 **State backends**]({{ site.baseurl}}/ops/state/state_backends.html) 相结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use S3 objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,S3 对象可类似于普通文件使用:
 
 {% highlight plain %}
 s3://<your-bucket>/<endpoint>
 {% endhighlight %}
 
-The endpoint can either be a single file or a directory, for example:
+Endpoint 可以是一个文件或目录,例如:
 
 {% highlight java %}
-// Read from S3 bucket
+// 读取 S3 bucket
 env.readTextFile("s3://<bucket>/<endpoint>");
 
-// Write to S3 bucket
+// 写入 S3 bucket
 stream.writeAsText("s3://<bucket>/<endpoint>");
 
-// Use S3 as FsStatebackend
+// 使用 S3 作为 FsStatebackend
 env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 {% endhighlight %}
 
-Note that these examples are *not* exhaustive and you can use S3 in other places as well, including your [high availability setup](../jobmanager_high_availability.html) or the [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend); everywhere that Flink expects a FileSystem URI.
+注意这些例子并*不详尽*,S3 同样可以用在其他场景,包括 [JobManager 高可用配置](../jobmanager_high_availability.html) 或 [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend),以及所有 Flink 需要使用文件系统 URI 的位置。
 
-For most use cases, you may use one of our `flink-s3-fs-hadoop` and `flink-s3-fs-presto` S3 filesystem plugins which are self-contained and easy to set up.
-For some cases, however, e.g., for using S3 as YARN's resource storage dir, it may be necessary to set up a specific Hadoop S3 filesystem implementation.
+在大部分使用场景下,可使用 `flink-s3-fs-hadoop` 和 `flink-s3-fs-presto` 两个独立且易于设置的 S3 文件系统插件。然而在某些情况下,例如使用 S3 作为 YARN 的资源存储目录时,可能需要配置 Hadoop S3 文件系统。
 
-### Hadoop/Presto S3 File Systems plugins
+### Hadoop/Presto S3 文件系统插件
 
-{% panel **Note:** You don't have to configure this manually if you are running [Flink on EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html). %}
+{% panel **注意:** 如果您在使用 [Flink on EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html),您无需手动对此进行配置。 %}
 
-Flink provides two file systems to talk to Amazon S3, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
-Both implementations are self-contained with no dependency footprint, so there is no need to add Hadoop to the classpath to use them.
+Flink 提供两种文件系统用来与 S3 交互:`flink-s3-fs-presto` 和 `flink-s3-fs-hadoop`。两种实现都是独立的且没有依赖项,因此使用时无需将 Hadoop 添加至 classpath。
 
-  - `flink-s3-fs-presto`, registered under the scheme *s3://* and *s3p://*, is based on code from the [Presto project](https://prestodb.io/).
-  You can configure it the same way you can [configure the Presto file system](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration) by placing adding the configurations to your `flink-conf.yaml`. Presto is the recommended file system for checkpointing to S3.
+  - `flink-s3-fs-presto`,通过 *s3://* 和 *s3p://* 两种 scheme 使用,基于 [Presto project](https://prestodb.io/)。
+  可以通过与[配置 Presto 文件系统](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration)相同的方法进行配置,即将配置添加到 `flink-conf.yaml` 文件中。推荐使用 Presto 文件系统来在 S3 中建立 checkpoint。
 
-  - `flink-s3-fs-hadoop`, registered under *s3://* and *s3a://*, based on code from the [Hadoop Project](https://hadoop.apache.org/).
-  The file system can be [configured exactly like Hadoop's s3a](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A) by placing adding the configurations to your `flink-conf.yaml`. It is the only S3 file system with support for the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html).
+  - `flink-s3-fs-hadoop`,通过 *s3://* 和 *s3a://* 两种 scheme 使用, 基于 [Hadoop Project](https://hadoop.apache.org/)。
+  文件系统可以与 [Hadoop S3A 完全相同的配置方法](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A)进行配置,即将配置添加到 `flink-conf.yaml` 文件中。它是唯一一个支持 [StreamingFileSink]({{ site.baseurl}}/zh/dev/connectors/streamfile_sink.html) 的文件系统。
 
 Review comment:
   文件系统可以使用与 [Hadoop S3A 完全相同的配置方法](...)进行配置
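
Enabling the `flink-s3-fs-hadoop` plugin described in this hunk follows the same pattern as the presto commands shown on the page; a sketch, keeping the docs' version placeholder:

{% highlight bash %}
mkdir ./plugins/s3-fs-hadoop
cp ./opt/flink-s3-fs-hadoop-{{ site.version }}.jar ./plugins/s3-fs-hadoop/
{% endhighlight %}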

----------------------------------------------------------------

[GitHub] [flink] flinkbot commented on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot commented on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590712471
 
 
   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 5d4205e058b63f5e320edc6e21384c1be017316a (Tue Feb 25 06:52:08 UTC 2020)
   
     ✅ no warnings
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
    * ❓ 2. There is [consensus] that the contribution should go into Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
     The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>

----------------------------------------------------------------

[GitHub] [flink] Sxnan commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
Sxnan commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#discussion_r383700814
 
 

 ##########
 File path: docs/ops/filesystems/s3.zh.md
 ##########
 @@ -23,123 +23,113 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl}}/ops/state/state_backends.html).
+[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) 提供用于多种场景的云对象存储。S3 可与 Flink 一起使用以读取、写入数据,并可与 [流的 **State backends**]({{ site.baseurl}}/ops/state/state_backends.html) 相结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use S3 objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,S3 对象可类似于普通文件使用:
 
 {% highlight plain %}
 s3://<your-bucket>/<endpoint>
 {% endhighlight %}
 
-The endpoint can either be a single file or a directory, for example:
+Endpoint 可以是一个文件或目录,例如:
 
 {% highlight java %}
-// Read from S3 bucket
+// 读取 S3 bucket
 env.readTextFile("s3://<bucket>/<endpoint>");
 
-// Write to S3 bucket
+// 写入 S3 bucket
 stream.writeAsText("s3://<bucket>/<endpoint>");
 
-// Use S3 as FsStatebackend
+// 使用 S3 作为 FsStatebackend
 env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 {% endhighlight %}
 
-Note that these examples are *not* exhaustive and you can use S3 in other places as well, including your [high availability setup](../jobmanager_high_availability.html) or the [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend); everywhere that Flink expects a FileSystem URI.
+注意这些例子并*不详尽*,S3 同样可以用在其他场景,包括 [JobManager 高可用配置](../jobmanager_high_availability.html) 或 [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend),以及所有 Flink 需要使用文件系统 URI 的位置。
 
-For most use cases, you may use one of our `flink-s3-fs-hadoop` and `flink-s3-fs-presto` S3 filesystem plugins which are self-contained and easy to set up.
-For some cases, however, e.g., for using S3 as YARN's resource storage dir, it may be necessary to set up a specific Hadoop S3 filesystem implementation.
+在大部分使用场景下,可使用 `flink-s3-fs-hadoop` 和 `flink-s3-fs-presto` 两个独立且易于设置的 S3 文件系统插件。然而在某些情况下,例如使用 S3 作为 YARN 的资源存储目录时,可能需要配置 Hadoop S3 文件系统。
 
 Review comment:
   I think it is more accurate to use ”或“ instead of “和”

----------------------------------------------------------------

[GitHub] [flink] Sxnan commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
Sxnan commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#discussion_r383706433
 
 

 ##########
 File path: docs/ops/filesystems/s3.zh.md
 ##########
 @@ -23,123 +23,113 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl}}/ops/state/state_backends.html).
+[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) 提供用于多种场景的云对象存储。S3 可与 Flink 一起使用以读取、写入数据,并可与 [流的 **State backends**]({{ site.baseurl}}/ops/state/state_backends.html) 相结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use S3 objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,S3 对象可类似于普通文件使用:
 
 {% highlight plain %}
 s3://<your-bucket>/<endpoint>
 {% endhighlight %}
 
-The endpoint can either be a single file or a directory, for example:
+Endpoint 可以是一个文件或目录,例如:
 
 {% highlight java %}
-// Read from S3 bucket
+// 读取 S3 bucket
 env.readTextFile("s3://<bucket>/<endpoint>");
 
-// Write to S3 bucket
+// 写入 S3 bucket
 stream.writeAsText("s3://<bucket>/<endpoint>");
 
-// Use S3 as FsStatebackend
+// 使用 S3 作为 FsStatebackend
 env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 {% endhighlight %}
 
-Note that these examples are *not* exhaustive and you can use S3 in other places as well, including your [high availability setup](../jobmanager_high_availability.html) or the [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend); everywhere that Flink expects a FileSystem URI.
+注意这些例子并*不详尽*,S3 同样可以用在其他场景,包括 [JobManager 高可用配置](../jobmanager_high_availability.html) 或 [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend),以及所有 Flink 需要使用文件系统 URI 的位置。
 
-For most use cases, you may use one of our `flink-s3-fs-hadoop` and `flink-s3-fs-presto` S3 filesystem plugins which are self-contained and easy to set up.
-For some cases, however, e.g., for using S3 as YARN's resource storage dir, it may be necessary to set up a specific Hadoop S3 filesystem implementation.
+在大部分使用场景下,可使用 `flink-s3-fs-hadoop` 和 `flink-s3-fs-presto` 两个独立且易于设置的 S3 文件系统插件。然而在某些情况下,例如使用 S3 作为 YARN 的资源存储目录时,可能需要配置 Hadoop S3 文件系统。
 
-### Hadoop/Presto S3 File Systems plugins
+### Hadoop/Presto S3 文件系统插件
 
-{% panel **Note:** You don't have to configure this manually if you are running [Flink on EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html). %}
+{% panel **注意:** 如果您在使用 [Flink on EMR](https://docs.aws.amazon.com/emr/latest/ReleaseGuide/emr-flink.html),您无需手动对此进行配置。 %}
 
-Flink provides two file systems to talk to Amazon S3, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
-Both implementations are self-contained with no dependency footprint, so there is no need to add Hadoop to the classpath to use them.
+Flink 提供两种文件系统用来与 S3 交互:`flink-s3-fs-presto` 和 `flink-s3-fs-hadoop`。两种实现都是独立的且没有依赖项,因此使用时无需将 Hadoop 添加至 classpath。
 
-  - `flink-s3-fs-presto`, registered under the scheme *s3://* and *s3p://*, is based on code from the [Presto project](https://prestodb.io/).
-  You can configure it the same way you can [configure the Presto file system](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration) by placing adding the configurations to your `flink-conf.yaml`. Presto is the recommended file system for checkpointing to S3.
+  - `flink-s3-fs-presto`,通过 *s3://* 和 *s3p://* 两种 scheme 使用,基于 [Presto project](https://prestodb.io/)。
+  可以通过与[配置 Presto 文件系统](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration)相同的方法进行配置,即将配置添加到 `flink-conf.yaml` 文件中。推荐使用 Presto 文件系统来在 S3 中建立 checkpoint。
 
-  - `flink-s3-fs-hadoop`, registered under *s3://* and *s3a://*, based on code from the [Hadoop Project](https://hadoop.apache.org/).
-  The file system can be [configured exactly like Hadoop's s3a](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A) by placing adding the configurations to your `flink-conf.yaml`. It is the only S3 file system with support for the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html).
+  - `flink-s3-fs-hadoop`,通过 *s3://* 和 *s3a://* 两种 scheme 使用, 基于 [Hadoop Project](https://hadoop.apache.org/)。
+  文件系统可以与 [Hadoop S3A 完全相同的配置方法](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A)进行配置,即将配置添加到 `flink-conf.yaml` 文件中。它是唯一一个支持 [StreamingFileSink]({{ site.baseurl}}/zh/dev/connectors/streamfile_sink.html) 的文件系统。
 
-Both `flink-s3-fs-hadoop` and `flink-s3-fs-presto` register default FileSystem
-wrappers for URIs with the *s3://* scheme, `flink-s3-fs-hadoop` also registers
-for *s3a://* and `flink-s3-fs-presto` also registers for *s3p://*, so you can
-use this to use both at the same time.
-For example, the job uses the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html) which only supports Hadoop, but uses Presto for checkpointing.
-In this case, it is advised to explicitly use *s3a://* as a scheme for the sink (Hadoop) and *s3p://* for checkpointing (Presto).
+`flink-s3-fs-hadoop` 和 `flink-s3-fs-presto` 都为 *s3://* scheme 注册了默认的文件系统包装器,`flink-s3-fs-hadoop` 另外注册了 *s3a://*,`flink-s3-fs-presto` 注册了 *s3p://*,因此二者可以同时使用。
+例如某作业使用了 [StreamingFileSink]({{ site.baseurl}}/zh/dev/connectors/streamfile_sink.html),它仅支持 Hadoop,但建立 checkpoint 使用 Presto。在这种情况下,建议明确地使用 *s3a://* 作为 sink (Hadoop) 的 scheme,checkpoint (Presto) 使用 *s3p://*。
 
-To use `flink-s3-fs-hadoop` or `flink-s3-fs-presto`, copy the respective JAR file from the `opt` directory to the `plugins` directory of your Flink distribution before starting Flink, e.g.
+在启动 Flink 之前,将对应的 JAR 文件从 `opt` 复制到 Flink 发行版的 `plugins` 目录下,以使用 `flink-s3-fs-hadoop` 或 `flink-s3-fs-presto`。
 
 {% highlight bash %}
 mkdir ./plugins/s3-fs-presto
 cp ./opt/flink-s3-fs-presto-{{ site.version }}.jar ./plugins/s3-fs-presto/
 {% endhighlight %}
 
-#### Configure Access Credentials
+#### 配置访问凭据
 
-After setting up the S3 FileSystem wrapper, you need to make sure that Flink is allowed to access your S3 buckets.
+在设置好 S3 文件系统包装器后,您需要确认 Flink 具有访问 S3 Bucket 的权限。
 
-##### Identity and Access Management (IAM) (Recommended)
+##### Identity and Access Management (IAM)(推荐使用)
 
-The recommended way of setting up credentials on AWS is via [Identity and Access Management (IAM)](http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html). You can use IAM features to securely give Flink instances the credentials that they need to access S3 buckets. Details about how to do this are beyond the scope of this documentation. Please refer to the AWS user guide. What you are looking for are [IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html).
+建议通过 [Identity and Access Management (IAM)](http://docs.aws.amazon.com/IAM/latest/UserGuide/introduction.html) 来配置 AWS 凭据。可使用 IAM 功能为 Flink 实例安全地提供访问 S3 Bucket 所需的凭据。关于配置的细节超出了本文档的范围,请参考 AWS 用户手册中的 [IAM Roles](http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/iam-roles-for-amazon-ec2.html) 部分。
 
-If you set this up correctly, you can manage access to S3 within AWS and don't need to distribute any access keys to Flink.
+如果配置正确,则可在 AWS 中管理对 S3 的访问,而无需为 Flink 分发任何访问密钥(Access Key)。
 
-##### Access Keys (Discouraged)
+##### 访问密钥(Access Key)(不推荐)
 
-Access to S3 can be granted via your **access and secret key pair**. Please note that this is discouraged since the [introduction of IAM roles](https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2).
+可以通过**访问密钥对(Access and secret key)**授予 S3 访问权限。请注意,根据 [Introduction of IAM roles](https://blogs.aws.amazon.com/security/post/Tx1XG3FX6VMU6O5/A-safer-way-to-distribute-AWS-credentials-to-EC2),不推荐使用该方法。
 
 Review comment:
   (Access and secret key)-> (access and secret key)
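
If access keys must be used despite the caveat above, they go into `flink-conf.yaml`. A sketch with placeholder values, using the `s3.access-key` / `s3.secret-key` keys that the S3 filesystem plugins read:

{% highlight yaml %}
# Placeholder credentials only; IAM roles are the recommended alternative.
s3.access-key: your-access-key
s3.secret-key: your-secret-key
{% endhighlight %}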

----------------------------------------------------------------

[GitHub] [flink] PatrickRen commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
PatrickRen commented on a change in pull request #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#discussion_r383735737
 
 

 ##########
 File path: docs/ops/filesystems/s3.zh.md
 ##########
 @@ -23,123 +23,113 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) provides cloud object storage for a variety of use cases. You can use S3 with Flink for **reading** and **writing data** as well in conjunction with the [streaming **state backends**]({{ site.baseurl}}/ops/state/state_backends.html).
+[Amazon Simple Storage Service](http://aws.amazon.com/s3/) (Amazon S3) 提供用于多种场景的云对象存储。S3 可与 Flink 一起使用以读取、写入数据,并可与 [流的 **State backends**]({{ site.baseurl}}/ops/state/state_backends.html) 相结合使用。
 
 * This will be replaced by the TOC
 {:toc}
 
-You can use S3 objects like regular files by specifying paths in the following format:
+通过以下格式指定路径,S3 对象可类似于普通文件使用:
 
 {% highlight plain %}
 s3://<your-bucket>/<endpoint>
 {% endhighlight %}
 
-The endpoint can either be a single file or a directory, for example:
+Endpoint 可以是一个文件或目录,例如:
 
 {% highlight java %}
-// Read from S3 bucket
+// 读取 S3 bucket
 env.readTextFile("s3://<bucket>/<endpoint>");
 
-// Write to S3 bucket
+// 写入 S3 bucket
 stream.writeAsText("s3://<bucket>/<endpoint>");
 
-// Use S3 as FsStatebackend
+// 使用 S3 作为 FsStatebackend
 env.setStateBackend(new FsStateBackend("s3://<your-bucket>/<endpoint>"));
 {% endhighlight %}
 
-Note that these examples are *not* exhaustive and you can use S3 in other places as well, including your [high availability setup](../jobmanager_high_availability.html) or the [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend); everywhere that Flink expects a FileSystem URI.
+注意这些例子并*不详尽*,S3 同样可以用在其他场景,包括 [JobManager 高可用配置](../jobmanager_high_availability.html) 或 [RocksDBStateBackend]({{ site.baseurl }}/ops/state/state_backends.html#the-rocksdbstatebackend),以及所有 Flink 需要使用文件系统 URI 的位置。
 
 Review comment:
   Fixed in the latest commit. Thanks!
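
The RocksDBStateBackend mentioned in the hunk above accepts the same kind of S3 URI. A minimal sketch, assuming the `flink-statebackend-rocksdb` dependency is on the classpath and a placeholder bucket:

{% highlight java %}
import org.apache.flink.contrib.streaming.state.RocksDBStateBackend;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

public class RocksDbOnS3 {

    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env =
                StreamExecutionEnvironment.getExecutionEnvironment();

        // RocksDB keeps working state on local disk and writes checkpoints
        // to the S3 URI, resolved through the same FileSystem plugins
        // as the other examples.
        env.setStateBackend(new RocksDBStateBackend("s3://my-bucket/checkpoints"));

        env.readTextFile("s3://my-bucket/input")
           .writeAsText("s3://my-bucket/output");

        env.execute("RocksDB on S3 sketch");
    }
}
{% endhighlight %}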

----------------------------------------------------------------

[GitHub] [flink] flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590718863
 
 
   ## CI report:
   
   * 5d4205e058b63f5e320edc6e21384c1be017316a Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150428435) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5546) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------

[GitHub] [flink] flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on issue #11207: [FLINK-16131] [docs] Translate /opt/filesystems/s3.zh.md into Chinese
URL: https://github.com/apache/flink/pull/11207#issuecomment-590718863
 
 
   ## CI report:
   
   * 5d4205e058b63f5e320edc6e21384c1be017316a Travis: [SUCCESS](https://travis-ci.com/flink-ci/flink/builds/150428435) Azure: [SUCCESS](https://dev.azure.com/rmetzger/5bd3ef0a-4359-41af-abca-811b04098d2e/_build/results?buildId=5546) 
   * cf7babee05efe8fe3395b3ce3961f2bf293fc645 Travis: [PENDING](https://travis-ci.com/flink-ci/flink/builds/150441360) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>

----------------------------------------------------------------