Posted to commits@flink.apache.org by pn...@apache.org on 2019/08/01 08:31:14 UTC

[flink] 02/03: [FLINK-12998][docs] Document optional file systems libs to use plugins loading mechanism

This is an automated email from the ASF dual-hosted git repository.

pnowojski pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 4f8c573d19999a2f422e4fe66e357b41b482b336
Author: Aleksey Pak <al...@ververica.com>
AuthorDate: Thu Jul 18 14:22:47 2019 +0200

    [FLINK-12998][docs] Document optional file systems libs to use plugins loading mechanism
---
 docs/ops/filesystems/azure.md    | 11 +++---
 docs/ops/filesystems/azure.zh.md | 11 +++---
 docs/ops/filesystems/index.md    | 72 +++++++++++++++++++++++++++-------------
 docs/ops/filesystems/index.zh.md | 72 +++++++++++++++++++++++++++-------------
 docs/ops/filesystems/oss.md      | 12 ++++---
 docs/ops/filesystems/oss.zh.md   | 12 ++++---
 docs/ops/filesystems/s3.md       | 25 +++++++-------
 docs/ops/filesystems/s3.zh.md    | 25 +++++++-------
 8 files changed, 150 insertions(+), 90 deletions(-)

diff --git a/docs/ops/filesystems/azure.md b/docs/ops/filesystems/azure.md
index 36720c8..d721be5 100644
--- a/docs/ops/filesystems/azure.md
+++ b/docs/ops/filesystems/azure.md
@@ -38,7 +38,7 @@ wasb://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path
 wasbs://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>
 {% endhighlight %}
 
-Below shows how to use Azure Blob Storage with Flink:
+See below for how to use Azure Blob Storage in a Flink job:
 
 {% highlight java %}
 // Read from Azure Blob storage
@@ -51,12 +51,13 @@ stream.writeAsText("wasb://<your-container>@$<your-azure-account>.blob.core.wind
 env.setStateBackend(new FsStateBackend("wasb://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>"));
 {% endhighlight %}
 
-### Shaded Hadoop Azure Blob Storage file system 
+### Shaded Hadoop Azure Blob Storage file system
 
-To use `flink-azure-fs-hadoop,` copy the respective JAR file from the opt directory to the lib directory of your Flink distribution before starting Flink, e.g.
+To use `flink-azure-fs-hadoop`, copy the respective JAR file from the `opt` directory to a directory under the `plugins` directory of your Flink distribution before starting Flink, e.g.
 
 {% highlight bash %}
-cp ./opt/flink-azure-fs-hadoop-{{ site.version }}.jar ./lib/
+mkdir ./plugins/azure-fs-hadoop
+cp ./opt/flink-azure-fs-hadoop-{{ site.version }}.jar ./plugins/azure-fs-hadoop/
 {% endhighlight %}
 
 `flink-azure-fs-hadoop` registers default FileSystem wrappers for URIs with the *wasb://* and *wasbs://* (SSL encrypted access) scheme.
@@ -74,4 +75,4 @@ There are some required configurations that must be added to `flink-conf.yaml`:
 fs.azure.account.key.youraccount.blob.core.windows.net: Azure Blob Storage access key
 {% endhighlight %}
 
-{% top %} 
+{% top %}
diff --git a/docs/ops/filesystems/azure.zh.md b/docs/ops/filesystems/azure.zh.md
index 36720c8..d721be5 100644
--- a/docs/ops/filesystems/azure.zh.md
+++ b/docs/ops/filesystems/azure.zh.md
@@ -38,7 +38,7 @@ wasb://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path
 wasbs://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>
 {% endhighlight %}
 
-Below shows how to use Azure Blob Storage with Flink:
+See below for how to use Azure Blob Storage in a Flink job:
 
 {% highlight java %}
 // Read from Azure Blob storage
@@ -51,12 +51,13 @@ stream.writeAsText("wasb://<your-container>@$<your-azure-account>.blob.core.wind
 env.setStateBackend(new FsStateBackend("wasb://<your-container>@$<your-azure-account>.blob.core.windows.net/<object-path>"));
 {% endhighlight %}
 
-### Shaded Hadoop Azure Blob Storage file system 
+### Shaded Hadoop Azure Blob Storage file system
 
-To use `flink-azure-fs-hadoop,` copy the respective JAR file from the opt directory to the lib directory of your Flink distribution before starting Flink, e.g.
+To use `flink-azure-fs-hadoop`, copy the respective JAR file from the `opt` directory to a directory under the `plugins` directory of your Flink distribution before starting Flink, e.g.
 
 {% highlight bash %}
-cp ./opt/flink-azure-fs-hadoop-{{ site.version }}.jar ./lib/
+mkdir ./plugins/azure-fs-hadoop
+cp ./opt/flink-azure-fs-hadoop-{{ site.version }}.jar ./plugins/azure-fs-hadoop/
 {% endhighlight %}
 
 `flink-azure-fs-hadoop` registers default FileSystem wrappers for URIs with the *wasb://* and *wasbs://* (SSL encrypted access) scheme.
@@ -74,4 +75,4 @@ There are some required configurations that must be added to `flink-conf.yaml`:
 fs.azure.account.key.youraccount.blob.core.windows.net: Azure Blob Storage access key
 {% endhighlight %}
 
-{% top %} 
+{% top %}
diff --git a/docs/ops/filesystems/index.md b/docs/ops/filesystems/index.md
index eb4087d..88f5133 100644
--- a/docs/ops/filesystems/index.md
+++ b/docs/ops/filesystems/index.md
@@ -25,7 +25,7 @@ under the License.
 -->
 
 Apache Flink uses file systems to consume and persistently store data, both for the results of applications and for fault tolerance and recovery.
-These are some of most of the popular file systems, including *local*, *hadoop-compatible*, *S3*, *MapR FS*, *OpenStack Swift FS*, *Aliyun OSS* and *Azure Blob Storage*.
+Flink supports some of the most popular file systems, including *local*, *hadoop-compatible*, *Amazon S3*, *MapR FS*, *OpenStack Swift FS*, *Aliyun OSS* and *Azure Blob Storage*.
 
 The file system used for a particular file is determined by its URI scheme.
 For example, `file:///home/user/text.txt` refers to a file in the local file system, while `hdfs://namenode:50010/data/user/text.txt` is a file in a specific HDFS cluster.
@@ -35,32 +35,54 @@ File system instances are instantiated once per process and then cached/pooled,
 * This will be replaced by the TOC
 {:toc}
 
-## Built-in File Systems
+## Local File System
 
-Flink ships with implementations for the following file systems:
+Flink has built-in support for the file system of the local machine, including any NFS or SAN drives mounted into that local file system.
+It can be used by default without additional configuration. Local files are referenced with the *file://* URI scheme.
 
-  - **local**: This file system is used when the scheme is *"file://"*, and it represents the file system of the local machine, including any NFS or SAN drives mounted into that local file system.
+## Pluggable File Systems
 
-  - **S3**: Flink directly provides file systems to talk to Amazon S3 with two alternative implementations, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`. Both implementations are self-contained with no dependency footprint.
-    
-  - **MapR FS**: The MapR file system *"maprfs://"* is automatically available when the MapR libraries are in the classpath.  
-  
-  - **OpenStack Swift FS**: Flink directly provides a file system to talk to the OpenStack Swift file system, registered under the scheme *"swift://"*. 
-  The implementation of `flink-swift-fs-hadoop` is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
-  To use it when using Flink as a library, add the respective maven dependency (`org.apache.flink:flink-swift-fs-hadoop:{{ site.version }}`
-  When starting a Flink application from the Flink binaries, copy or move the respective jar file from the `opt` folder to the `lib` folder.
+The Apache Flink project supports the following file systems:
+
+  - [**Amazon S3**](./s3.html) object storage is supported by two alternative implementations: `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
+  Both implementations are self-contained with no dependency footprint.
+
+  - The **MapR FS** file system adapter is supported directly in the main Flink distribution under the *maprfs://* URI scheme.
+  You must provide the MapR libraries on the classpath (for example, in the `lib` directory).
+
+  - **OpenStack Swift FS** is supported by `flink-swift-fs-hadoop` and registered under the *swift://* URI scheme.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+  When using Flink as a library, add the respective Maven dependency (`org.apache.flink:flink-swift-fs-hadoop:{{ site.version }}`).
   
-  - **Azure Blob Storage**: 
-    Flink directly provides a file system to work with Azure Blob Storage. 
-    This filesystem is registered under the scheme *"wasb(s)://"*.
-    The implementation is self-contained with no dependency footprint.
+  - **[Aliyun Object Storage Service](./oss.html)** is supported by `flink-oss-fs-hadoop` and registered under the *oss://* URI scheme.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
 
-## HDFS and Hadoop File System support 
+  - **[Azure Blob Storage](./azure.html)** is supported by `flink-azure-fs-hadoop` and registered under the *wasb(s)://* URI schemes.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+
+Except for **MapR FS**, all of these file systems can be used as plugins.
+
+To use a pluggable file system, copy the corresponding JAR file from the `opt` directory to a directory under the `plugins` directory
+of your Flink distribution before starting Flink, e.g.
+
+{% highlight bash %}
+mkdir ./plugins/s3-fs-hadoop
+cp ./opt/flink-s3-fs-hadoop-{{ site.version }}.jar ./plugins/s3-fs-hadoop/
+{% endhighlight %}
+
+<span class="label label-danger">Attention</span> The plugin mechanism for file systems was introduced in Flink version `1.9` to
+support dedicated Java class loaders per plugin and to move away from the class shading mechanism.
+You can still use the provided file systems (or your own implementations) via the old mechanism by copying the corresponding
+JAR file into the `lib` directory.
+
+We encourage the use of the plugin-based loading mechanism for file systems that support it. Loading file system components from the `lib`
+directory may not be supported in future Flink versions.
+
+## HDFS and Hadoop File System support
 
 For all schemes where Flink cannot find a directly supported file system, it falls back to Hadoop.
 All Hadoop file systems are automatically available when `flink-runtime` and the Hadoop libraries are on the classpath.
 
-
 This way, Flink seamlessly supports all Hadoop file systems and all Hadoop-compatible file systems (HCFS).
 
   - **hdfs**
@@ -83,17 +105,21 @@ fs.hdfs.hadoopconf: /path/to/etc/hadoop
 This registers `/path/to/etc/hadoop` as Hadoop's configuration directory and is where Flink will look for the `core-site.xml` and `hdfs-site.xml` files.
 
 
-## Adding new File System Implementations
+## Adding a new pluggable File System implementation
 
-File systems are represented via the `org.apache.flink.core.fs.FileSystem` class, which captures the ways to access and modify files and objects in that file system. 
-Implementations are discovered by Flink through Java's service abstraction, making it easy to add new file system implementations.
+File systems are represented via the `org.apache.flink.core.fs.FileSystem` class, which captures the ways to access and modify files and objects in that file system.
 
 To add a new file system:
 
   - Add the File System implementation, which is a subclass of `org.apache.flink.core.fs.FileSystem`.
   - Add a factory that instantiates that file system and declares the scheme under which the FileSystem is registered. This must be a subclass of `org.apache.flink.core.fs.FileSystemFactory`.
-  - Add a service entry. Create a file `META-INF/services/org.apache.flink.core.fs.FileSystemFactory` which contains the class name of your file system factory class.
+  - Add a service entry. Create a file `META-INF/services/org.apache.flink.core.fs.FileSystemFactory` which contains the class name of your file system factory class
+  (see the [Java Service Loader docs](https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html) for more details).
+
+During plugin discovery, the file system factory class is loaded by a dedicated Java class loader to avoid class conflicts with other plugins and Flink components.
+The same class loader should be used for file system instantiation and for all file system operation calls.
 
-See the [Java Service Loader docs](https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html) for more details on how service loaders work.
+<span class="label label-warning">Warning</span> In practice, this means you should avoid using the `Thread.currentThread().getContextClassLoader()` class loader
+in your implementation.
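+
+Below is a minimal sketch of such a factory for a hypothetical *myfs://* scheme. The package,
+class name and scheme are illustrative only and not part of Flink; a real factory would return
+its own `FileSystem` subclass from `create()`.
+
+{% highlight java %}
+package org.example.fs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.FileSystemFactory;
+
+public class MyFileSystemFactory implements FileSystemFactory {
+
+    @Override
+    public String getScheme() {
+        // Paths like myfs://host/path are routed to this factory.
+        return "myfs";
+    }
+
+    @Override
+    public void configure(Configuration config) {
+        // Receives the Flink configuration before create() is called.
+    }
+
+    @Override
+    public FileSystem create(URI fsUri) throws IOException {
+        // The local file system only keeps this sketch self-contained;
+        // return your own FileSystem implementation here.
+        return FileSystem.getLocalFileSystem();
+    }
+}
+{% endhighlight %}
+
+The matching service entry is a file `META-INF/services/org.apache.flink.core.fs.FileSystemFactory`
+containing the single line `org.example.fs.MyFileSystemFactory`.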
 
 {% top %}
diff --git a/docs/ops/filesystems/index.zh.md b/docs/ops/filesystems/index.zh.md
index 414c82f..88f5133 100644
--- a/docs/ops/filesystems/index.zh.md
+++ b/docs/ops/filesystems/index.zh.md
@@ -25,7 +25,7 @@ under the License.
 -->
 
 Apache Flink uses file systems to consume and persistently store data, both for the results of applications and for fault tolerance and recovery.
-These are some of most of the popular file systems, including *local*, *hadoop-compatible*, *S3*, *MapR FS*, *OpenStack Swift FS*, *Aliyun OSS* and *Azure Blob Storage*.
+Flink supports some of the most popular file systems, including *local*, *hadoop-compatible*, *Amazon S3*, *MapR FS*, *OpenStack Swift FS*, *Aliyun OSS* and *Azure Blob Storage*.
 
 The file system used for a particular file is determined by its URI scheme.
 For example, `file:///home/user/text.txt` refers to a file in the local file system, while `hdfs://namenode:50010/data/user/text.txt` is a file in a specific HDFS cluster.
@@ -35,32 +35,54 @@ File system instances are instantiated once per process and then cached/pooled,
 * This will be replaced by the TOC
 {:toc}
 
-## Built-in File Systems
+## Local File System
 
-Flink ships with implementations for the following file systems:
+Flink has built-in support for the file system of the local machine, including any NFS or SAN drives mounted into that local file system.
+It can be used by default without additional configuration. Local files are referenced with the *file://* URI scheme.
 
-  - **local**: This file system is used when the scheme is *"file://"*, and it represents the file system of the local machine, including any NFS or SAN drives mounted into that local file system.
+## Pluggable File Systems
 
-  - **S3**: Flink directly provides file systems to talk to Amazon S3 with two alternative implementations, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`. Both implementations are self-contained with no dependency footprint.
-    
-  - **MapR FS**: The MapR file system *"maprfs://"* is automatically available when the MapR libraries are in the classpath.
-  
-  - **OpenStack Swift FS**: Flink directly provides a file system to talk to the OpenStack Swift file system, registered under the scheme *"swift://"*. 
-  The implementation of `flink-swift-fs-hadoop` is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
-  To use it when using Flink as a library, add the respective maven dependency (`org.apache.flink:flink-swift-fs-hadoop:{{ site.version }}`
-  When starting a Flink application from the Flink binaries, copy or move the respective jar file from the `opt` folder to the `lib` folder.
+The Apache Flink project supports the following file systems:
+
+  - [**Amazon S3**](./s3.html) object storage is supported by two alternative implementations: `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
+  Both implementations are self-contained with no dependency footprint.
+
+  - The **MapR FS** file system adapter is supported directly in the main Flink distribution under the *maprfs://* URI scheme.
+  You must provide the MapR libraries on the classpath (for example, in the `lib` directory).
+
+  - **OpenStack Swift FS** is supported by `flink-swift-fs-hadoop` and registered under the *swift://* URI scheme.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+  When using Flink as a library, add the respective Maven dependency (`org.apache.flink:flink-swift-fs-hadoop:{{ site.version }}`).
   
-  - **Azure Blob Storage**: 
-    Flink directly provides a file system to work with Azure Blob Storage. 
-    This filesystem is registered under the scheme *"wasb(s)://"*.
-    The implementation is self-contained with no dependency footprint.
+  - **[Aliyun Object Storage Service](./oss.html)** is supported by `flink-oss-fs-hadoop` and registered under the *oss://* URI scheme.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
 
-## HDFS and Hadoop File System support 
+  - **[Azure Blob Storage](./azure.html)** is supported by `flink-azure-fs-hadoop` and registered under the *wasb(s)://* URI schemes.
+  The implementation is based on the [Hadoop Project](https://hadoop.apache.org/) but is self-contained with no dependency footprint.
+
+Except for **MapR FS**, all of these file systems can be used as plugins.
+
+To use a pluggable file system, copy the corresponding JAR file from the `opt` directory to a directory under the `plugins` directory
+of your Flink distribution before starting Flink, e.g.
+
+{% highlight bash %}
+mkdir ./plugins/s3-fs-hadoop
+cp ./opt/flink-s3-fs-hadoop-{{ site.version }}.jar ./plugins/s3-fs-hadoop/
+{% endhighlight %}
+
+<span class="label label-danger">Attention</span> The plugin mechanism for file systems was introduced in Flink version `1.9` to
+support dedicated Java class loaders per plugin and to move away from the class shading mechanism.
+You can still use the provided file systems (or your own implementations) via the old mechanism by copying the corresponding
+JAR file into the `lib` directory.
+
+We encourage the use of the plugin-based loading mechanism for file systems that support it. Loading file system components from the `lib`
+directory may not be supported in future Flink versions.
+
+## HDFS and Hadoop File System support
 
 For all schemes where Flink cannot find a directly supported file system, it falls back to Hadoop.
 All Hadoop file systems are automatically available when `flink-runtime` and the Hadoop libraries are on the classpath.
 
-
 This way, Flink seamlessly supports all Hadoop file systems and all Hadoop-compatible file systems (HCFS).
 
   - **hdfs**
@@ -83,17 +105,21 @@ fs.hdfs.hadoopconf: /path/to/etc/hadoop
 This registers `/path/to/etc/hadoop` as Hadoop's configuration directory and is where Flink will look for the `core-site.xml` and `hdfs-site.xml` files.
 
 
-## Adding new File System Implementations
+## Adding a new pluggable File System implementation
 
-File systems are represented via the `org.apache.flink.core.fs.FileSystem` class, which captures the ways to access and modify files and objects in that file system. 
-Implementations are discovered by Flink through Java's service abstraction, making it easy to add new file system implementations.
+File systems are represented via the `org.apache.flink.core.fs.FileSystem` class, which captures the ways to access and modify files and objects in that file system.
 
 To add a new file system:
 
   - Add the File System implementation, which is a subclass of `org.apache.flink.core.fs.FileSystem`.
   - Add a factory that instantiates that file system and declares the scheme under which the FileSystem is registered. This must be a subclass of `org.apache.flink.core.fs.FileSystemFactory`.
-  - Add a service entry. Create a file `META-INF/services/org.apache.flink.core.fs.FileSystemFactory` which contains the class name of your file system factory class.
+  - Add a service entry. Create a file `META-INF/services/org.apache.flink.core.fs.FileSystemFactory` which contains the class name of your file system factory class
+  (see the [Java Service Loader docs](https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html) for more details).
+
+During plugin discovery, the file system factory class is loaded by a dedicated Java class loader to avoid class conflicts with other plugins and Flink components.
+The same class loader should be used for file system instantiation and for all file system operation calls.
 
-See the [Java Service Loader docs](https://docs.oracle.com/javase/8/docs/api/java/util/ServiceLoader.html) for more details on how service loaders work.
+<span class="label label-warning">Warning</span> In practice, this means you should avoid using the `Thread.currentThread().getContextClassLoader()` class loader
+in your implementation.
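+
+Below is a minimal sketch of such a factory for a hypothetical *myfs://* scheme. The package,
+class name and scheme are illustrative only and not part of Flink; a real factory would return
+its own `FileSystem` subclass from `create()`.
+
+{% highlight java %}
+package org.example.fs;
+
+import java.io.IOException;
+import java.net.URI;
+
+import org.apache.flink.configuration.Configuration;
+import org.apache.flink.core.fs.FileSystem;
+import org.apache.flink.core.fs.FileSystemFactory;
+
+public class MyFileSystemFactory implements FileSystemFactory {
+
+    @Override
+    public String getScheme() {
+        // Paths like myfs://host/path are routed to this factory.
+        return "myfs";
+    }
+
+    @Override
+    public void configure(Configuration config) {
+        // Receives the Flink configuration before create() is called.
+    }
+
+    @Override
+    public FileSystem create(URI fsUri) throws IOException {
+        // The local file system only keeps this sketch self-contained;
+        // return your own FileSystem implementation here.
+        return FileSystem.getLocalFileSystem();
+    }
+}
+{% endhighlight %}
+
+The matching service entry is a file `META-INF/services/org.apache.flink.core.fs.FileSystemFactory`
+containing the single line `org.example.fs.MyFileSystemFactory`.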
 
 {% top %}
diff --git a/docs/ops/filesystems/oss.md b/docs/ops/filesystems/oss.md
index 0c98c43..e2af733 100644
--- a/docs/ops/filesystems/oss.md
+++ b/docs/ops/filesystems/oss.md
@@ -37,7 +37,7 @@ You can use OSS objects like regular files by specifying paths in the following
 oss://<your-bucket>/<object-name>
 {% endhighlight %}
 
-Below shows how to use OSS with Flink:
+See below for how to use OSS in a Flink job:
 
 {% highlight java %}
 // Read from OSS bucket
@@ -50,17 +50,19 @@ stream.writeAsText("oss://<your-bucket>/<object-name>")
 env.setStateBackend(new FsStateBackend("oss://<your-bucket>/<object-name>"));
 {% endhighlight %}
 
-### Shaded Hadoop OSS file system 
+### Shaded Hadoop OSS file system
 
-To use `flink-oss-fs-hadoop,` copy the respective JAR file from the opt directory to the lib directory of your Flink distribution before starting Flink, e.g.
+To use `flink-oss-fs-hadoop`, copy the respective JAR file from the `opt` directory to a directory under the `plugins` directory of your Flink distribution before starting Flink, e.g.
 
 {% highlight bash %}
-cp ./opt/flink-oss-fs-hadoop-{{ site.version }}.jar ./lib/
+mkdir ./plugins/oss-fs-hadoop
+cp ./opt/flink-oss-fs-hadoop-{{ site.version }}.jar ./plugins/oss-fs-hadoop/
 {% endhighlight %}
 
-`flink-oss-fs-hadoop` registers default FileSystem wrappers for URIs with the oss:// scheme.
+`flink-oss-fs-hadoop` registers default FileSystem wrappers for URIs with the *oss://* scheme.
 
 #### Configurations setup
+
 After setting up the OSS FileSystem wrapper, you need to add some configurations to make sure that Flink is allowed to access your OSS buckets.
 
 To allow for easy adoption, you can use the same configuration keys in `flink-conf.yaml` as in Hadoop's `core-site.xml`
diff --git a/docs/ops/filesystems/oss.zh.md b/docs/ops/filesystems/oss.zh.md
index d6834d1..f310d20 100644
--- a/docs/ops/filesystems/oss.zh.md
+++ b/docs/ops/filesystems/oss.zh.md
@@ -37,7 +37,7 @@ You can use OSS objects like regular files by specifying paths in the following
 oss://<your-bucket>/<object-name>
 {% endhighlight %}
 
-Below shows how to use OSS with Flink:
+See below for how to use OSS in a Flink job:
 
 {% highlight java %}
 // Read from OSS bucket
@@ -50,17 +50,19 @@ stream.writeAsText("oss://<your-bucket>/<object-name>")
 env.setStateBackend(new FsStateBackend("oss://<your-bucket>/<object-name>"));
 {% endhighlight %}
 
-### Shaded Hadoop OSS file system 
+### Shaded Hadoop OSS file system
 
-To use `flink-oss-fs-hadoop,` copy the respective JAR file from the opt directory to the lib directory of your Flink distribution before starting Flink, e.g.
+To use `flink-oss-fs-hadoop`, copy the respective JAR file from the `opt` directory to a directory under the `plugins` directory of your Flink distribution before starting Flink, e.g.
 
 {% highlight bash %}
-cp ./opt/flink-oss-fs-hadoop-{{ site.version }}.jar ./lib/
+mkdir ./plugins/oss-fs-hadoop
+cp ./opt/flink-oss-fs-hadoop-{{ site.version }}.jar ./plugins/oss-fs-hadoop/
 {% endhighlight %}
 
-`flink-oss-fs-hadoop` registers default FileSystem wrappers for URIs with the oss:// scheme.
+`flink-oss-fs-hadoop` registers default FileSystem wrappers for URIs with the *oss://* scheme.
 
 #### Configurations setup
+
 After setting up the OSS FileSystem wrapper, you need to add some configurations to make sure that Flink is allowed to access your OSS buckets.
 
 To allow for easy adoption, you can use the same configuration keys in `flink-conf.yaml` as in Hadoop's `core-site.xml`
diff --git a/docs/ops/filesystems/s3.md b/docs/ops/filesystems/s3.md
index f25e266..f601b46 100644
--- a/docs/ops/filesystems/s3.md
+++ b/docs/ops/filesystems/s3.md
@@ -59,23 +59,24 @@ For some cases, however, e.g., for using S3 as YARN's resource storage dir, it m
 Flink provides two file systems to talk to Amazon S3, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
 Both implementations are self-contained with no dependency footprint, so there is no need to add Hadoop to the classpath to use them.
 
-  - `flink-s3-fs-presto`, registered under the scheme *"s3://"* and *"s3p://"*, is based on code from the [Presto project](https://prestodb.io/).
+  - `flink-s3-fs-presto`, registered under the schemes *s3://* and *s3p://*, is based on code from the [Presto project](https://prestodb.io/).
  You can configure it the same way you can [configure the Presto file system](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration) by adding the configurations to your `flink-conf.yaml`. Presto is the recommended file system for checkpointing to S3.
-      
-  - `flink-s3-fs-hadoop`, registered under *"s3://"* and *"s3a://"*, based on code from the [Hadoop Project](https://hadoop.apache.org/).
+
+  - `flink-s3-fs-hadoop`, registered under *s3://* and *s3a://*, is based on code from the [Hadoop Project](https://hadoop.apache.org/).
  The file system can be [configured exactly like Hadoop's s3a](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A) by adding the configurations to your `flink-conf.yaml`. Shaded Hadoop is the only S3 file system with support for the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html).
-    
+
 Both `flink-s3-fs-hadoop` and `flink-s3-fs-presto` register default FileSystem
-wrappers for URIs with the `s3://` scheme, `flink-s3-fs-hadoop` also registers
-for `s3a://` and `flink-s3-fs-presto` also registers for `s3p://`, so you can
+wrappers for URIs with the *s3://* scheme, `flink-s3-fs-hadoop` also registers
+for *s3a://* and `flink-s3-fs-presto` also registers for *s3p://*, so you can
 use both at the same time.
 For example, a job can use the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html), which only supports Hadoop, while using Presto for checkpointing.
-In this case, it is advised to use explicitly *"s3a://"* as a scheme for the sink (Hadoop) and *"s3p://"* for checkpointing (Presto).
-    
-To use either `flink-s3-fs-hadoop` or `flink-s3-fs-presto`, copy the respective JAR file from the `opt` directory to the `lib` directory of your Flink distribution before starting Flink, e.g.
+In this case, it is advised to explicitly use *s3a://* as a scheme for the sink (Hadoop) and *s3p://* for checkpointing (Presto).
+
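+A sketch of this split, with placeholder bucket names and an illustrative job:
+
+{% highlight java %}
+import org.apache.flink.api.common.serialization.SimpleStringEncoder;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.runtime.state.filesystem.FsStateBackend;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
+
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// Checkpoints go through flink-s3-fs-presto (s3p://).
+env.setStateBackend(new FsStateBackend("s3p://<your-bucket>/checkpoints"));
+
+// The sink goes through flink-s3-fs-hadoop (s3a://), which the StreamingFileSink requires.
+StreamingFileSink<String> sink = StreamingFileSink
+    .forRowFormat(new Path("s3a://<your-bucket>/output"), new SimpleStringEncoder<String>("UTF-8"))
+    .build();
+
+env.fromElements("record-1", "record-2").addSink(sink);
+{% endhighlight %}
+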
+To use `flink-s3-fs-hadoop` or `flink-s3-fs-presto`, copy the respective JAR file from the `opt` directory to a directory under the `plugins` directory of your Flink distribution before starting Flink, e.g.
 
 {% highlight bash %}
-cp ./opt/flink-s3-fs-presto-{{ site.version }}.jar ./lib/
+mkdir ./plugins/s3-fs-presto
+cp ./opt/flink-s3-fs-presto-{{ site.version }}.jar ./plugins/s3-fs-presto/
 {% endhighlight %}
 
 #### Configure Access Credentials
@@ -102,7 +103,7 @@ s3.secret-key: your-secret-key
 ## Configure Non-S3 Endpoint
 
 The S3 Filesystems also support using S3 compliant object stores such as [IBM's Cloud Object Storage](https://www.ibm.com/cloud/object-storage) and [Minio](https://min.io/).
-To do so, configure your endpoint in `flink-conf.yaml`. 
+To do so, configure your endpoint in `flink-conf.yaml`.
 
 {% highlight yaml %}
 s3.endpoint: your-endpoint-hostname
@@ -133,4 +134,4 @@ The `s3.entropy.key` defines the string in paths that is replaced by the random
 If a file system operation does not pass the *"inject entropy"* write option, the entropy key substring is simply removed.
 The `s3.entropy.length` defines the number of random alphanumeric characters used for entropy.
 
-{% top %}
\ No newline at end of file
+{% top %}
diff --git a/docs/ops/filesystems/s3.zh.md b/docs/ops/filesystems/s3.zh.md
index f25e266..f601b46 100644
--- a/docs/ops/filesystems/s3.zh.md
+++ b/docs/ops/filesystems/s3.zh.md
@@ -59,23 +59,24 @@ For some cases, however, e.g., for using S3 as YARN's resource storage dir, it m
 Flink provides two file systems to talk to Amazon S3, `flink-s3-fs-presto` and `flink-s3-fs-hadoop`.
 Both implementations are self-contained with no dependency footprint, so there is no need to add Hadoop to the classpath to use them.
 
-  - `flink-s3-fs-presto`, registered under the scheme *"s3://"* and *"s3p://"*, is based on code from the [Presto project](https://prestodb.io/).
+  - `flink-s3-fs-presto`, registered under the schemes *s3://* and *s3p://*, is based on code from the [Presto project](https://prestodb.io/).
  You can configure it the same way you can [configure the Presto file system](https://prestodb.io/docs/0.187/connector/hive.html#amazon-s3-configuration) by adding the configurations to your `flink-conf.yaml`. Presto is the recommended file system for checkpointing to S3.
-      
-  - `flink-s3-fs-hadoop`, registered under *"s3://"* and *"s3a://"*, based on code from the [Hadoop Project](https://hadoop.apache.org/).
+
+  - `flink-s3-fs-hadoop`, registered under *s3://* and *s3a://*, is based on code from the [Hadoop Project](https://hadoop.apache.org/).
  The file system can be [configured exactly like Hadoop's s3a](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#S3A) by adding the configurations to your `flink-conf.yaml`. Shaded Hadoop is the only S3 file system with support for the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html).
-    
+
 Both `flink-s3-fs-hadoop` and `flink-s3-fs-presto` register default FileSystem
-wrappers for URIs with the `s3://` scheme, `flink-s3-fs-hadoop` also registers
-for `s3a://` and `flink-s3-fs-presto` also registers for `s3p://`, so you can
+wrappers for URIs with the *s3://* scheme, `flink-s3-fs-hadoop` also registers
+for *s3a://* and `flink-s3-fs-presto` also registers for *s3p://*, so you can
 use both at the same time.
 For example, a job can use the [StreamingFileSink]({{ site.baseurl}}/dev/connectors/streamfile_sink.html), which only supports Hadoop, while using Presto for checkpointing.
-In this case, it is advised to use explicitly *"s3a://"* as a scheme for the sink (Hadoop) and *"s3p://"* for checkpointing (Presto).
-    
-To use either `flink-s3-fs-hadoop` or `flink-s3-fs-presto`, copy the respective JAR file from the `opt` directory to the `lib` directory of your Flink distribution before starting Flink, e.g.
+In this case, it is advised to explicitly use *s3a://* as a scheme for the sink (Hadoop) and *s3p://* for checkpointing (Presto).
+
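+A sketch of this split, with placeholder bucket names and an illustrative job:
+
+{% highlight java %}
+import org.apache.flink.api.common.serialization.SimpleStringEncoder;
+import org.apache.flink.core.fs.Path;
+import org.apache.flink.runtime.state.filesystem.FsStateBackend;
+import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
+import org.apache.flink.streaming.api.functions.sink.filesystem.StreamingFileSink;
+
+StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
+
+// Checkpoints go through flink-s3-fs-presto (s3p://).
+env.setStateBackend(new FsStateBackend("s3p://<your-bucket>/checkpoints"));
+
+// The sink goes through flink-s3-fs-hadoop (s3a://), which the StreamingFileSink requires.
+StreamingFileSink<String> sink = StreamingFileSink
+    .forRowFormat(new Path("s3a://<your-bucket>/output"), new SimpleStringEncoder<String>("UTF-8"))
+    .build();
+
+env.fromElements("record-1", "record-2").addSink(sink);
+{% endhighlight %}
+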
+To use `flink-s3-fs-hadoop` or `flink-s3-fs-presto`, copy the respective JAR file from the `opt` directory to a directory under the `plugins` directory of your Flink distribution before starting Flink, e.g.
 
 {% highlight bash %}
-cp ./opt/flink-s3-fs-presto-{{ site.version }}.jar ./lib/
+mkdir ./plugins/s3-fs-presto
+cp ./opt/flink-s3-fs-presto-{{ site.version }}.jar ./plugins/s3-fs-presto/
 {% endhighlight %}
 
 #### Configure Access Credentials
@@ -102,7 +103,7 @@ s3.secret-key: your-secret-key
 ## Configure Non-S3 Endpoint
 
 The S3 Filesystems also support using S3 compliant object stores such as [IBM's Cloud Object Storage](https://www.ibm.com/cloud/object-storage) and [Minio](https://min.io/).
-To do so, configure your endpoint in `flink-conf.yaml`. 
+To do so, configure your endpoint in `flink-conf.yaml`.
 
 {% highlight yaml %}
 s3.endpoint: your-endpoint-hostname
@@ -133,4 +134,4 @@ The `s3.entropy.key` defines the string in paths that is replaced by the random
 If a file system operation does not pass the *"inject entropy"* write option, the entropy key substring is simply removed.
 The `s3.entropy.length` defines the number of random alphanumeric characters used for entropy.
 
-{% top %}
\ No newline at end of file
+{% top %}