Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/12/09 14:34:17 UTC

[GitHub] [flink] XComp commented on a change in pull request #14346: [FLINK-20354] Rework standalone docs pages

XComp commented on a change in pull request #14346:
URL: https://github.com/apache/flink/pull/14346#discussion_r539307215



##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -204,191 +198,25 @@ You can provide the following additional command line arguments to the cluster e
 
 If the main function of the user job main class accepts arguments, you can also pass them at the end of the `docker run` command.
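 
 For example, a sketch of passing such arguments in application mode (the job class and its arguments are assumptions):
 
 ```sh
 docker run flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} \
     standalone-job --job-classname com.example.MyJob --myJobArg someValue
 ```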
 
-## Customize Flink image
-
-When you run the Flink containers, you may need to customize them.
-The following sections describe some common customizations.
-
-### Configure options
-
-When you run the Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
-
-```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: host
-taskmanager.numberOfTaskSlots: 3
-blob.server.port: 6124
-"
-docker run --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-```
-
-The [`jobmanager.rpc.address`]({% link deployment/config.md %}#jobmanager-rpc-address) option must be configured; the others are optional.
-
-The environment variable `FLINK_PROPERTIES` should contain a list of Flink cluster configuration options separated by newlines,
-the same way as in the `flink-conf.yaml`. `FLINK_PROPERTIES` takes precedence over configurations in `flink-conf.yaml`.
-
-### Provide custom configuration
-
-The configuration files (`flink-conf.yaml`, logging, hosts, etc.) are located in the `/opt/flink/conf` directory in the Flink image.
-To provide custom configuration files, you can
-
-* **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
-
-    ```sh
-    docker run \
-        --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-    ```
-
-* or add them to your **custom Flink image**, build and run it:
-
-    *Dockerfile*:
-
-    ```dockerfile
-    FROM flink
-    ADD /host/path/to/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /opt/flink/conf/log4j.properties
-    ```
-
-<span class="label label-warning">Warning!</span> The mounted volume must contain all necessary configuration files.
-The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
-
-### Using plugins
-
-As described on the [plugins]({% link deployment/filesystems/plugins.md %}) documentation page, plugins must be
-copied to the correct location in the Flink installation inside the Docker container in order to work.
-
-If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
-The `ENABLE_BUILT_IN_PLUGINS` variable should contain a list of plugin jar file names separated by `;`. A valid plugin name is, for example, `flink-s3-fs-hadoop-{{site.version}}.jar`.
-
-```sh
-    docker run \
-        --env ENABLE_BUILT_IN_PLUGINS="flink-plugin1.jar;flink-plugin2.jar" \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-```
-
-There are also more [advanced ways](#advanced-customization) of customizing the Flink image.
-
-### Switch memory allocator
-
-Flink introduced `jemalloc` as the default memory allocator to resolve a memory fragmentation problem (please refer to [FLINK-19125](https://issues.apache.org/jira/browse/FLINK-19125)).
-
-You can switch back to `glibc` as the memory allocator to restore the old behavior, or if you observe any unexpected memory consumption or other problems
-(please report such issues via JIRA or the mailing list), by passing the `disable-jemalloc` parameter:
-
-```sh
-    docker run flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager> disable-jemalloc
-```
-
-### Advanced customization
-
-There are several ways in which you can further customize the Flink image:
-
-* install custom software (e.g. python)
-* enable (symlink) optional libraries or plugins from `/opt/flink/opt` into `/opt/flink/lib` or `/opt/flink/plugins`
-* add other libraries to `/opt/flink/lib` (e.g. Hadoop)
-* add other plugins to `/opt/flink/plugins`
-
-See also: [How to provide dependencies in the classpath]({% link index.md %}#how-to-provide-dependencies-in-the-classpath).
-
-You can customize the Flink image in several ways:
-
-* **override the container entry point** with a custom script where you can run any bootstrap actions.
-At the end you can call the standard `/docker-entrypoint.sh` script of the Flink image with the same arguments
-as described in [how to run the Flink image](#how-to-run-flink-image).
-
-  The following example creates a custom entry point script which enables more libraries and plugins.
-  The custom script, custom library and plugin are provided from a mounted volume.
-  Then it runs the standard entry point script of the Flink image:
-
-    ```sh
-    # create custom_lib.jar
-    # create custom_plugin.jar
-
-    echo "
-    ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/.  # enable an optional library
-    ln -fs /mnt/custom_lib.jar /opt/flink/lib/.  # enable a custom library
-
-    mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
-    ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.  # enable an optional plugin
-
-    mkdir -p /opt/flink/plugins/custom_plugin
-    ln -fs /mnt/custom_plugin.jar /opt/flink/plugins/custom_plugin/.  # enable a custom plugin
-
-    /docker-entrypoint.sh <jobmanager|standalone-job|taskmanager>
-    " > custom_entry_point_script.sh
-
-    chmod 755 custom_entry_point_script.sh
-
-    docker run \
-        --mount type=bind,src=$(pwd),target=/mnt \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} /mnt/custom_entry_point_script.sh
-    ```
-
-* **extend the Flink image** by writing a custom `Dockerfile` and building a custom image:
 
-    *Dockerfile*:
+### Session Mode on Docker
 
-    ```dockerfile
-    FROM flink
+Local deployment in session mode has already been described in the [introduction](#starting-a-session-cluster-on-docker) above.
 
-    RUN set -ex; apt-get update; apt-get -y install python
 
-    ADD /host/path/to/flink-conf.yaml /container/local/path/to/custom/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /container/local/path/to/custom/conf/log4j.properties
-
-    RUN ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/.
-
-    RUN mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
-    RUN ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.
-
-    ENV VAR_NAME value
-    ```
-
-    **Commands for building**:
-
-    ```sh
-    docker build -t custom_flink_image .
-    # optional push to your docker image registry if you have it,
-    # e.g. to distribute the custom image to your cluster
-    docker push custom_flink_image
-    ```
-  
-### Enabling Python
-
-To build a custom image with Python and PyFlink installed, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
-
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
-
-Build the image and tag it as **pyflink:latest**:
-
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
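-
-To verify the Python setup inside the image, a quick sanity check could look like this (a minimal sketch):
-
-{% highlight bash %}
-docker run --rm pyflink:latest python -c "import pyflink"
-{% endhighlight %}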
-
-{% top %}
-
-## Flink with Docker Compose
+### Flink with Docker Compose

Review comment:
      I would suggest moving the Docker Compose and Docker Swarm sections into the reference section, considering that the page actually focuses on Docker.

##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -112,10 +72,44 @@ docker run \
     flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} taskmanager
 ```
 
+The web interface is now available at [localhost:8081](http://localhost:8081).
+
+
+You can now submit a job like this (assuming you have a local distribution of Flink available):
+
+```sh
+./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
+```
+
+To shut down the cluster, either terminate the JobManager and TaskManager processes (e.g. with CTRL-C), or use `docker ps` to identify the containers and `docker stop` to terminate them.
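+
+For example, a minimal sketch (the container IDs are placeholders you would read off `docker ps`):
+
+```sh
+# list the running Flink containers
+docker ps --filter ancestor=flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
+# stop them by container ID
+docker stop <jobmanager-container-id> <taskmanager-container-id>
+```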
+
+## Deployment Modes Supported on Docker

Review comment:
      I also added unsupported deployment modes on the Mesos page without noticing that the section's name is "Deployment Modes Supported on ...". We went along with it in k8s because we thought it would be more explicit to have a section for each of the three deployment modes and to state explicitly when a specific mode is not supported. Considering that, we have two options now:
  1. Do the same here and rename the headline from "Deployment Modes Supported on Docker" to "Deployment Modes". This would mean that we still have to fix the headlines on the Mesos and k8s pages accordingly.
  2. We stick to the approach you used on this page. That would mean that we would have to remove the respective sections from the k8s and Mesos docs.
  
  Which one do you prefer?

##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -204,191 +198,25 @@ You can provide the following additional command line arguments to the cluster e
 
 If the main function of the user job main class accepts arguments, you can also pass them at the end of the `docker run` command.
 
-## Customize Flink image
-
-When you run the Flink containers, you may need to customize them.
-The following sections describe some common customizations.
-
-### Configure options
-
-When you run the Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
-
-```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: host
-taskmanager.numberOfTaskSlots: 3
-blob.server.port: 6124
-"
-docker run --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-```
-
-The [`jobmanager.rpc.address`]({% link deployment/config.md %}#jobmanager-rpc-address) option must be configured; the others are optional.
-
-The environment variable `FLINK_PROPERTIES` should contain a list of Flink cluster configuration options separated by newlines,
-the same way as in the `flink-conf.yaml`. `FLINK_PROPERTIES` takes precedence over configurations in `flink-conf.yaml`.
-
-### Provide custom configuration
-
-The configuration files (`flink-conf.yaml`, logging, hosts, etc.) are located in the `/opt/flink/conf` directory in the Flink image.
-To provide custom configuration files, you can
-
-* **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
-
-    ```sh
-    docker run \
-        --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-    ```
-
-* or add them to your **custom Flink image**, build and run it:
-
-    *Dockerfile*:
-
-    ```dockerfile
-    FROM flink
-    ADD /host/path/to/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /opt/flink/conf/log4j.properties
-    ```
-
-<span class="label label-warning">Warning!</span> The mounted volume must contain all necessary configuration files.
-The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
-
-### Using plugins
-
-As described on the [plugins]({% link deployment/filesystems/plugins.md %}) documentation page, plugins must be
-copied to the correct location in the Flink installation inside the Docker container in order to work.
-
-If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
-The `ENABLE_BUILT_IN_PLUGINS` variable should contain a list of plugin jar file names separated by `;`. A valid plugin name is, for example, `flink-s3-fs-hadoop-{{site.version}}.jar`.
-
-```sh
-    docker run \
-        --env ENABLE_BUILT_IN_PLUGINS="flink-plugin1.jar;flink-plugin2.jar" \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
-```
-
-There are also more [advanced ways](#advanced-customization) of customizing the Flink image.
-
-### Switch memory allocator
-
-Flink introduced `jemalloc` as the default memory allocator to resolve a memory fragmentation problem (please refer to [FLINK-19125](https://issues.apache.org/jira/browse/FLINK-19125)).
-
-You can switch back to `glibc` as the memory allocator to restore the old behavior, or if you observe any unexpected memory consumption or other problems
-(please report such issues via JIRA or the mailing list), by passing the `disable-jemalloc` parameter:
-
-```sh
-    docker run flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager> disable-jemalloc
-```
-
-### Advanced customization
-
-There are several ways in which you can further customize the Flink image:
-
-* install custom software (e.g. python)
-* enable (symlink) optional libraries or plugins from `/opt/flink/opt` into `/opt/flink/lib` or `/opt/flink/plugins`
-* add other libraries to `/opt/flink/lib` (e.g. Hadoop)
-* add other plugins to `/opt/flink/plugins`
-
-See also: [How to provide dependencies in the classpath]({% link index.md %}#how-to-provide-dependencies-in-the-classpath).
-
-You can customize the Flink image in several ways:
-
-* **override the container entry point** with a custom script where you can run any bootstrap actions.
-At the end you can call the standard `/docker-entrypoint.sh` script of the Flink image with the same arguments
-as described in [how to run the Flink image](#how-to-run-flink-image).
-
-  The following example creates a custom entry point script which enables more libraries and plugins.
-  The custom script, custom library and plugin are provided from a mounted volume.
-  Then it runs the standard entry point script of the Flink image:
-
-    ```sh
-    # create custom_lib.jar
-    # create custom_plugin.jar
-
-    echo "
-    ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/.  # enable an optional library
-    ln -fs /mnt/custom_lib.jar /opt/flink/lib/.  # enable a custom library
-
-    mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
-    ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.  # enable an optional plugin
-
-    mkdir -p /opt/flink/plugins/custom_plugin
-    ln -fs /mnt/custom_plugin.jar /opt/flink/plugins/custom_plugin/.  # enable a custom plugin
-
-    /docker-entrypoint.sh <jobmanager|standalone-job|taskmanager>
-    " > custom_entry_point_script.sh
-
-    chmod 755 custom_entry_point_script.sh
-
-    docker run \
-        --mount type=bind,src=$(pwd),target=/mnt \
-        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} /mnt/custom_entry_point_script.sh
-    ```
-
-* **extend the Flink image** by writing a custom `Dockerfile` and building a custom image:
 
-    *Dockerfile*:
+### Session Mode on Docker
 
-    ```dockerfile
-    FROM flink
+Local deployment in session mode has already been described in the [introduction](#starting-a-session-cluster-on-docker) above.
 
-    RUN set -ex; apt-get update; apt-get -y install python
 
-    ADD /host/path/to/flink-conf.yaml /container/local/path/to/custom/conf/flink-conf.yaml
-    ADD /host/path/to/log4j.properties /container/local/path/to/custom/conf/log4j.properties
-
-    RUN ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/.
-
-    RUN mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
-    RUN ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.
-
-    ENV VAR_NAME value
-    ```
-
-    **Commands for building**:
-
-    ```sh
-    docker build -t custom_flink_image .
-    # optional push to your docker image registry if you have it,
-    # e.g. to distribute the custom image to your cluster
-    docker push custom_flink_image
-    ```
-  
-### Enabling Python
-
-To build a custom image with Python and PyFlink installed, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
-
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
-
-Build the image and tag it as **pyflink:latest**:
-
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
-
-{% top %}
-
-## Flink with Docker Compose
+### Flink with Docker Compose
 
 [Docker Compose](https://docs.docker.com/compose/) is a way to run a group of Docker containers locally.
-The next chapters show examples of configuration files to run Flink.
+The next sections show examples of configuration files to run Flink.
 
-### Usage
+#### Usage
 
 * Create the `yaml` files with the container configuration, check examples for:
-    * [Session cluster](#session-cluster-with-docker-compose)
-    * [Job cluster](#job-cluster-with-docker-compose)
+  * [Session cluster](#session-cluster-with-docker-compose)
+  * [Application cluster](#application-cluster-with-docker-compose)
 
-    See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization)
-    for usage in the configuration files.
+  See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization)
+  for usage in the configuration files.
 
 * Launch a cluster in the foreground

Review comment:
      I'm not sure whether we have to have this docker-compose tutorial here. I'd say that the `docker-compose` actions can be found in the original `docker-compose` docs. Maybe we can link to them instead.

##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -442,7 +270,7 @@ The next chapters show examples of configuration files to run Flink.
     docker exec -t -i "${JM_CONTAINER}" flink run -d -c ${JOB_CLASS_NAME} /job.jar
     ```
 
-### Session Cluster with Docker Compose
+#### Session Cluster with Docker Compose

Review comment:
      Can't we just have a code block for each of the yaml configurations? I'm not sure whether the sections are necessary here.

##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -590,11 +416,203 @@ docker service create \
     taskmanager
 ```
 
-The *job artifacts* must be available in the *JobManager* container, as outlined [here](#start-a-job-cluster).
+The *job artifacts* must be available in the *JobManager* container, as outlined [here](#application-mode-on-docker).
 See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments) to pass them
 to the `flink-jobmanager` container.
 
 The example assumes that you run the swarm locally and expects the *job artifacts* to be in `/host/path/to/job/artifacts`.
 It also mounts the host path with the artifacts as a volume to the container's path `/opt/flink/usrlib`.
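 
 A minimal sketch of such a service definition (the paths and the job class are assumptions):
 
 ```sh
 docker service create \
     --name flink-jobmanager \
     --env FLINK_PROPERTIES="jobmanager.rpc.address: flink-jobmanager" \
     --mount type=bind,src=/host/path/to/job/artifacts,target=/opt/flink/usrlib \
     flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} \
     standalone-job --job-classname com.example.MyJob
 ```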
 
 {% top %}
+
+## Flink on Docker Reference
+
+### Image tags
+
+The [Flink Docker repository](https://hub.docker.com/_/flink/) is hosted on Docker Hub and serves images of Flink version 1.2.1 and later.
+The source for these images can be found in the [Apache flink-docker](https://github.com/apache/flink-docker) repository.
+
+Images for each supported combination of Flink and Scala versions are available, and
+[tag aliases](https://hub.docker.com/_/flink?tab=tags) are provided for convenience.
+
+For example, you can use the following aliases:
+
+* `flink:latest` → `flink:<latest-flink>-scala_<latest-scala>`
+* `flink:1.11` → `flink:1.11.<latest-flink-1.11>-scala_2.11`
+
+<span class="label label-info">Note</span> It is recommended to always use an explicit version tag of the Docker image that specifies both the required Flink and Scala
+versions (for example `flink:1.11-scala_2.12`).
+This avoids class conflicts that can occur if the Flink and/or Scala versions used in the application differ
+from the versions provided by the Docker image.
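+
+For example, a sketch of pulling an explicitly versioned image (the exact tag is an assumption; check the available tags on Docker Hub):
+
+```sh
+docker pull flink:1.11.2-scala_2.12
+```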
+
+<span class="label label-info">Note</span> Prior to Flink 1.5, Hadoop dependencies were always bundled with Flink.
+You can see that certain tags include the Hadoop version (e.g. `-hadoop28`).
+Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
+that do not include a bundled Hadoop distribution.
+
+
+### Passing configuration via environment variables
+
+When you run the Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
+
+```sh
+FLINK_PROPERTIES="jobmanager.rpc.address: host
+taskmanager.numberOfTaskSlots: 3
+blob.server.port: 6124
+"
+docker run --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
+```
+
+The [`jobmanager.rpc.address`]({% link deployment/config.md %}#jobmanager-rpc-address) option must be configured; the others are optional.
+
+The environment variable `FLINK_PROPERTIES` should contain a list of Flink cluster configuration options separated by newlines,
+the same way as in the `flink-conf.yaml`. `FLINK_PROPERTIES` takes precedence over configurations in `flink-conf.yaml`.
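+
+As a sketch, the same properties can also be passed inline using Bash's ANSI-C quoting (the values shown are illustrative):
+
+```sh
+docker run \
+    --env FLINK_PROPERTIES=$'jobmanager.rpc.address: host\ntaskmanager.numberOfTaskSlots: 3' \
+    flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} taskmanager
+```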
+
+### Provide custom configuration
+
+The configuration files (`flink-conf.yaml`, logging, hosts, etc.) are located in the `/opt/flink/conf` directory in the Flink image.
+To provide custom configuration files, you can
+
+* **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
+
+    ```sh
+    docker run \
+        --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
+        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
+    ```
+
+* or add them to your **custom Flink image**, build and run it:
+
+  *Dockerfile*:
+
+    ```dockerfile
+    FROM flink
+    ADD /host/path/to/flink-conf.yaml /opt/flink/conf/flink-conf.yaml
+    ADD /host/path/to/log4j.properties /opt/flink/conf/log4j.properties
+    ```
+
+<span class="label label-warning">Warning!</span> The mounted volume must contain all necessary configuration files.
+The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
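+
+For example, a minimal sketch of satisfying the write-permission requirement before mounting a host directory (the path is an assumption):
+
+```sh
+# crude but simple: make the file world-writable so the container user
+# (the official image does not run as root) can modify it
+chmod 666 /host/path/to/custom/conf/flink-conf.yaml
+```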
+
+### Using filesystem plugins
+
+As described on the [plugins]({% link deployment/filesystems/plugins.md %}) documentation page, plugins must be
+copied to the correct location in the Flink installation inside the Docker container in order to work.
+
+If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
+The `ENABLE_BUILT_IN_PLUGINS` variable should contain a list of plugin jar file names separated by `;`. A valid plugin name is, for example, `flink-s3-fs-hadoop-{{site.version}}.jar`.
+
+```sh
+    docker run \
+        --env ENABLE_BUILT_IN_PLUGINS="flink-plugin1.jar;flink-plugin2.jar" \
+        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
+```
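+
+For example, to enable the bundled S3 plugins (the jar names assume the image ships Flink {{site.version}}):
+
+```sh
+    docker run \
+        --env ENABLE_BUILT_IN_PLUGINS="flink-s3-fs-hadoop-{{site.version}}.jar;flink-s3-fs-presto-{{site.version}}.jar" \
+        flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} jobmanager
+```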
+
+There are also more [advanced ways](#advanced-customization) of customizing the Flink image.
+
+### Enabling Python

Review comment:
       Should we point to the Python CLI that is going to get introduced by @shuiqiangchen in [this PR](https://github.com/XComp/flink/pull/1)?

##########
File path: docs/deployment/resource-providers/standalone/kubernetes.md
##########
@@ -23,7 +23,7 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a *Flink Job* and *Session cluster* on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy a *Flink application cluster* and *Session cluster* on [Kubernetes](https://kubernetes.io).

Review comment:
      Couldn't we refactor the documentation here as well, sticking to the sections we used for all the other resource provider pages? I have the feeling that it would work here too 🤔 

##########
File path: docs/deployment/resource-providers/standalone/index.md
##########
@@ -24,153 +24,203 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page provides instructions on how to run Flink in a *fully distributed fashion* on a *static* (but possibly heterogeneous) cluster.

Review comment:
      I don't like the "Overview" entry in the menu panel on the left, as we're not giving an overview but introducing the basic (or native) deployment of a Standalone cluster. What is your opinion on the naming?

##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -442,7 +270,7 @@ The next chapters show examples of configuration files to run Flink.
     docker exec -t -i "${JM_CONTAINER}" flink run -d -c ${JOB_CLASS_NAME} /job.jar
     ```
 
-### Session Cluster with Docker Compose
+#### Session Cluster with Docker Compose

Review comment:
      FYI: What about using collapsible blocks to hide long configuration files?
   ```
   <p>
     <a class="btn" data-toggle="collapse" href="#session-cluster-yml" role="button" aria-expanded="false" aria-controls="session-cluster-yml">
       docker-compose.yml
     </a>
   </p>
   <div class="collapse" id="session-cluster-yml">
     {% highlight yaml %}
   version: "2.2"
   services:
     jobmanager:
       image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
       ports:
         - "8081:8081"
       command: jobmanager
       environment:
         - |
           FLINK_PROPERTIES=
           jobmanager.rpc.address: jobmanager
   
     taskmanager:
       image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
       depends_on:
         - jobmanager
       command: taskmanager
       scale: 1
       environment:
         - |
           FLINK_PROPERTIES=
           jobmanager.rpc.address: jobmanager
           taskmanager.numberOfTaskSlots: 2
     {% endhighlight %}
   </div>
   ```
   I got inspired by [this blog post](https://www.iditect.com/how-to/52576626.html).




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org