Posted to commits@flink.apache.org by rm...@apache.org on 2020/12/15 06:29:27 UTC

[flink] 03/03: [FLINK-20354] Rework standalone docs page

This is an automated email from the ASF dual-hosted git repository.

rmetzger pushed a commit to branch release-1.12
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 6351fbb4bb731238a56d8823549bd48bf061fac3
Author: Robert Metzger <rm...@apache.org>
AuthorDate: Thu Dec 3 09:12:13 2020 +0100

    [FLINK-20354] Rework standalone docs page
    
    This closes #14346
---
 docs/deployment/cli.md                             |   2 +-
 docs/deployment/cli.zh.md                          |   2 +-
 docs/deployment/repls/python_shell.md              |   2 +-
 docs/deployment/repls/python_shell.zh.md           |   2 +-
 .../resource-providers/native_kubernetes.md        |   2 +-
 .../resource-providers/native_kubernetes.zh.md     |   2 +-
 .../resource-providers/standalone/docker.md        | 383 +++++++++++----------
 .../resource-providers/standalone/docker.zh.md     | 381 ++++++++++----------
 .../resource-providers/standalone/index.md         | 268 ++++++++------
 .../resource-providers/standalone/kubernetes.md    | 220 +++++++-----
 .../resource-providers/standalone/kubernetes.zh.md | 219 +++++++-----
 .../resource-providers/standalone/local.md         | 169 ---------
 .../resource-providers/standalone/local.zh.md      | 178 ----------
 docs/redirects/local_setup_tutorial.md             |   2 +-
 docs/redirects/local_setup_tutorial.zh.md          |   2 +-
 docs/redirects/setup_quickstart.md                 |   2 +-
 docs/redirects/setup_quickstart.zh.md              |   2 +-
 docs/redirects/tutorials_flink_on_windows.md       |   2 +-
 docs/redirects/tutorials_flink_on_windows.zh.md    |   2 +-
 docs/redirects/tutorials_local_setup.md            |   2 +-
 docs/redirects/tutorials_local_setup.zh.md         |   2 +-
 docs/redirects/windows.md                          |   2 +-
 docs/redirects/windows.zh.md                       |   2 +-
 docs/redirects/windows_local_setup.md              |   2 +-
 docs/redirects/windows_local_setup.zh.md           |   2 +-
 25 files changed, 831 insertions(+), 1023 deletions(-)

diff --git a/docs/deployment/cli.md b/docs/deployment/cli.md
index 2d61574..adcbdd1 100644
--- a/docs/deployment/cli.md
+++ b/docs/deployment/cli.md
@@ -36,7 +36,7 @@ It connects to the running JobManager specified in `conf/flink-conf.yaml`.
 A prerequisite for the commands listed in this section to work is to have a running Flink deployment 
 like [Kubernetes]({% link deployment/resource-providers/native_kubernetes.md %}), 
 [YARN]({% link deployment/resource-providers/yarn.md %}) or any other option available. Feel free to 
-[start a Flink cluster locally]({% link deployment/resource-providers/standalone/local.md %}#start-a-local-flink-cluster) 
+[start a Flink cluster locally]({% link deployment/resource-providers/standalone/index.md %}#starting-a-standalone-cluster-session-mode) 
 to try the commands on your own machine.
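
For example, a minimal sketch using the scripts that ship with the Flink distribution (run from the distribution root):

```sh
# Start a local standalone session cluster, run an example job
# through the CLI, then shut the cluster down again.
$ ./bin/start-cluster.sh
$ ./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
$ ./bin/stop-cluster.sh
```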
  
 ### Submitting a Job
diff --git a/docs/deployment/cli.zh.md b/docs/deployment/cli.zh.md
index 3b67466..45dd16d 100644
--- a/docs/deployment/cli.zh.md
+++ b/docs/deployment/cli.zh.md
@@ -35,7 +35,7 @@ It connects to the running JobManager specified in `conf/flink-conf.yaml`.
 A prerequisite for the commands listed in this section to work is to have a running Flink deployment 
 like [Kubernetes]({% link deployment/resource-providers/native_kubernetes.zh.md %}), 
 [YARN]({% link deployment/resource-providers/yarn.zh.md %}) or any other option available. Feel free to 
-[start a Flink cluster locally]({% link deployment/resource-providers/standalone/local.zh.md %}#start-a-local-flink-cluster) 
+[start a Flink cluster locally]({% link deployment/resource-providers/standalone/index.zh.md %}#starting-a-standalone-cluster-session-mode) 
 to try the commands on your own machine.
  
 ### Submitting a Job
diff --git a/docs/deployment/repls/python_shell.md b/docs/deployment/repls/python_shell.md
index ddc7bc5..6084170 100644
--- a/docs/deployment/repls/python_shell.md
+++ b/docs/deployment/repls/python_shell.md
@@ -24,7 +24,7 @@ under the License.
 
 Flink comes with an integrated interactive Python Shell.
 It can be used in a local setup as well as in a cluster setup.
-See the [local setup page]({% link deployment/resource-providers/standalone/local.md %}) for more information about how to setup a local Flink.
+See the [standalone resource provider page]({% link deployment/resource-providers/standalone/index.md %}) for more information about how to set up Flink locally.
 You can also [build a local setup from source]({% link flinkDev/building.md %}).
 
 <span class="label label-info">Note</span> The Python Shell will run the command “python”. Please refer to the Python Table API [installation guide]({% link dev/python/installation.md %}) on how to set up the Python execution environments.
diff --git a/docs/deployment/repls/python_shell.zh.md b/docs/deployment/repls/python_shell.zh.md
index 72b79b5..5d2d6c6 100644
--- a/docs/deployment/repls/python_shell.zh.md
+++ b/docs/deployment/repls/python_shell.zh.md
@@ -24,7 +24,7 @@ under the License.
 
 Flink comes with an integrated interactive Python Shell.
 It can be used in a local setup as well as in a cluster setup.
-To install Flink locally, see the [local installation]({% link deployment/resource-providers/standalone/local.zh.md %}) page.
+To install Flink locally, see the [local installation]({% link deployment/resource-providers/standalone/index.zh.md %}) page.
 You can also install Flink from source; see the [Build Flink from Source]({% link flinkDev/building.zh.md %}) page.
 
 <span class="label label-info">Note</span> The Python Shell runs the command “python”. For the requirements on the Python execution environment, please refer to the Python Table API [installation guide]({% link dev/python/installation.zh.md %}).
diff --git a/docs/deployment/resource-providers/native_kubernetes.md b/docs/deployment/resource-providers/native_kubernetes.md
index 008738b..32c8d6f 100644
--- a/docs/deployment/resource-providers/native_kubernetes.md
+++ b/docs/deployment/resource-providers/native_kubernetes.md
@@ -74,7 +74,7 @@ Congratulations! You have successfully run a Flink application by deploying Flin
 
 {% top %}
 
-## Deployment Modes Supported by Flink on Kubernetes
+## Deployment Modes
 
 For production use, we recommend deploying Flink Applications in [Application Mode]({% link deployment/index.md %}#application-mode), as this mode provides better isolation for the applications.
 
diff --git a/docs/deployment/resource-providers/native_kubernetes.zh.md b/docs/deployment/resource-providers/native_kubernetes.zh.md
index cdb0741..67f5cca 100644
--- a/docs/deployment/resource-providers/native_kubernetes.zh.md
+++ b/docs/deployment/resource-providers/native_kubernetes.zh.md
@@ -74,7 +74,7 @@ Congratulations! You have successfully run a Flink application by deploying Flin
 
 {% top %}
 
-## Deployment Modes Supported by Flink on Kubernetes
+## Deployment Modes
 
 For production use, we recommend deploying Flink Applications in [Application Mode]({% link deployment/index.zh.md %}#application-mode), as this mode provides better isolation for the applications.
 
diff --git a/docs/deployment/resource-providers/standalone/docker.md b/docs/deployment/resource-providers/standalone/docker.md
index 5c17c4a..21613e2 100644
--- a/docs/deployment/resource-providers/standalone/docker.md
+++ b/docs/deployment/resource-providers/standalone/docker.md
@@ -23,88 +23,48 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Docker](https://www.docker.com) is a popular container runtime.
-There are Docker images for Apache Flink available [on Docker Hub](https://hub.docker.com/_/flink).
-You can use the docker images to deploy a *Session* or *Job cluster* in a containerized environment, e.g.,
-[standalone Kubernetes]({% link deployment/resource-providers/standalone/kubernetes.md %}) or [native Kubernetes]({% link deployment/resource-providers/native_kubernetes.md %}).
-
 * This will be replaced by the TOC
 {:toc}
 
-## Docker Hub Flink images
-
-The [Flink Docker repository](https://hub.docker.com/_/flink/) is hosted on
-Docker Hub and serves images of Flink version 1.2.1 and later.
-The source for these images can be found in the [Apache flink-docker](https://github.com/apache/flink-docker) repository.
-
-### Image tags
-
-Images for each supported combination of Flink and Scala versions are available, and
-[tag aliases](https://hub.docker.com/_/flink?tab=tags) are provided for convenience.
 
-For example, you can use the following aliases:
+## Getting Started
 
-* `flink:latest` → `flink:<latest-flink>-scala_<latest-scala>`
-* `flink:1.11` → `flink:1.11.<latest-flink-1.11>-scala_2.11`
+This *Getting Started* section guides you through the local setup of a Flink cluster (on one machine, but in separate containers) using Docker.
 
-<span class="label label-info">Note</span> It is recommended to always use an explicit version tag of the docker image that specifies both the needed Flink and Scala
-versions (for example `flink:1.11-scala_2.12`).
-This will avoid some class conflicts that can occur if the Flink and/or Scala versions used in the application are different
-from the versions provided by the docker image.
+### Introduction
 
-<span class="label label-info">Note</span> Prior to Flink 1.5 version, Hadoop dependencies were always bundled with Flink.
-You can see that certain tags include the version of Hadoop, e.g. (e.g. `-hadoop28`).
-Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
-that do not include a bundled Hadoop distribution.
-
-## How to run a Flink image
-
-The Flink image contains a regular Flink distribution with its default configuration and a standard entry point script.
-You can run its entry point in the following modes:
-* [JobManager]({% link concepts/glossary.md %}#flink-jobmanager) for [a Session cluster](#start-a-session-cluster)
-* [JobManager]({% link concepts/glossary.md %}#flink-jobmanager) for [a Job cluster](#start-a-job-cluster)
-* [TaskManager]({% link concepts/glossary.md %}#flink-taskmanager) for any cluster
-
-This allows you to deploy a standalone cluster (Session or Job) in any containerised environment, for example:
-* manually in a local Docker setup,
-* [in a Kubernetes cluster]({% link deployment/resource-providers/standalone/kubernetes.md %}),
-* [with Docker Compose](#flink-with-docker-compose),
-* [with Docker swarm](#flink-with-docker-swarm).
-
-<span class="label label-info">Note</span> [The native Kubernetes]({% link deployment/resource-providers/native_kubernetes.md %}) also runs the same image by default
-and deploys *TaskManagers* on demand so that you do not have to do it manually.
-
-The next chapters describe how to start a single Flink Docker container for various purposes.
+[Docker](https://www.docker.com) is a popular container runtime.
+There are Docker images for Apache Flink available [on Docker Hub](https://hub.docker.com/_/flink).
+You can use the Docker images to deploy a *Session* or *Application cluster* on Docker. This page focuses on the setup of Flink on Docker, Docker Swarm and Docker Compose.
 
-Once you've started Flink on Docker, you can access the Flink Webfrontend on [localhost:8081](http://localhost:8081/#/overview) or submit jobs like this `./bin/flink run ./examples/streaming/TopSpeedWindowing.jar`.
+Deployment into managed containerized environments, such as [standalone Kubernetes]({% link deployment/resource-providers/standalone/kubernetes.md %}) or [native Kubernetes]({% link deployment/resource-providers/native_kubernetes.md %}), is described on separate pages.
 
-We recommend using [Docker Compose]({% link deployment/resource-providers/standalone/docker.md %}#session-cluster-with-docker-compose) or [Docker Swarm]({% link deployment/resource-providers/standalone/docker.md %}#session-cluster-with-docker-swarm) for deploying Flink as a Session Cluster to ease system configuration.
 
-### Start a Session Cluster
+### Starting a Session Cluster on Docker
 
-A *Flink Session cluster* can be used to run multiple jobs. Each job needs to be submitted to the cluster after it has been deployed.
-To deploy a *Flink Session cluster* with Docker, you need to start a *JobManager* container. To enable communication between the containers, we first set a required Flink configuration property and create a network:
+A *Flink Session cluster* can be used to run multiple jobs. Each job needs to be submitted to the cluster after the cluster has been deployed.
+To deploy a *Flink Session cluster* with Docker, you need to start a JobManager container. To enable communication between the containers, we first set a required Flink configuration property and create a network:
 ```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
-docker network create flink-network
+$ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
+$ docker network create flink-network
 ```
 
 Then we launch the JobManager:
 
 ```sh
-docker run \
+$ docker run \
     --rm \
     --name=jobmanager \
     --network flink-network \
-    -p 8081:8081 \
+    --publish 8081:8081 \
     --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
     flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} jobmanager
 ```
 
-and one or more *TaskManager* containers:
+and one or more TaskManager containers:
 
 ```sh
-docker run \
+$ docker run \
     --rm \
     --name=taskmanager \
     --network flink-network \
@@ -112,10 +72,44 @@ docker run \
     flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} taskmanager
 ```
 
+The web interface is now available at [localhost:8081](http://localhost:8081).
+
+
+You can now submit a job, for example (assuming you have a local distribution of Flink available):
+
+```sh
+$ ./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
+```
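
If you do not have a local distribution at hand, a sketch of submitting from inside the JobManager container instead (the `jobmanager` name comes from the `docker run` command above; the example jar ships with the image):

```sh
$ docker exec -it jobmanager ./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
```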
+
+To shut down the cluster, either terminate (e.g. with `CTRL-C`) the JobManager and TaskManager processes, or use `docker ps` to identify the containers and `docker stop` to terminate them.
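
For instance, a minimal sketch using the `--name` values assigned above (the containers were started with `--rm`, so stopping also removes them):

```sh
$ docker stop taskmanager jobmanager
$ docker network rm flink-network
```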
+
+## Deployment Modes
+
+The Flink image contains a regular Flink distribution with its default configuration and a standard entry point script.
+You can run its entry point in the following modes:
+* [JobManager]({% link concepts/glossary.md %}#flink-jobmanager) for [a Session cluster](#starting-a-session-cluster-on-docker)
+* [JobManager]({% link concepts/glossary.md %}#flink-jobmanager) for [an Application cluster](#application-mode-on-docker)
+* [TaskManager]({% link concepts/glossary.md %}#flink-taskmanager) for any cluster
+
+This allows you to deploy a standalone cluster (Session or Application Mode) in any containerized environment, for example:
+* manually in a local Docker setup,
+* [in a Kubernetes cluster]({% link deployment/resource-providers/standalone/kubernetes.md %}),
+* [with Docker Compose](#flink-with-docker-compose),
+* [with Docker swarm](#flink-with-docker-swarm).
+
+<span class="label label-info">Note</span> The [native Kubernetes deployment]({% link deployment/resource-providers/native_kubernetes.md %}) also runs the same image by default
+and deploys TaskManagers on demand so that you do not have to do it manually.
 
-### Start a Job Cluster
+The next sections describe how to start a single Flink Docker container for various purposes.
+
+Once you've started Flink on Docker, you can access the Flink web UI at [localhost:8081](http://localhost:8081/#/overview) or submit jobs like this: `./bin/flink run ./examples/streaming/TopSpeedWindowing.jar`.
 
-A *Flink Job cluster* is a dedicated cluster which runs a single job.
+We recommend using [Docker Compose](#flink-with-docker-compose) or [Docker Swarm](#flink-with-docker-swarm) for deploying Flink in Session Mode to ease system configuration.
+
+
+### Application Mode on Docker
+
+A *Flink Application cluster* is a dedicated cluster which runs a single job.
 In this case, you deploy the cluster with the job in one step; thus, no extra job submission is needed.
 
 The *job artifacts* are included in the class path of Flink's JVM process within the container and consist of:
@@ -123,20 +117,20 @@ The *job artifacts* are included into the class path of Flink's JVM process with
 * all other necessary dependencies or resources, not included into Flink.
 
 To deploy a cluster for a single job with Docker, you need to
-* make *job artifacts* available locally *in all containers* under `/opt/flink/usrlib`,
-* start a *JobManager* container in the *Job Cluster* mode
-* start the required number of *TaskManager* containers.
+* make *job artifacts* available locally in all containers under `/opt/flink/usrlib`,
+* start a JobManager container in *Application Mode*,
+* start the required number of TaskManager containers.
 
 To make the **job artifacts available** locally in the container, you can
 
 * **either mount a volume** (or multiple volumes) with the artifacts to `/opt/flink/usrlib` when you start
-the *JobManager* and *TaskManagers*:
+  the JobManager and TaskManagers:
 
     ```sh
-    FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
-    docker network create flink-network
+    $ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
+    $ docker network create flink-network
 
-    docker run \
+    $ docker run \
         --mount type=bind,src=/host/path/to/job/artifacts1,target=/opt/flink/usrlib/artifacts1 \
         --mount type=bind,src=/host/path/to/job/artifacts2,target=/opt/flink/usrlib/artifacts2 \
         --rm \
@@ -149,16 +143,15 @@ the *JobManager* and *TaskManagers*:
         [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
         [job arguments]
 
-    docker run \
+    $ docker run \
         --mount type=bind,src=/host/path/to/job/artifacts1,target=/opt/flink/usrlib/artifacts1 \
         --mount type=bind,src=/host/path/to/job/artifacts2,target=/opt/flink/usrlib/artifacts2 \
         --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
         flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} taskmanager
     ```
 
-* **or extend the Flink image** by writing a custom `Dockerfile`, build it and use it for starting the *JobManager* and *TaskManagers*:
+* **or extend the Flink image** by writing a custom `Dockerfile`, building it, and using it to start the JobManager and TaskManagers:
 
-    *Dockerfile*:
 
     ```dockerfile
     FROM flink
@@ -167,18 +160,18 @@ the *JobManager* and *TaskManagers*:
     ```
 
     ```sh
-    docker build -t flink_with_job_artifacts .
-    docker run \
+    $ docker build --tag flink_with_job_artifacts .
+    $ docker run \
         flink_with_job_artifacts standalone-job \
         --job-classname com.job.ClassName \
         [--job-id <job id>] \
         [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
         [job arguments]
 
-    docker run flink_with_job_artifacts taskmanager
+    $ docker run flink_with_job_artifacts taskmanager
     ```
 
-The `standalone-job` argument starts a *JobManager* container in the *Job Cluster* mode.
+The `standalone-job` argument starts a JobManager container in Application Mode.
 
 #### JobManager additional command line arguments
 
@@ -204,21 +197,53 @@ You can provide the following additional command line arguments to the cluster e
 
 If the main function of the job's main class accepts arguments, you can also pass them at the end of the `docker run` command.
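
For example, a sketch reusing the `flink_with_job_artifacts` image built above; `--input` and `--output` are hypothetical arguments consumed by the job's `main()` method:

```sh
# Everything after the standalone-job options is forwarded to the job's main();
# --input/--output are hypothetical job arguments, not Flink options.
$ docker run flink_with_job_artifacts standalone-job \
    --job-classname com.job.ClassName \
    --input /opt/flink/usrlib/input.txt \
    --output /tmp/output
```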
 
-## Customize Flink image
+### Per-Job Mode on Docker
+
+[Per-Job Mode]({% link deployment/index.md %}#per-job-mode) is not supported by Flink on Docker.
+
+### Session Mode on Docker
 
-When you run the Flink containers, there may be a need to customize them.
-The next chapters describe some how-tos of what you can usually customize.
+Local deployment in Session Mode has already been described in the [Getting Started](#starting-a-session-cluster-on-docker) section above.
 
-### Configure options
+
+{% top %}
+
+## Flink on Docker Reference
+
+### Image tags
+
+The [Flink Docker repository](https://hub.docker.com/_/flink/) is hosted on Docker Hub and serves images of Flink version 1.2.1 and later.
+The source for these images can be found in the [Apache flink-docker](https://github.com/apache/flink-docker) repository.
+
+Images for each supported combination of Flink and Scala versions are available, and
+[tag aliases](https://hub.docker.com/_/flink?tab=tags) are provided for convenience.
+
+For example, you can use the following aliases:
+
+* `flink:latest` → `flink:<latest-flink>-scala_<latest-scala>`
+* `flink:1.11` → `flink:1.11.<latest-flink-1.11>-scala_2.11`
+
+<span class="label label-info">Note</span> It is recommended to always use an explicit version tag of the Docker image that specifies both the needed Flink and Scala
+versions (for example `flink:1.11-scala_2.12`).
+This avoids class conflicts that can occur if the Flink and/or Scala versions used in the application differ
+from the versions provided by the Docker image.
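
For instance, a sketch of pinning both versions explicitly (substitute the versions your application is built against):

```sh
# Pin both the Flink and the Scala version rather than relying on flink:latest.
$ docker pull flink:1.11-scala_2.12
```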
+
+<span class="label label-info">Note</span> Prior to Flink 1.5, Hadoop dependencies were always bundled with Flink.
+Certain tags therefore include the Hadoop version, e.g. `-hadoop28`.
+Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
+that do not include a bundled Hadoop distribution.
+
+
+### Passing configuration via environment variables
 
 When you run the Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
 
 ```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: host
+$ FLINK_PROPERTIES="jobmanager.rpc.address: host
 taskmanager.numberOfTaskSlots: 3
 blob.server.port: 6124
 "
-docker run --env FLINK_PROPERTIES=${FLINK_PROPERTIES} flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
+$ docker run --env FLINK_PROPERTIES=${FLINK_PROPERTIES} flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
 ```
 
 The [`jobmanager.rpc.address`]({% link deployment/config.md %}#jobmanager-rpc-address) option must be configured; all other options are optional.
@@ -234,14 +259,13 @@ To provide a custom location for the Flink configuration files, you can
 * **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
 
     ```sh
-    docker run \
+    $ docker run \
         --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
         flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
     ```
 
 * or add them to your **custom Flink image**, build and run it:
 
-    *Dockerfile*:
 
     ```dockerfile
     FROM flink
@@ -252,22 +276,43 @@ To provide a custom location for the Flink configuration files, you can
 <span class="label label-warning">Warning!</span> The mounted volume must contain all necessary configuration files.
 The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
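
One blunt way to satisfy the write-permission requirement on the host before mounting (a sketch; tighten this to your own security requirements, since the user inside the container must be able to modify the file):

```sh
# World-writable is the bluntest option; prefer matching the container user's uid if you can.
$ chmod 666 /host/path/to/custom/conf/flink-conf.yaml
```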
 
-### Using plugins
+### Using filesystem plugins
 
-As described in the [plugins]({% link deployment/filesystems/plugins.md %}) documentation page: in order to use plugins they must be
+As described on the [plugins]({% link deployment/filesystems/plugins.md %}) documentation page, plugins must be
 copied to the correct location in the Flink installation in the Docker container for them to work.
 
 If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
 The `ENABLE_BUILT_IN_PLUGINS` variable should contain a list of plugin jar file names separated by `;`. A valid plugin name is, for example, `flink-s3-fs-hadoop-{{site.version}}.jar`.
 
 ```sh
-    docker run \
+    $ docker run \
        --env ENABLE_BUILT_IN_PLUGINS="flink-plugin1.jar;flink-plugin2.jar" \
         flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
 ```
 
 There are also more [advanced ways](#advanced-customization) for customizing the Flink image.
 
+### Enabling Python
+
+To build a custom image which has Python and PyFlink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink:{{site.version}}
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
+RUN ln -s /usr/bin/python3 /usr/bin/python
+
+# install Python Flink
+RUN pip3 install apache-flink=={{site.version}}
+{% endhighlight %}
+
+Build the image, tagged as **pyflink:latest**:
+
+{% highlight bash %}
+$ docker build --tag pyflink:latest .
+{% endhighlight %}
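
The resulting image can then be used in any of the modes described above, for example (a sketch):

{% highlight bash %}
$ docker run --rm pyflink:latest jobmanager
{% endhighlight %}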
+
 ### Switch memory allocator
 
 Flink introduced `jemalloc` as the default memory allocator to resolve a memory fragmentation problem (please refer to [FLINK-19125](https://issues.apache.org/jira/browse/FLINK-19125)).
@@ -276,7 +321,7 @@ You could switch back to use `glibc` as memory allocator to restore the old beha
 (and please report the issue via JIRA or the mailing list if you find any), by passing the `disable-jemalloc` parameter:
 
 ```sh
-    docker run <jobmanager|standalone-job|taskmanager> disable-jemalloc
+    $ docker run flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager> disable-jemalloc
 ```
 
 ### Advanced customization
@@ -288,13 +333,11 @@ There are several ways in which you can further customize the Flink image:
 * add other libraries to `/opt/flink/lib` (e.g. Hadoop)
 * add other plugins to `/opt/flink/plugins`
 
-See also: [How to provide dependencies in the classpath]({% link index.md %}#how-to-provide-dependencies-in-the-classpath).
-
 You can customize the Flink image in several ways:
 
 * **override the container entry point** with a custom script where you can run any bootstrap actions.
-At the end you can call the standard `/docker-entrypoint.sh` script of the Flink image with the same arguments
-as described in [how to run the Flink image](#how-to-run-flink-image).
+  At the end you can call the standard `/docker-entrypoint.sh` script of the Flink image with the same arguments
+  as described in [supported deployment modes](#deployment-modes).
 
   The following example creates a custom entry point script which enables more libraries and plugins.
   The custom script, custom library and plugin are provided from a mounted volume.
@@ -304,29 +347,32 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     # create custom_lib.jar
     # create custom_plugin.jar
 
-    echo "
-    ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/.  # enable an optional library
-    ln -fs /mnt/custom_lib.jar /opt/flink/lib/.  # enable a custom library
+    $ echo "
+    # enable an optional library
+    ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/
+    # enable a custom library
+    ln -fs /mnt/custom_lib.jar /opt/flink/lib/
 
     mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
-    ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.  # enable an optional plugin
+    # enable an optional plugin
+    ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/  
 
     mkdir -p /opt/flink/plugins/custom_plugin
-    ln -fs /mnt/custom_plugin.jar /opt/flink/plugins/custom_plugin/.  # enable a custom plugin
+    # enable a custom plugin
+    ln -fs /mnt/custom_plugin.jar /opt/flink/plugins/custom_plugin/
 
     /docker-entrypoint.sh <jobmanager|standalone-job|taskmanager>
     " > custom_entry_point_script.sh
 
-    chmod 755 custom_entry_point_script.sh
+    $ chmod 755 custom_entry_point_script.sh
 
-    docker run \
+    $ docker run \
        --mount type=bind,src=$(pwd),target=/mnt \
         flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} /mnt/custom_entry_point_script.sh
     ```
 
 * **extend the Flink image** by writing a custom `Dockerfile` and build a custom image:
 
-    *Dockerfile*:
 
     ```dockerfile
     FROM flink
@@ -344,120 +390,97 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     ENV VAR_NAME value
     ```
 
-    **Commands for building**:
+  **Commands for building**:
 
     ```sh
-    docker build -t custom_flink_image .
+    $ docker build --tag custom_flink_image .
     # optional push to your docker image registry if you have it,
     # e.g. to distribute the custom image to your cluster
-    docker push custom_flink_image
+    $ docker push custom_flink_image
     ```
-  
-### Enabling Python
 
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
 
-# install python3 and pip3
-RUN apt-get update -y && \
-apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
-
-Build the image named as **pyflink:latest**:
-
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
-
-{% top %}
-
-## Flink with Docker Compose
+### Flink with Docker Compose
 
 [Docker Compose](https://docs.docker.com/compose/) is a way to run a group of Docker containers locally.
-The next chapters show examples of configuration files to run Flink.
+The next sections show examples of configuration files to run Flink.
 
-### Usage
+#### Usage
 
 * Create the `yaml` files with the container configuration; check the examples for:
-    * [Session cluster](#session-cluster-with-docker-compose)
-    * [Job cluster](#job-cluster-with-docker-compose)
+  * [Application cluster](#app-cluster-yml)
+  * [Session cluster](#session-cluster-yml)
 
-    See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization)
-    for usage in the configuration files.
+  See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization)
+  for usage in the configuration files.
 
-* Launch a cluster in the foreground
+* Launch a cluster in the foreground (use `-d` for background)
 
     ```sh
-    docker-compose up
+    $ docker-compose up
     ```
 
-* Launch a cluster in the background
+* Scale the cluster up or down to `N` TaskManagers
 
     ```sh
-    docker-compose up -d
+    $ docker-compose scale taskmanager=<N>
     ```
 
-* Scale the cluster up or down to *N TaskManagers*
+* Access the JobManager container
 
     ```sh
-    docker-compose scale taskmanager=<N>
-    ```
-
-* Access the *JobManager* container
-
-    ```sh
-    docker exec -it $(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %}) /bin/sh
+    $ docker exec -it $(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %}) /bin/sh
     ```
 
 * Kill the cluster
 
     ```sh
-    docker-compose kill
+    $ docker-compose kill
     ```
 
 * Access Web UI
 
-    When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081).
-    You can also use the web UI to submit a job to a *Session cluster*.
+  When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081).
+  You can also use the web UI to submit a job to a *Session cluster*.
 
 * To submit a job to a *Session cluster* via the command line, you can either
 
   * use the [Flink CLI]({% link deployment/cli.md %}) on the host if it is installed:
 
     ```sh
-    flink run -d -c ${JOB_CLASS_NAME} /job.jar
+    $ ./bin/flink run --detached --class ${JOB_CLASS_NAME} /job.jar
     ```
 
-  * or copy the JAR to the *JobManager* container and submit the job using the [CLI]({% link deployment/cli.md %}) from there, for example:
+  * or copy the JAR to the JobManager container and submit the job using the [CLI]({% link deployment/cli.md %}) from there, for example:
 
     ```sh
-    JOB_CLASS_NAME="com.job.ClassName"
-    JM_CONTAINER=$(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %}))
-    docker cp path/to/jar "${JM_CONTAINER}":/job.jar
-    docker exec -t -i "${JM_CONTAINER}" flink run -d -c ${JOB_CLASS_NAME} /job.jar
+    $ JOB_CLASS_NAME="com.job.ClassName"
+    $ JM_CONTAINER=$(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %})
+    $ docker cp path/to/jar "${JM_CONTAINER}":/job.jar
+    $ docker exec -t -i "${JM_CONTAINER}" flink run -d -c ${JOB_CLASS_NAME} /job.jar
     ```
 
-### Session Cluster with Docker Compose
+Here we provide the <a id="app-cluster-yml">`docker-compose.yml`</a> for an *Application cluster*.
 
-**docker-compose.yml:**
+Note: For the Application Mode cluster, the artifacts must be available in the Flink containers; check the details [here](#application-mode-on-docker).
+See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments)
+in the `command` for the `jobmanager` service.
 
-```yaml
+{% highlight yaml %}
 version: "2.2"
 services:
   jobmanager:
     image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
     ports:
       - "8081:8081"
-    command: jobmanager
+    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
     environment:
       - |
         FLINK_PROPERTIES=
         jobmanager.rpc.address: jobmanager
+        parallelism.default: 2
 
   taskmanager:
     image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
@@ -465,36 +488,32 @@ services:
       - jobmanager
     command: taskmanager
     scale: 1
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
     environment:
       - |
         FLINK_PROPERTIES=
         jobmanager.rpc.address: jobmanager
         taskmanager.numberOfTaskSlots: 2
-```
+        parallelism.default: 2
+{% endhighlight %}
 
-### Job Cluster with Docker Compose
 
-The artifacts must be available in the Flink containers, check details [here](#start-a-job-cluster).
-See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments)
-in the `command` for the `jobmanager` service.
+And here is the <a id="session-cluster-yml">configuration file</a> for a *Session cluster*:
 
-**docker-compose.yml:**
 
-```yaml
+{% highlight yaml %}
 version: "2.2"
 services:
   jobmanager:
     image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
     ports:
       - "8081:8081"
-    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
-    volumes:
-      - /host/path/to/job/artifacts:/opt/flink/usrlib
+    command: jobmanager
     environment:
       - |
         FLINK_PROPERTIES=
         jobmanager.rpc.address: jobmanager
-        parallelism.default: 2
 
   taskmanager:
     image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
@@ -502,51 +521,47 @@ services:
       - jobmanager
     command: taskmanager
     scale: 1
-    volumes:
-      - /host/path/to/job/artifacts:/opt/flink/usrlib
     environment:
       - |
         FLINK_PROPERTIES=
         jobmanager.rpc.address: jobmanager
         taskmanager.numberOfTaskSlots: 2
-        parallelism.default: 2
-```
+{% endhighlight %}
 
-{% top %}
 
-## Flink with Docker Swarm
+### Flink with Docker Swarm
 
 The [Docker swarm](https://docs.docker.com/engine/swarm) is a container orchestration tool that
 allows you to manage multiple containers deployed across multiple host machines.
 
+The following sections contain examples of how to configure and start JobManager and TaskManager containers.
+The following chapters contain examples of how to configure and start JobManager and TaskManager containers.
 You can adjust them accordingly to start a cluster.
 See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization) for usage in the provided scripts.
 
 Port `8081` is exposed for Flink Web UI access.
 If you run the swarm locally, you can visit the web UI at [http://localhost:8081](http://localhost:8081) after starting the cluster.
 
-### Session Cluster with Docker Swarm
+#### Session Cluster with Docker Swarm
 
 ```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: flink-session-jobmanager
+$ FLINK_PROPERTIES="jobmanager.rpc.address: flink-session-jobmanager
 taskmanager.numberOfTaskSlots: 2
 "
 
 # Create overlay network
-docker network create -d overlay flink-session
+$ docker network create -d overlay flink-session
 
 # Create the JobManager service
-docker service create \
+$ docker service create \
   --name flink-session-jobmanager \
   --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
-  -p 8081:8081 \
+  --publish 8081:8081 \
   --network flink-session \
   flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} \
     jobmanager
 
 # Create the TaskManager service (scale this out as needed)
-docker service create \
+$ docker service create \
   --name flink-session-taskmanager \
   --replicas 2 \
   --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
@@ -555,22 +570,22 @@ docker service create \
     taskmanager
 ```
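
To scale the TaskManager service out later, as the comment above suggests, something like the following should work (service name from the example):

```sh
$ docker service scale flink-session-taskmanager=4
```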
 
-### Job Cluster with Docker Swarm
+#### Application Cluster with Docker Swarm
 
 ```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: flink-jobmanager
+$ FLINK_PROPERTIES="jobmanager.rpc.address: flink-jobmanager
 taskmanager.numberOfTaskSlots: 2
 "
 
 # Create overlay network
-docker network create -d overlay flink-job
+$ docker network create -d overlay flink-job
 
 # Create the JobManager service
-docker service create \
+$ docker service create \
   --name flink-jobmanager \
   --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
   --mount type=bind,source=/host/path/to/job/artifacts,target=/opt/flink/usrlib \
-  -p 8081:8081 \
+  --publish 8081:8081 \
   --network flink-job \
   flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} \
     standalone-job \
@@ -580,7 +595,7 @@ docker service create \
     [job arguments]
 
 # Create the TaskManager service (scale this out as needed)
-docker service create \
+$ docker service create \
   --name flink-job-taskmanager \
   --replicas 2 \
   --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
@@ -590,7 +605,7 @@ docker service create \
     taskmanager
 ```
 
-The *job artifacts* must be available in the *JobManager* container, as outlined [here](#start-a-job-cluster).
+The *job artifacts* must be available in the JobManager container, as outlined [here](#application-mode-on-docker).
 See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments) to pass them
 to the `flink-jobmanager` container.
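
When you are done, a sketch of tearing the Application cluster down again (service and network names from the example above):

```sh
$ docker service rm flink-jobmanager flink-job-taskmanager
$ docker network rm flink-job
```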
 
diff --git a/docs/deployment/resource-providers/standalone/docker.zh.md b/docs/deployment/resource-providers/standalone/docker.zh.md
index 681eb9c..ab9a35b 100644
--- a/docs/deployment/resource-providers/standalone/docker.zh.md
+++ b/docs/deployment/resource-providers/standalone/docker.zh.md
@@ -23,88 +23,48 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Docker](https://www.docker.com) is a popular container runtime.
-There are Docker images for Apache Flink available [on Docker Hub](https://hub.docker.com/_/flink).
-You can use the docker images to deploy a *Session* or *Job cluster* in a containerized environment, e.g.,
-[standalone Kubernetes]({% link deployment/resource-providers/standalone/kubernetes.zh.md %}) or [native Kubernetes]({% link deployment/resource-providers/native_kubernetes.zh.md %}).
-
 * This will be replaced by the TOC
 {:toc}
 
-## Docker Hub Flink images
-
-The [Flink Docker repository](https://hub.docker.com/_/flink/) is hosted on
-Docker Hub and serves images of Flink version 1.2.1 and later.
-The source for these images can be found in the [Apache flink-docker](https://github.com/apache/flink-docker) repository.
-
-### Image tags
-
-Images for each supported combination of Flink and Scala versions are available, and
-[tag aliases](https://hub.docker.com/_/flink?tab=tags) are provided for convenience.
-
-For example, you can use the following aliases:
-
-* `flink:latest` → `flink:<latest-flink>-scala_<latest-scala>`
-* `flink:1.11` → `flink:1.11.<latest-flink-1.11>-scala_2.11`
-
-<span class="label label-info">Note</span>It is recommended to always use an explicit version tag of the docker image that specifies both the needed Flink and Scala
-versions (for example `flink:1.11-scala_2.12`).
-This will avoid some class conflicts that can occur if the Flink and/or Scala versions used in the application are different
-from the versions provided by the docker image.
-
-<span class="label label-info">Note</span> Prior to Flink 1.5 version, Hadoop dependencies were always bundled with Flink.
-You can see that certain tags include the version of Hadoop, e.g. (e.g. `-hadoop28`).
-Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
-that do not include a bundled Hadoop distribution.
 
-## How to run a Flink image
+## Getting Started
 
-The Flink image contains a regular Flink distribution with its default configuration and a standard entry point script.
-You can run its entry point in the following modes:
-* [JobManager]({% link concepts/glossary.zh.md %}#flink-jobmanager) for [a Session cluster](#start-a-session-cluster)
-* [JobManager]({% link concepts/glossary.zh.md %}#flink-jobmanager) for [a Job cluster](#start-a-job-cluster)
-* [TaskManager]({% link concepts/glossary.zh.md %}#flink-taskmanager) for any cluster
+This *Getting Started* section guides you through the local setup of a Flink cluster (on one machine, but in separate containers) using Docker.
 
-This allows you to deploy a standalone cluster (Session or Job) in any containerised environment, for example:
-* manually in a local Docker setup,
-* [in a Kubernetes cluster]({% link deployment/resource-providers/standalone/kubernetes.zh.md %}),
-* [with Docker Compose](#flink-with-docker-compose),
-* [with Docker swarm](#flink-with-docker-swarm).
+### Introduction
 
-<span class="label label-info">Note</span> [The native Kubernetes]({% link deployment/resource-providers/native_kubernetes.zh.md %}) also runs the same image by default
-and deploys *TaskManagers* on demand so that you do not have to do it manually.
-
-The next chapters describe how to start a single Flink Docker container for various purposes.
+[Docker](https://www.docker.com) is a popular container runtime.
+There are Docker images for Apache Flink available [on Docker Hub](https://hub.docker.com/_/flink).
+You can use the Docker images to deploy a *Session* or *Application cluster* on Docker. This page focuses on the setup of Flink on Docker, Docker Swarm and Docker Compose.
 
-Once you've started Flink on Docker, you can access the Flink Webfrontend on [localhost:8081](http://localhost:8081/#/overview) or submit jobs like this `./bin/flink run ./examples/streaming/TopSpeedWindowing.jar`.
+Deployment into managed containerized environments, such as [standalone Kubernetes]({% link deployment/resource-providers/standalone/kubernetes.zh.md %}) or [native Kubernetes]({% link deployment/resource-providers/native_kubernetes.zh.md %}), is described on separate pages.
 
-We recommend using [Docker Compose]({% link deployment/resource-providers/standalone/docker.zh.md %}#session-cluster-with-docker-compose) or [Docker Swarm]({% link deployment/resource-providers/standalone/docker.zh.md %}#session-cluster-with-docker-swarm) for deploying Flink as a Session Cluster to ease system configuration.
 
-### Start a Session Cluster
+### Starting a Session Cluster on Docker
 
-A *Flink Session cluster* can be used to run multiple jobs. Each job needs to be submitted to the cluster after it has been deployed.
-To deploy a *Flink Session cluster* with Docker, you need to start a *JobManager* container. To enable communication between the containers, we first set a required Flink configuration property and create a network:
+A *Flink Session cluster* can be used to run multiple jobs. Each job needs to be submitted to the cluster after the cluster has been deployed.
+To deploy a *Flink Session cluster* with Docker, you need to start a JobManager container. To enable communication between the containers, we first set a required Flink configuration property and create a network:
 ```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
-docker network create flink-network
+$ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
+$ docker network create flink-network
 ```
 
 Then we launch the JobManager:
 
 ```sh
-docker run \
+$ docker run \
     --rm \
     --name=jobmanager \
     --network flink-network \
-    -p 8081:8081 \
+    --publish 8081:8081 \
     --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
     flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} jobmanager
 ```
 
-and one or more *TaskManager* containers:
+and one or more TaskManager containers:
 
 ```sh
-docker run \
+$ docker run \
     --rm \
     --name=taskmanager \
     --network flink-network \
@@ -112,10 +72,44 @@ docker run \
     flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} taskmanager
 ```
 
+The web interface is now available at [localhost:8081](http://localhost:8081).
+
+
+You can now submit a job, for example (assuming you have a local distribution of Flink available):
+
+```sh
+$ ./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
+```
+
+To shut down the cluster, either terminate (e.g. with `CTRL-C`) the JobManager and TaskManager processes, or use `docker ps` to identify the containers and `docker stop` to terminate them.
+
+## Deployment Modes
+
+The Flink image contains a regular Flink distribution with its default configuration and a standard entry point script.
+You can run its entry point in the following modes:
+* [JobManager]({% link concepts/glossary.zh.md %}#flink-jobmanager) for [a Session cluster](#starting-a-session-cluster-on-docker)
+* [JobManager]({% link concepts/glossary.zh.md %}#flink-jobmanager) for [an Application cluster](#application-mode-on-docker)
+* [TaskManager]({% link concepts/glossary.zh.md %}#flink-taskmanager) for any cluster
+
+This allows you to deploy a standalone cluster (Session or Application Mode) in any containerized environment, for example:
+* manually in a local Docker setup,
+* [in a Kubernetes cluster]({% link deployment/resource-providers/standalone/kubernetes.zh.md %}),
+* [with Docker Compose](#flink-with-docker-compose),
+* [with Docker swarm](#flink-with-docker-swarm).
+
+<span class="label label-info">Note</span> The [native Kubernetes deployment]({% link deployment/resource-providers/native_kubernetes.zh.md %}) also runs the same image by default
+and deploys TaskManagers on demand so that you do not have to do it manually.
+
+The next sections describe how to start a single Flink Docker container for various purposes.
+
+Once you've started Flink on Docker, you can access the Flink web UI at [localhost:8081](http://localhost:8081/#/overview) or submit jobs like this: `./bin/flink run ./examples/streaming/TopSpeedWindowing.jar`.
+
+We recommend using [Docker Compose](#flink-with-docker-compose) or [Docker Swarm](#flink-with-docker-swarm) for deploying Flink in Session Mode to ease system configuration.
 
-### Start a Job Cluster
 
-A *Flink Job cluster* is a dedicated cluster which runs a single job.
+### Application Mode on Docker
+
+A *Flink Application cluster* is a dedicated cluster which runs a single job.
 In this case, you deploy the cluster with the job in one step; thus, no extra job submission is needed.
 
 The *job artifacts* are included in the class path of Flink's JVM process within the container and consist of:
@@ -123,20 +117,20 @@ The *job artifacts* are included into the class path of Flink's JVM process with
 * all other necessary dependencies or resources, not included into Flink.
 
 To deploy a cluster for a single job with Docker, you need to
-* make *job artifacts* available locally *in all containers* under `/opt/flink/usrlib`,
-* start a *JobManager* container in the *Job Cluster* mode
-* start the required number of *TaskManager* containers.
+* make *job artifacts* available locally in all containers under `/opt/flink/usrlib`,
+* start a JobManager container in *Application Mode*,
+* start the required number of TaskManager containers.
 
 To make the **job artifacts available** locally in the container, you can
 
 * **either mount a volume** (or multiple volumes) with the artifacts to `/opt/flink/usrlib` when you start
-the *JobManager* and *TaskManagers*:
+  the JobManager and TaskManagers:
 
     ```sh
-    FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
-    docker network create flink-network
+    $ FLINK_PROPERTIES="jobmanager.rpc.address: jobmanager"
+    $ docker network create flink-network
 
-    docker run \
+    $ docker run \
         --mount type=bind,src=/host/path/to/job/artifacts1,target=/opt/flink/usrlib/artifacts1 \
         --mount type=bind,src=/host/path/to/job/artifacts2,target=/opt/flink/usrlib/artifacts2 \
         --rm \
@@ -149,16 +143,15 @@ the *JobManager* and *TaskManagers*:
         [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
         [job arguments]
 
-    docker run \
+    $ docker run \
         --mount type=bind,src=/host/path/to/job/artifacts1,target=/opt/flink/usrlib/artifacts1 \
         --mount type=bind,src=/host/path/to/job/artifacts2,target=/opt/flink/usrlib/artifacts2 \
         --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
         flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} taskmanager
     ```
 
-* **or extend the Flink image** by writing a custom `Dockerfile`, build it and use it for starting the *JobManager* and *TaskManagers*:
+* **or extend the Flink image** by writing a custom `Dockerfile`, building it, and using it to start the JobManager and TaskManagers:
 
-    *Dockerfile*:
 
     ```dockerfile
     FROM flink
@@ -167,18 +160,18 @@ the *JobManager* and *TaskManagers*:
     ```
 
     ```sh
-    docker build -t flink_with_job_artifacts .
-    docker run \
+    $ docker build --tag flink_with_job_artifacts .
+    $ docker run \
         flink_with_job_artifacts standalone-job \
         --job-classname com.job.ClassName \
         [--job-id <job id>] \
         [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] \
         [job arguments]
 
-    docker run flink_with_job_artifacts taskmanager
+    $ docker run flink_with_job_artifacts taskmanager
     ```
 
-The `standalone-job` argument starts a *JobManager* container in the *Job Cluster* mode.
+The `standalone-job` argument starts a JobManager container in Application Mode.
 
 #### JobManager additional command line arguments
 
@@ -204,21 +197,53 @@ You can provide the following additional command line arguments to the cluster e
 
 If the main function of the job's main class accepts arguments, you can also pass them at the end of the `docker run` command.
 
-## Customize Flink image
+### Per-Job Mode on Docker
+
+[Per-Job Mode]({% link deployment/index.zh.md %}#per-job-mode) is not supported by Flink on Docker.
+
+### Session Mode on Docker
+
+Local deployment in Session Mode has already been described in the [Getting Started](#starting-a-session-cluster-on-docker) section above.
+
+
+{% top %}
+
+## Flink on Docker Reference
+
+### Image tags
+
+The [Flink Docker repository](https://hub.docker.com/_/flink/) is hosted on Docker Hub and serves images of Flink version 1.2.1 and later.
+The source for these images can be found in the [Apache flink-docker](https://github.com/apache/flink-docker) repository.
+
+Images for each supported combination of Flink and Scala versions are available, and
+[tag aliases](https://hub.docker.com/_/flink?tab=tags) are provided for convenience.
+
+For example, you can use the following aliases:
 
-When you run the Flink containers, there may be a need to customize them.
-The next chapters describe some how-tos of what you can usually customize.
+* `flink:latest` → `flink:<latest-flink>-scala_<latest-scala>`
+* `flink:1.11` → `flink:1.11.<latest-flink-1.11>-scala_2.11`
 
-### Configure options
+<span class="label label-info">Note</span> It is recommended to always use an explicit version tag of the Docker image that specifies both the needed Flink and Scala
+versions (for example `flink:1.11-scala_2.12`).
+This avoids class conflicts that can occur if the Flink and/or Scala versions used in the application differ
+from the versions provided by the Docker image.
+
+<span class="label label-info">Note</span> Prior to Flink 1.5, Hadoop dependencies were always bundled with Flink.
+Certain tags therefore include the Hadoop version, e.g. `-hadoop28`.
+Beginning with Flink 1.5, image tags that omit the Hadoop version correspond to Hadoop-free releases of Flink
+that do not include a bundled Hadoop distribution.
+
+
+### Passing configuration via environment variables
 
 When you run the Flink image, you can also change its configuration options by setting the environment variable `FLINK_PROPERTIES`:
 
 ```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: host
+$ FLINK_PROPERTIES="jobmanager.rpc.address: host
 taskmanager.numberOfTaskSlots: 3
 blob.server.port: 6124
 "
-docker run --env FLINK_PROPERTIES=${FLINK_PROPERTIES} flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
+$ docker run --env FLINK_PROPERTIES=${FLINK_PROPERTIES} flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
 ```
 
 The [`jobmanager.rpc.address`]({% link deployment/config.zh.md %}#jobmanager-rpc-address) option must be configured; all other options are optional.
@@ -234,14 +259,13 @@ To provide a custom location for the Flink configuration files, you can
 * **either mount a volume** with the custom configuration files to this path `/opt/flink/conf` when you run the Flink image:
 
     ```sh
-    docker run \
+    $ docker run \
         --mount type=bind,src=/host/path/to/custom/conf,target=/opt/flink/conf \
         flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
     ```
 
 * or add them to your **custom Flink image**, build and run it:
 
-    *Dockerfile*:
 
     ```dockerfile
     FROM flink
@@ -252,22 +276,43 @@ To provide a custom location for the Flink configuration files, you can
 <span class="label label-warning">Warning!</span> The mounted volume must contain all necessary configuration files.
 The `flink-conf.yaml` file must have write permission so that the Docker entry point script can modify it in certain cases.
 
-### Using plugins
+### Using filesystem plugins
 
-As described in the [plugins]({% link deployment/filesystems/plugins.zh.md %}) documentation page: in order to use plugins they must be
+As described on the [plugins]({% link deployment/filesystems/plugins.zh.md %}) documentation page, plugins must be
 copied to the correct location in the Flink installation in the Docker container for them to work.
 
 If you want to enable plugins provided with Flink (in the `opt/` directory of the Flink distribution), you can pass the environment variable `ENABLE_BUILT_IN_PLUGINS` when you run the Flink image.
 The `ENABLE_BUILT_IN_PLUGINS` variable should contain a list of plugin jar file names separated by `;`. A valid plugin name is, for example, `flink-s3-fs-hadoop-{{site.version}}.jar`.
 
 ```sh
-    docker run \
+    $ docker run \
        --env ENABLE_BUILT_IN_PLUGINS="flink-plugin1.jar;flink-plugin2.jar" \
         flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager>
 ```
 
 There are also more [advanced ways](#advanced-customization) for customizing the Flink image.
 
+### Enabling Python
+
+To build a custom image which has Python and PyFlink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink:{{site.version}}
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
+RUN ln -s /usr/bin/python3 /usr/bin/python
+
+# install Python Flink
+RUN pip3 install apache-flink=={{site.version}}
+{% endhighlight %}
+
+Build the image named **pyflink:latest**:
+
+{% highlight bash %}
+$ docker build --tag pyflink:latest .
+{% endhighlight %}
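+
+You can then use `pyflink:latest` in place of the stock Flink image in the examples on this page, for example (a minimal sketch):
+
+```sh
+$ docker run pyflink:latest jobmanager
+```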
+
 ### Switch memory allocator
 
 Flink introduced `jemalloc` as the default memory allocator to resolve a memory fragmentation problem (please refer to [FLINK-19125](https://issues.apache.org/jira/browse/FLINK-19125)).
@@ -276,7 +321,7 @@ You could switch back to use `glibc` as memory allocator to restore the old beha
 (and please report the issue via JIRA or the mailing list if you find any), by passing the `disable-jemalloc` parameter:
 
 ```sh
-    docker run <jobmanager|standalone-job|taskmanager> disable-jemalloc
+    $ docker run flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} <jobmanager|standalone-job|taskmanager> disable-jemalloc
 ```
 
 ### Advanced customization
@@ -288,13 +333,11 @@ There are several ways in which you can further customize the Flink image:
 * add other libraries to `/opt/flink/lib` (e.g. Hadoop)
 * add other plugins to `/opt/flink/plugins`
 
-See also: [How to provide dependencies in the classpath]({% link index.zh.md %}#how-to-provide-dependencies-in-the-classpath).
-
 You can customize the Flink image in several ways:
 
 * **override the container entry point** with a custom script where you can run any bootstrap actions.
-At the end you can call the standard `/docker-entrypoint.sh` script of the Flink image with the same arguments
-as described in [how to run the Flink image](#how-to-run-flink-image).
+  At the end you can call the standard `/docker-entrypoint.sh` script of the Flink image with the same arguments
+  as described in [supported deployment modes](#deployment-modes).
 
   The following example creates a custom entry point script which enables more libraries and plugins.
   The custom script, custom library and plugin are provided from a mounted volume.
@@ -304,29 +347,32 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     # create custom_lib.jar
     # create custom_plugin.jar
 
-    echo "
-    ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/.  # enable an optional library
-    ln -fs /mnt/custom_lib.jar /opt/flink/lib/.  # enable a custom library
+    $ echo "
+    # enable an optional library
+    ln -fs /opt/flink/opt/flink-queryable-state-runtime-*.jar /opt/flink/lib/
+    # enable a custom library
+    ln -fs /mnt/custom_lib.jar /opt/flink/lib/
 
     mkdir -p /opt/flink/plugins/flink-s3-fs-hadoop
-    ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/.  # enable an optional plugin
+    # enable an optional plugin
+    ln -fs /opt/flink/opt/flink-s3-fs-hadoop-*.jar /opt/flink/plugins/flink-s3-fs-hadoop/  
 
     mkdir -p /opt/flink/plugins/custom_plugin
-    ln -fs /mnt/custom_plugin.jar /opt/flink/plugins/custom_plugin/.  # enable a custom plugin
+    # enable a custom plugin
+    ln -fs /mnt/custom_plugin.jar /opt/flink/plugins/custom_plugin/
 
     /docker-entrypoint.sh <jobmanager|standalone-job|taskmanager>
     " > custom_entry_point_script.sh
 
-    chmod 755 custom_entry_point_script.sh
+    $ chmod 755 custom_entry_point_script.sh
 
-    docker run \
+    $ docker run \
        --mount type=bind,src=$(pwd),target=/mnt \
         flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} /mnt/custom_entry_point_script.sh
     ```
 
 * **extend the Flink image** by writing a custom `Dockerfile` and building a custom image:
 
-    *Dockerfile*:
 
     ```dockerfile
     FROM flink
@@ -344,99 +390,97 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     ENV VAR_NAME value
     ```
 
-    **Commands for building**:
+  **Commands for building**:
 
     ```sh
-    docker build -t custom_flink_image .
+    $ docker build --tag custom_flink_image .
     # optional push to your docker image registry if you have it,
     # e.g. to distribute the custom image to your cluster
-    docker push custom_flink_image
+    $ docker push custom_flink_image
     ```
 
-{% top %}
 
-## Flink with Docker Compose
+### Flink with Docker Compose
 
 [Docker Compose](https://docs.docker.com/compose/) is a way to run a group of Docker containers locally.
-The next chapters show examples of configuration files to run Flink.
+The next sections show examples of configuration files to run Flink.
 
-### Usage
+#### Usage
 
 * Create the `yaml` files with the container configuration; check the examples for:
-    * [Session cluster](#session-cluster-with-docker-compose)
-    * [Job cluster](#job-cluster-with-docker-compose)
+  * [Application cluster](#app-cluster-yml)
+  * [Session cluster](#session-cluster-yml)
 
-    See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization)
-    for usage in the configuration files.
+  See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization)
+  for usage in the configuration files.
 
-* Launch a cluster in the foreground
+* Launch a cluster in the foreground (use `-d` for background)
 
     ```sh
-    docker-compose up
+    $ docker-compose up
     ```
 
-* Launch a cluster in the background
+* Scale the cluster up or down to `N` TaskManagers
 
     ```sh
-    docker-compose up -d
+    $ docker-compose scale taskmanager=<N>
     ```
 
-* Scale the cluster up or down to *N TaskManagers*
+* Access the JobManager container
 
     ```sh
-    docker-compose scale taskmanager=<N>
-    ```
-
-* Access the *JobManager* container
-
-    ```sh
-    docker exec -it $(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %}) /bin/sh
+    $ docker exec -it $(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %}) /bin/sh
     ```
 
 * Kill the cluster
 
     ```sh
-    docker-compose kill
+    $ docker-compose kill
     ```
 
 * Access Web UI
 
-    When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081).
-    You can also use the web UI to submit a job to a *Session cluster*.
+  When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081).
+  You can also use the web UI to submit a job to a *Session cluster*.
 
 * To submit a job to a *Session cluster* via the command line, you can either
 
   * use [Flink CLI]({% link deployment/cli.zh.md %}) on the host if it is installed:
 
     ```sh
-    flink run -d -c ${JOB_CLASS_NAME} /job.jar
+    $ ./bin/flink run --detached --class ${JOB_CLASS_NAME} /job.jar
     ```
 
-  * or copy the JAR to the *JobManager* container and submit the job using the [CLI]({% link deployment/cli.zh.md %}) from there, for example:
+  * or copy the JAR to the JobManager container and submit the job using the [CLI]({% link deployment/cli.zh.md %}) from there, for example:
 
     ```sh
-    JOB_CLASS_NAME="com.job.ClassName"
-    JM_CONTAINER=$(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %}))
-    docker cp path/to/jar "${JM_CONTAINER}":/job.jar
-    docker exec -t -i "${JM_CONTAINER}" flink run -d -c ${JOB_CLASS_NAME} /job.jar
+    $ JOB_CLASS_NAME="com.job.ClassName"
+    $ JM_CONTAINER=$(docker ps --filter name=jobmanager --format={% raw %}{{.ID}}{% endraw %})
+    $ docker cp path/to/jar "${JM_CONTAINER}":/job.jar
+    $ docker exec -t -i "${JM_CONTAINER}" flink run -d -c ${JOB_CLASS_NAME} /job.jar
     ```
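+
+* To inspect the logs of a service while the cluster is running, you can use `docker-compose logs` (a sketch, assuming the service names defined in the compose files below):
+
+    ```sh
+    $ docker-compose logs -f jobmanager
+    ```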
 
-### Session Cluster with Docker Compose
+Here, we provide the <a id="app-cluster-yml">`docker-compose.yml`</a> for an *Application Cluster*.
 
-**docker-compose.yml:**
+Note: For the Application Mode cluster, the artifacts must be available in the Flink containers; check the details [here](#application-mode-on-docker).
+See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments)
+in the `command` for the `jobmanager` service.
 
-```yaml
+{% highlight yaml %}
 version: "2.2"
 services:
   jobmanager:
     image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
     ports:
       - "8081:8081"
-    command: jobmanager
+    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
     environment:
       - |
         FLINK_PROPERTIES=
         jobmanager.rpc.address: jobmanager
+        parallelism.default: 2
 
   taskmanager:
     image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
@@ -444,36 +488,32 @@ services:
       - jobmanager
     command: taskmanager
     scale: 1
+    volumes:
+      - /host/path/to/job/artifacts:/opt/flink/usrlib
     environment:
       - |
         FLINK_PROPERTIES=
         jobmanager.rpc.address: jobmanager
         taskmanager.numberOfTaskSlots: 2
-```
+        parallelism.default: 2
+{% endhighlight %}
 
-### Job Cluster with Docker Compose
 
-The artifacts must be available in the Flink containers, check details [here](#start-a-job-cluster).
-See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments)
-in the `command` for the `jobmanager` service.
+And here is the <a id="session-cluster-yml">configuration file</a> for a *Session Cluster*:
 
-**docker-compose.yml:**
 
-```yaml
+{% highlight yaml %}
 version: "2.2"
 services:
   jobmanager:
     image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
     ports:
       - "8081:8081"
-    command: standalone-job --job-classname com.job.ClassName [--job-id <job id>] [--fromSavepoint /path/to/savepoint [--allowNonRestoredState]] [job arguments]
-    volumes:
-      - /host/path/to/job/artifacts:/opt/flink/usrlib
+    command: jobmanager
     environment:
       - |
         FLINK_PROPERTIES=
         jobmanager.rpc.address: jobmanager
-        parallelism.default: 2
 
   taskmanager:
     image: flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %}
@@ -481,72 +521,47 @@ services:
       - jobmanager
     command: taskmanager
     scale: 1
-    volumes:
-      - /host/path/to/job/artifacts:/opt/flink/usrlib
     environment:
       - |
         FLINK_PROPERTIES=
         jobmanager.rpc.address: jobmanager
         taskmanager.numberOfTaskSlots: 2
-        parallelism.default: 2
-```
-
-### Enabling Python
-
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
-
-# install python3 and pip3
-RUN apt-get update -y && \
-apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-
-# install Python Flink
-RUN pip3 install apache-flink
 {% endhighlight %}
 
-Build the image named as **pyflink:latest**:
-
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
-
-{% top %}
 
-## Flink with Docker Swarm
+### Flink with Docker Swarm
 
 The [Docker swarm](https://docs.docker.com/engine/swarm) is a container orchestration tool that
 allows you to manage multiple containers deployed across multiple host machines.
 
-The following chapters contain examples of how to configure and start *JobManager* and *TaskManager* containers.
+The following sections contain examples of how to configure and start JobManager and TaskManager containers.
 You can adjust them accordingly to start a cluster.
 See also [the Flink Docker image tags](#image-tags) and [how to customize the Flink Docker image](#advanced-customization) for usage in the provided scripts.
 
 The port `8081` is exposed for Flink Web UI access.
 If you run the swarm locally, you can visit the web UI at [http://localhost:8081](http://localhost:8081) after starting the cluster.
 
-### Session Cluster with Docker Swarm
+#### Session Cluster with Docker Swarm
 
 ```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: flink-session-jobmanager
+$ FLINK_PROPERTIES="jobmanager.rpc.address: flink-session-jobmanager
 taskmanager.numberOfTaskSlots: 2
 "
 
 # Create overlay network
-docker network create -d overlay flink-session
+$ docker network create -d overlay flink-session
 
 # Create the JobManager service
-docker service create \
+$ docker service create \
   --name flink-session-jobmanager \
   --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
-  -p 8081:8081 \
+  --publish 8081:8081 \
   --network flink-session \
   flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} \
     jobmanager
 
 # Create the TaskManager service (scale this out as needed)
-docker service create \
+$ docker service create \
   --name flink-session-taskmanager \
   --replicas 2 \
   --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
@@ -555,22 +570,22 @@ docker service create \
     taskmanager
 ```
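+
+To tear the session cluster down again, you can remove the services and the overlay network (a sketch using the names from the example above):
+
+```sh
+$ docker service rm flink-session-taskmanager flink-session-jobmanager
+$ docker network rm flink-session
+```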
 
-### Job Cluster with Docker Swarm
+#### Application Cluster with Docker Swarm
 
 ```sh
-FLINK_PROPERTIES="jobmanager.rpc.address: flink-jobmanager
+$ FLINK_PROPERTIES="jobmanager.rpc.address: flink-jobmanager
 taskmanager.numberOfTaskSlots: 2
 "
 
 # Create overlay network
-docker network create -d overlay flink-job
+$ docker network create -d overlay flink-job
 
 # Create the JobManager service
-docker service create \
+$ docker service create \
   --name flink-jobmanager \
   --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
   --mount type=bind,source=/host/path/to/job/artifacts,target=/opt/flink/usrlib \
-  -p 8081:8081 \
+  --publish 8081:8081 \
   --network flink-job \
   flink:{% if site.is_stable %}{{site.version}}-scala{{site.scala_version_suffix}}{% else %}latest{% endif %} \
     standalone-job \
@@ -580,7 +595,7 @@ docker service create \
     [job arguments]
 
 # Create the TaskManager service (scale this out as needed)
-docker service create \
+$ docker service create \
   --name flink-job-taskmanager \
   --replicas 2 \
   --env FLINK_PROPERTIES="${FLINK_PROPERTIES}" \
@@ -590,7 +605,7 @@ docker service create \
     taskmanager
 ```
 
-The *job artifacts* must be available in the *JobManager* container, as outlined [here](#start-a-job-cluster).
+The *job artifacts* must be available in the JobManager container, as outlined [here](#application-mode-on-docker).
 See also [how to specify the JobManager arguments](#jobmanager-additional-command-line-arguments) to pass them
 to the `flink-jobmanager` container.
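+
+To change the number of TaskManagers after the services have been created, you can use `docker service scale` (a sketch using the service name from the example above):
+
+```sh
+$ docker service scale flink-job-taskmanager=4
+```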
 
diff --git a/docs/deployment/resource-providers/standalone/index.md b/docs/deployment/resource-providers/standalone/index.md
index ba09d31..04015d98 100644
--- a/docs/deployment/resource-providers/standalone/index.md
+++ b/docs/deployment/resource-providers/standalone/index.md
@@ -24,197 +24,255 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page provides instructions on how to run Flink in a *fully distributed fashion* on a *static* (but possibly heterogeneous) cluster.
-
 * This will be replaced by the TOC
 {:toc}
 
-## Requirements
 
-### Software Requirements
+## Getting Started
 
-Flink runs on all *UNIX-like environments*, e.g. **Linux**, **Mac OS X**, and **Cygwin** (for Windows) and expects the cluster to consist of **one master node** and **one or more worker nodes**. Before you start to setup the system, make sure you have the following software installed **on each node**:
+This *Getting Started* section guides you through the local setup (on one machine, but in separate processes) of a Flink cluster. This can easily be expanded to set up a distributed standalone cluster, which we describe in the [reference section](#example-2-start-a-distributed-cluster).
 
-- **Java 1.8.x** or higher,
-- **ssh** (sshd must be running to use the Flink scripts that manage
-  remote components)
+### Introduction
 
-If your cluster does not fulfill these software requirements you will need to install/upgrade it.
+The standalone mode is the most barebone way of deploying Flink: The Flink services described in the [deployment overview]({% link deployment/index.md %}) are just launched as processes on the operating system. Unlike deploying Flink with a resource provider such as [Kubernetes]({% link deployment/resource-providers/native_kubernetes.md %}) or [YARN]({% link deployment/resource-providers/yarn.md %}), you have to take care of restarting failed processes, or allocation and de-allocation of [...]
 
-Having __passwordless SSH__ and
-__the same directory structure__ on all your cluster nodes will allow you to use our scripts to control
-everything.
+In the additional subpages of the standalone mode resource provider, we describe additional deployment methods which are based on the standalone mode: [Deployment in Docker containers]({% link deployment/resource-providers/standalone/docker.md %}), and on [Kubernetes]({% link deployment/resource-providers/standalone/kubernetes.md %}).
 
-{% top %}
+### Preparation
 
-### `JAVA_HOME` Configuration
+Flink runs on all *UNIX-like environments*, e.g. **Linux**, **Mac OS X**, and **Cygwin** (for Windows). Before you start to set up the system, make sure your system fulfills the following requirements.
 
-Flink requires the `JAVA_HOME` environment variable to be set on the master and all worker nodes and point to the directory of your Java installation.
+- **Java 1.8.x** or higher installed,
+- a recent Flink distribution downloaded from the [download page]({{ site.download_url }}) and unpacked.
 
-You can set this variable in `conf/flink-conf.yaml` via the `env.java.home` key.
+### Starting a Standalone Cluster (Session Mode)
 
-{% top %}
+These steps show how to launch a Flink standalone cluster, and submit an example job:
 
-## Flink Setup
+{% highlight bash %}
+# we assume to be in the root directory of the unzipped Flink distribution
 
-Go to the [downloads page]({{ site.download_url }}) and get the ready-to-run package.
+# (1) Start Cluster
+$ ./bin/start-cluster.sh
 
-After downloading the latest release, copy the archive to your master node and extract it:
+# (2) You can now access the Flink Web Interface on http://localhost:8081
+
+# (3) Submit example job
+$ ./bin/flink run ./examples/streaming/TopSpeedWindowing.jar
+
+# (4) Stop the cluster again
+$ ./bin/stop-cluster.sh
+{% endhighlight %}
+
+In step `(1)`, we've started two processes: a JVM for the JobManager, and a JVM for the TaskManager. The JobManager is serving the web interface accessible at [localhost:8081](http://localhost:8081).
+In step `(3)`, we are starting a Flink Client (a short-lived JVM process) that submits an application to the JobManager.
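+
+If a JDK is installed on the machine, you can verify both processes with the `jps` tool (a sketch; PIDs will differ, and the exact class names may vary between Flink versions):
+
+{% highlight bash %}
+$ jps
+12345 StandaloneSessionClusterEntrypoint
+23456 TaskManagerRunner
+{% endhighlight %}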
+
+## Deployment Modes
+
+### Application Mode
+
+To start a Flink JobManager with an embedded application, we use the `bin/standalone-job.sh` script. 
+We demonstrate this mode by locally starting the `TopSpeedWindowing.jar` example, running on a single TaskManager.
+
+The application jar file needs to be available in the classpath. The easiest approach to achieve that is putting the jar into the `lib/` folder:
 
 {% highlight bash %}
-tar xzf flink-*.tgz
-cd flink-*
+$ cp ./examples/streaming/TopSpeedWindowing.jar lib/
 {% endhighlight %}
 
-### Configuring Flink
+Then, we can launch the JobManager:
 
-After having extracted the system files, you need to configure Flink for the cluster by editing *conf/flink-conf.yaml*.
+{% highlight bash %}
+$ ./bin/standalone-job.sh start --job-classname org.apache.flink.streaming.examples.windowing.TopSpeedWindowing
+{% endhighlight %}
 
-Set the `jobmanager.rpc.address` key to point to your master node. You should also define the maximum amount of main memory Flink is allowed to allocate on each node by setting the `jobmanager.memory.process.size` and `taskmanager.memory.process.size` keys.
+The web interface is now available at [localhost:8081](http://localhost:8081). However, the application won't be able to start until a TaskManager is running, so we launch one next:
 
-These values are given in MB. If some worker nodes have more main memory which you want to allocate to the Flink system you can overwrite the default value by setting `taskmanager.memory.process.size` or `taskmanager.memory.flink.size` in *conf/flink-conf.yaml* on those specific nodes.
+{% highlight bash %}
+$ ./bin/taskmanager.sh start
+{% endhighlight %}
 
-Finally, you must provide a list of all nodes in your cluster that shall be used as worker nodes, i.e., nodes running a TaskManager. Edit the file *conf/workers* and enter the IP/host name of each worker node.
+Note: You can start multiple TaskManagers if your application needs more resources.
 
-The following example illustrates the setup with three nodes (with IP addresses from _10.0.0.1_
-to _10.0.0.3_ and hostnames _master_, _worker1_, _worker2_) and shows the contents of the
-configuration files (which need to be accessible at the same path on all machines):
+Stopping the services is also supported via the scripts. Call them multiple times if you want to stop multiple instances, or use `stop-all`:
 
-<div class="row">
-  <div class="col-md-6 text-center">
-    <img src="{% link /page/img/quickstart_cluster.png %}" style="width: 60%">
-  </div>
-<div class="col-md-6">
-  <div class="row">
-    <p class="lead text-center">
-      /path/to/<strong>flink/conf/<br>flink-conf.yaml</strong>
-    <pre>jobmanager.rpc.address: 10.0.0.1</pre>
-    </p>
-  </div>
-<div class="row" style="margin-top: 1em;">
-  <p class="lead text-center">
-    /path/to/<strong>flink/<br>conf/workers</strong>
-  <pre>
-10.0.0.2
-10.0.0.3</pre>
-  </p>
-</div>
-</div>
-</div>
+{% highlight bash %}
+$ ./bin/taskmanager.sh stop
+$ ./bin/standalone-job.sh stop
+{% endhighlight %}
 
-The Flink directory must be available on every worker under the same path. You can use a shared NFS directory, or copy the entire Flink directory to every worker node.
 
-Please see the [configuration page]({% link deployment/config.md %}) for details and additional configuration options.
+### Per-Job Mode
 
-In particular,
+Per-Job Mode is not supported by the Standalone Cluster.
 
- * the amount of available memory per JobManager (`jobmanager.memory.process.size`),
- * the amount of available memory per TaskManager (`taskmanager.memory.process.size` and check [memory setup guide]({% link deployment/memory/mem_tuning.md %}#configure-memory-for-standalone-deployment)),
- * the number of available CPUs per machine (`taskmanager.numberOfTaskSlots`),
- * the total number of CPUs in the cluster (`parallelism.default`) and
- * the temporary directories (`io.tmp.dirs`)
+### Session Mode
 
-are very important configuration values.
+Local deployment in Session Mode has already been described in the [introduction](#starting-a-standalone-cluster-session-mode) above.
 
-{% top %}
+## Standalone Cluster Reference
+
+### Configuration
+
+All available configuration options are listed on the [configuration page]({% link deployment/config.md %}). In particular, the [Basic Setup]({% link deployment/config.md %}#basic-setup) section contains good advice on configuring ports, memory, parallelism, etc.
+
+### Debugging
+
+If Flink is behaving unexpectedly, we recommend looking at Flink's log files as a starting point for further investigations.
+
+The log files are located in the `logs/` directory. There's a `.log` file for each Flink service running on this machine. In the default configuration, log files are rotated on each start of a Flink service -- older runs of a service will have a number suffixed to the log file.
+
+Alternatively, logs are available from the Flink web frontend (both for the JobManager and each TaskManager).
 
-### Starting Flink
+By default, Flink is logging on the "INFO" log level, which provides basic information for all obvious issues. For cases where Flink seems to behave wrongly, lowering the log level to "DEBUG" is advised. The logging level is controlled via the `conf/log4j.properties` file.
+Setting `rootLogger.level = DEBUG` will bootstrap Flink on the DEBUG log level.
 
-The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the *workers* file to start the TaskManager on each node. Now your Flink system is up and running. The JobManager running on the local node will now accept jobs at the configured RPC port.
+There's a dedicated page on [logging]({% link deployment/advanced/logging.md %}) in Flink.
 
-Assuming that you are on the master node and inside the Flink directory:
+### Component Management Scripts
 
+#### Starting and Stopping a Cluster
+
+`bin/start-cluster.sh` and `bin/stop-cluster.sh` rely on `conf/masters` and `conf/workers` to determine the number of cluster component instances.
+
+If password-less SSH access to the listed machines is configured, and they share the same directory structure, the scripts also support starting and stopping instances remotely.
+
+##### Example 1: Start a cluster with 2 TaskManagers locally
+
+`conf/masters` contents:
 {% highlight bash %}
-bin/start-cluster.sh
+localhost
 {% endhighlight %}
 
-To stop Flink, there is also a `stop-cluster.sh` script.
-
-{% top %}
+`conf/workers` contents:
+{% highlight bash %}
+localhost
+localhost
+{% endhighlight %}
 
-### Adding JobManager/TaskManager Instances to a Cluster
+##### Example 2: Start a distributed cluster
 
-You can add both JobManager and TaskManager instances to your running cluster with the `bin/jobmanager.sh` and `bin/taskmanager.sh` scripts.
+This assumes a cluster with 4 machines (`master1, worker1, worker2, worker3`), which can all reach each other over the network.
 
-#### Adding a JobManager
+`conf/masters` contents:
+{% highlight bash %}
+master1
+{% endhighlight %}
 
+`conf/workers` contents:
 {% highlight bash %}
-bin/jobmanager.sh ((start|start-foreground) [host] [webui-port])|stop|stop-all
+worker1
+worker2
+worker3
 {% endhighlight %}
 
-#### Adding a TaskManager
+Note that the configuration key [jobmanager.rpc.address]({% link deployment/config.md %}#jobmanager-rpc-address) needs to be set to `master1` for this to work.
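+
+For example, set the following in `conf/flink-conf.yaml` on all four machines:
+
+{% highlight yaml %}
+jobmanager.rpc.address: master1
+{% endhighlight %}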
+
+We show a third example with a standby JobManager in the [high-availability section](#setting-up-high-availability).
+
+#### Starting and Stopping Flink Components
+
+The `bin/jobmanager.sh` and `bin/taskmanager.sh` scripts support starting the respective daemon in the background (using the `start` argument), or in the foreground (using `start-foreground`). In the foreground mode, the logs are printed to standard out. This mode is useful for deployment scenarios where another process is controlling the Flink daemon (e.g. Docker).
+
+The scripts can be called multiple times, for example if multiple TaskManagers are needed. The instances are tracked by the scripts, and can be stopped one-by-one (using `stop`) or all together (using `stop-all`).
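+
+For example, to start two additional TaskManagers on the current host and later stop all locally started instances again:
+
+{% highlight bash %}
+$ ./bin/taskmanager.sh start
+$ ./bin/taskmanager.sh start
+$ ./bin/taskmanager.sh stop-all
+{% endhighlight %}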
+
+#### Windows Cygwin Users
+
+If you are installing Flink from the git repository and you are using the Windows git shell, Cygwin can produce a failure similar to this one:
 
 {% highlight bash %}
-bin/taskmanager.sh start|start-foreground|stop|stop-all
+c:/flink/bin/start-cluster.sh: line 30: $'\r': command not found
 {% endhighlight %}
 
-Make sure to call these scripts on the hosts on which you want to start/stop the respective instance.
+This error occurs because git is automatically transforming UNIX line endings to Windows style line endings when running on Windows. The problem is that Cygwin can only deal with UNIX style line endings. The solution is to adjust the Cygwin settings to deal with the correct line endings by following these steps:
+
+1. Start a Cygwin shell.
+
+2. Determine your home directory by entering
 
-## High-Availability with Standalone
+    ```bash
+    cd; pwd
+    ```
+
+    This will return a path under the Cygwin root path.
+
+3. Using NotePad, WordPad, or a different text editor, open the file `.bash_profile` in the home directory and append the following (if the file does not exist you will have to create it):
+
+    ```bash
+    export SHELLOPTS
+    set -o igncr
+    ```
+
+4. Save the file and open a new bash shell.
+
+### Setting up High-Availability
 
 In order to enable HA for a standalone cluster, you have to use the [ZooKeeper HA services]({% link deployment/ha/zookeeper_ha.md %}).
 
 Additionally, you have to configure your cluster to start multiple JobManagers.
 
-### Masters File (masters)
-
 In order to start an HA-cluster configure the *masters* file in `conf/masters`:
 
 - **masters file**: The *masters file* contains all hosts on which JobManagers are started, and the ports to which the web user interface binds.
 
-  <pre>
-jobManagerAddress1:webUIPort1
+  ```bash
+master1:webUIPort1
 [...]
-jobManagerAddressX:webUIPortX
-  </pre>
+masterX:webUIPortX
+```
 
-By default, the job manager will pick a *random port* for inter process communication. You can change this via the [high-availability.jobmanager.port]({% link deployment/config.md %}#high-availability-jobmanager-port) key. This key accepts single ports (e.g. `50010`), ranges (`50000-50025`), or a combination of both (`50010,50011,50020-50025,50050-50075`).
+By default, the JobManager will pick a *random port* for inter-process communication. You can change this via the [high-availability.jobmanager.port]({% link deployment/config.md %}#high-availability-jobmanager-port) key. This key accepts single ports (e.g. `50010`), ranges (`50000-50025`), or a combination of both (`50010,50011,50020-50025,50050-50075`).
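+
+For example, to restrict the JobManager to a fixed port range, add the following to `conf/flink-conf.yaml`:
+
+{% highlight yaml %}
+high-availability.jobmanager.port: 50000-50025
+{% endhighlight %}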
 
-### Example: Standalone Cluster with 2 JobManagers
+#### Example: Standalone HA Cluster with 2 JobManagers
 
-1. **Configure high availability mode and ZooKeeper quorum** in `conf/flink-conf.yaml`:
+1. Configure high availability mode and ZooKeeper quorum in `conf/flink-conf.yaml`:
 
-   <pre>
+   ```bash
 high-availability: zookeeper
 high-availability.zookeeper.quorum: localhost:2181
 high-availability.zookeeper.path.root: /flink
 high-availability.cluster-id: /cluster_one # important: customize per cluster
-high-availability.storageDir: hdfs:///flink/recovery</pre>
+high-availability.storageDir: hdfs:///flink/recovery
+```
 
-2. **Configure masters** in `conf/masters`:
+2. Configure masters in `conf/masters`:
 
-   <pre>
+   ```bash
 localhost:8081
-localhost:8082</pre>
+localhost:8082
+```
 
-3. **Configure ZooKeeper server** in `conf/zoo.cfg` (currently it's only possible to run a single ZooKeeper server per machine):
+3. Configure ZooKeeper server in `conf/zoo.cfg` (currently it's only possible to run a single ZooKeeper server per machine):
 
-   <pre>server.0=localhost:2888:3888</pre>
+   ```bash
+server.0=localhost:2888:3888
+```
 
-4. **Start ZooKeeper quorum**:
+4. Start ZooKeeper quorum:
 
-   <pre>
-$ bin/start-zookeeper-quorum.sh
-Starting zookeeper daemon on host localhost.</pre>
+   ```bash
+$ ./bin/start-zookeeper-quorum.sh
+Starting zookeeper daemon on host localhost.
+```
 
-5. **Start an HA-cluster**:
+5. Start an HA-cluster:
 
-   <pre>
-$ bin/start-cluster.sh
+   ```bash
+$ ./bin/start-cluster.sh
 Starting HA cluster with 2 masters and 1 peers in ZooKeeper quorum.
 Starting standalonesession daemon on host localhost.
 Starting standalonesession daemon on host localhost.
-Starting taskexecutor daemon on host localhost.</pre>
+Starting taskexecutor daemon on host localhost.
+```
 
-6. **Stop ZooKeeper quorum and cluster**:
+6. Stop ZooKeeper quorum and cluster:
 
-   <pre>
-$ bin/stop-cluster.sh
+   ```bash
+$ ./bin/stop-cluster.sh
 Stopping taskexecutor daemon (pid: 7647) on localhost.
 Stopping standalonesession daemon (pid: 7495) on host localhost.
 Stopping standalonesession daemon (pid: 7349) on host localhost.
-$ bin/stop-zookeeper-quorum.sh
-Stopping zookeeper daemon (pid: 7101) on host localhost.</pre>
+$ ./bin/stop-zookeeper-quorum.sh
+Stopping zookeeper daemon (pid: 7101) on host localhost.
+```
 
 
 {% top %}
diff --git a/docs/deployment/resource-providers/standalone/kubernetes.md b/docs/deployment/resource-providers/standalone/kubernetes.md
index 22bb1f2..9ba7706 100644
--- a/docs/deployment/resource-providers/standalone/kubernetes.md
+++ b/docs/deployment/resource-providers/standalone/kubernetes.md
@@ -23,154 +23,180 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a *Flink Job* and *Session cluster* on [Kubernetes](https://kubernetes.io).
-
 * This will be replaced by the TOC
 {:toc}
 
-{% info %} This page describes deploying a [standalone]({% link deployment/resource-providers/standalone/index.md %}) Flink cluster on top of Kubernetes.
-You can find more information on native Kubernetes deployments [here]({% link deployment/resource-providers/native_kubernetes.md %}).
-
-## Setup Kubernetes
-
-Please follow [Kubernetes' setup guide](https://kubernetes.io/docs/setup/) in order to deploy a Kubernetes cluster.
-If you want to run Kubernetes locally, we recommend using [MiniKube](https://kubernetes.io/docs/setup/minikube/).
-
-<div class="alert alert-info" markdown="span">
-  <strong>Note:</strong> If using MiniKube please make sure to execute `minikube ssh 'sudo ip link set docker0 promisc on'` before deploying a Flink cluster.
-  Otherwise Flink components are not able to self reference themselves through a Kubernetes service.
-</div>
 
-## Flink Docker image
+## Getting Started
 
-Before deploying the Flink Kubernetes components, please read [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and
-[enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) to use the image in the Kubernetes definition files.
+This *Getting Started* guide describes how to deploy a *Session cluster* on [Kubernetes](https://kubernetes.io).
 
-## Deploy Flink cluster on Kubernetes
+### Introduction
 
-Using [the common resource definitions](#common-cluster-resource-definitions), launch the common cluster components
-with the `kubectl` command:
+This page describes deploying a [standalone]({% link deployment/resource-providers/standalone/index.md %}) Flink cluster on top of Kubernetes, using Flink's standalone deployment.
+We generally recommend that new users deploy Flink on Kubernetes using the [native Kubernetes deployment]({% link deployment/resource-providers/native_kubernetes.md %}).
 
-```sh
-    kubectl create -f flink-configuration-configmap.yaml
-    kubectl create -f jobmanager-service.yaml
-```
+### Preparation
 
-Note that you could define your own customized options of `flink-conf.yaml` within `flink-configuration-configmap.yaml`.
+This guide expects a Kubernetes environment to be present. You can ensure that your Kubernetes setup is working by running a command like `kubectl get nodes`, which lists all connected Kubelets. 
 
-Then launch the specific components depending on whether you want to deploy a [Session](#deploy-session-cluster) or [Job](#deploy-job-cluster) cluster.
+If you want to run Kubernetes locally, we recommend using [MiniKube](https://minikube.sigs.k8s.io/docs/start/).
 
-You can then access the Flink UI via different ways:
-*  `kubectl proxy`:
+<div class="alert alert-info" markdown="span">
+  <strong>Note:</strong> If using MiniKube please make sure to execute `minikube ssh 'sudo ip link set docker0 promisc on'` before deploying a Flink cluster.
+  Otherwise Flink components are not able to reference themselves through a Kubernetes service.
+</div>
 
-    1. Run `kubectl proxy` in a terminal.
-    2. Navigate to [http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy) in your browser.
+### Starting a Kubernetes Cluster (Session Mode)
 
-*  `kubectl port-forward`:
-    1. Run `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` to forward your jobmanager's web ui port to local 8081.
-    2. Navigate to [http://localhost:8081](http://localhost:8081) in your browser.
-    3. Moreover, you could use the following command below to submit jobs to the cluster:
-    {% highlight bash %}./bin/flink run -m localhost:8081 ./examples/streaming/WordCount.jar{% endhighlight %}
-
-*  Create a `NodePort` service on the rest service of jobmanager:
-    1. Run `kubectl create -f jobmanager-rest-service.yaml` to create the `NodePort` service on jobmanager. The example of `jobmanager-rest-service.yaml` can be found in [appendix](#common-cluster-resource-definitions).
-    2. Run `kubectl get svc flink-jobmanager-rest` to know the `node-port` of this service and navigate to [http://&lt;public-node-ip&gt;:&lt;node-port&gt;](http://<public-node-ip>:<node-port>) in your browser.
-    3. If you use minikube, you can get its public ip by running `minikube ip`.
-    4. Similarly to the `port-forward` solution, you could also use the following command below to submit jobs to the cluster:
+A *Flink Session cluster* is executed as a long-running Kubernetes Deployment. You can run multiple Flink jobs on a *Session cluster*.
+Each job needs to be submitted to the cluster after the cluster has been deployed.
 
-        {% highlight bash %}./bin/flink run -m <public-node-ip>:<node-port> ./examples/streaming/WordCount.jar{% endhighlight %}
+A *Flink Session cluster* deployment in Kubernetes has at least three components:
 
-You can also access the queryable state of TaskManager if you create a `NodePort` service for it:
-  1. Run `kubectl create -f taskmanager-query-state-service.yaml` to create the `NodePort` service on taskmanager. The example of `taskmanager-query-state-service.yaml` can be found in [appendix](#common-cluster-resource-definitions).
-  2. Run `kubectl get svc flink-taskmanager-query-state` to know the `node-port` of this service. Then you can create the [QueryableStateClient(&lt;public-node-ip&gt;, &lt;node-port&gt;]({% link dev/stream/state/queryable_state.md %}#querying-state) to submit the state queries.
+* a *Deployment* which runs a [JobManager]({% link concepts/glossary.md %}#flink-jobmanager)
+* a *Deployment* for a pool of [TaskManagers]({% link concepts/glossary.md %}#flink-taskmanager)
+* a *Service* exposing the *JobManager's* REST and UI ports
 
-In order to terminate the Flink cluster, delete the specific [Session](#deploy-session-cluster) or [Job](#deploy-job-cluster) cluster components
-and use `kubectl` to terminate the common components:
+Using the file contents provided in [the common resource definitions](#common-cluster-resource-definitions), create the following files, and create the respective components with the `kubectl` command:
 
 ```sh
-    kubectl delete -f jobmanager-service.yaml
-    kubectl delete -f flink-configuration-configmap.yaml
-    # if created then also the rest service
-    kubectl delete -f jobmanager-rest-service.yaml
-    # if created then also the queryable state service
-    kubectl delete -f taskmanager-query-state-service.yaml
+    # Configuration and service definition
+    $ kubectl create -f flink-configuration-configmap.yaml
+    $ kubectl create -f jobmanager-service.yaml
+    # Create the deployments for the cluster
+    $ kubectl create -f jobmanager-session-deployment.yaml
+    $ kubectl create -f taskmanager-session-deployment.yaml
 ```
 
-### Deploy Session Cluster
+Next, we set up a port forward to access the Flink UI and submit jobs:
 
-A *Flink Session cluster* is executed as a long-running Kubernetes Deployment.
-Note that you can run multiple Flink jobs on a *Session cluster*.
-Each job needs to be submitted to the cluster after the cluster has been deployed.
+1. Run `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` to forward your jobmanager's web ui port to local 8081.
+2. Navigate to [http://localhost:8081](http://localhost:8081) in your browser.
+3. Moreover, you can use the following command to submit jobs to the cluster:
+{% highlight bash %}
+$ ./bin/flink run -m localhost:8081 ./examples/streaming/TopSpeedWindowing.jar
+{% endhighlight %}
 
-A *Flink Session cluster* deployment in Kubernetes has at least three components:
 
-* a *Deployment* which runs a [JobManager]({% link concepts/glossary.md %}#flink-jobmanager)
-* a *Deployment* for a pool of [TaskManagers]({% link concepts/glossary.md %}#flink-taskmanager)
-* a *Service* exposing the *JobManager's* REST and UI ports
 
-After creating [the common cluster components](#deploy-flink-cluster-on-kubernetes), use [the Session specific resource definitions](#session-cluster-resource-definitions)
-to launch the *Session cluster* with the `kubectl` command:
+You can tear down the cluster using the following commands:
 
 ```sh
-    kubectl create -f jobmanager-session-deployment.yaml
-    kubectl create -f taskmanager-session-deployment.yaml
+    $ kubectl delete -f jobmanager-service.yaml
+    $ kubectl delete -f flink-configuration-configmap.yaml
+    $ kubectl delete -f taskmanager-session-deployment.yaml
+    $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-To terminate the *Session cluster*, these components can be deleted along with [the common ones](#deploy-flink-cluster-on-kubernetes) with the `kubectl` command:
 
-```sh
-    kubectl delete -f taskmanager-session-deployment.yaml
-    kubectl delete -f jobmanager-session-deployment.yaml
-```
+{% top %}
+
+## Deployment Modes
 
-### Deploy Job Cluster
+### Deploy Application Cluster
 
-A *Flink Job cluster* is a dedicated cluster which runs a single job.
-You can find more details [here](#start-a-job-cluster).
+A *Flink Application cluster* is a dedicated cluster which runs a single application.
 
-A basic *Flink Job cluster* deployment in Kubernetes has three components:
+A basic *Flink Application cluster* deployment in Kubernetes has three components:
 
-* a *Job* which runs a *JobManager*
+* an *Application* which runs a *JobManager*
 * a *Deployment* for a pool of *TaskManagers*
 * a *Service* exposing the *JobManager's* REST and UI ports
 
-Check [the Job cluster specific resource definitions](#job-cluster-resource-definitions) and adjust them accordingly.
+Check [the Application cluster specific resource definitions](#application-cluster-resource-definitions) and adjust them accordingly:
 
 The `args` attribute in the `jobmanager-application.yaml` has to specify the main class of the user job.
 See also [how to specify the JobManager arguments]({% link deployment/resource-providers/standalone/docker.md %}#jobmanager-additional-command-line-arguments) to understand
 how to pass other `args` to the Flink image in the `jobmanager-job.yaml`.
 
-The *job artifacts* should be available from the `job-artifacts-volume` in [the resource definition examples](#job-cluster-resource-definitions).
+The *job artifacts* should be available from the `job-artifacts-volume` in [the resource definition examples](#application-cluster-resource-definitions).
 The definition examples mount the volume as a local directory of the host assuming that you create the components in a minikube cluster.
 If you do not use a minikube cluster, you can use any other type of volume, available in your Kubernetes cluster, to supply the *job artifacts*.
-Alternatively, you can build [a custom image]({% link deployment/resource-providers/standalone/docker.md %}#start-a-job-cluster) which already contains the artifacts instead.
+Alternatively, you can build [a custom image]({% link deployment/resource-providers/standalone/docker.md %}#advanced-customization) which already contains the artifacts instead.
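+
+A minimal sketch of such a custom image (assuming your application jar is called `my-job.jar`; the jar name and the Flink base tag are placeholders):
+
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p /opt/flink/usrlib
+COPY my-job.jar /opt/flink/usrlib/my-job.jar
+{% endhighlight %}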
 
-After creating [the common cluster components](#deploy-flink-cluster-on-kubernetes), use [the Job cluster specific resource definitions](#job-cluster-resource-definitions)
-to launch the cluster with the `kubectl` command:
+After creating [the common cluster components](#common-cluster-resource-definitions), use [the Application cluster specific resource definitions](#application-cluster-resource-definitions) to launch the cluster with the `kubectl` command:
 
 ```sh
-    kubectl create -f jobmanager-job.yaml
-    kubectl create -f taskmanager-job-deployment.yaml
+    $ kubectl create -f jobmanager-application.yaml
+    $ kubectl create -f taskmanager-job-deployment.yaml
 ```
 
-To terminate the single job cluster, these components can be deleted along with [the common ones](#deploy-flink-cluster-on-kubernetes)
+To terminate the single application cluster, these components can be deleted along with [the common ones](#common-cluster-resource-definitions)
 with the `kubectl` command:
 
 ```sh
-    kubectl delete -f taskmanager-job-deployment.yaml
-    kubectl delete -f jobmanager-job.yaml
+    $ kubectl delete -f taskmanager-job-deployment.yaml
+    $ kubectl delete -f jobmanager-application.yaml
 ```
 
-## High-Availability with Standalone Kubernetes
+### Per-Job Cluster Mode
+Flink on Standalone Kubernetes does not support the Per-Job Cluster Mode.
 
-For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
+### Session Mode
+
+Deployment of a Session cluster is explained in the [Getting Started](#getting-started) guide at the top of this page.
+
+{% top %}
+
+## Flink on Standalone Kubernetes Reference
 
-### How to configure Kubernetes HA Services
+### Configuration
 
-Session Mode, Per-Job Mode, and Application Mode clusters support using the Kubernetes high availability service. Users just need to add the following Flink config options to [flink-configuration-configmap.yaml](#common-cluster-resource-definitions). All other yamls do not need to be updated.
+All configuration options are listed on the [configuration page]({% link deployment/config.md %}). Configuration options can be added to the `flink-conf.yaml` section of the `flink-configuration-configmap.yaml` config map.
 
-<span class="label label-info">Note</span> The filesystem which corresponds to the scheme of your configured HA storage directory must be available to the runtime. Refer to [custom Flink image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) for more information.
+### Accessing Flink in Kubernetes
+
+You can then access the Flink UI and submit jobs via different ways:
+*  `kubectl proxy`:
+
+    1. Run `kubectl proxy` in a terminal.
+    2. Navigate to [http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy) in your browser.
+
+*  `kubectl port-forward`:
+    1. Run `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` to forward your jobmanager's web ui port to local 8081.
+    2. Navigate to [http://localhost:8081](http://localhost:8081) in your browser.
+    3. Moreover, you can use the following command to submit jobs to the cluster:
+    {% highlight bash %}
+    $ ./bin/flink run -m localhost:8081 ./examples/streaming/TopSpeedWindowing.jar
+    {% endhighlight %}
+
+*  Create a `NodePort` service on the rest service of jobmanager:
+    1. Run `kubectl create -f jobmanager-rest-service.yaml` to create the `NodePort` service on the jobmanager. The example of `jobmanager-rest-service.yaml` can be found in the [appendix](#common-cluster-resource-definitions).
+    2. Run `kubectl get svc flink-jobmanager-rest` to find out the `node-port` of this service and navigate to [http://&lt;public-node-ip&gt;:&lt;node-port&gt;](http://<public-node-ip>:<node-port>) in your browser.
+    3. If you use minikube, you can get its public IP by running `minikube ip`.
+    4. Similarly to the `port-forward` solution, you can also use the following command to submit jobs to the cluster:
+
+    {% highlight bash %}
+    $ ./bin/flink run -m <public-node-ip>:<node-port> ./examples/streaming/TopSpeedWindowing.jar
+    {% endhighlight %}
+
+### Debugging and Log Access
+
+Many common errors are easy to detect by checking Flink's log files. If you have access to Flink's web user interface, you can access the JobManager and TaskManager logs from there.
+
+If there are problems starting Flink, you can also use Kubernetes utilities to access the logs. Use `kubectl get pods` to see all running pods.
+For the quickstart example from above, you should see three pods:
+```
+$ kubectl get pods
+NAME                                 READY   STATUS             RESTARTS   AGE
+flink-jobmanager-589967dcfc-m49xv    1/1     Running            3          3m32s
+flink-taskmanager-64847444ff-7rdl4   1/1     Running            3          3m28s
+flink-taskmanager-64847444ff-nnd6m   1/1     Running            3          3m28s
+```
+
+You can now access the logs by running `kubectl logs flink-jobmanager-589967dcfc-m49xv`.
+
+### High-Availability with Standalone Kubernetes
+
+For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
+
+Session Mode and Application Mode clusters support using the Kubernetes high availability service. Users just need to add the following Flink config options to [flink-configuration-configmap.yaml](#common-cluster-resource-definitions). All other yamls do not need to be updated.
+
+<span class="label label-info">Note</span> The filesystem which corresponds to the scheme of your configured HA storage directory must be available to the runtime. Refer to [custom Flink image]({% link deployment/resource-providers/standalone/docker.md %}#advanced-customization) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-filesystem-plugins) for more information.
 
 {% highlight yaml %}
 apiVersion: v1
@@ -190,6 +216,14 @@ data:
   ...
 {% endhighlight %}
 
+### Enabling Queryable State
+
+You can access the queryable state of TaskManager if you create a `NodePort` service for it:
+  1. Run `kubectl create -f taskmanager-query-state-service.yaml` to create the `NodePort` service for the `taskmanager` pods. The example of `taskmanager-query-state-service.yaml` can be found in the [appendix](#common-cluster-resource-definitions).
+  2. Run `kubectl get svc flink-taskmanager-query-state` to get the `node-port` of this service. Then you can create the [QueryableStateClient(&lt;public-node-ip&gt;, &lt;node-port&gt;)]({% link dev/stream/state/queryable_state.md %}#querying-state) to submit state queries.
+
+{% top %}
+
 ## Appendix
 
 ### Common cluster resource definitions
@@ -416,9 +450,9 @@ spec:
             path: log4j-console.properties
 {% endhighlight %}
 
-### Job cluster resource definitions
+### Application cluster resource definitions
 
-`jobmanager-job.yaml`
+`jobmanager-application.yaml`
 {% highlight yaml %}
 apiVersion: batch/v1
 kind: Job
diff --git a/docs/deployment/resource-providers/standalone/kubernetes.zh.md b/docs/deployment/resource-providers/standalone/kubernetes.zh.md
index 7706b1d..cb08b1e 100644
--- a/docs/deployment/resource-providers/standalone/kubernetes.zh.md
+++ b/docs/deployment/resource-providers/standalone/kubernetes.zh.md
@@ -23,154 +23,180 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a *Flink Job* and *Session cluster* on [Kubernetes](https://kubernetes.io).
-
 * This will be replaced by the TOC
 {:toc}
 
-{% info %} This page describes deploying a [standalone]({% link deployment/resource-providers/standalone/index.zh.md %}) Flink cluster on top of Kubernetes.
-You can find more information on native Kubernetes deployments [here]({% link deployment/resource-providers/native_kubernetes.zh.md %}).
-
-## Setup Kubernetes
-
-Please follow [Kubernetes' setup guide](https://kubernetes.io/docs/setup/) in order to deploy a Kubernetes cluster.
-If you want to run Kubernetes locally, we recommend using [MiniKube](https://kubernetes.io/docs/setup/minikube/).
-
-<div class="alert alert-info" markdown="span">
-  <strong>Note:</strong> If using MiniKube please make sure to execute `minikube ssh 'sudo ip link set docker0 promisc on'` before deploying a Flink cluster.
-  Otherwise Flink components are not able to self reference themselves through a Kubernetes service.
-</div>
 
-## Flink Docker image
+## Getting Started
 
-Before deploying the Flink Kubernetes components, please read [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.zh.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.zh.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.zh.md %}#customize-flink-image) and
-[enable plugins]({% link deployment/resource-providers/standalone/docker.zh.md %}#using-plugins) to use the image in the Kubernetes definition files.
+This *Getting Started* guide describes how to deploy a *Session cluster* on [Kubernetes](https://kubernetes.io).
 
-## Deploy Flink cluster on Kubernetes
+### Introduction
 
-Using [the common resource definitions](#common-cluster-resource-definitions), launch the common cluster components
-with the `kubectl` command:
+This page describes deploying a [standalone]({% link deployment/resource-providers/standalone/index.zh.md %}) Flink cluster on top of Kubernetes, using Flink's standalone deployment.
+We generally recommend that new users deploy Flink on Kubernetes using the [native Kubernetes deployment]({% link deployment/resource-providers/native_kubernetes.zh.md %}).
 
-```sh
-    kubectl create -f flink-configuration-configmap.yaml
-    kubectl create -f jobmanager-service.yaml
-```
+### Preparation
 
-Note that you could define your own customized options of `flink-conf.yaml` within `flink-configuration-configmap.yaml`.
+This guide expects a Kubernetes environment to be present. You can ensure that your Kubernetes setup is working by running a command like `kubectl get nodes`, which lists all connected Kubelets. 
 
-Then launch the specific components depending on whether you want to deploy a [Session](#deploy-session-cluster) or [Job](#deploy-job-cluster) cluster.
+If you want to run Kubernetes locally, we recommend using [MiniKube](https://minikube.sigs.k8s.io/docs/start/).
 
-You can then access the Flink UI via different ways:
-*  `kubectl proxy`:
+<div class="alert alert-info" markdown="span">
+  <strong>Note:</strong> If using MiniKube please make sure to execute `minikube ssh 'sudo ip link set docker0 promisc on'` before deploying a Flink cluster.
+  Otherwise Flink components are not able to reference themselves through a Kubernetes service.
+</div>
 
-    1. Run `kubectl proxy` in a terminal.
-    2. Navigate to [http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy) in your browser.
+### Starting a Kubernetes Cluster (Session Mode)
 
-*  `kubectl port-forward`:
-    1. Run `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` to forward your jobmanager's web ui port to local 8081.
-    2. Navigate to [http://localhost:8081](http://localhost:8081) in your browser.
-    3. Moreover, you could use the following command below to submit jobs to the cluster:
-    {% highlight bash %}./bin/flink run -m localhost:8081 ./examples/streaming/WordCount.jar{% endhighlight %}
-
-*  Create a `NodePort` service on the rest service of jobmanager:
-    1. Run `kubectl create -f jobmanager-rest-service.yaml` to create the `NodePort` service on jobmanager. The example of `jobmanager-rest-service.yaml` can be found in [appendix](#common-cluster-resource-definitions).
-    2. Run `kubectl get svc flink-jobmanager-rest` to know the `node-port` of this service and navigate to [http://&lt;public-node-ip&gt;:&lt;node-port&gt;](http://<public-node-ip>:<node-port>) in your browser.
-    3. If you use minikube, you can get its public ip by running `minikube ip`.
-    4. Similarly to the `port-forward` solution, you could also use the following command below to submit jobs to the cluster:
+A *Flink Session cluster* is executed as a long-running Kubernetes Deployment. You can run multiple Flink jobs on a *Session cluster*.
+Each job needs to be submitted to the cluster after the cluster has been deployed.
 
-        {% highlight bash %}./bin/flink run -m <public-node-ip>:<node-port> ./examples/streaming/WordCount.jar{% endhighlight %}
+A *Flink Session cluster* deployment in Kubernetes has at least three components:
 
-You can also access the queryable state of TaskManager if you create a `NodePort` service for it:
-  1. Run `kubectl create -f taskmanager-query-state-service.yaml` to create the `NodePort` service on taskmanager. The example of `taskmanager-query-state-service.yaml` can be found in [appendix](#common-cluster-resource-definitions).
-  2. Run `kubectl get svc flink-taskmanager-query-state` to know the `node-port` of this service. Then you can create the [QueryableStateClient(&lt;public-node-ip&gt;, &lt;node-port&gt;]({% link dev/stream/state/queryable_state.zh.md %}#querying-state) to submit the state queries.
+* a *Deployment* which runs a [JobManager]({% link concepts/glossary.zh.md %}#flink-jobmanager)
+* a *Deployment* for a pool of [TaskManagers]({% link concepts/glossary.zh.md %}#flink-taskmanager)
+* a *Service* exposing the *JobManager's* REST and UI ports
 
-In order to terminate the Flink cluster, delete the specific [Session](#deploy-session-cluster) or [Job](#deploy-job-cluster) cluster components
-and use `kubectl` to terminate the common components:
+Using the file contents provided in [the common resource definitions](#common-cluster-resource-definitions), create the following files and deploy the respective components with the `kubectl` command:
 
 ```sh
-    kubectl delete -f jobmanager-service.yaml
-    kubectl delete -f flink-configuration-configmap.yaml
-    # if created then also the rest service
-    kubectl delete -f jobmanager-rest-service.yaml
-    # if created then also the queryable state service
-    kubectl delete -f taskmanager-query-state-service.yaml
+    # Configuration and service definition
+    $ kubectl create -f flink-configuration-configmap.yaml
+    $ kubectl create -f jobmanager-service.yaml
+    # Create the deployments for the cluster
+    $ kubectl create -f jobmanager-session-deployment.yaml
+    $ kubectl create -f taskmanager-session-deployment.yaml
 ```
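+
+Before continuing, you can verify that the deployments came up correctly, for example by waiting for the JobManager rollout to finish (the deployment name `flink-jobmanager` matches the resource definitions in the [appendix](#common-cluster-resource-definitions)):
+
+{% highlight bash %}
+$ kubectl rollout status deployment/flink-jobmanager
+deployment "flink-jobmanager" successfully rolled out
+{% endhighlight %}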
 
-### Deploy Session Cluster
+Next, we set up a port forward to access the Flink UI and submit jobs:
 
-A *Flink Session cluster* is executed as a long-running Kubernetes Deployment.
-Note that you can run multiple Flink jobs on a *Session cluster*.
-Each job needs to be submitted to the cluster after the cluster has been deployed.
+1. Run `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` to forward your JobManager's web UI port to local port 8081.
+2. Navigate to [http://localhost:8081](http://localhost:8081) in your browser.
+3. You can also use the following command to submit jobs to the cluster:
+{% highlight bash %}
+$ ./bin/flink run -m localhost:8081 ./examples/streaming/TopSpeedWindowing.jar
+{% endhighlight %}
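+
+Assuming the port forward from step 1 is still active, you can also check via the CLI that the job is running (`-m` points at the forwarded JobManager endpoint):
+
+{% highlight bash %}
+$ ./bin/flink list -m localhost:8081
+{% endhighlight %}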
 
-A *Flink Session cluster* deployment in Kubernetes has at least three components:
 
-* a *Deployment* which runs a [JobManager]({% link concepts/glossary.zh.md %}#flink-jobmanager)
-* a *Deployment* for a pool of [TaskManagers]({% link concepts/glossary.zh.md %}#flink-taskmanager)
-* a *Service* exposing the *JobManager's* REST and UI ports
 
-After creating [the common cluster components](#deploy-flink-cluster-on-kubernetes), use [the Session specific resource definitions](#session-cluster-resource-definitions)
-to launch the *Session cluster* with the `kubectl` command:
+You can tear down the cluster using the following commands:
 
 ```sh
-    kubectl create -f jobmanager-session-deployment.yaml
-    kubectl create -f taskmanager-session-deployment.yaml
+    $ kubectl delete -f jobmanager-service.yaml
+    $ kubectl delete -f flink-configuration-configmap.yaml
+    $ kubectl delete -f taskmanager-session-deployment.yaml
+    $ kubectl delete -f jobmanager-session-deployment.yaml
 ```
 
-To terminate the *Session cluster*, these components can be deleted along with [the common ones](#deploy-flink-cluster-on-kubernetes) with the `kubectl` command:
 
-```sh
-    kubectl delete -f taskmanager-session-deployment.yaml
-    kubectl delete -f jobmanager-session-deployment.yaml
-```
+{% top %}
+
+## Deployment Modes
 
-### Deploy Job Cluster
+### Application Mode
 
-A *Flink Job cluster* is a dedicated cluster which runs a single job.
-You can find more details [here](#start-a-job-cluster).
+A *Flink Application cluster* is a dedicated cluster which runs a single application.
 
-A basic *Flink Job cluster* deployment in Kubernetes has three components:
+A basic *Flink Application cluster* deployment in Kubernetes has three components:
 
-* a *Job* which runs a *JobManager*
+* an *Application* which runs a *JobManager*
 * a *Deployment* for a pool of *TaskManagers*
 * a *Service* exposing the *JobManager's* REST and UI ports
 
-Check [the Job cluster specific resource definitions](#job-cluster-resource-definitions) and adjust them accordingly.
+Check [the Application cluster specific resource definitions](#application-cluster-resource-definitions) and adjust them accordingly:
 
 The `args` attribute in the `jobmanager-job.yaml` has to specify the main class of the user job.
 See also [how to specify the JobManager arguments]({% link deployment/resource-providers/standalone/docker.zh.md %}#jobmanager-additional-command-line-arguments) to understand
 how to pass other `args` to the Flink image in the `jobmanager-job.yaml`.
 
-The *job artifacts* should be available from the `job-artifacts-volume` in [the resource definition examples](#job-cluster-resource-definitions).
+The *job artifacts* should be available from the `job-artifacts-volume` in [the resource definition examples](#application-cluster-resource-definitions).
 The definition examples mount the volume as a local directory of the host assuming that you create the components in a minikube cluster.
 If you do not use a minikube cluster, you can use any other type of volume, available in your Kubernetes cluster, to supply the *job artifacts*.
-Alternatively, you can build [a custom image]({% link deployment/resource-providers/standalone/docker.zh.md %}#start-a-job-cluster) which already contains the artifacts instead.
+Alternatively, you can build [a custom image]({% link deployment/resource-providers/standalone/docker.zh.md %}#advanced-customization) which already contains the artifacts instead.
 
-After creating [the common cluster components](#deploy-flink-cluster-on-kubernetes), use [the Job cluster specific resource definitions](#job-cluster-resource-definitions)
-to launch the cluster with the `kubectl` command:
+After creating [the common cluster components](#common-cluster-resource-definitions), use [the Application cluster specific resource definitions](#application-cluster-resource-definitions) to launch the cluster with the `kubectl` command:
 
 ```sh
-    kubectl create -f jobmanager-job.yaml
-    kubectl create -f taskmanager-job-deployment.yaml
+    $ kubectl create -f jobmanager-job.yaml
+    $ kubectl create -f taskmanager-job-deployment.yaml
 ```
 
-To terminate the single job cluster, these components can be deleted along with [the common ones](#deploy-flink-cluster-on-kubernetes)
+To terminate the Application cluster, these components can be deleted along with [the common ones](#common-cluster-resource-definitions)
 with the `kubectl` command:
 
 ```sh
-    kubectl delete -f taskmanager-job-deployment.yaml
-    kubectl delete -f jobmanager-job.yaml
+    $ kubectl delete -f taskmanager-job-deployment.yaml
+    $ kubectl delete -f jobmanager-job.yaml
 ```
 
-## High-Availability with Standalone Kubernetes
+### Per-Job Cluster Mode
+
+Flink on Standalone Kubernetes does not support the Per-Job Cluster Mode.
 
-For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.zh.md %}).
+### Session Mode
+
+Deployment of a Session cluster is explained in the [Getting Started](#getting-started) guide at the top of this page.
+
+{% top %}
+
+## Flink on Standalone Kubernetes Reference
+
+### Configuration
+
+All configuration options are listed on the [configuration page]({% link deployment/config.zh.md %}). They can be added to the `flink-conf.yaml` section of the `flink-configuration-configmap.yaml` config map.
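+
+For example, changing the number of task slots per TaskManager amounts to editing the `flink-conf.yaml` entry of that config map. A minimal sketch of `flink-configuration-configmap.yaml` (only an excerpt; the full definition, including the logging configuration, is listed in the [appendix](#common-cluster-resource-definitions)):
+
+{% highlight yaml %}
+apiVersion: v1
+kind: ConfigMap
+metadata:
+  name: flink-config
+data:
+  flink-conf.yaml: |+
+    jobmanager.rpc.address: flink-jobmanager
+    taskmanager.numberOfTaskSlots: 4
+    ...
+{% endhighlight %}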
+
+### Accessing Flink in Kubernetes
+
+You can access the Flink UI and submit jobs in several ways:
+*  `kubectl proxy`:
+
+    1. Run `kubectl proxy` in a terminal.
+    2. Navigate to [http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:webui/proxy) in your browser.
+
+*  `kubectl port-forward`:
+    1. Run `kubectl port-forward ${flink-jobmanager-pod} 8081:8081` to forward your JobManager's web UI port to local port 8081.
+    2. Navigate to [http://localhost:8081](http://localhost:8081) in your browser.
+    3. You can also use the following command to submit jobs to the cluster:
+    {% highlight bash %}
+    $ ./bin/flink run -m localhost:8081 ./examples/streaming/TopSpeedWindowing.jar
+    {% endhighlight %}
+
+*  Create a `NodePort` service on the rest service of jobmanager:
+    1. Run `kubectl create -f jobmanager-rest-service.yaml` to create the `NodePort` service on the JobManager. An example of `jobmanager-rest-service.yaml` can be found in the [appendix](#common-cluster-resource-definitions).
+    2. Run `kubectl get svc flink-jobmanager-rest` to find the `node-port` of this service (see the example output after this list) and navigate to [http://&lt;public-node-ip&gt;:&lt;node-port&gt;](http://<public-node-ip>:<node-port>) in your browser.
+    3. If you use minikube, you can get its public IP by running `minikube ip`.
+    4. As with the `port-forward` solution, you can also use the following command to submit jobs to the cluster:
+
+    {% highlight bash %}
+    $ ./bin/flink run -m <public-node-ip>:<node-port> ./examples/streaming/TopSpeedWindowing.jar
+    {% endhighlight %}
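+
+For illustration, the service lookup from step 2 might print something like the following, where `8081:30081/TCP` means that the REST port 8081 is exposed as node port 30081 (the actual node port is assigned by Kubernetes and will differ):
+
+{% highlight bash %}
+$ kubectl get svc flink-jobmanager-rest
+NAME                    TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
+flink-jobmanager-rest   NodePort   10.96.67.28   <none>        8081:30081/TCP   2m
+{% endhighlight %}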
+
+### Debugging and Log Access
+
+Many common errors are easy to detect by checking Flink's log files. If you have access to Flink's web user interface, you can access the JobManager and TaskManager logs from there.
 
-### How to configure Kubernetes HA Services
+If there are problems starting Flink, you can also use Kubernetes utilities to access the logs. Use `kubectl get pods` to see all running pods.
+For the quickstart example from above, you should see three pods:
+```
+$ kubectl get pods
+NAME                                 READY   STATUS             RESTARTS   AGE
+flink-jobmanager-589967dcfc-m49xv    1/1     Running            3          3m32s
+flink-taskmanager-64847444ff-7rdl4   1/1     Running            3          3m28s
+flink-taskmanager-64847444ff-nnd6m   1/1     Running            3          3m28s
+```
+
+You can now access the logs by running `kubectl logs flink-jobmanager-589967dcfc-m49xv` (append `-f` to follow the log output).
 
-Session Mode, Per-Job Mode, and Application Mode clusters support using the Kubernetes high availability service. Users just need to add the following Flink config options to [flink-configuration-configmap.yaml](#common-cluster-resource-definitions). All other yamls do not need to be updated.
+### High-Availability with Standalone Kubernetes
+
+For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.zh.md %}).
 
-<span class="label label-info">Note</span> The filesystem which corresponds to the scheme of your configured HA storage directory must be available to the runtime. Refer to [custom Flink image]({% link deployment/resource-providers/standalone/docker.zh.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.zh.md %}#using-plugins) for more information.
+Session Mode and Application Mode clusters support using the Kubernetes high availability service. Users only need to add the following Flink config options to [flink-configuration-configmap.yaml](#common-cluster-resource-definitions); all other resource definitions remain unchanged.
+
+<span class="label label-info">Note</span> The filesystem which corresponds to the scheme of your configured HA storage directory must be available to the runtime. Refer to [custom Flink image]({% link deployment/resource-providers/standalone/docker.zh.md %}#advanced-customization) and [enable plugins]({% link deployment/resource-providers/standalone/docker.zh.md %}#using-filesystem-plugins) for more information.
 
 {% highlight yaml %}
 apiVersion: v1
@@ -190,6 +216,13 @@ data:
   ...
 {% endhighlight %}
 
+### Enabling Queryable State
+
+You can access the queryable state of a TaskManager if you create a `NodePort` service for it:
+  1. Run `kubectl create -f taskmanager-query-state-service.yaml` to create the `NodePort` service for the `taskmanager` pods. An example of `taskmanager-query-state-service.yaml` can be found in the [appendix](#common-cluster-resource-definitions).
+  2. Run `kubectl get svc flink-taskmanager-query-state` to get the `<node-port>` of this service (see the example output below). Then you can create the [QueryableStateClient(&lt;public-node-ip&gt;, &lt;node-port&gt;)]({% link dev/stream/state/queryable_state.zh.md %}#querying-state) to submit state queries.
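+
+For illustration, the service lookup from step 2 might print something like the following, where `30025` would be the `<node-port>` to pass to the `QueryableStateClient` (the cluster IP and node port shown here are illustrative):
+
+{% highlight bash %}
+$ kubectl get svc flink-taskmanager-query-state
+NAME                            TYPE       CLUSTER-IP    EXTERNAL-IP   PORT(S)          AGE
+flink-taskmanager-query-state   NodePort   10.96.12.97   <none>        6125:30025/TCP   1m
+{% endhighlight %}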
+
+{% top %}
 
 ## Appendix
 
@@ -417,9 +450,9 @@ spec:
             path: log4j-console.properties
 {% endhighlight %}
 
-### Job cluster resource definitions
+### Application cluster resource definitions
 
-`jobmanager-job.yaml`
+`jobmanager-application.yaml`
 {% highlight yaml %}
 apiVersion: batch/v1
 kind: Job
diff --git a/docs/deployment/resource-providers/standalone/local.md b/docs/deployment/resource-providers/standalone/local.md
deleted file mode 100644
index 1d7f579..0000000
--- a/docs/deployment/resource-providers/standalone/local.md
+++ /dev/null
@@ -1,169 +0,0 @@
----
-title: "Local Cluster"
-nav-title: 'Local Cluster'
-nav-parent_id: standalone
-nav-pos: 2
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Get a local Flink cluster up and running in a few simple steps.
-
-* This will be replaced by the TOC
-{:toc}
-
-## Setup: Download and Start Flink
-
-Flink runs on __Linux and Mac OS X__.
-<span class="label label-info">Note:</span> Windows users can run Flink in Cygwin or WSL.
-To be able to run Flink, the only requirement is to have a working __Java 8 or 11__ installation.
-
-You can check the correct installation of Java by issuing the following command:
-
-{% highlight bash %}
-java -version
-{% endhighlight %}
-
-If you have Java 8, the output will look something like this:
-
-{% highlight bash %}
-java version "1.8.0_111"
-Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
-Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
-{% endhighlight %}
-
-{% if site.is_stable %}
-<div class="codetabs" markdown="1">
-<div data-lang="Download and Unpack" markdown="1">
-1. Download a binary from the [downloads page](https://flink.apache.org/downloads.html). You can pick
-   any Scala variant you like. For certain features you may also have to download one of the pre-bundled Hadoop jars
-   and place them into the `/lib` directory.
-2. Go to the download directory.
-3. Unpack the downloaded archive.
-
-{% highlight bash %}
-$ cd ~/Downloads        # Go to download directory
-$ tar xzf flink-*.tgz   # Unpack the downloaded archive
-$ cd flink-{{site.version}}
-{% endhighlight %}
-</div>
-
-<div data-lang="MacOS X" markdown="1">
-For MacOS X users, Flink can be installed through [Homebrew](https://brew.sh/).
-
-{% highlight bash %}
-$ brew install apache-flink
-...
-$ flink --version
-Version: 1.2.0, Commit ID: 1c659cf
-{% endhighlight %}
-</div>
-
-</div>
-
-{% else %}
-### Download and Compile
-Clone the source code from one of our [repositories](https://flink.apache.org/community.html#source-code), e.g.:
-
-{% highlight bash %}
-$ git clone https://github.com/apache/flink.git
-$ cd flink
-
-# Building Flink will take up to 25 minutes
-# You can speed up the build by skipping the
-# web ui by passing the flag '-Pskip-webui-build'
-
-$ mvn clean package -DskipTests -Pfast 
-
-# This is where Flink is installed
-$ cd build-target               
-{% endhighlight %}
-{% endif %}
-
-### Start a Local Flink Cluster
-
-{% highlight bash %}
-$ ./bin/start-cluster.sh  # Start Flink
-{% endhighlight %}
-
-Check the __JobManager's web frontend__ at [http://localhost:8081](http://localhost:8081) and make sure everything is up and running. The web frontend should report a single available TaskManager instance.
-
-<a href="{% link /page/img/quickstart-setup/jobmanager-1.png %}" ><img class="img-responsive" src="{% link /page/img/quickstart-setup/jobmanager-1.png %}" alt="JobManager: Overview"/></a>
-
-You can also verify that the system is running by checking the log files in the `logs` directory:
-
-{% highlight bash %}
-$ tail log/flink-*-standalonesession-*.log
-INFO ... - Rest endpoint listening at localhost:8081
-INFO ... - http://localhost:8081 was granted leadership ...
-INFO ... - Web frontend listening at http://localhost:8081.
-INFO ... - Starting RPC endpoint for StandaloneResourceManager at akka://flink/user/resourcemanager .
-INFO ... - Starting RPC endpoint for StandaloneDispatcher at akka://flink/user/dispatcher .
-INFO ... - ResourceManager akka.tcp://flink@localhost:6123/user/resourcemanager was granted leadership ...
-INFO ... - Starting the SlotManager.
-INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was granted leadership ...
-INFO ... - Recovering all persisted jobs.
-INFO ... - Registering TaskManager ... at ResourceManager
-{% endhighlight %}
-
-#### Windows Cygwin Users
-
-If you are installing Flink from the git repository and you are using the Windows git shell, Cygwin can produce a failure similar to this one:
-
-{% highlight bash %}
-c:/flink/bin/start-cluster.sh: line 30: $'\r': command not found
-{% endhighlight %}
-
-This error occurs because git is automatically transforming UNIX line endings to Windows style line endings when running in Windows. The problem is that Cygwin can only deal with UNIX style line endings. The solution is to adjust the Cygwin settings to deal with the correct line endings by following these three steps:
-
-1. Start a Cygwin shell.
-
-2. Determine your home directory by entering
-
-{% highlight bash %}
-cd; pwd
-{% endhighlight %}
-
-    This will return a path under the Cygwin root path.
-
-3. Using NotePad, WordPad or a different text editor open the file `.bash_profile` in the home directory and append the following: (If the file does not exist you will have to create it)
-
-{% highlight bash %}
-export SHELLOPTS
-set -o igncr
-{% endhighlight %}
-
-Save the file and open a new bash shell.
-
-### Stop a Local Flink Cluster
-
-To **stop** Flink when you're done type:
-
-<div class="codetabs" markdown="1">
-<div data-lang="Bash" markdown="1">
-{% highlight bash %}
-$ ./bin/stop-cluster.sh
-{% endhighlight %}
-</div>
-<div data-lang="Windows Shell" markdown="1">
-You can terminate the processes via CTRL-C in the spawned shell windows.
-</div>
-</div>
-
-{% top %}
diff --git a/docs/deployment/resource-providers/standalone/local.zh.md b/docs/deployment/resource-providers/standalone/local.zh.md
deleted file mode 100755
index 4a4c6fe..0000000
--- a/docs/deployment/resource-providers/standalone/local.zh.md
+++ /dev/null
@@ -1,178 +0,0 @@
----
-title: "本地集群"
-nav-title: '本地集群'
-nav-parent_id: standalone
-nav-pos: 2
----
-<!--
-Licensed to the Apache Software Foundation (ASF) under one
-or more contributor license agreements.  See the NOTICE file
-distributed with this work for additional information
-regarding copyright ownership.  The ASF licenses this file
-to you under the Apache License, Version 2.0 (the
-"License"); you may not use this file except in compliance
-with the License.  You may obtain a copy of the License at
-
-  http://www.apache.org/licenses/LICENSE-2.0
-
-Unless required by applicable law or agreed to in writing,
-software distributed under the License is distributed on an
-"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
-KIND, either express or implied.  See the License for the
-specific language governing permissions and limitations
-under the License.
--->
-
-Get a local Flink cluster up and running in a few simple steps.
-
-* This will be replaced by the TOC
-{:toc}
-
-<a name="setup-download-and-start-flink"></a>
-
-## Setup: Download and Start Flink
-
-Flink runs on __Linux and Mac OS X__.
-<span class="label label-info">Note:</span> Windows users can run Flink in Cygwin or WSL.
-
-To run Flink, the only requirement is to have a working __Java 8 or 11__ installation.
-
-You can check the correct installation of Java by issuing the following command:
-
-{% highlight bash %}
-java -version
-{% endhighlight %}
-
-If you have Java 8, the output will look something like this:
-
-{% highlight bash %}
-java version "1.8.0_111"
-Java(TM) SE Runtime Environment (build 1.8.0_111-b14)
-Java HotSpot(TM) 64-Bit Server VM (build 25.111-b14, mixed mode)
-{% endhighlight %}
-
-{% if site.is_stable %}
-<div class="codetabs" markdown="1">
-<div data-lang="下载并解压" markdown="1">
-1. 从[下载页面](https://flink.apache.org/downloads.html)下载一个二进制发行版本。你可以根据需要选择不同的 Scala 版本。对于某些功能,你可能还需要下载一个预先打包好的 Hadoop  jar 文件,并把它放到 `lib` 目录下。
-2. 前往下载目录。
-3. 解压下载的文档。
-
-{% highlight bash %}
-$ cd ~/Downloads        # Go to download directory
-$ tar xzf flink-*.tgz   # Unpack the downloaded archive
-$ cd flink-{{site.version}}
-{% endhighlight %}
-</div>
-
-<div data-lang="MacOS X" markdown="1">
-For MacOS X users, Flink can be installed through [Homebrew](https://brew.sh/).
-
-{% highlight bash %}
-$ brew install apache-flink
-...
-$ flink --version
-Version: 1.2.0, Commit ID: 1c659cf
-{% endhighlight %}
-</div>
-
-</div>
-
-{% else %}
-<a name="download-and-compile"></a>
-
-### Download and Compile
-You can clone the source code from one of our [repositories](https://flink.apache.org/community.html#source-code), e.g.:
-
-{% highlight bash %}
-$ git clone https://github.com/apache/flink.git
-$ cd flink
-
-# Building Flink can take up to 25 minutes
-# You can speed up the build by skipping the
-# web ui by passing the flag '-Pskip-webui-build'
-
-$ mvn clean package -DskipTests -Pfast 
-
-# This is where Flink is installed
-$ cd build-target               
-{% endhighlight %}
-{% endif %}
-
-<a name="start-a-local-flink-cluster"></a>
-
-### Start a Local Flink Cluster
-
-{% highlight bash %}
-$ ./bin/start-cluster.sh  # Start Flink
-{% endhighlight %}
-
-Check the __JobManager's web frontend__ at [http://localhost:8081](http://localhost:8081) to make sure the local cluster is up and running. The web frontend should report a single available TaskManager instance.
-
-<a href="{% link /page/img/quickstart-setup/jobmanager-1.png %}" ><img class="img-responsive" src="{% link /page/img/quickstart-setup/jobmanager-1.png %}" alt="JobManager: Overview"/></a>
-
-You can also verify that the system is running by checking the log files in the `logs` directory:
-
-{% highlight bash %}
-$ tail log/flink-*-standalonesession-*.log
-INFO ... - Rest endpoint listening at localhost:8081
-INFO ... - http://localhost:8081 was granted leadership ...
-INFO ... - Web frontend listening at http://localhost:8081.
-INFO ... - Starting RPC endpoint for StandaloneResourceManager at akka://flink/user/resourcemanager .
-INFO ... - Starting RPC endpoint for StandaloneDispatcher at akka://flink/user/dispatcher .
-INFO ... - ResourceManager akka.tcp://flink@localhost:6123/user/resourcemanager was granted leadership ...
-INFO ... - Starting the SlotManager.
-INFO ... - Dispatcher akka.tcp://flink@localhost:6123/user/dispatcher was granted leadership ...
-INFO ... - Recovering all persisted jobs.
-INFO ... - Registering TaskManager ... at ResourceManager
-{% endhighlight %}
-
-<a name="windows-cygwin-users"></a>
-
-#### Windows Cygwin Users
-
-If you are installing Flink from the git repository using the Windows git shell, Cygwin may produce an error similar to this one:
-
-{% highlight bash %}
-c:/flink/bin/start-cluster.sh: line 30: $'\r': command not found
-{% endhighlight %}
-
-This error occurs because git automatically transforms UNIX line endings into Windows-style line endings when running on Windows, while Cygwin can only deal with UNIX-style line endings. The solution is to adjust the Cygwin settings to deal with the correct line endings by following these three steps:
-
-1. Start a Cygwin shell.
-
-2. Determine your home directory by entering
-
-{% highlight bash %}
-cd; pwd
-{% endhighlight %}
-
-    This will return a path under the Cygwin root path.
-
-3. Using NotePad, WordPad or another text editor, open the file `.bash_profile` in the home directory and append the following (create the file if it does not exist):
-
-{% highlight bash %}
-export SHELLOPTS
-set -o igncr
-{% endhighlight %}
-
-Save the file and open a new bash shell.
-
-<a name="stop-a-local-flink-cluster"></a>
-
-### Stop a Local Flink Cluster
-
-To **stop** Flink, run the following command:
-
-<div class="codetabs" markdown="1">
-<div data-lang="Bash" markdown="1">
-{% highlight bash %}
-$ ./bin/stop-cluster.sh
-{% endhighlight %}
-</div>
-<div data-lang="Windows Shell" markdown="1">
-You can terminate the processes via CTRL-C in the spawned shell windows.
-</div>
-</div>
-
-{% top %}
diff --git a/docs/redirects/local_setup_tutorial.md b/docs/redirects/local_setup_tutorial.md
index 11512bb..084b024 100644
--- a/docs/redirects/local_setup_tutorial.md
+++ b/docs/redirects/local_setup_tutorial.md
@@ -1,7 +1,7 @@
 ---
 title: "Local Setup Tutorial"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /getting-started/tutorials/local_setup.html
 ---
 <!--
diff --git a/docs/redirects/local_setup_tutorial.zh.md b/docs/redirects/local_setup_tutorial.zh.md
index 11512bb..084b024 100644
--- a/docs/redirects/local_setup_tutorial.zh.md
+++ b/docs/redirects/local_setup_tutorial.zh.md
@@ -1,7 +1,7 @@
 ---
 title: "Local Setup Tutorial"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /getting-started/tutorials/local_setup.html
 ---
 <!--
diff --git a/docs/redirects/setup_quickstart.md b/docs/redirects/setup_quickstart.md
index 98de38c..976c870 100644
--- a/docs/redirects/setup_quickstart.md
+++ b/docs/redirects/setup_quickstart.md
@@ -1,7 +1,7 @@
 ---
 title: "Local Setup Tutorial"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /quickstart/setup_quickstart.html
 ---
 <!--
diff --git a/docs/redirects/setup_quickstart.zh.md b/docs/redirects/setup_quickstart.zh.md
index 98de38c..976c870 100644
--- a/docs/redirects/setup_quickstart.zh.md
+++ b/docs/redirects/setup_quickstart.zh.md
@@ -1,7 +1,7 @@
 ---
 title: "Local Setup Tutorial"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /quickstart/setup_quickstart.html
 ---
 <!--
diff --git a/docs/redirects/tutorials_flink_on_windows.md b/docs/redirects/tutorials_flink_on_windows.md
index 155b719..aa9663a 100644
--- a/docs/redirects/tutorials_flink_on_windows.md
+++ b/docs/redirects/tutorials_flink_on_windows.md
@@ -1,7 +1,7 @@
 ---
 title: "Flink On Windows"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /tutorials/flink_on_windows.html
 ---
 <!--
diff --git a/docs/redirects/tutorials_flink_on_windows.zh.md b/docs/redirects/tutorials_flink_on_windows.zh.md
index 155b719..aa9663a 100644
--- a/docs/redirects/tutorials_flink_on_windows.zh.md
+++ b/docs/redirects/tutorials_flink_on_windows.zh.md
@@ -1,7 +1,7 @@
 ---
 title: "Flink On Windows"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /tutorials/flink_on_windows.html
 ---
 <!--
diff --git a/docs/redirects/tutorials_local_setup.md b/docs/redirects/tutorials_local_setup.md
index e9b73f9..ac93c04 100644
--- a/docs/redirects/tutorials_local_setup.md
+++ b/docs/redirects/tutorials_local_setup.md
@@ -1,7 +1,7 @@
 ---
 title: "Local Setup"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /tutorials/local_setup.html
 ---
 <!--
diff --git a/docs/redirects/tutorials_local_setup.zh.md b/docs/redirects/tutorials_local_setup.zh.md
index e9b73f9..ac93c04 100644
--- a/docs/redirects/tutorials_local_setup.zh.md
+++ b/docs/redirects/tutorials_local_setup.zh.md
@@ -1,7 +1,7 @@
 ---
 title: "Local Setup"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /tutorials/local_setup.html
 ---
 <!--
diff --git a/docs/redirects/windows.md b/docs/redirects/windows.md
index 24d21c5..22d4857 100644
--- a/docs/redirects/windows.md
+++ b/docs/redirects/windows.md
@@ -1,7 +1,7 @@
 ---
 title: "Running Flink on Windows"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /start/flink_on_windows.html
 ---
 <!--
diff --git a/docs/redirects/windows.zh.md b/docs/redirects/windows.zh.md
index 24d21c5..22d4857 100644
--- a/docs/redirects/windows.zh.md
+++ b/docs/redirects/windows.zh.md
@@ -1,7 +1,7 @@
 ---
 title: "Running Flink on Windows"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /start/flink_on_windows.html
 ---
 <!--
diff --git a/docs/redirects/windows_local_setup.md b/docs/redirects/windows_local_setup.md
index 643f9d8..59a5bf7 100644
--- a/docs/redirects/windows_local_setup.md
+++ b/docs/redirects/windows_local_setup.md
@@ -1,7 +1,7 @@
 ---
 title: "Running Flink on Windows"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /getting-started/tutorials/flink_on_windows.html
 ---
 <!--
diff --git a/docs/redirects/windows_local_setup.zh.md b/docs/redirects/windows_local_setup.zh.md
index 643f9d8..59a5bf7 100644
--- a/docs/redirects/windows_local_setup.zh.md
+++ b/docs/redirects/windows_local_setup.zh.md
@@ -1,7 +1,7 @@
 ---
 title: "Running Flink on Windows"
 layout: redirect
-redirect: deployment/resource-providers/standalone/local
+redirect: deployment/resource-providers/standalone/index
 permalink: /getting-started/tutorials/flink_on_windows.html
 ---
 <!--