Posted to issues@flink.apache.org by GitBox <gi...@apache.org> on 2020/12/03 17:16:15 UTC

[GitHub] [flink] tillrohrmann opened a new pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

tillrohrmann opened a new pull request #14305:
URL: https://github.com/apache/flink/pull/14305


   Rework of the existing native K8s documentation.
   
   cc @rmetzger, @XComp, @wangyang0918 
   
   ![localhost_4000_deployment_resource-providers_native_kubernetes html](https://user-images.githubusercontent.com/5756858/101063783-856e6000-3593-11eb-8d8e-9512ee3bf629.png)
   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-740479323


   Thanks again for the review @XComp. I've addressed your comments and squashed all fixup commits.
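
   For readers unfamiliar with the fixup workflow mentioned above, a minimal sketch of how such commits are usually created and squashed with Git (the base branch `origin/master` is an assumption):

{% highlight bash %}
# Create a fixup commit that targets an earlier commit;
# Git prefixes its message with "fixup!"
$ git commit --fixup=<commit-sha>

# Interactively rebase onto the base branch; --autosquash reorders and
# squashes all "fixup!" commits into their targets automatically
$ git rebase -i --autosquash origin/master
{% endhighlight %}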





[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537386611



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster

Review comment:
       Alright.
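
As a side note for readers trying the commands in this hunk: after step (1) the JobManager runs as a Kubernetes deployment named after the cluster id, so the cluster can be inspected with plain `kubectl` before and after submitting the job. A sketch, assuming the `app=<cluster-id>` label that the native integration attaches to its pods:

{% highlight bash %}
# The JobManager deployment carries the cluster id from step (1)
$ kubectl get deployment my-first-flink-cluster

# TaskManager pods are allocated dynamically once the job from
# step (2) requests resources (app label is an assumption)
$ kubectl get pods -l app=my-first-flink-cluster
{% endhighlight %}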







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537395398



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.

Review comment:
       Yes, your suggestion is much better. Thanks!
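
To make the quoted walkthrough concrete, the "creating and publishing" step that precedes `run-application` could look as follows. A sketch only: `custom-image-name` comes from the diff above, while the registry host is a placeholder:

{% highlight bash %}
# Build the image from the Dockerfile shown in the hunk
# (it bundles my-flink-job.jar under $FLINK_HOME/usrlib)
$ docker build -t custom-image-name .

# Publish it so the Kubernetes nodes can pull it
# (registry.example.com is a placeholder)
$ docker tag custom-image-name registry.example.com/custom-image-name
$ docker push registry.example.com/custom-image-name
{% endhighlight %}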







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537667852



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the Job Manager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.

Review comment:
       I'd go with "please refer to the official..."
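
For the `ClusterIP` option discussed in this hunk, the local proxy step could look like the following sketch. The service name follows the `<cluster-id>-rest` pattern that the hunk itself mentions; the `/jobs` probe assumes Flink's standard REST API:

{% highlight bash %}
# Forward the cluster-internal REST service to localhost
$ kubectl port-forward service/my-first-flink-cluster-rest 8081

# The Web UI is now reachable at http://localhost:8081;
# the REST API can be probed the same way, e.g. listing jobs:
$ curl http://localhost:8081/jobs
{% endhighlight %}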







[GitHub] [flink] shuiqiangchen commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
shuiqiangchen commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537282915



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the Job Manager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
       Thank you for your clarification. From my point of view, it would be better to add a `Deployment` page under the Python API docs describing how to deploy a Python program on different clusters (standalone/YARN/K8s).
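
The command that follows "You can access them via" in the "Accessing the Logs" part of this hunk is cut off at the review anchor. As a hedged sketch of what accessing the pod logs typically looks like with plain `kubectl` (the deployment name reuses the cluster id from the earlier examples):

{% highlight bash %}
# Find the pods belonging to the cluster
$ kubectl get pods

# Stream the JobManager log via the deployment created for the cluster id
$ kubectl logs -f deployment/my-first-flink-cluster
{% endhighlight %}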







[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 4ee770ed510ddacabe8edb97fc355be1ebaa8ffd Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     }, {
       "hash" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "status" : "PENDING",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528",
       "triggerID" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 4ee770ed510ddacabe8edb97fc355be1ebaa8ffd Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501) 
   * 215d2414fd5ea678eeca7c8b5f5fc3e1ff973600 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 4ee770ed510ddacabe8edb97fc355be1ebaa8ffd Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     }, {
       "hash" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528",
       "triggerID" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "triggerType" : "PUSH"
     }, {
       "hash" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10581",
       "triggerID" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * f31088cec92d9b5bf3e98d49b2702b31d47a2152 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10581) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537400724



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
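+
+As a quick sanity check (not strictly required), you can verify that the session came up by listing the created pods and the rest service:
+
+{% highlight bash %}
+# The JobManager pod and the rest service should show up after a few seconds
+$ kubectl get pods
+$ kubectl get services/my-first-flink-cluster-rest
+{% endhighlight %}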
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
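+
+As a sketch, assuming the `Dockerfile` above and a reachable image registry (the registry host is a placeholder), the image could be built and published like this:
+
+{% highlight bash %}
+$ docker build -t custom-image-name .
+$ docker tag custom-image-name <registry-host>/custom-image-name
+$ docker push <registry-host>/custom-image-name
+{% endhighlight %}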
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml`; you can override these values by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+A Session cluster can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml`; you can override these values by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address, which you can find in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.

Review comment:
       Will update it.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     }, {
       "hash" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528",
       "triggerID" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "triggerType" : "PUSH"
     }, {
       "hash" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10581",
       "triggerID" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "triggerType" : "PUSH"
     }, {
       "hash" : "457fc69726ff01bd37aa6ea96865fb1cc9f86c90",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10571",
       "triggerID" : "457fc69726ff01bd37aa6ea96865fb1cc9f86c90",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d63721954708c8a9b3c907b68b9475818fc436b3",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d63721954708c8a9b3c907b68b9475818fc436b3",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10603",
       "triggerID" : "d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * d63721954708c8a9b3c907b68b9475818fc436b3 UNKNOWN
   * d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10603) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     }, {
       "hash" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528",
       "triggerID" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "triggerType" : "PUSH"
     }, {
       "hash" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10581",
       "triggerID" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "triggerType" : "PUSH"
     }, {
       "hash" : "457fc69726ff01bd37aa6ea96865fb1cc9f86c90",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "457fc69726ff01bd37aa6ea96865fb1cc9f86c90",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * f31088cec92d9b5bf3e98d49b2702b31d47a2152 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10581) 
   * 457fc69726ff01bd37aa6ea96865fb1cc9f86c90 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] XComp commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
XComp commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537614328



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
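+
+For instance, reusing the resource settings from earlier versions of this page (the values are illustrative, not recommendations):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dtaskmanager.memory.process.size=4096m \
+-Dkubernetes.taskmanager.cpu=2 \
+-Dtaskmanager.numberOfTaskSlots=4 \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}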
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+A Session cluster can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081` (see the sketch below).
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
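+
+As a sketch for the `LoadBalancer` case (assuming the cluster id `my-first-flink-cluster` and a cloud provider that has finished provisioning the load balancer):
+
+{% highlight bash %}
+# Wait until the EXTERNAL-IP column is populated
+$ kubectl get services/my-first-flink-cluster-rest
+# Then open http://<EXTERNAL-IP>:8081 in a browser, or query the REST API
+$ curl http://<EXTERNAL-IP>:8081/config
+{% endhighlight %}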
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
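+
+For example, the following session keeps idling TaskManagers around for one hour (the timeout value is taken over from an example in earlier versions of this page):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dresourcemanager.taskmanager-timeout=3600000
+{% endhighlight %}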
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
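+
+For a non-interactive change, a rough sketch (assuming the default `rootLogger.level = INFO` entry in `log4j-console.properties`) could look like this:
+
+{% highlight bash %}
+# Rewrite the root log level in the ConfigMap and apply it back
+$ kubectl get cm flink-config-my-first-flink-cluster -o yaml \
+  | sed 's/rootLogger.level = INFO/rootLogger.level = DEBUG/' \
+  | kubectl apply -f -
+{% endhighlight %}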
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and set other options.
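+
+As a sketch (assuming a custom image published as `custom-image-name`), a session cluster could then be started via:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.container.image=custom-image-name
+{% endhighlight %}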
+
+### Using Secrets
 
 A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` are then stored in the files `/path/to/secret/username` and `/path/to/secret/password`.
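+
+Inside a pod, you could then read the credentials like this (a sketch):
+
+{% highlight bash %}
+$ cat /path/to/secret/username
+$ cat /path/to/secret/password
+{% endhighlight %}
+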
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.env.secretKeyRef='env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password'
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
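+
+Inside a pod, the values are then available as ordinary environment variables (a sketch):
+
+{% highlight bash %}
+$ echo $SECRET_USERNAME
+$ echo $SECRET_PASSWORD
+{% endhighlight %}
+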
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
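+
+As a sketch (assuming an S3 bucket for the storage directory), Kubernetes HA services could be enabled like this:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+-Dhigh-availability.storageDir=s3://flink/flink-ha
+{% endhighlight %}
+
+Note that accessing S3 additionally requires the S3 filesystem plugin (see [Using Plugins](#using-plugins)).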
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All the resources created by Flink, including `ConfigMap`, `Service`, and `Pod` resources, have their OwnerReference set to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes versions `>= 1.9` are supported.
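+
+You can check the server version of your cluster via (a sketch):
+
+{% highlight bash %}
+$ kubectl version
+{% endhighlight %}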
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+### Namespaces
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) divide cluster resources between multiple users via [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link deployment/config.md %}#kubernetes-namespace).
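+
+For example, a session cluster could be started in a hypothetical namespace `flink-namespace` like this (a sketch):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.namespace=flink-namespace
+{% endhighlight %}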
 
 ### RBAC
 
 Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster. 
+Users can configure RBAC roles and service accounts used by the JobManager to access the Kubernetes API server within the Kubernetes cluster.
 
 Every namespace has a default service account. However, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
 Users may need to update the permissions of the `default` service account or specify another service account that has the right role bound.
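+
+As a sketch, a dedicated service account (hypothetically named `flink-service-account`) with sufficient permissions could be set up like this:
+
+{% highlight bash %}
+$ kubectl create serviceaccount flink-service-account
+$ kubectl create clusterrolebinding flink-role-binding \
+--clusterrole=edit \
+--serviceaccount=default:flink-service-account
+{% endhighlight %}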

Review comment:
       The section below should be adapted as well; there's also a missing `the` in there.
   More importantly, I would suggest changing the name of the Service Account from `flink` to something else. That caused some confusion on my side when I started. `flink` as a name is overloaded. We should use something like `flink-service-account` instead.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r538136415



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
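+
+For local testing, you could for example spin up a [Minikube](https://minikube.sigs.k8s.io/docs/) cluster (a sketch):
+
+{% highlight bash %}
+$ minikube start
+$ kubectl get nodes
+{% endhighlight %}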
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system uses the configuration in `conf/flink-conf.yaml`, overriding these values with any key-value pairs (`-Dkey=value`) passed to `bin/flink`.
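+
+For example (a sketch that overrides the number of task slots for the submitted application):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dtaskmanager.numberOfTaskSlots=4 \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}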
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+A Session cluster can be started in one of two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): `kubernetes-session.sh` stays alive and lets you enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system uses the configuration in `conf/flink-conf.yaml`, overriding these values with any key-value pairs (`-Dkey=value`) passed to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address,
+  which you can find in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then manually construct the JobManager Web Interface URL of the load balancer: `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager write their logs both to the console and to `/opt/flink/log` in each pod.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
       Hmm, deployment under development might be a bit confusing. Maybe we can add a `Deployment -> Python` page and link this page from `Application Development -> Python API`?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537664290



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode

Review comment:
       Being explicit about it is easier for the user. I'll add a section where we state that the per-job mode is not supported.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] zentol commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536213406



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system uses the configuration in `conf/flink-conf.yaml`, overriding these values with any key-value pairs (`-Dkey=value`) passed to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+A Session cluster can be started in one of two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): `kubernetes-session.sh` stays alive and lets you enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system uses the configuration in `conf/flink-conf.yaml`, overriding these values with any key-value pairs (`-Dkey=value`) passed to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address,
+  which you can find in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may initially see a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs

Review comment:
      Documentation for configuring log4j, as in modifying the log4j properties file contents, should go into the logging documentation; that said, such documentation does not currently exist there, nor do I intend to add it at the moment.
   
   _How_ to do this on Kubernetes (i.e., editing the config map) should stay here.
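
  For illustration, a minimal sketch of that edit, assuming the native integration keeps its settings in a ConfigMap named `flink-config-<cluster-id>` (the naming scheme and cluster id here are examples, not verified in this PR):

  ```bash
  # edit the log4j properties shipped to the pods (hypothetical ConfigMap name)
  kubectl edit configmap flink-config-my-first-flink-cluster
  ```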







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r538143721



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,425 +23,297 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink cluster in [Session Mode]({% link deployment/index.md %}#session-mode) via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run \
+  --target kubernetes-session \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  ./examples/streaming/TopSpeedWindowing.jar

Review comment:
       Ok, I will update this for all code snippets.







[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * d63721954708c8a9b3c907b68b9475818fc436b3 UNKNOWN
   * d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10603) 
   * a0263ec0bd24e3a0de9c4e2ee59a2df37ed906d8 UNKNOWN
   * 09d2d89e041ba4af050e03f5d4fb26d084e43b2d Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10632) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * 457fc69726ff01bd37aa6ea96865fb1cc9f86c90 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10571) 
   * d63721954708c8a9b3c907b68b9475818fc436b3 UNKNOWN
   * d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10603) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>





[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537661317



##########
File path: docs/deployment/resource-providers/standalone/docker.zh.md
##########
@@ -491,6 +491,27 @@ services:
         parallelism.default: 2
 ```
 
+### Enabling Python
+
+To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*

Review comment:
       I would leave this untouched since I only want to preserve the current information.







[GitHub] [flink] shuiqiangchen commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
shuiqiangchen commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537286597



##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -352,6 +352,27 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     # e.g. to distribute the custom image to your cluster
     docker push custom_flink_image
     ```
+  
+### Enabling Python
+
+To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
+RUN ln -s /usr/bin/python3 /usr/bin/python
+
+# install Python Flink
+RUN pip3 install apache-flink

Review comment:
      Yes, executing the command will install the latest version of PyFlink. Maybe `RUN pip3 install apache-flink[==<SPECIFIC_VERSION>]` is better, where the `[==<SPECIFIC_VERSION>]` part is optional.
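
  For example, a pinned install could look like this (the version number is purely illustrative):

  ```bash
  # pin PyFlink to a specific release instead of always pulling the latest
  pip3 install apache-flink==1.12.0
  ```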







[GitHub] [flink] zentol commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536220329



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must unique.

Review comment:
       ```suggestion
   The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` for seeing all the supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).

Review comment:
       ```suggestion
   Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
   ```
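
  For illustration, the option would then be passed like any other configuration option when starting the session (cluster id and value are just examples):

  ```bash
  # sketch: expose the REST endpoint via a NodePort
  $ ./bin/kubernetes-session.sh \
    -Dkubernetes.cluster-id=my-first-flink-cluster \
    -Dkubernetes.rest-service.exposed.type=NodePort
  ```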

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via

Review comment:
       ```suggestion
   Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
   ```
    Had to do a double-take because I read it as "Once you have your Kubernetes cluster running and configured[, then] kubectl[...]"

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:

Review comment:
       ```suggestion
   This section assumes a running Kubernetes cluster fulfilling the following requirements:
   ```
   ?

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.

Review comment:
       ```suggestion
   The `kubernetes.container.image` option specifies the image to start the pods with.
   ```
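    Also, since the text assumes the image was published under `custom-image-name`, it might be worth showing where that name comes from. A rough sketch (untested; `my-registry` is just a placeholder):

    ```bash
    # Build the image from the Dockerfile above and push it to a registry
    # that the Kubernetes cluster can pull from (registry name is made up)
    $ docker build -t my-registry/custom-image-name .
    $ docker push my-registry/custom-image-name
    ```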

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once your Kubernetes cluster is running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.

Review comment:
      The phrasing is a bit confusing. Maybe this should just link to the CLI docs?
   ```suggestion
   You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
   ```
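    A short example might also make the override mechanism concrete, e.g. (untested, reusing the options from the removed session example):

    ```bash
    # Override memory and CPU settings from conf/flink-conf.yaml at submission time
    $ ./bin/flink run-application --target kubernetes-application \
      -Dkubernetes.cluster-id=my-first-application-cluster \
      -Dkubernetes.container.image=custom-image-name \
      -Dtaskmanager.memory.process.size=4096m \
      -Dkubernetes.taskmanager.cpu=2 \
      local:///opt/flink/usrlib/my-flink-job.jar
    ```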

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to deploy Flink directly on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once your Kubernetes cluster is running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session cluster can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.

Review comment:
       ```suggestion
   * **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
   ```
   ?
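    A command-line example could also help here, e.g. (untested):

    ```bash
    # Start the session in attached mode; the script stays in the foreground
    # and accepts commands such as `help` and `stop`
    $ ./bin/kubernetes-session.sh \
      -Dkubernetes.cluster-id=my-first-flink-cluster \
      -Dexecution.attached=true
    ```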

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once your Kubernetes cluster is running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session cluster can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` for seeing all the supported commands.

Review comment:
       ```suggestion
     Type `help` to list all supported commands.
   ```
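    If we want to show `stop` in action, something along these lines should work (untested sketch, assuming the attached session reads commands from stdin):

    ```bash
    # Pipe the stop command into an attached session to shut the cluster down
    $ echo 'stop' | ./bin/kubernetes-session.sh \
      -Dkubernetes.cluster-id=my-first-flink-cluster \
      -Dexecution.attached=true
    ```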

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once your Kubernetes cluster is running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session cluster can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` for seeing all the supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.

Review comment:
       same as above
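    i.e. the same pattern as for `bin/flink`, for example (untested):

    ```bash
    # Pass -Dkey=value pairs to kubernetes-session.sh to override flink-conf.yaml values
    $ ./bin/kubernetes-session.sh \
      -Dkubernetes.cluster-id=my-first-flink-cluster \
      -Dtaskmanager.numberOfTaskSlots=4
    ```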

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once your Kubernetes cluster is running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
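+
+Building and publishing such an image could look like this (a sketch; `custom-image-name` is a placeholder and would typically include your registry, e.g. `registry.example.com/custom-image-name`):
+
+{% highlight bash %}
+# Build the image from the Dockerfile above and push it to a registry
+$ docker build -t custom-image-name .
+$ docker push custom-image-name
+{% endhighlight %}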
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in Application Mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel a running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
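+
+If your job and cluster are set up for savepoints, a running job can presumably also be stopped with a savepoint (a hedged sketch; it assumes `stop` accepts the same generic `--target`/`-D` options as `list` and `cancel`, and `s3://my-bucket/savepoints` is a placeholder path):
+
+{% highlight bash %}
+# Stop a running job, taking a savepoint first (sketch)
+$ ./bin/flink stop --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  --savepointPath s3://my-bucket/savepoints <jobId>
+{% endhighlight %}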
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml`; any key-value pairs `-Dkey=value` passed to `bin/flink` override these values.
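+
+For example, the TaskManager resources could be overridden at submission time (a sketch reusing configuration keys that appear elsewhere on this page; the memory, CPU, and slot values are illustrative and should be tuned to your workload):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  -Dtaskmanager.memory.process.size=4096m \
+  -Dkubernetes.taskmanager.cpu=2 \
+  -Dtaskmanager.numberOfTaskSlots=4 \
+  local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}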
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+In Session Mode, `kubernetes-session.sh` can be run in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): `kubernetes-session.sh` stays alive and allows you to enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml`; any key-value pairs `-Dkey=value` passed to `bin/kubernetes-session.sh` override these values.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
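+
+The deployment-based alternative uses the same command as in the [Getting Started](#getting-started) example:
+
+{% highlight bash %}
+# Deleting the deployment also removes all owned cluster resources
+$ kubectl delete deployment/my-first-flink-cluster
+{% endhighlight %}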
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).

Review comment:
       ```suggestion
   The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.

Review comment:
       find what?

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.

Review comment:
       ```suggestion
     `NodeIP` can be replaced with the Kubernetes ApiServer address.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` for seeing all the supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then manually construct the JobManager Web Interface URL `http://<EXTERNAL-IP>:8081`.
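
  For example, assuming the cluster id `my-first-flink-cluster` used earlier on this page:

{% highlight bash %}
# read the EXTERNAL-IP column, then open http://<EXTERNAL-IP>:8081
$ kubectl get services/my-first-flink-cluster-rest
{% endhighlight %}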
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
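
For instance, a minimal sketch of raising the log level before deploying a new cluster, assuming the default log4j 2 layout with a `rootLogger.level = INFO` line (GNU sed syntax):

{% highlight bash %}
# switch the console logging config from INFO to DEBUG
$ sed -i 's/rootLogger.level = INFO/rootLogger.level = DEBUG/' conf/log4j-console.properties
{% endhighlight %}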
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
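
To follow the output while debugging, `kubectl logs` also supports streaming:

{% highlight bash %}
$ kubectl logs -f <pod-name>   # stream the log output continuously
{% endhighlight %}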
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time inspecting the log files.

Review comment:
       ```suggestion
   You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
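
The last requirement could, for example, be satisfied with a cluster role binding like the following (a sketch; the binding name is arbitrary and `edit` is one of Kubernetes' built-in ClusterRoles):

{% highlight bash %}
$ kubectl create clusterrolebinding flink-role-binding-default \
    --clusterrole=edit --serviceaccount=default:default
{% endhighlight %}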
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
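
A sketch of building and publishing that image (the tag is an assumption; in practice it would include a registry the Kubernetes cluster can pull from):

{% highlight bash %}
$ docker build -t custom-image-name .
$ docker push custom-image-name
{% endhighlight %}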
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The `kubernetes-session.sh` script can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all the supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.

Review comment:
       ```suggestion
     `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
   ```







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537399899



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The `kubernetes-session.sh` script can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all the supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.

Review comment:
       yes, will change it.
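
       For reference, one way to read the ApiServer address out of the kube config without opening the file (a sketch; the jsonpath assumes the current context's cluster):

       ```bash
       $ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
       ```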







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536022184



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster

Review comment:
       Is there any benefit of the `echo 'stop'` command? I was choosing `kubectl delete` because it seems simpler. The `KubernetesSessionCli` seems to do the same. On the other hand, if we want to change this behaviour in the future, it might be better to hide this detail from the user via `bin/kubernetes.session.sh`.
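
       For comparison, the two variants as they appear elsewhere on the page:

       ```bash
       # via the session CLI, which keeps the shutdown logic inside Flink
       $ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true

       # via kubectl, relying on Kubernetes OwnerReference-based garbage collection
       $ kubectl delete deployment/my-first-flink-cluster
       ```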







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537662268



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar

Review comment:
       I think `$` makes it a bit easier to parse. I'd be fine with keeping it consistent on every page for the time being. But if you and @rmetzger are both OK with it, then we might simply adopt it.
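
       For illustration, this is the convention under discussion — the `$` marks the command itself, so any output lines stay visually distinct (the output below is made up for the example):

   ```bash
   $ kubectl get deployments
   NAME                     READY   UP-TO-DATE   AVAILABLE
   my-first-flink-cluster   1/1     1            1
   ```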







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r538142464



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,425 +23,297 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink cluster in [Session Mode]({% link deployment/index.md %}#session-mode) via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run \
+  --target kubernetes-session \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-Job]({% link deployment/index.md %}#per-job-mode) or [Application Mode]({% link deployment/index.md %}#application-mode), as these modes provide better isolation for the Applications.

Review comment:
       I think I will remove the Per-Job Mode link because that mode is not supported on Kubernetes.
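
       For reference, a sketch of the two deployment targets that this page keeps documenting for native Kubernetes (commands taken from the hunks above; cluster ids, image name, and jar paths are placeholders):

   ```bash
   # Session Mode: submit a job to a running session cluster
   $ ./bin/flink run --target kubernetes-session \
     -Dkubernetes.cluster-id=<cluster-id> \
     ./examples/streaming/TopSpeedWindowing.jar

   # Application Mode: run the main() of the jar bundled into the image
   $ ./bin/flink run-application --target kubernetes-application \
     -Dkubernetes.cluster-id=<cluster-id> \
     -Dkubernetes.container.image=<custom-image-name> \
     local:///opt/flink/usrlib/my-flink-job.jar
   ```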







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537388837



##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -352,6 +352,27 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     # e.g. to distribute the custom image to your cluster
     docker push custom_flink_image
     ```
+  
+### Enabling Python
+
+To build a custom image which has Python and PyFlink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
+RUN ln -s /usr/bin/python3 /usr/bin/python
+
+# install Python Flink
+RUN pip3 install apache-flink

Review comment:
       I would suggest updating this as part of reworking the `docker.md` page, as my change's intention is only to maintain the current information we have.







[GitHub] [flink] shuiqiangchen commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
shuiqiangchen commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537286597



##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -352,6 +352,27 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     # e.g. to distribute the custom image to your cluster
     docker push custom_flink_image
     ```
+  
+### Enabling Python
+
+To build a custom image which has Python and PyFlink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
+RUN ln -s /usr/bin/python3 /usr/bin/python
+
+# install Python Flink
+RUN pip3 install apache-flink

Review comment:
       Yes, executing the command will install the latest version of PyFlink. Maybe `RUN pip3 install apache-flink[==<SPECIFIC_VERSION>]` is better, where the `[==<SPECIFIC_VERSION>]` part is optional.
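
       For example, pinning PyFlink to a concrete release in the Dockerfile's `RUN` step could look like this (the version number is purely illustrative):

   ```bash
   # unpinned: always installs the latest apache-flink release
   pip3 install apache-flink

   # pinned: installs a specific release, e.g. one matching the Flink base image
   pip3 install "apache-flink==1.12.0"
   ```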







[GitHub] [flink] XComp commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
XComp commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537557124



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via

Review comment:
       ```suggestion
   Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink cluster in [Session Mode]({% link deployment/index.md %}#session-mode) via
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar

Review comment:
       ```suggestion
   $ ./bin/flink run-application \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     -Dkubernetes.container.image=custom-image-name \
     local:///opt/flink/usrlib/my-flink-job.jar
   ```
   Added indentation and moved every parameter into its own line.
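
   As a usage follow-up (same placeholders as above), the deployed application cluster can then be inspected and torn down with the commands the hunk further down also documents:

   ```bash
   # list the running job on the cluster
   $ ./bin/flink list --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster

   # cancel a running job
   $ ./bin/flink cancel --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
   ```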

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.

Review comment:
       ```suggestion
   For production use, we recommend deploying Flink Applications in the [Per-Job]({% link deployment/index.md %}#per-job-mode) or [Application Mode]({% link deployment/index.md %}#application-mode), as these modes provide better isolation for the Applications.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.

Review comment:
       ```suggestion
   <span class="label label-info">Note</span> `local` is the only supported scheme in Application Mode.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session cluster can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true

Review comment:
       ```suggestion
   $ ./bin/kubernetes-session.sh \
     -Dkubernetes.cluster-id=my-first-flink-cluster \
     -Dexecution.attached=true
   ```
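
   As a usage sketch (the interactive part is illustrative; per the hunk above, `help` lists the supported commands and `stop` shuts the session down):

   ```bash
   $ ./bin/kubernetes-session.sh \
     -Dkubernetes.cluster-id=my-first-flink-cluster \
     -Dexecution.attached=true
   # the client now stays attached; type commands such as
   #   help   -> list all supported commands
   #   stop   -> stop the running Session cluster
   ```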

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- A KubeConfig with permissions to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
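
For example, you can check the two most important permissions up front; both commands should answer `yes`:

{% highlight bash %}
$ kubectl auth can-i create pods
$ kubectl auth can-i delete pods
{% endhighlight %}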
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
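
A rough sketch of the workflow: the tunnel runs in a separate terminal and stays open while you access the service (it may prompt for elevated privileges to configure routes):

{% highlight bash %}
$ minikube tunnel
{% endhighlight %}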
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
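
The image then has to be built and published to a registry that the Kubernetes cluster can pull from; a minimal sketch, assuming the image name `custom-image-name` that is used below:

{% highlight bash %}
$ docker build -t custom-image-name .
$ docker push custom-image-name
{% endhighlight %}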
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
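
For instance, a sketch that overrides memory and slot settings when starting the application (the values are examples only):

{% highlight bash %}
$ ./bin/flink run-application --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.container.image=custom-image-name \
  -Dtaskmanager.memory.process.size=4096m \
  -Dtaskmanager.numberOfTaskSlots=4 \
  local:///opt/flink/usrlib/my-flink-job.jar
{% endhighlight %}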
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` script deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` script stays alive and allows you to enter commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true

Review comment:
       ```suggestion
   $ echo 'stop' | ./bin/kubernetes-session.sh \
     -Dkubernetes.cluster-id=my-first-flink-cluster \
     -Dexecution.attached=true
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- A KubeConfig with permissions to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` script deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` script stays alive and allows you to enter commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.

Review comment:
       ```suggestion
     If you want to access the JobManager UI or submit job to the existing session, you need to start a local proxy.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- A KubeConfig with permissions to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` script deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` script stays alive and allows you to enter commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address.
+  You can find its address in your kube config file.
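
For example, assuming the cluster id `my-first-flink-cluster`, the assigned node port can be looked up on the rest service:

{% highlight bash %}
$ kubectl get services/my-first-flink-cluster-rest
{% endhighlight %}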
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
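
For example, assuming the cluster id `my-first-flink-cluster`, you can inspect the mounted configuration with:

{% highlight bash %}
$ kubectl describe cm flink-config-my-first-flink-cluster
{% endhighlight %}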
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
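
A typical sketch for locating the pods and following their log output looks like this:

{% highlight bash %}
# Find the JobManager and TaskManager pod names
$ kubectl get pods
# Follow the log output of a single pod
$ kubectl logs -f <pod-name>
{% endhighlight %}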
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
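
For example, keeping idle TaskManagers around for one hour while debugging could look like this (the value is in milliseconds):

{% highlight bash %}
$ ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-first-flink-cluster \
  -Dresourcemanager.taskmanager-timeout=3600000
{% endhighlight %}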
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for details on enabling plugins, adding dependencies, and other customizations.
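
A minimal sketch of starting a session cluster with a custom image (the image name is a placeholder):

{% highlight bash %}
$ ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-first-flink-cluster \
  -Dkubernetes.container.image=custom-image-name
{% endhighlight %}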
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod
+
+* Using Secrets as environment variables
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.env.secretKeyRef="env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password"
 {% endhighlight %}
 
+The environment variable `SECRET_USERNAME` contains the username and the environment variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
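
As a sketch, enabling the Kubernetes-based high availability services for a session cluster might look like the following; the storage directory is an assumption and must point to durable storage reachable from the cluster:

{% highlight bash %}
$ ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-first-flink-cluster \
  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
  -Dhigh-availability.storageDir=s3://flink-bucket/flink-ha
{% endhighlight %}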
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All the resources created by Flink, including `ConfigMap`, `Service`, and `Pod`, have their OwnerReference set to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources will be deleted automatically.

Review comment:
       ```suggestion
   When the deployment is deleted, all related resources will be deleted automatically.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig with permissions to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
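
For illustration, building and publishing this image could look like the following sketch; the image name matches the `custom-image-name` placeholder used below, and the registry setup is an assumption:

{% highlight bash %}
# Build the image from the Dockerfile above and push it to a registry reachable by the cluster
$ docker build -t custom-image-name .
$ docker push custom-image-name
{% endhighlight %}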
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the Job Manager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually as `http://<EXTERNAL-IP>:8081` (see the example after this list).
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
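
For the `LoadBalancer` case, a hypothetical lookup could be (using the cluster id placeholder from this page):

{% highlight bash %}
# Look up the EXTERNAL-IP of the rest service, then open http://<EXTERNAL-IP>:8081
$ kubectl get services/my-first-flink-cluster-rest
{% endhighlight %}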
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
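
A sketch of raising the timeout to one hour when starting a session cluster (the value is taken from the previous version of this page):

{% highlight bash %}
$ ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-first-flink-cluster \
  -Dresourcemanager.taskmanager-timeout=3600000
{% endhighlight %}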
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and set other options.
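
A minimal sketch of starting a session cluster with such an image (the image name is a placeholder):

{% highlight bash %}
$ ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-first-flink-cluster \
  -Dkubernetes.container.image=custom-image-name
{% endhighlight %}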
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) are objects that contain a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
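
To verify the mount, you could read the files from inside a running pod (illustrative; replace `<pod-name>` with an actual pod name):

{% highlight bash %}
$ kubectl exec -it <pod-name> -- cat /path/to/secret/username
{% endhighlight %}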
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.env.secretKeyRef='env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password'
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All the resources created by Flink, including `ConfigMap`, `Service`, and `Pod`, have their OwnerReference set to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes versions `>= 1.9` are supported.
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+### Namespaces
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) divide cluster resources between multiple users via [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link deployment/config.md %}#kubernetes-namespace).
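
For example, a session cluster could be launched in a dedicated namespace (a sketch; the `flink-namespace` name and its prior creation are assumptions):

{% highlight bash %}
$ kubectl create namespace flink-namespace
$ ./bin/kubernetes-session.sh -Dkubernetes.namespace=flink-namespace
{% endhighlight %}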
 
 ### RBAC
 
 Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster. 
+Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster.
 
 Every namespace has a default service account, however, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
 Users may need to update the permission of `default` service account or specify another service account that has the right role bound.

Review comment:
       The section below should be adapted as well. There's a missing `the` in there, too.
   More importantly, I would suggest changing the name of the Service Account from `flink` to something else. That caused some confusion on my side when I started. `flink` as a name is overloaded. We should use something like `flink-service-account` instead.
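
   To illustrate, creating and binding such a service account could look like the following sketch; the `flink-service-account` name follows the suggestion above, while the `edit` cluster role and the `kubernetes.jobmanager.service-account` option are assumptions to double-check against the configuration page:

   ```
   # Create a dedicated service account and grant it permissions to manage pods
   $ kubectl create serviceaccount flink-service-account
   $ kubectl create clusterrolebinding flink-role-binding \
       --clusterrole=edit \
       --serviceaccount=default:flink-service-account

   # Point the JobManager at the new service account when starting the cluster
   $ ./bin/kubernetes-session.sh -Dkubernetes.jobmanager.service-account=flink-service-account
   ```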

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig with permissions to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar

Review comment:
       ```suggestion
   $ ./bin/flink run \
       --target kubernetes-session \
       -Dkubernetes.cluster-id=my-first-flink-cluster \
       ./examples/streaming/TopSpeedWindowing.jar
   ```
  IMHO, formatting commands with more than one parameter improves readability.

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig with permissions to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command

Review comment:
       ```suggestion
   After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig with permissions to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager Web UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address.
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files are only picked up by a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager write their logs both to the console and to `/opt/flink/log` inside each pod.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to open a shell in the pod and inspect the logs or debug the process.
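+
+For example, once inside the pod you could follow all log files (an illustrative sketch; `/opt/flink/log` is the log directory mentioned above):
+
+{% highlight bash %}
+$ kubectl exec -it <pod-name> -- bash -c 'tail -f /opt/flink/log/*.log'
+{% endhighlight %}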
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idle TaskManagers so as not to waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idle TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
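+
+For example, the following (illustrative) setting keeps idle TaskManagers around for five minutes:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dresourcemanager.taskmanager-timeout=300000
+{% endhighlight %}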
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highligh bash %}

Review comment:
       ```suggestion
   {% highlight bash %}
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig with access to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- A `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
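+
+A minimal build-and-publish sequence could look like this (a sketch which assumes a registry you are allowed to push to):
+
+{% highlight bash %}
+$ docker build -t custom-image-name .
+$ docker push custom-image-name
+{% endhighlight %}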
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel a running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.

Review comment:
       ```suggestion
   You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of this page.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig with access to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- A `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
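+
+For example, the following (illustrative) invocation overrides the number of task slots per TaskManager:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}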
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stopping a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager Web UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address.
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files are only picked up by a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager write their logs both to the console and to `/opt/flink/log` inside each pod.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to open a shell in the pod and inspect the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idle TaskManagers so as not to waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idle TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highligh bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
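+For instance, to raise the root logger verbosity you could change the `rootLogger.level` entry of the embedded `log4j-console.properties` (an illustrative snippet):
+
+{% highlight text %}
+rootLogger.level = DEBUG
+{% endhighlight %}
+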
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar

Review comment:
       ```suggestion
  $ ./bin/kubernetes-session.sh \
    -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
    -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
  ```
  Indentation added, plus the missing line continuation after `kubernetes-session.sh`.

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig with access to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods` (see the example below this list).
+- Enabled Kubernetes DNS.
+- A `default` service account with [RBAC](#rbac) permissions to create and delete pods.
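+
+For example, verifying the create permission could look like this (illustrative):
+
+{% highlight bash %}
+$ kubectl auth can-i create pods
+{% endhighlight %}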
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stopping a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.

Review comment:
       ```suggestion
     `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
   ```
   * Usually, `service` is lowercased. 
   * And `JobManager` should be camelcased.

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, and delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- A `default` service account with [RBAC](#rbac) permissions to create and delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
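+
+For local experiments, a minimal cluster can be brought up with [Minikube](https://minikube.sigs.k8s.io/docs/) (an illustrative sketch, assuming Minikube is installed):
+
+{% highlight bash %}
+# Start a local single-node Kubernetes cluster
+$ minikube start
+{% endhighlight %}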
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
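+
+Building and publishing the image could then look like the following sketch (`custom-image-name` is a placeholder; pushing assumes a registry you can write to):
+
+{% highlight bash %}
+# Build the image from the Dockerfile above and publish it
+$ docker build -t custom-image-name .
+$ docker push custom-image-name
+{% endhighlight %}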
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List the running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel a running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
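+
+Other actions of the Flink CLI can be used in the same way; as a sketch (assuming a configured savepoint directory, since `stop` takes a savepoint before stopping the job):
+
+{% highlight bash %}
+# Stop a running job with a savepoint
+$ ./bin/flink stop --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
+{% endhighlight %}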
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The `kubernetes-session.sh` script can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): `kubernetes-session.sh` stays alive and allows you to enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
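+Once the proxy is running, a quick way to check the connection is to query the REST endpoint (`/jobs` is one of Flink's monitoring REST endpoints):
+
+{% highlight bash %}
+# List the jobs of the session through the forwarded REST endpoint
+$ curl localhost:8081/jobs
+{% endhighlight %}
+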
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
+  `NodeIP` can also be replaced with the Kubernetes API server address.
+  You can find its address in your kube config file, as sketched below.
 
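+One way to look up the API server address is via `kubectl` (an illustrative command; it prints the server of your current context):
+
+{% highlight bash %}
+# Print the API server address of the current kube context
+$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
+{% endhighlight %}
+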
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.

Review comment:
       ```suggestion
     You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.

Review comment:
       Here I'm not 100% sure whether `reference` is the right word? 🤔 Alternatively, something like "Please read [...]" might work as well.

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar

Review comment:
       I just realize that all of the commands have `$` as a prefix on this page. I'm fine adding it as long as it's consistent. YARN and Mesos don't have it right now.

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
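+
+Note that cancelling the job also shuts down the Application cluster, since the cluster's lifecycle is bound to the application and all components are cleaned up upon its termination.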
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
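+
+For example, the following sketch overrides the TaskManager memory and slot count for the Application cluster from above (the values are only illustrative):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dtaskmanager.memory.process.size=4096m \
+-Dtaskmanager.numberOfTaskSlots=4 \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}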
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The `kubernetes-session.sh` script can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows you to enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
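+
+For example, the following sketch starts a session cluster with larger TaskManagers (the values are only illustrative):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dtaskmanager.memory.process.size=4096m \
+-Dkubernetes.taskmanager.cpu=2 \
+-Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}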
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
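+
+The `<ServiceName>` of Flink's REST service follows the pattern `<cluster-id>-rest` (e.g. `my-first-flink-cluster-rest` for the cluster from the [Getting Started](#getting-started) section).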
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may see a `NodePort` JobManager Web Interface in the client log at first.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the load balancer's JobManager Web Interface manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
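+
+You can inspect the logging configuration of a running cluster via the following command (assuming the cluster id `my-first-flink-cluster`):
+
+{% highlight bash %}
+$ kubectl get configmap flink-config-my-first-flink-cluster -o yaml
+{% endhighlight %}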
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
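+
+For example, the following sketch keeps idling TaskManagers around for one hour:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dresourcemanager.taskmanager-timeout=3600000
+{% endhighlight %}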
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
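+
+With the default log4j setup you can then, for example, raise the root log level by changing `rootLogger.level` in the `log4j-console.properties` entry of that ConfigMap.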
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}l#customize-flink-image) for how to enable plugins, add dependencies and other options.
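+
+A minimal sketch of using such a custom image for a session cluster:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dkubernetes.container.image=custom-image-name
+{% endhighlight %}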
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.

Review comment:
       ```suggestion
   Such information might otherwise be put in a pod specification or in an image.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The `kubernetes-session.sh` script can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows you to enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may see a `NodePort` JobManager Web Interface in the client log at first.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the load balancer's JobManager Web Interface manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}l#customize-flink-image) for how to enable plugins, add dependencies and other options.

Review comment:
       ```suggestion
   See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies and other options.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The `kubernetes-session.sh` script can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows you to enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may see a `NodePort` JobManager Web Interface in the client log at first.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the load balancer's JobManager Web Interface manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
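+
+For example, a sketch for inspecting the mounted logging configuration (assuming the cluster id `my-first-flink-cluster`; the ConfigMap is then named `flink-config-my-first-flink-cluster`):
+
+{% highlight bash %}
+$ kubectl describe configmap flink-config-my-first-flink-cluster
+{% endhighlight %}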
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
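+
+For example, a sketch that keeps idling TaskManagers around for one hour (the timeout is given in milliseconds and the value is illustrative):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dresourcemanager.taskmanager-timeout=3600000
+{% endhighlight %}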
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and set other options.
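+
+For example, a sketch for starting a session cluster with a custom image (`custom-image-name` is a placeholder for your published image):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.container.image=custom-image-name
+{% endhighlight %}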
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh -Dkubernetes.env.secretKeyRef=env:SECRET_USERNAME,secret:mysecret,key:username;\
+env:SECRET_PASSWORD,secret:mysecret,key:password
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
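+
+A minimal sketch for starting a session cluster with Kubernetes HA services enabled (the storage directory is an assumption and must point to durable remote storage):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+-Dhigh-availability.storageDir=s3://flink-bucket/flink-ha
+{% endhighlight %}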
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<cluster-id>`.

Review comment:
       ```suggestion
   All the Flink created resources, including `ConfigMap`, `Service`, and `Pod`, have the `OwnerReference` being set to `deployment/<cluster-id>`.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
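+
+A sketch for granting the `default` service account the required permissions (binding the `edit` cluster role is an assumption; see the [RBAC](#rbac) section for details):
+
+{% highlight bash %}
+$ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit --serviceaccount=default:default
+{% endhighlight %}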
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
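+
+Hypothetical commands for building and publishing such an image (the image name and registry are assumptions):
+
+{% highlight bash %}
+$ docker build -t custom-image-name .
+$ docker push custom-image-name
+{% endhighlight %}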
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
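+
+For example, a sketch overriding the number of task slots for a single submission (the value is illustrative):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dtaskmanager.numberOfTaskSlots=4 \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}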
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
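+
+For example, a sketch passing a TaskManager memory override when deploying the session (the value is illustrative):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dtaskmanager.memory.process.size=4096m
+{% endhighlight %}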
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address.
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may initially get a `NodePort`-based JobManager Web Interface URL in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the `EXTERNAL-IP` and then construct the JobManager Web Interface URL manually as `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and set other options.
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh -Dkubernetes.env.secretKeyRef=env:SECRET_USERNAME,secret:mysecret,key:username;\
+env:SECRET_PASSWORD,secret:mysecret,key:password

Review comment:
       ```suggestion
   $ ./bin/kubernetes-session.sh \
     -Dkubernetes.env.secretKeyRef=\
         env:SECRET_USERNAME,secret:mysecret,key:username;\
         env:SECRET_PASSWORD,secret:mysecret,key:password
   ```
   I'm not sure here about the formatting - maybe, my proposal isn't an improvement. But I wanted to bring it up for discussion, anyway.

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address.
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may initially get a `NodePort`-based JobManager Web Interface URL in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the `EXTERNAL-IP` and then construct the JobManager Web Interface URL manually as `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for details on how to enable plugins, add dependencies, and set other options.
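+
+As a sketch, you could then start a session cluster with your custom image (the image name is only an example):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dkubernetes.container.image=custom-image-name
+{% endhighlight %}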
+
+### Using Secrets
 
 A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
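+
+You can verify the mounted files inside a running pod, for instance (the pod name is a placeholder):
+
+{% highlight bash %}
+$ kubectl exec -it <pod-name> -- cat /path/to/secret/username
+{% endhighlight %}
+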
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.env.secretKeyRef='env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password'
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
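+
+You can check the injected environment variables inside a running pod, for example:
+
+{% highlight bash %}
+$ kubectl exec -it <pod-name> -- env | grep SECRET_
+{% endhighlight %}
+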
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
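+
+As a minimal sketch, enabling the Kubernetes HA services for an application cluster could look as follows (the storage directory below is an assumption and must point to durable storage that all Flink components can access):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+-Dhigh-availability.storageDir=s3://flink/flink-ha \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}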
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<cluster-id>`.

Review comment:
       ```suggestion
   All the Flink-created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<cluster-id>`.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
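+
+Building and publishing that image could, as a sketch, look like this (registry and tag are placeholders):
+
+{% highlight bash %}
+$ docker build -t <registry>/custom-image-name:latest .
+$ docker push <registry>/custom-image-name:latest
+{% endhighlight %}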
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+A Session cluster can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
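+
+For example, the following invocation overrides the TaskManager sizing when starting the session (the values are illustrative only):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dtaskmanager.memory.process.size=4096m \
+-Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}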
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually at `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager output their logs both to the console and to `/opt/flink/log` inside each pod.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for details on how to enable plugins, add dependencies, and set other options.
+
+### Using Secrets
 
 A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.env.secretKeyRef='env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password'
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes version `>= 1.9` are supported.

Review comment:
       ```suggestion
   Currently, all Kubernetes versions `>= 1.9` are supported.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
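+
+As a hedged sketch, the RBAC requirement for the `default` service account could be satisfied as follows (the binding name and the `edit` cluster role are assumptions; adjust them to your cluster's policies):
+
+{% highlight bash %}
+$ kubectl create clusterrolebinding flink-role-binding-default \
+--clusterrole=edit \
+--serviceaccount=default:default
+{% endhighlight %}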
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed, you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
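+
+Stopping a job with a savepoint works in the same way (a sketch; the savepoint path is an assumption):
+
+{% highlight bash %}
+$ ./bin/flink stop --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+--savepointPath s3://flink/savepoints <jobId>
+{% endhighlight %}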
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+A Session cluster can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually at `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager output their logs both to the console and to `/opt/flink/log` inside each pod.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for details on enabling plugins, adding dependencies, and other customization options.
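+
+For example, a session cluster could be started with a custom image like this (the image name is a placeholder):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.container.image=<custom-image-name>
+{% endhighlight %}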
+
+### Using Secrets
 
 A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod
+
+* Using Secrets as environment variables
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password stored in the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
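+
+Assuming the secret does not exist yet, `mysecret` with the keys `username` and `password` could be created like this (the literal values are placeholders):
+
+{% highlight bash %}
+$ kubectl create secret generic mysecret --from-literal=username=<username> --from-literal=password=<password>
+{% endhighlight %}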
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.env.secretKeyRef='env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password'
 {% endhighlight %}
 
+The environment variables `SECRET_USERNAME` and `SECRET_PASSWORD` then contain the username and password of the secret `mysecret`, respectively.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
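+
+As a sketch, high availability could be enabled by passing the respective options when deploying the cluster (the storage directory below is only an example and requires the corresponding filesystem plugin to be enabled):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+-Dhigh-availability.storageDir=s3://flink/flink-ha
+{% endhighlight %}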
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All Flink-created resources, including `ConfigMap`, `Service`, and `Pod`, have their OwnerReference set to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes versions `>= 1.9` are supported.
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+### Namespaces
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) divide cluster resources between multiple users via [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link deployment/config.md %}#kubernetes-namespace).
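+
+For example, a session cluster could be launched in a specific namespace like this:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.namespace=<custom-namespace>
+{% endhighlight %}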
 
 ### RBAC
 
 Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster. 
+Users can configure RBAC roles and service accounts used by the JobManager to access the Kubernetes API server within the Kubernetes cluster.
 
 Every namespace has a default service account, however, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
 Users may need to update the permission of `default` service account or specify another service account that has the right role bound.

Review comment:
       ```suggestion
   Users may need to update the permission of the `default` service account or specify another service account that has the right role bound.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode

Review comment:
      I also added unsupported modes as sections to the Mesos documentation. This way it's more explicit that certain modes are not supported. I'm fine with both approaches but would rather do it consistently. Does anyone have a preferred approach?

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>

Review comment:
       ```suggestion
   # List running job on the cluster
   ./bin/flink \
     list \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster
   # Cancel running job
   ./bin/flink \
     cancel \
     --target kubernetes-application \
     -Dkubernetes.cluster-id=my-first-application-cluster \
     <jobId>
   ```
   I removed the leading `$` since we don't use this notation anywhere else. Additionally, I added indentation for consistency purposes (even though I wouldn't object here if someone says that the previous version was more readable since both commands have a quite similar structure).

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:

Review comment:
       ```suggestion
   The Session Mode can be executed in two modes:
   ```
   Not sure if I'm being too picky, but "executed" sounds better here. I connect "run" with jobs. ...just a proposal.

##########
File path: docs/deployment/resource-providers/standalone/docker.zh.md
##########
@@ -491,6 +491,27 @@ services:
         parallelism.default: 2
 ```
 
+### Enabling Python
+
+To build a custom image with Python and PyFlink installed, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*

Review comment:
       ```suggestion
   RUN apt-get update -y && \
       apt-get install -y python3.7 python3-pip python3.7-dev && \
       rm -rf /var/lib/apt/lists/*
   ```
   I would reformat these lines to improve the readability.

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` script deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` script stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
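+
+For example, a sketch of overriding the number of task slots when starting the session (the value is illustrative):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}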
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
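+
+For example, you can inspect the logging configuration that is mounted into the pods (assuming the cluster id `my-first-flink-cluster`) via:
+
+{% highlight bash %}
+$ kubectl describe configmap flink-config-my-first-flink-cluster
+{% endhighlight %}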
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idle TaskManagers in order not to waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idle TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
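+
+A minimal sketch, assuming you want to keep idle TaskManagers around for one hour instead of the default 30 seconds:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dresourcemanager.taskmanager-timeout=3600000
+{% endhighlight %}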
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and set other options.
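+
+For example, a sketch of starting a session cluster with a custom image, where `custom-image-name` is a placeholder for your published image:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dkubernetes.container.image=custom-image-name
+{% endhighlight %}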
+
+### Using Secrets
 
 A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod
+
+* Using Secrets as environment variables
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` are then stored in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+-Dkubernetes.env.secretKeyRef='env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password'
 {% endhighlight %}
 
+The environment variable `SECRET_USERNAME` contains the username and `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
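+
+The following sketch starts an application cluster with the Kubernetes HA services enabled, assuming an S3 bucket is available for the HA storage directory (the image name `custom-image-name` is a placeholder):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+-Dhigh-availability.storageDir=s3://flink/flink-ha \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}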
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All resources created by Flink, including `ConfigMap`, `Service`, and `Pod`, have their OwnerReference set to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes versions `>= 1.9` are supported.
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+### Namespaces
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) divide cluster resources between multiple users via [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link deployment/config.md %}#kubernetes-namespace).
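+
+For example, a sketch of launching a session cluster in a hypothetical namespace `flink-namespace`:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dkubernetes.namespace=flink-namespace
+{% endhighlight %}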
 
 ### RBAC
 
 Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster. 
+Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster.
 
 Every namespace has a default service account, however, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.

Review comment:
       ```suggestion
   Every namespace has a default service account. However, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536023584



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs

Review comment:
       This is a good idea. I am wondering whether the general pattern shouldn't be part of the logging documentation cc @zentol who's currently working on it. Then the `native_kubernetes.md` could simply state how to change the `log4j.properties` via the `ConfigMap`.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738153988


   Thanks a lot for your contribution to the Apache Flink project. I'm the @flinkbot. I help the community
   to review your pull request. We will use this comment to track the progress of the review.
   
   
   ## Automated Checks
   Last check on commit 4ee770ed510ddacabe8edb97fc355be1ebaa8ffd (Thu Dec 03 17:19:26 UTC 2020)
   
    ✅no warnings
   
   <sub>Mention the bot in a comment to re-run the automated checks.</sub>
   ## Review Progress
   
   * ❓ 1. The [description] looks good.
   * ❓ 2. There is [consensus] that the contribution should go into to Flink.
   * ❓ 3. Needs [attention] from.
   * ❓ 4. The change fits into the overall [architecture].
   * ❓ 5. Overall code [quality] is good.
   
   Please see the [Pull Request Review Guide](https://flink.apache.org/contributing/reviewing-prs.html) for a full explanation of the review process.<details>
     The Bot is tracking the review progress through labels. Labels are applied according to the order of the review items. For consensus, approval by a Flink committer or PMC member is required <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot approve description` to approve one or more aspects (aspects: `description`, `consensus`, `architecture` and `quality`)
    - `@flinkbot approve all` to approve all aspects
    - `@flinkbot approve-until architecture` to approve everything until `architecture`
    - `@flinkbot attention @username1 [@username2 ..]` to require somebody's attention
    - `@flinkbot disapprove architecture` to remove an approval you gave earlier
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-740057161


   Thanks for the review @XComp. I've addressed your comments and pushed an update.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * 457fc69726ff01bd37aa6ea96865fb1cc9f86c90 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10571) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537399380



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.

Review comment:
       I'll rephrase it into: "`NodeIP` can also be replaced with Kubernetes ApiServer address. You can find its address in your kube config file."




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536025669



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
       I think we need a page for describing in general how to deploy a Python program. What I am not so sure is whether we need this for every resource provider. Here, for example, there is not much specific to K8s. One needs to bundle the Python dependencies and then call `flink` with some special parameters which, hopefully, are the same for all different types of deployments.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] zentol commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536216828



##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -352,6 +352,27 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     # e.g. to distribute the custom image to your cluster
     docker push custom_flink_image
     ```
+  
+### Enabling Python
+
+To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
+RUN ln -s /usr/bin/python3 /usr/bin/python
+
+# install Python Flink
+RUN pip3 install apache-flink

Review comment:
       We should install a specific version; this will always fetch the latest.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] XComp commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
XComp commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537817400



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,425 +23,297 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink cluster in [Session Mode]({% link deployment/index.md %}#session-mode) via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run \
+  --target kubernetes-session \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  ./examples/streaming/TopSpeedWindowing.jar

Review comment:
       ```suggestion
   $ ./bin/flink run \
         --target kubernetes-session \
         -Dkubernetes.cluster-id=my-first-flink-cluster \
         ./examples/streaming/TopSpeedWindowing.jar
   ```
   I'm fine with using the `$` as a command prefix. But we should add more indentation in that case. 
   <img width="870" alt="Screenshot 2020-12-07 at 21 37 22" src="https://user-images.githubusercontent.com/1101012/101403001-d6a08b80-38d4-11eb-8669-c17381aafa1a.png">
   

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,425 +23,297 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink cluster in [Session Mode]({% link deployment/index.md %}#session-mode) via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run \
+  --target kubernetes-session \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-Job]({% link deployment/index.md %}#per-job-mode) or [Application Mode]({% link deployment/index.md %}#application-mode), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
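+
+A minimal build-and-publish sketch for such an image (the image name `custom-image-name` and the registry setup are assumptions):
+
+{% highlight bash %}
+# Build the image from the Dockerfile above
+$ docker build -t custom-image-name .
+# Publish it so that the Kubernetes cluster can pull it
+$ docker push custom-image-name
+{% endhighlight %}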
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application \
+  --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in Application Mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
-
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
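+
+For example, a sketch overriding the TaskManager memory (the value is illustrative):
+
+{% highlight bash %}
+$ ./bin/flink run-application \
+  --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  -Dtaskmanager.memory.process.size=4096m \
+  local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}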
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+### Per-Job Cluster Mode
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+Flink on Kubernetes does not support Per-Job Cluster Mode.
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+### Session Mode
 
-### Attach to an existing Session
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of this page.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+The Session Mode can be executed in two ways:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
-{% endhighlight %}
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Stop Flink Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
+
+#### Stop a Running Session Cluster
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
+$ echo 'stop' | ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dexecution.attached=true
 {% endhighlight %}
 
-## Flink Kubernetes Application
+{% top %}
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+## Flink on Kubernetes Reference
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+### Configuring Flink on Kubernetes
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+### Accessing Flink's Web UI
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
-
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
+
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then manually construct the JobManager Web Interface URL `http://<EXTERNAL-IP>:8081`.
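+
+  For illustration, assuming the cluster id `my-first-flink-cluster`:
+
+{% highlight bash %}
+# Query the EXTERNAL-IP of the rest service
+$ kubectl get services/my-first-flink-cluster-rest
+# Then open http://<EXTERNAL-IP>:8081 in your browser
+{% endhighlight %}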
+
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+
+### Logging
 
-### Stop Flink Application
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+#### Accessing the Logs
+
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
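+
+For example, a sketch keeping idle TaskManagers around for one hour (the timeout value is illustrative and given in milliseconds):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dresourcemanager.taskmanager-timeout=3600000
+{% endhighlight %}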
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
+$ ./bin/kubernetes-session.sh \
   -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies and other options.
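+
+For example, a sketch starting a session cluster with a custom image (assuming it was published as `custom-image-name`):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dkubernetes.container.image=custom-image-name
+{% endhighlight %}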
+
+### Using Secrets
 
 A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found stored in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh -Dkubernetes.env.secretKeyRef=\
+  env:SECRET_USERNAME,secret:mysecret,key:username;\
+  env:SECRET_PASSWORD,secret:mysecret,key:password
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
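+
+A minimal sketch of the relevant options (the HA factory class and the example S3 storage directory are taken from the previous version of this page):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+  -Dhigh-availability.storageDir=s3://flink/flink-ha
+{% endhighlight %}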
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All the Flink created resources, including `ConfigMap`, `Service`, and `Pod`, have their `OwnerReference` set to `deployment/<cluster-id>`.
+When the deployment is deleted, all related resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes versions `>= 1.9` are supported.
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+### Namespaces
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) divide cluster resources between multiple users via [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link deployment/config.md %}#kubernetes-namespace).
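+
+For example, a session cluster can be launched in a hypothetical `flink-namespace` namespace via:
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.namespace=flink-namespace
+{% endhighlight %}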
 
 ### RBAC
 
 Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster. 
+Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster.
 
-Every namespace has a default service account, however, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
-Users may need to update the permission of `default` service account or specify another service account that has the right role bound.
+Every namespace has a default service account. However, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
+Users may need to update the permission of the `default` service account or specify another service account that has the right role bound.
 
 {% highlight bash %}
 $ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit --serviceaccount=default:default
 {% endhighlight %}
 
-If you do not want to use `default` service account, use the following command to create a new `flink` service account and set the role binding.
-Then use the config option `-Dkubernetes.jobmanager.service-account=flink` to make the JobManager pod using the `flink` service account to create and delete TaskManager pods.
+If you do not want to use the `default` service account, use the following command to create a new `flink-service-account` service account and set the role binding.
+Then use the config option `-Dkubernetes.jobmanager.service-account=flink-service-account` to make the JobManager pod using the `flink-service-account` service account to create and delete TaskManager pods.

Review comment:
       ```suggestion
   Then use the config option `-Dkubernetes.jobmanager.service-account=flink-service-account` to make the JobManager pod use the `flink-service-account` service account to create and delete TaskManager pods.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,425 +23,297 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink cluster in [Session Mode]({% link deployment/index.md %}#session-mode) via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run \
+  --target kubernetes-session \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-Job]{% link deployment/index.md %}#per-job-mode) or [Application Mode]({% link deployment/index.md %}#application-mode), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
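+
+A sketch of building and publishing this image (the image name is a placeholder; in practice it would include your registry and a tag):
+
+{% highlight bash %}
+$ docker build -t custom-image-name .
+$ docker push custom-image-name
+{% endhighlight %}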
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application \
+  --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in Application Mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
-
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
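+
+For example, a sketch that also overrides the TaskManager memory and slot count (values borrowed from examples elsewhere in this document):
+
+{% highlight bash %}
+$ ./bin/flink run-application \
+  --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  -Dtaskmanager.memory.process.size=4096m \
+  -Dtaskmanager.numberOfTaskSlots=4 \
+  local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}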
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+### Per-Job Cluster Mode
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+Flink on Kubernetes does not support Per-Job Cluster Mode.
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+### Session Mode
 
-### Attach to an existing Session
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of this page.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+The `kubernetes-session.sh` script can be executed in two modes:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
-{% endhighlight %}
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Stop Flink Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
+
+#### Stop a Running Session Cluster
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
+$ echo 'stop' | ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dexecution.attached=true
 {% endhighlight %}
 
-## Flink Kubernetes Application
+{% top %}
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+## Flink on Kubernetes Reference
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+### Configuring Flink on Kubernetes
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+### Accessing Flink's Web UI
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
-
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
+
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
+
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+
+### Logging
 
-### Stop Flink Application
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+#### Accessing the Logs
+
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
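+
+To find the pod names, you can list all pods of your namespace:
+
+{% highlight bash %}
+$ kubectl get pods
+{% endhighlight %}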
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
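+
+For example, the following sketch keeps idle TaskManagers around for one hour (the timeout value is taken from an example elsewhere in this document):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dresourcemanager.taskmanager-timeout=3600000
+{% endhighlight %}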
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
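+
+Assuming the default log4j2-based configuration, which defines a `rootLogger.level` property, you can verify the change afterwards:
+
+{% highlight bash %}
+$ kubectl get cm flink-config-my-first-flink-cluster -o yaml | grep rootLogger.level
+{% endhighlight %}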
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
+$ ./bin/kubernetes-session.sh \
   -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies and other options.
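+
+For example, a session cluster using a custom image (`custom-image-name` is a placeholder):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.container.image=custom-image-name
+{% endhighlight %}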
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+# Note: the whole option is quoted because the value contains ';' as a list separator,
+# which the shell would otherwise interpret as a command separator.
+$ ./bin/kubernetes-session.sh \
+  "-Dkubernetes.env.secretKeyRef=env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password"
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All Flink-created resources, including `ConfigMap`, `Service`, and `Pod`, have their `OwnerReference` set to `deployment/<cluster-id>`.
+When the deployment is deleted, all related resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes versions `>= 1.9` are supported.
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+### Namespaces
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) divide cluster resources between multiple users via [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link deployment/config.md %}#kubernetes-namespace).
 
 ### RBAC
 
 Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster. 
+Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster.
 
-Every namespace has a default service account, however, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
-Users may need to update the permission of `default` service account or specify another service account that has the right role bound.
+Every namespace has a default service account. However, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
+Users may need to update the permission of the `default` service account or specify another service account that has the right role bound.
 
 {% highlight bash %}
 $ kubectl create clusterrolebinding flink-role-binding-default --clusterrole=edit --serviceaccount=default:default
 {% endhighlight %}
 
-If you do not want to use `default` service account, use the following command to create a new `flink` service account and set the role binding.
-Then use the config option `-Dkubernetes.jobmanager.service-account=flink` to make the JobManager pod using the `flink` service account to create and delete TaskManager pods.
+If you do not want to use the `default` service account, use the following command to create a new `flink-service-account` service account and set the role binding.
+Then use the config option `-Dkubernetes.jobmanager.service-account=flink-service-account` to make the JobManager pod using the `flink-service-account` service account to create and delete TaskManager pods.
 
 {% highlight bash %}
-$ kubectl create serviceaccount flink
-$ kubectl create clusterrolebinding flink-role-binding-flink --clusterrole=edit --serviceaccount=default:flink
+$ kubectl create serviceaccount flink-service-account
+$ kubectl create clusterrolebinding flink-role-binding-flink --clusterrole=edit --serviceaccount=default:flink-service-account
 {% endhighlight %}
 
 Please reference the official Kubernetes documentation on [RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for more information.

Review comment:
       ```suggestion
   Please refer to the official Kubernetes documentation on [RBAC Authorization](https://kubernetes.io/docs/reference/access-authn-authz/rbac/) for more information.
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,425 +23,297 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink cluster in [Session Mode]({% link deployment/index.md %}#session-mode) via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run \
+  --target kubernetes-session \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-Job]{% link deployment/index.md %}#per-job-mode) or [Application Mode]({% link deployment/index.md %}#application-mode), as these modes provide a better isolation for the Applications.

Review comment:
       ```suggestion
   For production use, we recommend deploying Flink Applications in the [Per-Job]({% link deployment/index.md %}#per-job-mode) or [Application Mode]({% link deployment/index.md %}#application-mode), as these modes provide a better isolation for the Applications.
   ```
   
   Another option is to remove `[Per-Job]{% link deployment/index.md %}#per-job-mode) or` completely as it's not supported by Flink on Kubernetes, anyway.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wangyang0918 commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
wangyang0918 commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537268317



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and lets you enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override those values with any key-value pairs `-Dkey=value` that are passed to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then manually construct the JobManager Web Interface URL `http://<EXTERNAL-IP>:8081`, as shown below.
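
A sketch of the lookup, reusing the `my-first-flink-cluster` id from the examples above:

{% highlight bash %}
$ kubectl get services/my-first-flink-cluster-rest
{% endhighlight %}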
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
       Maybe @dianfu @shuiqiangchen could share more information here.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     }, {
       "hash" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528",
       "triggerID" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "triggerType" : "PUSH"
     }, {
       "hash" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 215d2414fd5ea678eeca7c8b5f5fc3e1ff973600 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528) 
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * f31088cec92d9b5bf3e98d49b2702b31d47a2152 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] shuiqiangchen commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
shuiqiangchen commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r538052302



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
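
A quick sketch for verifying the permission requirements listed above:

{% highlight bash %}
# Check whether the current account may create and delete pods
$ kubectl auth can-i create pods
$ kubectl auth can-i delete pods
{% endhighlight %}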
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once your Kubernetes cluster is running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
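
Building and publishing the image could then look as follows (a sketch; `custom-image-name` must reference a registry that your Kubernetes cluster can pull from):

{% highlight bash %}
$ docker build -t custom-image-name .
$ docker push custom-image-name
{% endhighlight %}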
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override those values with any key-value pairs `-Dkey=value` that are passed to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be launched in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and lets you enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override those values with any key-value pairs `-Dkey=value` that are passed to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then manually construct the JobManager Web Interface URL `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
      No, the Python deployment does not require any specific resource provider. So here I'm in favour of your suggestion of adding a new page that generally describes the different deployments. The full path might be Application Development -> Python API -> Deployment, what do you think about it?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     }, {
       "hash" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528",
       "triggerID" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "triggerType" : "PUSH"
     }, {
       "hash" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10581",
       "triggerID" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "triggerType" : "PUSH"
     }, {
       "hash" : "457fc69726ff01bd37aa6ea96865fb1cc9f86c90",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10571",
       "triggerID" : "457fc69726ff01bd37aa6ea96865fb1cc9f86c90",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d63721954708c8a9b3c907b68b9475818fc436b3",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d63721954708c8a9b3c907b68b9475818fc436b3",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * 457fc69726ff01bd37aa6ea96865fb1cc9f86c90 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10571) 
   * d63721954708c8a9b3c907b68b9475818fc436b3 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536026162



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once your Kubernetes cluster is running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override those values with any key-value pairs `-Dkey=value` that are passed to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be launched in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and lets you enter commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override those values with any key-value pairs `-Dkey=value` that are passed to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then manually construct the JobManager Web Interface URL `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
+$ kubectl logs <pod-name>
 {% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
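
To find the `<pod-name>` values, the pods of the cluster can simply be listed. A sketch:

{% highlight bash %}
# The JobManager and TaskManager pod names contain the cluster id
$ kubectl get pods
{% endhighlight %}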
 
-### Stop Flink Application
+### Using Plugins
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
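
The same environment variables can also be set for an Application cluster; a sketch combining the options shown earlier on this page:

{% highlight bash %}
$ ./bin/flink run-application --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.container.image=custom-image-name \
  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
  local:///opt/flink/usrlib/my-flink-job.jar
{% endhighlight %}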
 
+### Custom Docker Image
 
-## Log Files
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and set other options.
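
Once published, the custom image is passed via the same option in any deployment mode; a sketch for a session cluster:

{% highlight bash %}
$ ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-first-flink-cluster \
  -Dkubernetes.container.image=custom-image-name
{% endhighlight %}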
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+### Using Secrets
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+[Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) are objects that contain a small amount of sensitive data such as a password, a token, or a key.
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
 
-## Using plugins
+* Using Secrets as files from a pod;
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+* Using Secrets as environment variables;
 
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
+#### Using Secrets as Files From a Pod
 
-## Using Secrets
-
-[Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
-
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` are then stored in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
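
A matching secret could be created up front like this (a sketch; the literal values are placeholders):

{% highlight bash %}
$ kubectl create secret generic mysecret --from-literal=username=admin --from-literal=password=changeit
{% endhighlight %}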
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.env.secretKeyRef="env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password"
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
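+
+You can verify the variables from within a running pod (a quick sanity check; the pod name is a placeholder):
+
+{% highlight bash %}
+$ kubectl exec <pod-name> -- sh -c 'echo $SECRET_USERNAME'
+{% endhighlight %}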
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
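+
+For example, a Session cluster with Kubernetes HA services could be started as follows (a minimal sketch; the HA storage directory is a placeholder and assumes a suitable filesystem plugin is available):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+  -Dhigh-availability.storageDir=s3://flink/flink-ha
+{% endhighlight %}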
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All Flink-created resources, including `ConfigMap`, `Service`, and `Pod`, have their OwnerReference set to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources are deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
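+
+Afterwards, you can verify that the dependent resources are gone as well (an optional sanity check):
+
+{% highlight bash %}
+$ kubectl get pods,services,configmaps
+{% endhighlight %}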
 
-## Kubernetes concepts
-
-### Namespaces

Review comment:
       If this has come up several times, then I'll re-add it. Thanks for the information.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536113556



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system uses the configuration in `conf/flink-conf.yaml` and overrides these values with any key-value pairs `-Dkey=value` passed to `bin/flink`.
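+
+For example, TaskManager resources for the application above could be overridden like this (a sketch; the resource values are placeholders):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dtaskmanager.memory.process.size=4096m \
+-Dkubernetes.taskmanager.cpu=2 \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}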
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+Session clusters can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system uses the configuration in `conf/flink-conf.yaml` and overrides these values with any key-value pairs `-Dkey=value` passed to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
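+
+Options can also be set permanently in `conf/flink-conf.yaml` instead of being passed on the command line (a sketch reusing option values from this page):
+
+{% highlight yaml %}
+kubernetes.cluster-id: my-first-flink-cluster
+kubernetes.container.image: custom-image-name
+{% endhighlight %}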
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to an existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address,
+  which you can find in your kube config file.
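+
+  For example, the node IPs and the assigned port can be looked up via kubectl (a sketch; the cluster id is a placeholder):
+
+{% highlight bash %}
+$ kubectl get nodes -o wide
+$ kubectl get services/my-first-flink-cluster-rest
+{% endhighlight %}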
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may initially see a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
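+
+You can always inspect the raw log output of an individual pod via kubectl (a generic sketch; the pod name is a placeholder):
+
+{% highlight bash %}
+$ kubectl logs <pod-name>
+{% endhighlight %}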

Review comment:
       I have added the K8s specific part to the documentation. Please take a look.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] rmetzger commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
rmetzger commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r539141570



##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -352,6 +352,27 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     # e.g. to distribute the custom image to your cluster
     docker push custom_flink_image
     ```
+  
+### Enabling Python
+
+To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
+RUN ln -s /usr/bin/python3 /usr/bin/python
+
+# install Python Flink
+RUN pip3 install apache-flink
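+
+# Hypothetical usage: build and tag the image from this Dockerfile's directory, e.g.
+#   docker build -t pyflink:latest .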

Review comment:
       Thanks a lot for the ping. I'll update the instructions.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537393647



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:

Review comment:
       I think I wanted to write `The *Getting Started* section...`




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * d63721954708c8a9b3c907b68b9475818fc436b3 UNKNOWN
   * d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10603) 
   * a0263ec0bd24e3a0de9c4e2ee59a2df37ed906d8 UNKNOWN
   
   Bot commands:
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann closed pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann closed pull request #14305:
URL: https://github.com/apache/flink/pull/14305


   


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536244626



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system uses the configuration in `conf/flink-conf.yaml` and overrides these values with any key-value pairs `-Dkey=value` passed to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+Session clusters can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system uses the configuration in `conf/flink-conf.yaml` and overrides these values with any key-value pairs `-Dkey=value` passed to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to an existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address,
+  which you can find in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may initially see a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs

Review comment:
       I actually find this super helpful, tbh. Maybe we can include it in the logging documentation or at least link to where the `monitorInterval` is described in the log4j2 documentation.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537394288



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via

Review comment:
       Your suggestion is good.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537671810



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
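+
+If you are on Minikube, a minimal setup could look like the following sketch; `minikube tunnel` keeps running in a separate terminal and is only needed to expose LoadBalancer services:
+
+{% highlight bash %}
+$ minikube start
+$ minikube tunnel
+{% endhighlight %}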
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
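+
+You can then build and publish the image as usual; `custom-image-name` below is a placeholder for your own registry, repository and tag:
+
+{% highlight bash %}
+$ docker build -t custom-image-name .
+$ docker push custom-image-name
+{% endhighlight %}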
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
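+
+For example, the following invocation additionally sets the TaskManager memory and CPU (the values are only illustrative):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dtaskmanager.memory.process.size=4096m \
+-Dkubernetes.taskmanager.cpu=2 \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}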
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+A Session cluster can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
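+
+For example, the following invocation configures four slots per TaskManager for the session cluster (an illustrative override):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}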
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, the client log may initially show a `NodePort` address for the JobManager Web Interface.
+  You can use `kubectl get services/<cluster-id>-rest` to get the `EXTERNAL-IP` and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idle TaskManagers in order not to waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idle TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
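+
+Inside the ConfigMap you would then adjust the log level in the embedded logger configuration, e.g. for Log4j (a sketch of the relevant line in `log4j-console.properties`):
+
+{% highlight properties %}
+rootLogger.level = DEBUG
+{% endhighlight %}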
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
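+
+To verify that the plugin was picked up, you can list the plugins directory in one of the running pods (`<pod-name>` is a placeholder):
+
+{% highlight bash %}
+$ kubectl exec -it <pod-name> -- ls /opt/flink/plugins
+{% endhighlight %}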
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and set other options.
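+
+As a sketch, a custom image that adds an extra library to Flink's classpath could look like this (the jar path is a placeholder):
+
+{% highlight dockerfile %}
+FROM flink
+COPY /path/to/my-library.jar /opt/flink/lib/
+{% endhighlight %}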
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
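+
+The examples below assume a secret named `mysecret` with the keys `username` and `password`. Such a secret could be created like this (the values are illustrative):
+
+{% highlight bash %}
+$ kubectl create secret generic mysecret \
+  --from-literal=username=admin \
+  --from-literal=password=s3cr3t
+{% endhighlight %}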
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh -Dkubernetes.env.secretKeyRef=env:SECRET_USERNAME,secret:mysecret,key:username;\
+env:SECRET_PASSWORD,secret:mysecret,key:password
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
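+
+As a sketch, enabling Flink's Kubernetes HA services for a session cluster could look like this (the storage directory is a placeholder and must point to durable storage):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+-Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+-Dhigh-availability.storageDir=s3://flink-bucket/flink-ha
+{% endhighlight %}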
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All Flink-created resources, including `ConfigMap`, `Service`, and `Pod`, have their OwnerReference set to `deployment/<cluster-id>`.

Review comment:
       Sounds good.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537670343



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
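+
+You can then build and publish the image as usual; `custom-image-name` below is a placeholder for your own registry, repository and tag:
+
+{% highlight bash %}
+$ docker build -t custom-image-name .
+$ docker push custom-image-name
+{% endhighlight %}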
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
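+
+For example, the following invocation additionally sets the TaskManager memory and CPU (the values are only illustrative):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+-Dtaskmanager.memory.process.size=4096m \
+-Dkubernetes.taskmanager.cpu=2 \
+local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}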
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+A Session cluster can be started in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the Job Manager Service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address. 
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, the client log may initially show a `NodePort` address for the JobManager Web Interface.
+  You can use `kubectl get services/<cluster-id>-rest` to get the `EXTERNAL-IP` and then construct the JobManager Web Interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idle TaskManagers in order not to waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idle TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
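+
+To verify that the plugin was picked up, you can list the plugins directory in one of the running pods (`<pod-name>` is a placeholder):
+
+{% highlight bash %}
+$ kubectl exec -it <pod-name> -- ls /opt/flink/plugins
+{% endhighlight %}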
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and set other options.
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
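+
+The examples below assume a secret named `mysecret` with the keys `username` and `password`. Such a secret could be created like this (the values are illustrative):
+
+{% highlight bash %}
+$ kubectl create secret generic mysecret \
+  --from-literal=username=admin \
+  --from-literal=password=s3cr3t
+{% endhighlight %}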
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh -Dkubernetes.env.secretKeyRef=env:SECRET_USERNAME,secret:mysecret,key:username;\
+env:SECRET_PASSWORD,secret:mysecret,key:password

Review comment:
       Good question, maybe
   
   ```
   $ ./bin/kubernetes-session.sh -Dkubernetes.env.secretKeyRef=\
         env:SECRET_USERNAME,secret:mysecret,key:username;\
         env:SECRET_PASSWORD,secret:mysecret,key:password
   ```
   
   as an alternative.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537663599



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
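+
+If you are on Minikube, a minimal setup could look like the following sketch; `minikube tunnel` keeps running in a separate terminal and is only needed to expose LoadBalancer services:
+
+{% highlight bash %}
+$ minikube start
+$ minikube tunnel
+{% endhighlight %}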
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>

Review comment:
       I would not apply the formatting. Moreover, the commands `list` and `cancel` should be on the same line as `bin/flink`.
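
  For illustration, each command then reads as a single line, re-using the commands from this diff (`<jobId>` stays a placeholder):

  ```bash
  $ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
  $ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
  ```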




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] shuiqiangchen commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
shuiqiangchen commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r538148665



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all the supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
       Yes, I agree that we should illustrate development and deployment separately. I will add the `Deployment -> Python` page in a new PR.
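
  Until that page exists, a minimal sketch of an application-mode submission for PyFlink could look like the following (the image name `my-pyflink-app:latest`, the cluster id, and the entry module `my_job` are placeholders; `-pym`/`-pyfs` and `/opt/python_codes` are taken from the Dockerfile and session example above):

  ```bash
  $ ./bin/flink run-application -t kubernetes-application \
    -Dkubernetes.cluster-id=my-first-pyflink-cluster \
    -Dkubernetes.container.image=my-pyflink-app:latest \
    -pym my_job -pyfs /opt/python_codes
  ```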




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] wangyang0918 commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
wangyang0918 commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537270214



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands controlling the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all the supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service.
+  `NodeIP` could be easily replaced with Kubernetes ApiServer address.
+  You could find it in your kube config file.

Review comment:
       > You could find it in your kube config file.
   
  I think "it" refers to the Kubernetes ApiServer address here.
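
  One way to look it up (assuming the current context in `~/.kube/config`) is:

  ```bash
  # prints the ApiServer address of the active kubeconfig context
  $ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
  ```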




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537664876



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:

Review comment:
       Fair enough. I am fine with it.
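
  For reference, the two variants side by side, using the commands from this page:

  ```bash
  # detached (default): the client exits once the cluster is deployed
  $ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster

  # attached: the client stays alive and accepts commands such as 'stop'
  $ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
  ```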




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] zentol commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
zentol commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536217125



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.

Review comment:
       ```suggestion
   Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
   ```




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r536024540



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make

Review comment:
       Ok, then let me add this to a more advanced section.
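      For reference, the override being discussed looks along these lines (a sketch based on the example removed above; the timeout value is illustrative):

      ```bash
      # Keep idle TaskManager pods around for an hour instead of the default 30 seconds
      $ ./bin/kubernetes-session.sh \
        -Dkubernetes.cluster-id=my-first-flink-cluster \
        -Dresourcemanager.taskmanager-timeout=3600000
      ```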







[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537659134



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
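If the `default` service account lacks these permissions, one way of granting them might be the following (a sketch; the binding name and the use of the built-in `edit` cluster role are assumptions, not part of this PR):

{% highlight bash %}
# Hypothetical: allow the default service account to manage pods
$ kubectl create clusterrolebinding flink-role-binding-default \
  --clusterrole=edit \
  --serviceaccount=default:default
{% endhighlight %}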
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar

Review comment:
       I think you are right.







[GitHub] [flink] shuiqiangchen commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
shuiqiangchen commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r538052302



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
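To verify that the session is up, one might check the created pods and the rest service (a sketch; pod names are generated by Kubernetes):

{% highlight bash %}
# The JobManager pod and the rest service should show up once the session is running
$ kubectl get pods
$ kubectl get services/my-first-flink-cluster-rest
{% endhighlight %}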
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
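Building and publishing this image could, for instance, look like the following sketch (registry and tag are placeholders, not part of the documentation above):

{% highlight bash %}
# Hypothetical image name; push it to a registry your cluster can pull from
$ docker build -t custom-image-name .
$ docker push custom-image-name
{% endhighlight %}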
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
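Along the same lines, a running job could be stopped with a savepoint (a sketch; the savepoint path is an assumption):

{% highlight bash %}
# Gracefully stop a job, taking a savepoint to the given directory
$ ./bin/flink stop --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  --savepointPath s3://my-bucket/savepoints <jobId>
{% endhighlight %}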
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can easily be replaced with the Kubernetes ApiServer address, which you can find in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, the client log may initially show a `NodePort`-based JobManager web interface.
+  You can use `kubectl get services/<cluster-id>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
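As an illustration, the EXTERNAL-IP can also be read with a jsonpath query (a sketch, assuming the load balancer has already been provisioned):

{% highlight bash %}
# Print only the external IP of the rest service
$ kubectl get services/<cluster-id>-rest -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
{% endhighlight %}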
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
      No, the Python deployment does not require any specific resource provider. So here I'm in favour of your suggestion of adding a new page that generally describes the different deployments, which is out of the scope of this PR. The full path might be Application Development -> Python API -> Deployment; what do you think about it?
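      For what it's worth, a resource-provider-agnostic PyFlink submission against a session could look like this (a sketch reusing the flags from the removed example; module and file paths are placeholders):

      ```bash
      # Hypothetical PyFlink job submission to an existing session cluster
      $ ./bin/flink run --target kubernetes-session \
        -Dkubernetes.cluster-id=my-first-flink-cluster \
        -pym scala_function -pyfs examples/python/table/udf
      ```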







[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 215d2414fd5ea678eeca7c8b5f5fc3e1ff973600 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528) 
   





[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * d63721954708c8a9b3c907b68b9475818fc436b3 UNKNOWN
   * d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3 Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10603) 
   * a0263ec0bd24e3a0de9c4e2ee59a2df37ed906d8 UNKNOWN
   * 09d2d89e041ba4af050e03f5d4fb26d084e43b2d UNKNOWN
   





[GitHub] [flink] wangyang0918 commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
wangyang0918 commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537267418



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster

Review comment:
      Actually they have the same effect. I am not aware of a strong reason to change the behavior of `echo 'stop'` or `kubectl delete deploy`, so let's keep it as it is now.
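      That is, both of the following stop the same session cluster (cluster id taken from the hunk above):

      ```bash
      # Via the attached Flink client
      $ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
      # Via Kubernetes, relying on owner references for the cleanup
      $ kubectl delete deployment/my-first-flink-cluster
      ```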







[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 4ee770ed510ddacabe8edb97fc355be1ebaa8ffd Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501) 
   * 215d2414fd5ea678eeca7c8b5f5fc3e1ff973600 UNKNOWN
   





[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 215d2414fd5ea678eeca7c8b5f5fc3e1ff973600 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528) 
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * f31088cec92d9b5bf3e98d49b2702b31d47a2152 Azure: [PENDING](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10581) 
   





[GitHub] [flink] wangyang0918 commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
wangyang0918 commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r535939587



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make

Review comment:
      I remember some users complained that the TaskManagers timed out too quickly and that they could not debug via the logs. That is why we added this part to the documentation.
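      A sketch of what users typically need in that situation (the pod name is a placeholder):

      ```bash
      # Inspect the TaskManager logs before the idle pod is released
      $ kubectl logs <taskmanager-pod-name>
      ```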

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
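For example, on Minikube this amounts to keeping the following command running in a separate terminal:

{% highlight bash %}
# Keeps running and routes traffic to LoadBalancer services
$ minikube tunnel
{% endhighlight %}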
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide a better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be executed in two ways:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and accepts commands that control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to see all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster`, use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml`, overriding those values with any key-value pairs (`-Dkey=value`) passed to `bin/kubernetes-session.sh`.
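+
+For illustration, here is a sketch of a Session cluster with customized TaskManager resources (the values are examples only):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dtaskmanager.memory.process.size=4096m \
+  -Dkubernetes.taskmanager.cpu=2 \
+  -Dtaskmanager.numberOfTaskSlots=4
+{% endhighlight %}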
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session cluster with the cluster id `my-first-flink-cluster`, you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address,
+  which you can find in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager web interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager web interface URL manually: `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please refer to the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
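+
+As a sketch of looking up the external address of a `LoadBalancer` service (the cluster id is an example; depending on the provider, the address may be reported under `.ip` or `.hostname`):
+
+{% highlight bash %}
+$ kubectl get services/my-first-flink-cluster-rest \
+    -o jsonpath='{.status.loadBalancer.ingress[0].ip}'
+{% endhighlight %}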
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs

Review comment:
       Maybe out of the scope of this PR. Recently, when debugging Flink clusters on Kubernetes, I found a very useful feature: "dynamically changing the log level". I think we could add a sub-section here.
   
   Combining log4j2's [automatic reconfiguration](https://logging.apache.org/log4j/2.x/manual/configuration.html) with the Kubernetes ConfigMap, dynamically changing the log level is possible when deploying Flink on Kubernetes. Please perform the following steps.
   * Add `monitorInterval = 30` to `conf/log4j-console.properties`
   * Start a Flink session or application cluster normally
   * Edit the `log4j-console.properties` contents in the ConfigMap via `kubectl edit cm <configmap-name>`. For example, enable the `DEBUG` log level for the package `org.apache.flink.kubernetes`.
   * Use `kubectl logs <pod-name>` or the Flink dashboard to view the logs again. The new log level should take effect.
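   
   As a minimal sketch of the flow (assuming a running cluster; look up the actual ConfigMap and pod names with `kubectl get cm` / `kubectl get pods`):
   
   ```bash
   # change the log level in the mounted logging configuration
   $ kubectl edit cm <configmap-name>
   # wait up to the monitor interval (30s), then verify the new level
   $ kubectl logs <pod-name>
   ```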

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
...
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster

Review comment:
       I am not sure whether we should suggest that users stop the session via the following command here.
   
   ```
   echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
   ```

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
...
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
       So we will add a new page for how to submit Python jobs to K8s/YARN?

##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
...
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
+$ kubectl logs <pod-name>
 {% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-### Stop Flink Application
+### Using Plugins
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ ./bin/kubernetes-session.sh \
+  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
+### Custom Docker Image
 
-## Log Files
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies, and apply other customizations.
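+
+A sketch of starting a session cluster with a custom image (the image name is a placeholder):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.container.image=<custom-image-name>
+{% endhighlight %}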
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+### Using Secrets
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+A [Kubernetes Secret](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
 
-## Using plugins
+* Using Secrets as files from a pod;
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+* Using Secrets as environment variables;
 
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
+#### Using Secrets as Files From a Pod
 
-## Using Secrets
-
-[Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
-
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.env.secretKeyRef='env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password'
 {% endhighlight %}
 
+The environment variable `SECRET_USERNAME` contains the username and `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
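+
+For illustration, here is a sketch of enabling the Kubernetes HA services for an application cluster (the cluster id, image name, and storage path are placeholders; an `s3://` storage directory additionally requires the S3 filesystem plugin, see [Using Plugins](#using-plugins)):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dkubernetes.container.image=custom-image-name \
+  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+  -Dhigh-availability.storageDir=s3://flink/flink-ha \
+  local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}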
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReferences](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All Flink-created resources, including `ConfigMap`, `Service`, and `Pod` objects, have their OwnerReference set to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
-
-### Namespaces

Review comment:
       Could we keep this section, or at least the part on how to specify the namespace? I agree with you that this is mostly a Kubernetes concept. However, I have been asked about this many times: what is a namespace, and how do I submit my job to a specific namespace? In production environments, the namespace is usually bound to resource quotas and permissions.
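
For reference, a sketch of what specifying the namespace could look like (assuming the `kubernetes.namespace` configuration option and a pre-created namespace `my-namespace`):

{% highlight bash %}
$ ./bin/kubernetes-session.sh \
  -Dkubernetes.cluster-id=my-first-flink-cluster \
  -Dkubernetes.namespace=my-namespace
{% endhighlight %}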




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot commented on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot commented on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 4ee770ed510ddacabe8edb97fc355be1ebaa8ffd UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537676620



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
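
Building and publishing the image could, for example, look like this (a sketch; `custom-image-name` stands in for your registry, image name, and tag):

{% highlight bash %}
$ docker build -t custom-image-name .
$ docker push custom-image-name
{% endhighlight %}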
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
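
For example, raising the TaskManager memory for a single submission could look like this (a sketch reusing the options from above; the memory value is arbitrary):

{% highlight bash %}
$ ./bin/flink run-application --target kubernetes-application \
  -Dkubernetes.cluster-id=my-first-application-cluster \
  -Dkubernetes.container.image=custom-image-name \
  -Dtaskmanager.memory.process.size=4096m \
  local:///opt/flink/usrlib/my-flink-job.jar
{% endhighlight %}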
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true

Review comment:
       I will update the PR.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     }, {
       "hash" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528",
       "triggerID" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "triggerType" : "PUSH"
     }, {
       "hash" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "triggerType" : "PUSH"
     }, {
       "hash" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10581",
       "triggerID" : "f31088cec92d9b5bf3e98d49b2702b31d47a2152",
       "triggerType" : "PUSH"
     }, {
       "hash" : "457fc69726ff01bd37aa6ea96865fb1cc9f86c90",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10571",
       "triggerID" : "457fc69726ff01bd37aa6ea96865fb1cc9f86c90",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d63721954708c8a9b3c907b68b9475818fc436b3",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "d63721954708c8a9b3c907b68b9475818fc436b3",
       "triggerType" : "PUSH"
     }, {
       "hash" : "d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10603",
       "triggerID" : "d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3",
       "triggerType" : "PUSH"
     }, {
       "hash" : "a0263ec0bd24e3a0de9c4e2ee59a2df37ed906d8",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "a0263ec0bd24e3a0de9c4e2ee59a2df37ed906d8",
       "triggerType" : "PUSH"
     }, {
       "hash" : "09d2d89e041ba4af050e03f5d4fb26d084e43b2d",
       "status" : "SUCCESS",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10632",
       "triggerID" : "09d2d89e041ba4af050e03f5d4fb26d084e43b2d",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * d63721954708c8a9b3c907b68b9475818fc436b3 UNKNOWN
   * a0263ec0bd24e3a0de9c4e2ee59a2df37ed906d8 UNKNOWN
   * 09d2d89e041ba4af050e03f5d4fb26d084e43b2d Azure: [SUCCESS](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10632) 
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537388052



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,244 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running jobs on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override those values with any key-value pairs `-Dkey=value` provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override those values with any key-value pairs `-Dkey=value` provided to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configurations are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
+
+There are several ways to expose Flink's Web UI and REST endpoint.
+This can be configured using [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-Use the following command to start a Flink application.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ kubectl port-forward service/<ServiceName> 8081
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service.
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address,
+  which you can find in your kube config file.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then construct the JobManager Web Interface URL manually as `http://<EXTERNAL-IP>:8081`.
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
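
For example, with the cluster id `my-first-application-cluster` used earlier, the rest service and its external address could be inspected like this (a sketch):

{% highlight bash %}
$ kubectl get services/my-first-application-cluster-rest
{% endhighlight %}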
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+### Accessing the Logs
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \

Review comment:
       Is the Python deployment specific to the resource provider used (Yarn, K8s, standalone)? Where exactly in the current structure would you put it, @shuiqiangchen? Can you provide the full path (from a navigational point of view, e.g. Deployment -> Resource Providers -> Standalone -> Docker)?




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-739835586


   Thanks a lot for the review @zentol and @wangyang0918. I have addressed most of your comments and pushed an update.


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   <!--
   Meta data
   {
     "version" : 1,
     "metaDataEntries" : [ {
       "hash" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "status" : "DELETED",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10501",
       "triggerID" : "4ee770ed510ddacabe8edb97fc355be1ebaa8ffd",
       "triggerType" : "PUSH"
     }, {
       "hash" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "status" : "FAILURE",
       "url" : "https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528",
       "triggerID" : "215d2414fd5ea678eeca7c8b5f5fc3e1ff973600",
       "triggerType" : "PUSH"
     }, {
       "hash" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "status" : "UNKNOWN",
       "url" : "TBD",
       "triggerID" : "981e374fc5199c6b887094cdc692e17043cf65ea",
       "triggerType" : "PUSH"
     } ]
   }-->
   ## CI report:
   
   * 215d2414fd5ea678eeca7c8b5f5fc3e1ff973600 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10528) 
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537397547



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,269 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+This *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to setup a Kubernetes cluster](https://kubernetes.io/docs/setup/).
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and configured your `kubectl` to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The configuration `kubernetes.cluster-id` specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The configuration `kubernetes.container.image` specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+The system will use the configuration in `conf/flink-conf.yaml` and override those values with any key-value pairs `-Dkey=value` provided to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+The system will use the configuration in `conf/flink-conf.yaml` and override these values with key-value pairs `-Dkey=value` which are provided to `bin/kubernetes-session.sh`.

Review comment:
       Will change it.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537660775



##########
File path: docs/deployment/resource-providers/native_kubernetes.md
##########
@@ -24,394 +23,268 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page describes how to deploy a Flink session cluster natively on [Kubernetes](https://kubernetes.io).
+This page describes how to deploy Flink natively on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-<div class="alert alert-warning">
-Flink's native Kubernetes integration is still experimental. There may be changes in the configuration and CLI flags in later versions.
-</div>
+## Getting Started
 
-## Requirements
+This *Getting Started* section guides you through setting up a fully functional Flink Cluster on Kubernetes.
 
-- Kubernetes 1.9 or above.
-- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
-- Kubernetes DNS enabled.
-- A service Account with [RBAC](#rbac) permissions to create, delete pods.
-
-## Flink Kubernetes Session
+### Introduction
 
-### Start Flink Session
+Kubernetes is a popular container-orchestration system for automating computer application deployment, scaling, and management.
+Flink's native Kubernetes integration allows you to directly deploy Flink on a running Kubernetes cluster.
+Moreover, Flink is able to dynamically allocate and de-allocate TaskManagers depending on the required resources because it can directly talk to Kubernetes.
 
-Follow these instructions to start a Flink Session within your Kubernetes cluster.
+### Preparation
 
-A session will start all required Flink services (JobManager and TaskManagers) so that you can submit programs to the cluster.
-Note that you can run multiple programs per session.
+The *Getting Started* section assumes a running Kubernetes cluster fulfilling the following requirements:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh
-{% endhighlight %}
+- Kubernetes >= 1.9.
+- KubeConfig, which has access to list, create, delete pods and services, configurable via `~/.kube/config`. You can verify permissions by running `kubectl auth can-i <list|create|edit|delete> pods`.
+- Enabled Kubernetes DNS.
+- `default` service account with [RBAC](#rbac) permissions to create, delete pods.
 
-All the Kubernetes configuration options can be found in our [configuration guide]({% link deployment/config.md %}#kubernetes).
+If you have problems setting up a Kubernetes cluster, then take a look at [how to set up a Kubernetes cluster](https://kubernetes.io/docs/setup/).
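+
+For local experiments, one option is [Minikube](https://minikube.sigs.k8s.io/docs/). A minimal sketch (sizing flags omitted):
+
+{% highlight bash %}
+# Start a local single-node Kubernetes cluster
+$ minikube start
+{% endhighlight %}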
 
-**Example**: Issue the following command to start a session cluster with 4 GB of memory and 2 CPUs with 4 slots per TaskManager:
+### Starting a Flink Session on Kubernetes
 
-In this example we override the `resourcemanager.taskmanager-timeout` setting to make
-the pods with task managers remain for a longer period than the default of 30 seconds.
-Although this setting may cause more cloud cost it has the effect that starting new jobs is in some scenarios
-faster and during development you have more time to inspect the logfiles of your job.
+Once you have your Kubernetes cluster running and `kubectl` is configured to point to it, you can launch a Flink session cluster via
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000
-{% endhighlight %}
+# (1) Start Kubernetes session
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster
 
-The system will use the configuration in `conf/flink-conf.yaml`.
-Please follow our [configuration guide]({% link deployment/config.md %}) if you want to change something.
+# (2) Submit example job
+$ ./bin/flink run --target kubernetes-session -Dkubernetes.cluster-id=my-first-flink-cluster ./examples/streaming/TopSpeedWindowing.jar
 
-If you do not specify a particular name for your session by `kubernetes.cluster-id`, the Flink client will generate a UUID name.
+# (3) Stop Kubernetes session by deleting cluster deployment
+$ kubectl delete deployment/my-first-flink-cluster
 
-<span class="label label-info">Note</span> A docker image with Python and PyFlink installed is required if you are going to start a session cluster for Python Flink Jobs.
-Please refer to the following [section](#custom-flink-docker-image).
+{% endhighlight %}
 
-### Custom Flink Docker image
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
+<span class="label label-info">Note</span> When using [Minikube](https://minikube.sigs.k8s.io/docs/), you need to call `minikube tunnel` in order to [expose Flink's LoadBalancer service on Minikube](https://minikube.sigs.k8s.io/docs/handbook/accessing/#using-minikube-tunnel).
 
-If you want to use a custom Docker image to deploy Flink containers, check [the Flink Docker image documentation]({% link deployment/resource-providers/standalone/docker.md %}),
-[its tags]({% link deployment/resource-providers/standalone/docker.md %}#image-tags), [how to customize the Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) and [enable plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins).
-If you created a custom Docker image you can provide it by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) configuration option:
+Congratulations! You have successfully run a Flink application by deploying Flink on Kubernetes.
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=<CustomImageName>
-{% endhighlight %}
-</div>
+{% top %}
 
-<div data-lang="python" markdown="1">
-To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
-{% highlight Dockerfile %}
-FROM flink
+## Deployment Modes Supported by Flink on Kubernetes
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
-    
-# install Python Flink
-RUN pip3 install apache-flink
-{% endhighlight %}
+For production use, we recommend deploying Flink Applications in the [Per-job or Application Mode]({% link deployment/index.md %}#deployment-modes), as these modes provide better isolation for the Applications.
 
-Build the image named as **pyflink:latest**:
+### Application Mode
 
-{% highlight bash %}
-sudo docker build -t pyflink:latest .
-{% endhighlight %}
+The [Application Mode]({% link deployment/index.md %}#application-mode) requires that the user code is bundled together with the Flink image because it runs the user code's `main()` method on the cluster.
+The Application Mode makes sure that all Flink components are properly cleaned up after the termination of the application.
 
-Then you are able to start a PyFlink session cluster by setting the [`kubernetes.container.image`]({% link deployment/config.md %}#kubernetes-container-image) 
-configuration option value to be the name of custom image:
+The Flink community provides a [base Docker image]({% link deployment/resource-providers/standalone/docker.md %}#docker-hub-flink-images) which can be used to bundle the user code:
 
-{% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dresourcemanager.taskmanager-timeout=3600000 \
-  -Dkubernetes.container.image=pyflink:latest
+{% highlight dockerfile %}
+FROM flink
+RUN mkdir -p $FLINK_HOME/usrlib
+COPY /path/of/my-flink-job.jar $FLINK_HOME/usrlib/my-flink-job.jar
 {% endhighlight %}
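+
+One possible way to build and publish this image (`custom-image-name` is just a placeholder; push to whichever registry your cluster can pull from):
+
+{% highlight bash %}
+# Build the image from the Dockerfile above and publish it
+$ docker build -t custom-image-name .
+$ docker push custom-image-name
+{% endhighlight %}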
-</div>
-
-</div>
 
-### Submitting jobs to an existing Session
+After creating and publishing the Docker image under `custom-image-name`, you can start an Application cluster with the following command:
 
-<div class="codetabs" markdown="1">
-<div data-lang="java" markdown="1">
-Use the following command to submit a Flink Job to the Kubernetes cluster.
 {% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> examples/streaming/WindowJoin.jar
+$ ./bin/flink run-application --target kubernetes-application \
+-Dkubernetes.cluster-id=my-first-application-cluster \
+-Dkubernetes.container.image=custom-image-name \
+local:///opt/flink/usrlib/my-flink-job.jar
 {% endhighlight %}
-</div>
 
-<div data-lang="python" markdown="1">
-Use the following command to submit a PyFlink Job to the Kubernetes cluster.
-{% highlight bash %}
-$ ./bin/flink run -d -t kubernetes-session -Dkubernetes.cluster-id=<ClusterId> -pym scala_function -pyfs examples/python/table/udf
-{% endhighlight %}
-</div>
-</div>
+<span class="label label-info">Note</span> `local` is the only supported scheme in application mode.
 
-### Accessing Job Manager UI
+The `kubernetes.cluster-id` option specifies the cluster name and must be unique.
+If you do not specify this option, then Flink will generate a random name.
 
-There are several ways to expose a Service onto an external (outside of your cluster) IP address.
-This can be configured using [`kubernetes.rest-service.exposed.type`]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type).
+The `kubernetes.container.image` option specifies the image to start the pods with.
 
-- `ClusterIP`: Exposes the service on a cluster-internal IP.
-The Service is only reachable within the cluster. If you want to access the Job Manager ui or submit job to the existing session, you need to start a local proxy.
-You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
+Once the application cluster is deployed you can interact with it:
 
 {% highlight bash %}
-$ kubectl port-forward service/<ServiceName> 8081
+# List running job on the cluster
+$ ./bin/flink list --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster
+# Cancel running job
+$ ./bin/flink cancel --target kubernetes-application -Dkubernetes.cluster-id=my-first-application-cluster <jobId>
 {% endhighlight %}
 
-- `NodePort`: Exposes the service on each Node’s IP at a static port (the `NodePort`). `<NodeIP>:<NodePort>` could be used to contact the Job Manager Service. `NodeIP` could be easily replaced with Kubernetes ApiServer address.
-You could find it in your kube config file.
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/flink`.
 
-- `LoadBalancer`: Exposes the service externally using a cloud provider’s load balancer.
-Since the cloud provider and Kubernetes needs some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
-You can use `kubectl get services/<ClusterId>-rest` to get EXTERNAL-IP and then construct the load balancer JobManager Web Interface manually `http://<EXTERNAL-IP>:8081`.
+### Session Mode
 
-  <span class="label label-warning">Warning!</span> Your JobManager (which can run arbitary jar files) might be exposed to the public internet, without authentication.
+You have seen the deployment of a Session cluster in the [Getting Started](#getting-started) guide at the top of the page.
 
-- `ExternalName`: Map a service to a DNS name, not supported in current version.
+The Session Mode can be run in two modes:
 
-Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
+* **detached mode** (default): The `kubernetes-session.sh` deploys the Flink cluster on Kubernetes and then terminates.
 
-### Attach to an existing Session
+* **attached mode** (`-Dexecution.attached=true`): The `kubernetes-session.sh` stays alive and allows entering commands to control the running Flink cluster.
+  For example, `stop` stops the running Session cluster.
+  Type `help` to list all supported commands.
 
-The Kubernetes session is started in detached mode by default, meaning the Flink client will exit after submitting all the resources to the Kubernetes cluster. Use the following command to attach to an existing session.
+In order to re-attach to a running Session cluster with the cluster id `my-first-flink-cluster` use the following command:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-### Stop Flink Session
+You can override configurations set in `conf/flink-conf.yaml` by passing key-value pairs `-Dkey=value` to `bin/kubernetes-session.sh`.
 
-To stop a Flink Kubernetes session, attach the Flink client to the cluster and type `stop`.
+#### Stop a Running Session Cluster
+
+In order to stop a running Session Cluster with cluster id `my-first-flink-cluster` you can either [delete the Flink deployment](#manual-resource-cleanup) or use:
 
 {% highlight bash %}
-$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=<ClusterId> -Dexecution.attached=true
+$ echo 'stop' | ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster -Dexecution.attached=true
 {% endhighlight %}
 
-#### Manual Resource Cleanup
+{% top %}
 
-Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to cleanup all cluster components.
-All the Flink created resources, including `ConfigMap`, `Service`, `Pod`, have been set the OwnerReference to `deployment/<ClusterId>`.
-When the deployment is deleted, all other resources will be deleted automatically.
+## Flink on Kubernetes Reference
 
-{% highlight bash %}
-$ kubectl delete deployment/<ClusterID>
-{% endhighlight %}
+### Configuring Flink on Kubernetes
 
-## Flink Kubernetes Application
+The Kubernetes-specific configuration options are listed on the [configuration page]({% link deployment/config.md %}#kubernetes).
 
-### Start Flink Application
-<div class="codetabs" markdown="1">
-Application mode allows users to create a single image containing their Job and the Flink runtime, which will automatically create and destroy cluster components as needed. The Flink community provides base docker images [customized]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for any use case.
-<div data-lang="java" markdown="1">
-{% highlight dockerfile %}
-FROM flink
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/my-flink-job-*.jar $FLINK_HOME/usrlib/my-flink-job.jar
-{% endhighlight %}
+### Accessing Flink's Web UI
 
-Use the following command to start a Flink application.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  local:///opt/flink/usrlib/my-flink-job.jar
-{% endhighlight %}
-</div>
+Flink's Web UI and REST endpoint can be exposed in several ways via the [kubernetes.rest-service.exposed.type]({% link deployment/config.md %}#kubernetes-rest-service-exposed-type) configuration option.
 
-<div data-lang="python" markdown="1">
-{% highlight dockerfile %}
-FROM flink
+- **ClusterIP**: Exposes the service on a cluster-internal IP.
+  The Service is only reachable within the cluster.
+  If you want to access the JobManager UI or submit a job to the existing session, you need to start a local proxy.
+  You can then use `localhost:8081` to submit a Flink job to the session or view the dashboard.
 
-# install python3 and pip3
-RUN apt-get update -y && \
-    apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
-RUN ln -s /usr/bin/python3 /usr/bin/python
+{% highlight bash %}
+$ kubectl port-forward service/<ServiceName> 8081
+{% endhighlight %}
 
-# install Python Flink
-RUN pip3 install apache-flink
-COPY /path/of/python/codes /opt/python_codes
+- **NodePort**: Exposes the service on each Node’s IP at a static port (the `NodePort`).
+  `<NodeIP>:<NodePort>` can be used to contact the JobManager service (see the lookup sketch after this list).
+  `NodeIP` can also be replaced with the Kubernetes ApiServer address.
+  You can find its address in your kube config file.
 
-# if there are third party python dependencies, users can install them when building the image
-COPY /path/to/requirements.txt /opt/requirements.txt
-RUN pip3 install -r requirements.txt
+- **LoadBalancer**: Exposes the service externally using a cloud provider’s load balancer.
+  Since the cloud provider and Kubernetes need some time to prepare the load balancer, you may get a `NodePort` JobManager Web Interface in the client log.
+  You can use `kubectl get services/<cluster-id>-rest` to get the EXTERNAL-IP and then manually construct the load balancer JobManager Web Interface as `http://<EXTERNAL-IP>:8081`.
 
-# if the job requires external java dependencies, they should be built into the image as well
-RUN mkdir -p $FLINK_HOME/usrlib
-COPY /path/of/external/jar/dependencies $FLINK_HOME/usrlib/
-{% endhighlight %}
+Please reference the official documentation on [publishing services in Kubernetes](https://kubernetes.io/docs/concepts/services-networking/service/#publishing-services-service-types) for more information.
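+
+A minimal lookup sketch for the `NodePort` case (the cluster id `my-first-flink-cluster` is only an example):
+
+{% highlight bash %}
+# Shows the statically assigned port of the rest service, e.g. 8081:30081/TCP
+$ kubectl get services/my-first-flink-cluster-rest
+# The ApiServer address can serve as NodeIP; it is listed in your kube config
+$ kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
+{% endhighlight %}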
 
-Use the following command to start a PyFlink application, assuming the application image name is **my-pyflink-app:latest**.
-{% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=my-pyflink-app:latest \
-  -pym <ENTRY_MODULE_NAME> (or -py /opt/python_codes/<ENTRY_FILE_NAME>) -pyfs /opt/python_codes
-{% endhighlight %}
-You are able to specify the python main entry script path with `-py` or main entry module name with `-pym`, the path
- of the python codes in the image with `-pyfs` and some other options.
-</div>
-</div>
-Note: Only "local" is supported as schema for application mode. This assumes that the jar is located in the image, not the Flink client.
+### Logging
 
-Note: All the jars in the "$FLINK_HOME/usrlib" directory in the image will be added to user classpath.
+The Kubernetes integration exposes `conf/log4j-console.properties` and `conf/logback-console.xml` as a ConfigMap to the pods.
+Changes to these files will be visible to a newly started cluster.
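+
+For example, you could inspect the ConfigMap backing these files as follows (assuming the cluster id `my-first-flink-cluster`):
+
+{% highlight bash %}
+# Show the logging-related keys shipped to the pods
+$ kubectl describe configmap flink-config-my-first-flink-cluster
+{% endhighlight %}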
 
-### Stop Flink Application
+#### Accessing the Logs
 
-When an application is stopped, all Flink cluster resources are automatically destroyed.
-As always, Jobs may stop when manually canceled or, in the case of bounded Jobs, complete.
+By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
+The `STDOUT` and `STDERR` output will only be redirected to the console.
+You can access them via
 
 {% highlight bash %}
-$ ./bin/flink cancel -t kubernetes-application -Dkubernetes.cluster-id=<ClusterID> <JobID>
+$ kubectl logs <pod-name>
 {% endhighlight %}
 
+If the pod is running, you can also use `kubectl exec -it <pod-name> bash` to tunnel in and view the logs or debug the process.
 
-## Log Files
+#### Accessing the Logs of the TaskManagers
 
-By default, the JobManager and TaskManager will output the logs to the console and `/opt/flink/log` in each pod simultaneously.
-The STDOUT and STDERR will only be redirected to the console. You can access them via `kubectl logs <PodName>`.
+Flink will automatically de-allocate idling TaskManagers in order to not waste resources.
+This behaviour can make it harder to access the logs of the respective pods.
+You can increase the time before idling TaskManagers are released by configuring [resourcemanager.taskmanager-timeout]({% link deployment/config.md %}#resourcemanager-taskmanager-timeout) so that you have more time to inspect the log files.
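+
+For example, the following sketch keeps idling TaskManagers around for one hour (the timeout is given in milliseconds):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dresourcemanager.taskmanager-timeout=3600000
+{% endhighlight %}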
 
-If the pod is running, you can also use `kubectl exec -it <PodName> bash` to tunnel in and view the logs or debug the process.
+#### Changing the Log Level Dynamically
 
-## Using plugins
+If you have configured your logger to [detect configuration changes automatically]({% link deployment/advanced/logging.md %}), then you can dynamically adapt the log level by changing the respective ConfigMap (assuming that the cluster id is `my-first-flink-cluster`):
 
-In order to use [plugins]({% link deployment/filesystems/plugins.md %}), they must be copied to the correct location in the Flink JobManager/TaskManager pod for them to work. 
-You can use the built-in plugins without mounting a volume or building a custom Docker image.
-For example, use the following command to pass the environment variable to enable the S3 plugin for your Flink application.
+{% highlight bash %}
+$ kubectl edit cm flink-config-my-first-flink-cluster
+{% endhighlight %}
+
+### Using Plugins
+
+In order to use [plugins]({% link deployment/filesystems/plugins.md %}), you must copy them to the correct location in the Flink JobManager/TaskManager pod.
+You can use the [built-in plugins]({% link deployment/resource-providers/standalone/docker.md %}#using-plugins) without mounting a volume or building a custom Docker image.
+For example, use the following command to enable the S3 plugin for your Flink session cluster.
 
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/usrlib/my-flink-job.jar
+$ ./bin/kubernetes-session.sh \
+-Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
+-Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar
 {% endhighlight %}
 
-## Using Secrets
+### Custom Docker Image
+
+If you want to use a custom Docker image, then you can specify it via the configuration option `kubernetes.container.image`.
+The Flink community provides a rich [Flink Docker image]({% link deployment/resource-providers/standalone/docker.md %}) which can be a good starting point.
+See [how to customize Flink's Docker image]({% link deployment/resource-providers/standalone/docker.md %}#customize-flink-image) for how to enable plugins, add dependencies and other options.
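+
+For example, a session cluster could be started with a custom image like this (`custom-image-name` is a placeholder):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.cluster-id=my-first-flink-cluster \
+  -Dkubernetes.container.image=custom-image-name
+{% endhighlight %}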
+
+### Using Secrets
 
 [Kubernetes Secrets](https://kubernetes.io/docs/concepts/configuration/secret/) is an object that contains a small amount of sensitive data such as a password, a token, or a key.
-Such information might otherwise be put in a Pod specification or in an image. Flink on Kubernetes can use Secrets in two ways:
-
-- Using Secrets as files from a pod;
-
-- Using Secrets as environment variables;
-
-### Using Secrets as files from a pod
-
-Here is an example of a Pod that mounts a Secret in a volume:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    volumeMounts:
-    - name: foo
-      mountPath: "/opt/foo"
-  volumes:
-  - name: foo
-    secret:
-      secretName: foo
-{% endhighlight %}
+Such information might otherwise be put in a Pod specification or in an image.
+Flink on Kubernetes can use Secrets in two ways:
+
+* Using Secrets as files from a pod;
+
+* Using Secrets as environment variables;
 
-By applying this yaml, each key in foo Secrets becomes the filename under `/opt/foo` path. Flink on Kubernetes can enable this feature by the following command:
+#### Using Secrets as Files From a Pod
+
+The following command will mount the secret `mysecret` under the path `/path/to/secret` in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.secrets=foo:/opt/foo
+$ ./bin/kubernetes-session.sh -Dkubernetes.secrets=mysecret:/path/to/secret
 {% endhighlight %}
 
+The username and password of the secret `mysecret` can then be found in the files `/path/to/secret/username` and `/path/to/secret/password`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-files-from-a-pod).
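+
+To verify the mount, you could, for instance, inspect a running pod:
+
+{% highlight bash %}
+# <pod-name> refers to any JobManager or TaskManager pod of the cluster
+$ kubectl exec -it <pod-name> -- cat /path/to/secret/username
+{% endhighlight %}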
 
-### Using Secrets as environment variables
-
-Here is an example of a Pod that uses secrets from environment variables:
-
-{% highlight yaml %}
-apiVersion: v1
-kind: Pod
-metadata:
-  name: foo
-spec:
-  containers:
-  - name: foo
-    image: foo
-    env:
-      - name: FOO_ENV
-        valueFrom:
-          secretKeyRef:
-            name: foo_secret
-            key: foo_key
-{% endhighlight %}
+#### Using Secrets as Environment Variables
 
-By applying this yaml, an environment variable named `FOO_ENV` is added into `foo` container, and `FOO_ENV` consumes the value of `foo_key` which is defined in Secrets `foo_secret`.
-Flink on Kubernetes can enable this feature by the following command:
+The following command will expose the secret `mysecret` as environment variables in the started pods:
 
 {% highlight bash %}
-$ ./bin/kubernetes-session.sh \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dkubernetes.env.secretKeyRef=env:FOO_ENV,secret:foo_secret,key:foo_key
+$ ./bin/kubernetes-session.sh \
+  -Dkubernetes.env.secretKeyRef='env:SECRET_USERNAME,secret:mysecret,key:username;env:SECRET_PASSWORD,secret:mysecret,key:password'
 {% endhighlight %}
 
+The env variable `SECRET_USERNAME` contains the username and the env variable `SECRET_PASSWORD` contains the password of the secret `mysecret`.
 For more details see the [official Kubernetes documentation](https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables).
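+
+Again, a quick way to verify this from a running pod could be:
+
+{% highlight bash %}
+# env runs inside the pod; grep filters the output locally
+$ kubectl exec <pod-name> -- env | grep SECRET_
+{% endhighlight %}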
 
-## High-Availability with Native Kubernetes
+### High-Availability on Kubernetes
 
 For high availability on Kubernetes, you can use the [existing high availability services]({% link deployment/ha/index.md %}).
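+
+As a sketch, Kubernetes HA services could be enabled for an Application cluster as follows; the `storageDir` must point to durable storage reachable by all pods (the S3 bucket and job jar below are placeholders, and an S3 filesystem plugin must be enabled):
+
+{% highlight bash %}
+$ ./bin/flink run-application --target kubernetes-application \
+  -Dkubernetes.cluster-id=my-first-application-cluster \
+  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
+  -Dhigh-availability.storageDir=s3://flink/flink-ha \
+  local:///opt/flink/usrlib/my-flink-job.jar
+{% endhighlight %}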
 
-### How to configure Kubernetes HA Services
+### Manual Resource Cleanup
+
+Flink uses [Kubernetes OwnerReference's](https://kubernetes.io/docs/concepts/workloads/controllers/garbage-collection/) to clean up all cluster components.
+All the Flink-created resources, including `ConfigMap`, `Service`, and `Pod`, have their OwnerReference set to `deployment/<cluster-id>`.
+When the deployment is deleted, all other resources will be deleted automatically.
 
-Using the following command to start a native Flink application cluster on Kubernetes with high availability configured.
 {% highlight bash %}
-$ ./bin/flink run-application -p 8 -t kubernetes-application \
-  -Dkubernetes.cluster-id=<ClusterId> \
-  -Dtaskmanager.memory.process.size=4096m \
-  -Dkubernetes.taskmanager.cpu=2 \
-  -Dtaskmanager.numberOfTaskSlots=4 \
-  -Dkubernetes.container.image=<CustomImageName> \
-  -Dhigh-availability=org.apache.flink.kubernetes.highavailability.KubernetesHaServicesFactory \
-  -Dhigh-availability.storageDir=s3://flink/flink-ha \
-  -Drestart-strategy=fixed-delay -Drestart-strategy.fixed-delay.attempts=10 \
-  -Dcontainerized.master.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  -Dcontainerized.taskmanager.env.ENABLE_BUILT_IN_PLUGINS=flink-s3-fs-hadoop-{{site.version}}.jar \
-  local:///opt/flink/examples/streaming/StateMachineExample.jar
+$ kubectl delete deployment/<cluster-id>
 {% endhighlight %}
 
-## Kubernetes concepts
+### Supported Kubernetes Versions
 
-### Namespaces
+Currently, all Kubernetes versions `>= 1.9` are supported.
 
-[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) are a way to divide cluster resources between multiple users (via resource quota).
-It is similar to the queue concept in Yarn cluster. Flink on Kubernetes can use namespaces to launch Flink clusters.
-The namespace can be specified using the `-Dkubernetes.namespace=default` argument when starting a Flink cluster.
+### Namespaces
 
-[ResourceQuota](https://kubernetes.io/docs/concepts/policy/resource-quotas/) provides constraints that limit aggregate resource consumption per namespace.
-It can limit the quantity of objects that can be created in a namespace by type, as well as the total amount of compute resources that may be consumed by resources in that project.
+[Namespaces in Kubernetes](https://kubernetes.io/docs/concepts/overview/working-with-objects/namespaces/) divide cluster resources between multiple users via [resource quotas](https://kubernetes.io/docs/concepts/policy/resource-quotas/).
+Flink on Kubernetes can use namespaces to launch Flink clusters.
+The namespace can be configured via [kubernetes.namespace]({% link deployment/config.md %}#kubernetes-namespace).
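+
+For example, to launch a session cluster in a namespace called `flink-namespace` (an example name that must exist beforehand):
+
+{% highlight bash %}
+$ ./bin/kubernetes-session.sh -Dkubernetes.namespace=flink-namespace
+{% endhighlight %}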
 
 ### RBAC
 
 Role-based access control ([RBAC](https://kubernetes.io/docs/reference/access-authn-authz/rbac/)) is a method of regulating access to compute or network resources based on the roles of individual users within an enterprise.
-Users can configure RBAC roles and service accounts used by JobManager to access the Kubernetes API server within the Kubernetes cluster. 
+Users can configure RBAC roles and service accounts used by the JobManager to access the Kubernetes API server within the Kubernetes cluster.
 
 Every namespace has a default service account; however, the `default` service account may not have the permission to create or delete pods within the Kubernetes cluster.
 Users may need to update the permission of `default` service account or specify another service account that has the right role bound.
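+
+One possible way to bind a suitable role, sketched with plain `kubectl` commands (the account and binding names are examples, not prescribed by Flink):
+
+{% highlight bash %}
+# Create a dedicated service account
+$ kubectl create serviceaccount flink
+# Grant it permissions to manage pods; the `edit` cluster role is one option
+$ kubectl create clusterrolebinding flink-role-binding-flink \
+  --clusterrole=edit --serviceaccount=default:flink
+{% endhighlight %}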

Review comment:
       Good idea. I'll update it.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] tillrohrmann commented on a change in pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
tillrohrmann commented on a change in pull request #14305:
URL: https://github.com/apache/flink/pull/14305#discussion_r537388837



##########
File path: docs/deployment/resource-providers/standalone/docker.md
##########
@@ -352,6 +352,27 @@ as described in [how to run the Flink image](#how-to-run-flink-image).
     # e.g. to distribute the custom image to your cluster
     docker push custom_flink_image
     ```
+  
+### Enabling Python
+
+To build a custom image which has Python and Pyflink prepared, you can refer to the following Dockerfile:
+{% highlight Dockerfile %}
+FROM flink
+
+# install python3 and pip3
+RUN apt-get update -y && \
+apt-get install -y python3.7 python3-pip python3.7-dev && rm -rf /var/lib/apt/lists/*
+RUN ln -s /usr/bin/python3 /usr/bin/python
+
+# install Python Flink
+RUN pip3 install apache-flink

Review comment:
       I would suggest to update this as part of reworking the `docker.md` page as my change's intention is to only maintain the current information we have. cc @rmetzger who is currently reworking these pages.




----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org



[GitHub] [flink] flinkbot edited a comment on pull request #14305: [FLINK-20355][docs] Add new native K8s documentation page

Posted by GitBox <gi...@apache.org>.
flinkbot edited a comment on pull request #14305:
URL: https://github.com/apache/flink/pull/14305#issuecomment-738174107


   ## CI report:
   
   * 981e374fc5199c6b887094cdc692e17043cf65ea UNKNOWN
   * 457fc69726ff01bd37aa6ea96865fb1cc9f86c90 Azure: [FAILURE](https://dev.azure.com/apache-flink/98463496-1af2-4620-8eab-a2ecc1a2e6fe/_build/results?buildId=10571) 
   * d63721954708c8a9b3c907b68b9475818fc436b3 UNKNOWN
   * d027fb0e285fef63bb4e6ab4a071ad3edd7a04e3 UNKNOWN
   
   <details>
   <summary>Bot commands</summary>
     The @flinkbot bot supports the following commands:
   
    - `@flinkbot run travis` re-run the last Travis build
    - `@flinkbot run azure` re-run the last Azure build
   </details>


----------------------------------------------------------------
This is an automated message from the Apache Git Service.
To respond to the message, please log on to GitHub and use the
URL above to go to the specific comment.

For queries about this service, please contact Infrastructure at:
users@infra.apache.org