Posted to commits@flink.apache.org by tr...@apache.org on 2018/08/17 07:04:20 UTC

[flink] branch release-1.6 updated (decc0bf -> a12d087)

This is an automated email from the ASF dual-hosted git repository.

trohrmann pushed a change to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git.


    from decc0bf  [FLINK-10020] [kinesis] Support recoverable exceptions in listShards.
     new 02d9d35  [hotfix] Update Travis' Docker Compose version to 1.22.0
     new dcbc408  [hotfix][docs] Introduce Github branch variable in _config.yml
     new a12d087  [FLINK-10001][docs] Add documentation for job cluster deployment on Docker and K8s

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .travis.yml                                        | 21 ++++---
 docs/_config.yml                                   |  2 +
 docs/ops/deployment/docker.md                      | 42 +++++++++----
 docs/ops/deployment/kubernetes.md                  | 67 ++++++++++++++------
 flink-container/docker/README.md                   | 72 ++++++++++++++++------
 flink-container/docker/docker-compose.yml          | 13 +++-
 flink-container/kubernetes/README.md               | 11 ++--
 .../container-scripts/docker-compose.test.yml      |  2 +-
 8 files changed, 164 insertions(+), 66 deletions(-)


[flink] 01/03: [hotfix] Update Travis' Docker Compose version to 1.22.0

Posted by tr...@apache.org.

trohrmann pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 02d9d352532dbf1f6dd3d34ced7ede3db70298d6
Author: Till Rohrmann <tr...@apache.org>
AuthorDate: Thu Aug 16 09:34:09 2018 +0200

    [hotfix] Update Travis' Docker Compose version to 1.22.0
---
 .travis.yml | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/.travis.yml b/.travis.yml
index cad9c87..2bee7e2 100644
--- a/.travis.yml
+++ b/.travis.yml
@@ -90,15 +90,15 @@ matrix:
 git:
   depth: 100
 
-
 env:
-    global:
-        # Global variable to avoid hanging travis builds when downloading cache archives.
-        - MALLOC_ARENA_MAX=2
-        # Build artifacts like logs (variables for apache/flink repo)
-        - secure: "gL3QRn6/XyVK+Em9RmVqpM6nbTwlhjK4/JiRYZGGCkBgTq4ZnG+Eq2qKAO22TAsqRSi7g7WAoAhUulPt0SJqH7hjMe0LetbO0izbVXDefwf2PJlsNgBbuFG6604++VUaUEyfPYYw9ADjV59LWG7+B/fjbRsevqRBZ30b1gv/tQQ="
-        - secure: "eM9r8IglvnUKctxz/ga6hwGnCpdOvGyYdGj0H/UiNDEx3Lq1A6yp3gChEIXGJqRUXDI5TaIuidunUGY7KHml8urm8eG2Yk2ttxXehZqLpEaOU2jdNJCdLX8tlVfh14T9bxG5AYHQEV3qJUqDFtfXD3whvzuinrm1oEIA3qUxiA8="
-        - secure: "EQYDWgJM5ANJ/sAFwmSEwSTOe9CDN/ENyQAr5/ntM67XanhTZj2Amgt9LthCRUU4EEPl/OFUTwNHMpv/+wa3q7dwVFldSIg5wyCndzJSATPyPBVjYgsXIQZVIjsq4TwTyrTteT55V6Oz2+l27Fvung2FPuN83ovswsJePFzMBxI="
+  global:
+    # Global variable to avoid hanging travis builds when downloading cache archives.
+    - MALLOC_ARENA_MAX=2
+    # Build artifacts like logs (variables for apache/flink repo)
+    - secure: "gL3QRn6/XyVK+Em9RmVqpM6nbTwlhjK4/JiRYZGGCkBgTq4ZnG+Eq2qKAO22TAsqRSi7g7WAoAhUulPt0SJqH7hjMe0LetbO0izbVXDefwf2PJlsNgBbuFG6604++VUaUEyfPYYw9ADjV59LWG7+B/fjbRsevqRBZ30b1gv/tQQ="
+    - secure: "eM9r8IglvnUKctxz/ga6hwGnCpdOvGyYdGj0H/UiNDEx3Lq1A6yp3gChEIXGJqRUXDI5TaIuidunUGY7KHml8urm8eG2Yk2ttxXehZqLpEaOU2jdNJCdLX8tlVfh14T9bxG5AYHQEV3qJUqDFtfXD3whvzuinrm1oEIA3qUxiA8="
+    - secure: "EQYDWgJM5ANJ/sAFwmSEwSTOe9CDN/ENyQAr5/ntM67XanhTZj2Amgt9LthCRUU4EEPl/OFUTwNHMpv/+wa3q7dwVFldSIg5wyCndzJSATPyPBVjYgsXIQZVIjsq4TwTyrTteT55V6Oz2+l27Fvung2FPuN83ovswsJePFzMBxI="
+    - DOCKER_COMPOSE_VERSION=1.22.0
 
 before_script:
    - "gem install --no-document --version 0.8.9 faraday "
@@ -113,6 +113,11 @@ before_install:
    - "export MAVEN_OPTS=\"-Dorg.slf4j.simpleLogger.showDateTime=true -Dorg.slf4j.simpleLogger.dateTimeFormat=HH:mm:ss.SSS\""
 # just in case: clean up the .m2 home and remove invalid jar files
    - 'test ! -d $HOME/.m2/repository/ || find $HOME/.m2/repository/ -name "*.jar" -exec sh -c ''if ! zip -T {} >/dev/null ; then echo "deleting invalid file: {}"; rm -f {} ; fi'' \;'
+# Installing the specified docker compose version
+   - sudo rm /usr/local/bin/docker-compose
+   - curl -L https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-`uname -s`-`uname -m` > docker-compose
+   - chmod +x docker-compose
+   - sudo mv docker-compose /usr/local/bin
 
 # We run mvn and monitor its output. If there is no output for the specified number of seconds, we
 # print the stack traces of all running Java processes.
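The `before_install` step added above assembles the release URL from `DOCKER_COMPOSE_VERSION` and the host's `uname`. A minimal sketch of just the URL construction (no download, no `sudo`; shown for illustration only):

```shell
#!/bin/sh
# Sketch of the URL construction used in the Travis step above; nothing is downloaded.
DOCKER_COMPOSE_VERSION=1.22.0

# uname -s / uname -m select the binary matching the build host, e.g. Linux-x86_64.
URL="https://github.com/docker/compose/releases/download/${DOCKER_COMPOSE_VERSION}/docker-compose-$(uname -s)-$(uname -m)"
echo "$URL"
```

On a typical Travis Linux worker this resolves to the `docker-compose-Linux-x86_64` asset of the 1.22.0 release.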


[flink] 03/03: [FLINK-10001][docs] Add documentation for job cluster deployment on Docker and K8s

Posted by tr...@apache.org.

trohrmann pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git

commit a12d087a17d64f748ede472b369cea44c56c923f
Author: Till Rohrmann <tr...@apache.org>
AuthorDate: Tue Aug 14 14:39:14 2018 +0200

    [FLINK-10001][docs] Add documentation for job cluster deployment on Docker and K8s
    
    [FLINK-10001][docs] Add documentation for job cluster deployment on K8s
    
    This closes #6562.
---
 docs/ops/deployment/docker.md                      | 42 +++++++++----
 docs/ops/deployment/kubernetes.md                  | 67 ++++++++++++++------
 flink-container/docker/README.md                   | 72 ++++++++++++++++------
 flink-container/docker/docker-compose.yml          | 13 +++-
 flink-container/kubernetes/README.md               | 11 ++--
 .../container-scripts/docker-compose.test.yml      |  2 +-
 6 files changed, 149 insertions(+), 58 deletions(-)

diff --git a/docs/ops/deployment/docker.md b/docs/ops/deployment/docker.md
index 4986f2a..453693d 100644
--- a/docs/ops/deployment/docker.md
+++ b/docs/ops/deployment/docker.md
@@ -23,20 +23,24 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Docker](https://www.docker.com) is a popular container runtime. There are
-official Docker images for Apache Flink available on Docker Hub which can be
-used directly or extended to better integrate into a production environment.
+[Docker](https://www.docker.com) is a popular container runtime. 
+There are Docker images for Apache Flink available on Docker Hub which can be used to deploy a session cluster.
+The Flink repository also contains tooling to create container images to deploy a job cluster.
 
 * This will be replaced by the TOC
 {:toc}
 
-## Official Docker Images
+## Flink session cluster
+
+A Flink session cluster can be used to run multiple jobs. 
+Each job needs to be submitted to the cluster after it has been deployed. 
+
+### Docker images
 
 The [official Docker repository](https://hub.docker.com/_/flink/) is
 hosted on Docker Hub and serves images of Flink version 1.2.1 and later.
 
-Images for each supported combination of Hadoop and Scala are available, and
-tag aliases are provided for convenience.
+Images for each supported combination of Hadoop and Scala are available, and tag aliases are provided for convenience.
 
 For example, the following aliases can be used: *(`1.2.y` indicates the latest
 release of Flink 1.2)*
@@ -63,13 +67,25 @@ For example:
 **Note:** The docker images are provided as a community project by individuals
 on a best-effort basis. They are not official releases by the Apache Flink PMC.
 
+## Flink job cluster
+
+A Flink job cluster is a dedicated cluster which runs a single job. 
+The job is part of the image and, thus, there is no extra job submission needed. 
+
+### Docker images
+
+The Flink job cluster image needs to contain the user code jars of the job for which the cluster is started.
+Therefore, one needs to build a dedicated container image for every job.
+The `flink-container` module contains a `build.sh` script which can be used to create such an image.
+Please see the [instructions](https://github.com/apache/flink/blob/{{ site.github_branch }}/flink-container/docker/README.md) for more details. 
+
 ## Flink with Docker Compose
 
 [Docker Compose](https://docs.docker.com/compose/) is a convenient way to run a
 group of Docker containers locally.
 
-An [example config file](https://github.com/docker-flink/examples/blob/master/docker-compose.yml)
-is available on GitHub.
+Example config files for a [session cluster](https://github.com/docker-flink/examples/blob/master/docker-compose.yml) and a [job cluster](https://github.com/apache/flink/blob/{{ site.github_branch }}/flink-container/docker/docker-compose.yml)
+are available on GitHub.
 
 ### Usage
 
@@ -85,10 +101,14 @@ is available on GitHub.
 
         docker-compose scale taskmanager=<N>
 
-When the cluster is running, you can visit the web UI at [http://localhost:8081
-](http://localhost:8081) and submit a job.
+* Kill the cluster
+
+        docker-compose kill
+
+When the cluster is running, you can visit the web UI at [http://localhost:8081](http://localhost:8081). 
+You can also use the web UI to submit a job to a session cluster.
 
-To submit a job via the command line, you must copy the JAR to the Jobmanager
+To submit a job to a session cluster via the command line, you must copy the JAR to the JobManager
 container and submit the job from there.
 
 For example:
diff --git a/docs/ops/deployment/kubernetes.md b/docs/ops/deployment/kubernetes.md
index 37489fe..298e473 100644
--- a/docs/ops/deployment/kubernetes.md
+++ b/docs/ops/deployment/kubernetes.md
@@ -23,51 +23,78 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-[Kubernetes](https://kubernetes.io) is a container orchestration system.
+This page describes how to deploy a Flink job and session cluster on [Kubernetes](https://kubernetes.io).
 
 * This will be replaced by the TOC
 {:toc}
 
-## Simple Kubernetes Flink Cluster
+## Setup Kubernetes
 
-A basic Flink cluster deployment in Kubernetes has three components:
+Please follow [Kubernetes' setup guide](https://kubernetes.io/docs/setup/) in order to deploy a Kubernetes cluster.
+If you want to run Kubernetes locally, we recommend using [MiniKube](https://kubernetes.io/docs/setup/minikube/).
 
-* a Deployment for a single Jobmanager
-* a Deployment for a pool of Taskmanagers
-* a Service exposing the Jobmanager's RPC and UI ports
+<div class="alert alert-info" markdown="span">
+  <strong>Note:</strong> If using MiniKube, please make sure to execute `minikube ssh 'sudo ip link set docker0
+  promisc on'` before deploying a Flink cluster. Otherwise, Flink components cannot reference
+  themselves through a Kubernetes service.
+</div>
 
-### Launching the cluster
+## Flink session cluster on Kubernetes
 
-Using the [resource definitions found below](#simple-kubernetes-flink-cluster-
-resources), launch the cluster with the `kubectl` command:
+A Flink session cluster is executed as a long-running Kubernetes Deployment. 
+Note that you can run multiple Flink jobs on a session cluster.
+Each job needs to be submitted to the cluster after the cluster has been deployed.
+
+A basic Flink session cluster deployment in Kubernetes has three components:
+
+* a Deployment/Job which runs the JobManager
+* a Deployment for a pool of TaskManagers
+* a Service exposing the JobManager's REST and UI ports
+
+### Deploy Flink session cluster on Kubernetes
+
+Using the resource definitions for a [session cluster](#session-cluster-resource-definitions), launch the cluster with the `kubectl` command:
 
-    kubectl create -f jobmanager-deployment.yaml
     kubectl create -f jobmanager-service.yaml
+    kubectl create -f jobmanager-deployment.yaml
     kubectl create -f taskmanager-deployment.yaml
 
 You can then access the Flink UI via `kubectl proxy`:
 
 1. Run `kubectl proxy` in a terminal
-2. Navigate to [http://localhost:8001/api/v1/proxy/namespaces/default/services/flink-jobmanager:8081
-](http://localhost:8001/api/v1/proxy/namespaces/default/services/flink-
-jobmanager:8081) in your browser
+2. Navigate to [http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:ui/proxy](http://localhost:8001/api/v1/namespaces/default/services/flink-jobmanager:ui/proxy) in your browser
 
-### Deleting the cluster
-
-Again, use `kubectl` to delete the cluster:
+In order to terminate the Flink session cluster, use `kubectl`:
 
     kubectl delete -f jobmanager-deployment.yaml
-    kubectl delete -f jobmanager-service.yaml
     kubectl delete -f taskmanager-deployment.yaml
+    kubectl delete -f jobmanager-service.yaml
+
+## Flink job cluster on Kubernetes
+
+A Flink job cluster is a dedicated cluster which runs a single job. 
+The job is part of the image and, thus, there is no extra job submission needed. 
+
+### Creating the job-specific image
+
+The Flink job cluster image needs to contain the user code jars of the job for which the cluster is started.
+Therefore, one needs to build a dedicated container image for every job.
+Please follow these [instructions](https://github.com/apache/flink/blob/{{ site.github_branch }}/flink-container/docker/README.md) to build the Docker image.
+    
+### Deploy Flink job cluster on Kubernetes
+
+In order to deploy a job cluster on Kubernetes, please follow these [instructions](https://github.com/apache/flink/blob/{{ site.github_branch }}/flink-container/kubernetes/README.md#deploy-flink-job-cluster).
 
 ## Advanced Cluster Deployment
 
-An early version of a [Flink Helm chart](https://github.com/docker-flink/
-examples) is available on GitHub.
+An early version of a [Flink Helm chart](https://github.com/docker-flink/examples) is available on GitHub.
 
 ## Appendix
 
-### Simple Kubernetes Flink cluster resources
+### Session cluster resource definitions
+
+The Deployment definitions use the pre-built image `flink:latest` which can be found [on Docker Hub](https://hub.docker.com/r/_/flink/).
+The image is built from this [Github repository](https://github.com/docker-flink/docker-flink).
 
 `jobmanager-deployment.yaml`
 {% highlight yaml %}
diff --git a/flink-container/docker/README.md b/flink-container/docker/README.md
index 644b31c..7d3c030 100644
--- a/flink-container/docker/README.md
+++ b/flink-container/docker/README.md
@@ -1,40 +1,74 @@
-# Apache Flink job cluster deployment on docker using docker-compose
+# Apache Flink job cluster Docker image
 
-## Installation
+In order to deploy a job cluster on Docker, one needs to create an image which contains the Flink binaries as well as the user code jars of the job to execute.
+This directory contains a `build.sh` script which facilitates the process.
+The script takes a Flink distribution from an official release, an archive, or a local distribution and combines it with the specified job jar.
 
-Install the most recent stable version of docker
-https://docs.docker.com/installation/
+## Installing Docker
 
-## Build
+Install the most recent stable version of [Docker](https://docs.docker.com/installation/).
 
-Images are based on the official Java Alpine (OpenJDK 8) image. If you want to
-build the flink image run:
+## Building the Docker image
 
-    build.sh --from-local-dist --job-jar /path/to/job/jar/job.jar --image-name flink:job
+Images are based on the official Java Alpine (OpenJDK 8) image.
 
-If you want to build the container for a specific version of flink/hadoop/scala
-you can configure it in the respective args:
+Before building the image, one needs to build the user code jars for the job.
+Assume that the job jar is stored under `<PATH_TO_JOB_JAR>`.
 
-    docker build --build-arg FLINK_VERSION=1.6.0 --build-arg HADOOP_VERSION=28 --build-arg SCALA_VERSION=2.11 -t "flink:1.6.0-hadoop2.8-scala_2.11" flink
+If you want to build the Flink image from the version you have checked out locally run:
 
-## Deploy
+    build.sh --from-local-dist --job-jar <PATH_TO_JOB_JAR> --image-name <IMAGE_NAME>
+    
+Note that you first need to call `mvn package -pl flink-dist -am` to build the Flink binaries.
 
-- Deploy cluster and see config/setup log output (best run in a screen session)
+If you want to build the Flink image from an archive stored under `<PATH_TO_ARCHIVE>` run:
 
-        docker-compose up
+    build.sh --from-archive <PATH_TO_ARCHIVE> --job-jar <PATH_TO_JOB_JAR> --image-name <IMAGE_NAME>
 
-- Deploy as a daemon (and return)
+If you want to build the Flink image for a specific version of Flink/Hadoop/Scala run:
 
-        docker-compose up -d
+    build.sh --from-release --flink-version 1.6.0 --hadoop-version 2.8 --scala-version 2.11 --image-name <IMAGE_NAME>
+    
+The script will try to download the released version from the Apache archive.
 
-- Scale the cluster up or down to *N* TaskManagers
+## Deploying via Docker Compose
+
+The `docker-compose.yml` contains the following parameters:
+
+* `FLINK_DOCKER_IMAGE_NAME` - Image name to use for the deployment (default: `flink-job:latest`)
+* `FLINK_JOB` - Name of the Flink job to execute (default: none)
+* `DEFAULT_PARALLELISM` - Default parallelism with which to start the job (default: 1)
+* `FLINK_JOB_ARGUMENTS` - Additional arguments which will be passed to the job cluster (default: none)
+
+The parameters can be set by exporting the corresponding environment variable.
+
+Deploy cluster and see config/setup log output (best run in a screen session)
+
+        FLINK_DOCKER_IMAGE_NAME=<IMAGE_NAME> FLINK_JOB=<JOB_NAME> docker-compose up
+
+Deploy as a daemon (and return)
+
+        FLINK_DOCKER_IMAGE_NAME=<IMAGE_NAME> FLINK_JOB=<JOB_NAME> docker-compose up -d
+        
+In order to start the job with a different default parallelism set `DEFAULT_PARALLELISM`. 
+This will automatically start `DEFAULT_PARALLELISM` TaskManagers:
+        
+        FLINK_DOCKER_IMAGE_NAME=<IMAGE_NAME> FLINK_JOB=<JOB_NAME> DEFAULT_PARALLELISM=<DEFAULT_PARALLELISM> docker-compose up
+        
+One can also provide additional job arguments via `FLINK_JOB_ARGUMENTS` which are passed to the job:
+        
+        FLINK_DOCKER_IMAGE_NAME=<IMAGE_NAME> FLINK_JOB=<JOB_NAME> FLINK_JOB_ARGUMENTS=<JOB_ARGUMENTS> docker-compose up
+
+Scale the cluster up or down to *N* TaskManagers
 
         docker-compose scale taskmanager=<N>
 
-- Access the Job Manager container
+Access the Job Manager container
 
         docker exec -it $(docker ps --filter name=flink_jobmanager --format={{.ID}}) /bin/sh
+        
+Access the web UI by going to `<IP_DOCKER_MACHINE>:8081` in your web browser.
 
-- Kill the cluster
+Kill the cluster
 
         docker-compose kill
diff --git a/flink-container/docker/docker-compose.yml b/flink-container/docker/docker-compose.yml
index 5fddff3..28b5368 100644
--- a/flink-container/docker/docker-compose.yml
+++ b/flink-container/docker/docker-compose.yml
@@ -16,16 +16,23 @@
 # limitations under the License.
 ################################################################################
 
-# Set the FLINK_DOCKER_IMAGE_NAME environment variable to override the image name to use
+# Docker compose file for a Flink job cluster deployment.
+#
+# Parameters:
+# * FLINK_DOCKER_IMAGE_NAME - Image name to use for the deployment (default: flink-job:latest)
+# * FLINK_JOB - Name of the Flink job to execute (default: none)
+# * DEFAULT_PARALLELISM - Default parallelism with which to start the job (default: 1)
+# * FLINK_JOB_ARGUMENTS - Additional arguments which will be passed to the job cluster (default: none)
 
-version: "2.1"
+version: "2.2"
 services:
   job-cluster:
     image: ${FLINK_DOCKER_IMAGE_NAME:-flink-job}
     ports:
       - "8081:8081"
-    command: job-cluster --job-classname ${FLINK_JOB} -Djobmanager.rpc.address=job-cluster ${FLINK_JOB_ARGUMENTS}
+    command: job-cluster --job-classname ${FLINK_JOB} -Djobmanager.rpc.address=job-cluster -Dparallelism.default=${DEFAULT_PARALLELISM:-1} ${FLINK_JOB_ARGUMENTS}
 
   taskmanager:
     image: ${FLINK_DOCKER_IMAGE_NAME:-flink-job}
     command: task-manager -Djobmanager.rpc.address=job-cluster
+    scale: ${DEFAULT_PARALLELISM:-1}
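The `${VAR:-default}` references in the compose file above follow standard shell parameter-expansion semantics, which is how the defaults listed in its header comment take effect. A quick sketch of how they resolve:

```shell
#!/bin/sh
# Mirrors the defaulting used by docker-compose.yml above.
unset FLINK_DOCKER_IMAGE_NAME DEFAULT_PARALLELISM

IMAGE="${FLINK_DOCKER_IMAGE_NAME:-flink-job}"   # unset, so falls back to flink-job
SCALE="${DEFAULT_PARALLELISM:-1}"               # unset, so falls back to 1
echo "$IMAGE $SCALE"                            # prints: flink-job 1

# Once the variable is set, the override wins over the default.
FLINK_DOCKER_IMAGE_NAME=my-flink-job
IMAGE="${FLINK_DOCKER_IMAGE_NAME:-flink-job}"
echo "$IMAGE"                                   # prints: my-flink-job
```

This is why exporting `FLINK_DOCKER_IMAGE_NAME`, `FLINK_JOB`, etc. before `docker-compose up` is sufficient to parameterize the deployment.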
diff --git a/flink-container/kubernetes/README.md b/flink-container/kubernetes/README.md
index 5fd2286..efba823 100644
--- a/flink-container/kubernetes/README.md
+++ b/flink-container/kubernetes/README.md
@@ -22,26 +22,29 @@ The files contain the following variables:
 One way to substitute the variables is to use `envsubst`.
 See [here](https://stackoverflow.com/a/23622446/4815083) for a guide to install it on Mac OS X.
 
+Alternatively, copy the template files (suffixed with `*.template`) and replace the variables.
+
 In non HA mode, you should first start the job cluster service:
 
 `kubectl create -f job-cluster-service.yaml`
 
 In order to deploy the job cluster entrypoint run:
 
-`FLINK_IMAGE_NAME=<job-image> FLINK_JOB=<job-name> FLINK_JOB_PARALLELISM=<parallelism> envsubst < job-cluster-job.yaml.template | kubectl create -f -`
+`FLINK_IMAGE_NAME=<IMAGE_NAME> FLINK_JOB=<JOB_NAME> FLINK_JOB_PARALLELISM=<PARALLELISM> envsubst < job-cluster-job.yaml.template | kubectl create -f -`
 
 Now you should see the `flink-job-cluster` job being started by calling `kubectl get job`.
 
 At last, you should start the task manager deployment:
 
-`FLINK_IMAGE_NAME=<job-image> FLINK_JOB_PARALLELISM=<parallelism> envsubst < task-manager-deployment.yaml.template | kubectl create -f -`
+`FLINK_IMAGE_NAME=<IMAGE_NAME> FLINK_JOB_PARALLELISM=<PARALLELISM> envsubst < task-manager-deployment.yaml.template | kubectl create -f -`
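`envsubst` simply substitutes exported environment variables into the `*.template` files. For readers without gettext installed, the effect can be sketched in Python; the template text below is a hypothetical one-line excerpt, not the actual file contents:

```python
from string import Template

# Hypothetical excerpt of a *.yaml.template file, for illustration only.
# string.Template understands the same ${VAR} syntax that envsubst replaces.
template = Template("image: ${FLINK_IMAGE_NAME}\nparallelism: ${FLINK_JOB_PARALLELISM}")

# envsubst reads these from the environment; we supply them explicitly here.
env = {"FLINK_IMAGE_NAME": "flink-job:latest", "FLINK_JOB_PARALLELISM": "2"}
rendered = template.substitute(env)
print(rendered)
```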
 
 ## Interact with Flink job cluster
 
-After starting the job cluster service, the web UI will be available under `<NodeIP>:30081`.
+After starting the job cluster service, the web UI will be available under `<NODE_IP>:30081`.
+In the case of Minikube, `<NODE_IP>` equals `minikube ip`.
 You can then use the Flink client to send Flink commands to the cluster:
 
-`bin/flink list -m <NodeIP:30081>`
+`bin/flink list -m <NODE_IP:30081>`
 
 ## Terminate Flink job cluster
 
diff --git a/flink-end-to-end-tests/test-scripts/container-scripts/docker-compose.test.yml b/flink-end-to-end-tests/test-scripts/container-scripts/docker-compose.test.yml
index 5281111..7af9183 100644
--- a/flink-end-to-end-tests/test-scripts/container-scripts/docker-compose.test.yml
+++ b/flink-end-to-end-tests/test-scripts/container-scripts/docker-compose.test.yml
@@ -18,7 +18,7 @@
 
 # Extensions to flink-container/docker/docker-compose.yml that mounts volumes needed for tests
 
-version: "2.1"
+version: "2.2"
 services:
   job-cluster:
     volumes:


[flink] 02/03: [hotfix][docs] Introduce Github branch variable in _config.yml

Posted by tr...@apache.org.

trohrmann pushed a commit to branch release-1.6
in repository https://gitbox.apache.org/repos/asf/flink.git

commit dcbc4086bcae1d94c4f4cddf02f92515d0f37bd5
Author: Till Rohrmann <tr...@apache.org>
AuthorDate: Wed Aug 15 16:27:12 2018 +0200

    [hotfix][docs] Introduce Github branch variable in _config.yml
    
    github_branch is the name of the Github branch of the current release.
---
 docs/_config.yml | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/docs/_config.yml b/docs/_config.yml
index 4ccf6ef..9080c16 100644
--- a/docs/_config.yml
+++ b/docs/_config.yml
@@ -33,6 +33,8 @@ version: "1.6.0"
 version_title: "1.6"
 version_javadocs: "1.6"
 version_scaladocs: "1.6"
+# Branch on Github for this version
+github_branch: "release-1.6"
 
 # This suffix is appended to the Scala-dependent Maven artifact names
 scala_version_suffix: "_2.11"
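Docs pages reference this value as `{{ site.github_branch }}` in Liquid markup, as seen in the links added by commit 03/03. The rendering itself is done by Jekyll; a rough Python stand-in shows how the branch variable expands into the GitHub URLs used in the docs:

```python
# Rough stand-in for Jekyll's Liquid substitution of {{ site.github_branch }};
# the real rendering is performed by Jekyll when the docs are built.
site = {"github_branch": "release-1.6"}

url_template = "https://github.com/apache/flink/blob/{{ site.github_branch }}/flink-container/docker/README.md"
url = url_template.replace("{{ site.github_branch }}", site["github_branch"])
print(url)
```

Bumping `github_branch` in `_config.yml` for each release line keeps every such link pointing at the matching branch.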