Posted to commits@flink.apache.org by gr...@apache.org on 2017/05/10 18:36:02 UTC

[1/3] flink git commit: [FLINK-6330] [docs] Add basic Docker, K8s docs

Repository: flink
Updated Branches:
  refs/heads/master 3642c5a60 -> 71d76731d


[FLINK-6330] [docs] Add basic Docker, K8s docs

This closes #3751


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/91f37658
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/91f37658
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/91f37658

Branch: refs/heads/master
Commit: 91f376589b717d46b124d7f8e181950926f2ca1e
Parents: 3642c5a
Author: Patrick Lucas <me...@patricklucas.com>
Authored: Fri Apr 21 15:00:53 2017 +0200
Committer: Greg Hogan <co...@greghogan.com>
Committed: Wed May 10 13:27:21 2017 -0400

----------------------------------------------------------------------
 docs/docker/run.sh       |   4 +-
 docs/setup/docker.md     |  99 ++++++++++++++++++++++++++
 docs/setup/kubernetes.md | 157 ++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 259 insertions(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/91f37658/docs/docker/run.sh
----------------------------------------------------------------------
diff --git a/docs/docker/run.sh b/docs/docker/run.sh
index 3c8878a..5598c0a 100755
--- a/docs/docker/run.sh
+++ b/docs/docker/run.sh
@@ -31,10 +31,12 @@ if [ "$(uname -s)" == "Linux" ]; then
   USER_NAME=${SUDO_USER:=$USER}
   USER_ID=$(id -u "${USER_NAME}")
   GROUP_ID=$(id -g "${USER_NAME}")
+  LOCAL_HOME="/home/${USER_NAME}"
 else # boot2docker uid and gid
   USER_NAME=$USER
   USER_ID=1000
   GROUP_ID=50
+  LOCAL_HOME="/Users/${USER_NAME}"
 fi
 
 docker build -t "${IMAGE_NAME}-${USER_NAME}" - <<UserSpecificDocker
@@ -65,7 +67,7 @@ docker run -i -t \
   -w ${FLINK_DOC_ROOT} \
   -u "${USER}" \
   -v "${FLINK_DOC_ROOT}:${FLINK_DOC_ROOT}" \
-  -v "/home/${USER_NAME}:/home/${USER_NAME}" \
+  -v "${LOCAL_HOME}:/home/${USER_NAME}" \
   -p 4000:4000 \
   ${IMAGE_NAME}-${USER_NAME} \
   bash -c "${CMD}"

http://git-wip-us.apache.org/repos/asf/flink/blob/91f37658/docs/setup/docker.md
----------------------------------------------------------------------
diff --git a/docs/setup/docker.md b/docs/setup/docker.md
new file mode 100644
index 0000000..29e696f
--- /dev/null
+++ b/docs/setup/docker.md
@@ -0,0 +1,99 @@
+---
+title:  "Docker Setup"
+nav-title: Docker
+nav-parent_id: deployment
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+[Docker](https://www.docker.com) is a popular container runtime. There are
+official Flink Docker images available on Docker Hub which can be used directly
+or extended to better integrate into a production environment.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Official Flink Docker Images
+
+The [official Flink Docker repository](https://hub.docker.com/_/flink/) is
+hosted on Docker Hub and serves images of Flink version 1.2.1 and later.
+
+Images for each supported combination of Hadoop and Scala are available, and
+tag aliases are provided for convenience.
+
+For example, the following aliases can be used: *(`1.2.y` indicates the latest
+release of Flink 1.2)*
+
+* `flink:latest` →
+`flink:<latest-flink>-hadoop<latest-hadoop>-scala_<latest-scala>`
+* `flink:1.2` → `flink:1.2.y-hadoop27-scala_2.11`
+* `flink:1.2.1-scala_2.10` → `flink:1.2.1-hadoop27-scala_2.10`
+* `flink:1.2-hadoop26` → `flink:1.2.y-hadoop26-scala_2.11`
+
+<!-- NOTE: uncomment when docker-flink/docker-flink/issues/14 is resolved. -->
+<!--
+Additionally, images based on Alpine Linux are available. Reference them by
+appending `-alpine` to the tag. For the Alpine version of `flink:latest`, use
+`flink:alpine`.
+
+For example:
+
+* `flink:alpine`
+* `flink:1.2.1-alpine`
+* `flink:1.2-scala_2.10-alpine`
+-->
+
+## Flink with Docker Compose
+
+[Docker Compose](https://docs.docker.com/compose/) is a convenient way to run a
+group of Docker containers locally.
+
+An [example config file](https://github.com/docker-flink/examples) is available
+on GitHub.
+
+### Usage
+
+* Launch a cluster in the foreground
+
+        docker-compose up
+
+* Launch a cluster in the background
+
+        docker-compose up -d
+
+* Scale the cluster up or down to *N* TaskManagers
+
+        docker-compose scale taskmanager=<N>
+
+When the cluster is running, you can visit the web UI at [http://localhost:8081
+](http://localhost:8081) and submit a job.
+
+To submit a job via the command line, you must copy the JAR to the Jobmanager
+container and submit the job from there.
+
+For example:
+
+{% raw %}
+    $ JOBMANAGER_CONTAINER=$(docker ps --filter name=jobmanager --format={{.ID}})
+    $ docker cp path/to/jar "$JOBMANAGER_CONTAINER":/job.jar
+    $ docker exec -t -i "$JOBMANAGER_CONTAINER" flink run /job.jar
+{% endraw %}
+
+{% top %}

http://git-wip-us.apache.org/repos/asf/flink/blob/91f37658/docs/setup/kubernetes.md
----------------------------------------------------------------------
diff --git a/docs/setup/kubernetes.md b/docs/setup/kubernetes.md
new file mode 100644
index 0000000..0790a05
--- /dev/null
+++ b/docs/setup/kubernetes.md
@@ -0,0 +1,157 @@
+---
+title:  "Kubernetes Setup"
+nav-title: Kubernetes
+nav-parent_id: deployment
+nav-pos: 4
+---
+<!--
+Licensed to the Apache Software Foundation (ASF) under one
+or more contributor license agreements.  See the NOTICE file
+distributed with this work for additional information
+regarding copyright ownership.  The ASF licenses this file
+to you under the Apache License, Version 2.0 (the
+"License"); you may not use this file except in compliance
+with the License.  You may obtain a copy of the License at
+
+  http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing,
+software distributed under the License is distributed on an
+"AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
+KIND, either express or implied.  See the License for the
+specific language governing permissions and limitations
+under the License.
+-->
+
+[Kubernetes](https://kubernetes.io) is a container orchestration system.
+
+* This will be replaced by the TOC
+{:toc}
+
+## Simple Kubernetes Flink Cluster
+
+A basic Flink cluster deployment in Kubernetes has three components:
+
+* a Deployment for a single Jobmanager
+* a Deployment for a pool of Taskmanagers
+* a Service exposing the Jobmanager's RPC and UI ports
+
+### Launching the cluster
+
+Using the [resource definitions found below](#simple-kubernetes-flink-cluster-resources), launch the cluster with the `kubectl` command:
+
+    kubectl create -f jobmanager-deployment.yaml
+    kubectl create -f taskmanager-deployment.yaml
+    kubectl create -f jobmanager-service.yaml
+
+You can then access the Flink UI via `kubectl proxy`:
+
+1. Run `kubectl proxy` in a terminal
+2. Navigate to [http://localhost:8001/api/v1/proxy/namespaces/default/services/flink-jobmanager:8081](http://localhost:8001/api/v1/proxy/namespaces/default/services/flink-jobmanager:8081) in your browser
+
+### Deleting the cluster
+
+Again, use `kubectl` to delete the cluster:
+
+    kubectl delete -f jobmanager-deployment.yaml
+    kubectl delete -f taskmanager-deployment.yaml
+    kubectl delete -f jobmanager-service.yaml
+
+## Advanced Cluster Deployment
+
+An early version of a [Flink Helm chart](https://github.com/docker-flink/examples) is available on GitHub.
+
+## Appendix
+
+### Simple Kubernetes Flink cluster resources
+
+`jobmanager-deployment.yaml`
+{% highlight yaml %}
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: flink-jobmanager
+spec:
+  replicas: 1
+  template:
+    metadata:
+      labels:
+        app: flink
+        component: jobmanager
+    spec:
+      containers:
+      - name: jobmanager
+        image: flink:latest
+        args:
+        - jobmanager
+        ports:
+        - containerPort: 6123
+          name: rpc
+        - containerPort: 6124
+          name: blob
+        - containerPort: 6125
+          name: query
+        - containerPort: 8081
+          name: ui
+        env:
+        - name: JOB_MANAGER_RPC_ADDRESS
+          value: flink-jobmanager
+{% endhighlight %}
+
+`taskmanager-deployment.yaml`
+{% highlight yaml %}
+apiVersion: extensions/v1beta1
+kind: Deployment
+metadata:
+  name: flink-taskmanager
+spec:
+  replicas: 2
+  template:
+    metadata:
+      labels:
+        app: flink
+        component: taskmanager
+    spec:
+      containers:
+      - name: taskmanager
+        image: flink:latest
+        args:
+        - taskmanager
+        ports:
+        - containerPort: 6121
+          name: data
+        - containerPort: 6122
+          name: rpc
+        - containerPort: 6125
+          name: query
+        env:
+        - name: JOB_MANAGER_RPC_ADDRESS
+          value: flink-jobmanager
+{% endhighlight %}
+
+`jobmanager-service.yaml`
+{% highlight yaml %}
+apiVersion: v1
+kind: Service
+metadata:
+  name: flink-jobmanager
+spec:
+  ports:
+  - name: rpc
+    port: 6123
+  - name: blob
+    port: 6124
+  - name: query
+    port: 6125
+  - name: ui
+    port: 8081
+  selector:
+    app: flink
+    component: jobmanager
+{% endhighlight %}
+
+{% top %}


[2/3] flink git commit: [FLINK-6512] [docs] improved code formatting in some examples

Posted by gr...@apache.org.
[FLINK-6512] [docs] improved code formatting in some examples

This closes #3857


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/8a8d95e3
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/8a8d95e3
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/8a8d95e3

Branch: refs/heads/master
Commit: 8a8d95e31132889ea5cc3423ea50280fbaf47062
Parents: 91f3765
Author: David Anderson <da...@alpinegizmo.com>
Authored: Tue May 9 17:23:46 2017 +0200
Committer: Greg Hogan <co...@greghogan.com>
Committed: Wed May 10 13:30:02 2017 -0400

----------------------------------------------------------------------
 docs/dev/best_practices.md |  30 ++--
 docs/dev/migration.md      | 300 +++++++++++++++++++++-------------------
 2 files changed, 171 insertions(+), 159 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/8a8d95e3/docs/dev/best_practices.md
----------------------------------------------------------------------
diff --git a/docs/dev/best_practices.md b/docs/dev/best_practices.md
index b2111c4..4dfd7fd 100644
--- a/docs/dev/best_practices.md
+++ b/docs/dev/best_practices.md
@@ -59,8 +59,8 @@ ParameterTool parameter = ParameterTool.fromPropertiesFile(propertiesFile);
 This allows getting arguments like `--input hdfs:///mydata --elements 42` from the command line.
 {% highlight java %}
 public static void main(String[] args) {
-	ParameterTool parameter = ParameterTool.fromArgs(args);
-	// .. regular code ..
+    ParameterTool parameter = ParameterTool.fromArgs(args);
+    // .. regular code ..
 {% endhighlight %}
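
A minimal sketch of how those two arguments might then be read (the variable names here are illustrative):

{% highlight java %}
// --input is treated as required, --elements falls back to a default
String input = parameter.getRequired("input");
int elements = parameter.getInt("elements", 42);
{% endhighlight %}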
 
 
@@ -114,17 +114,18 @@ The example below shows how to pass the parameters as a `Configuration` object t
 
 {% highlight java %}
 ParameterTool parameters = ParameterTool.fromArgs(args);
-DataSet<Tuple2<String, Integer>> counts = text.flatMap(new Tokenizer()).withParameters(parameters.getConfiguration())
+DataSet<Tuple2<String, Integer>> counts = text
+        .flatMap(new Tokenizer()).withParameters(parameters.getConfiguration())
 {% endhighlight %}
 
 In the `Tokenizer`, the object is now accessible in the `open(Configuration conf)` method:
 
 {% highlight java %}
 public static final class Tokenizer extends RichFlatMapFunction<String, Tuple2<String, Integer>> {
-	@Override
-	public void open(Configuration parameters) throws Exception {
-		parameters.getInteger("myInt", -1);
-		// .. do
+    @Override
+    public void open(Configuration parameters) throws Exception {
+	parameters.getInteger("myInt", -1);
+	// .. do
 {% endhighlight %}
 
 
@@ -147,11 +148,12 @@ Access them in any rich user function:
 {% highlight java %}
 public static final class Tokenizer extends RichFlatMapFunction<String, Tuple2<String, Integer>> {
 
-	@Override
-	public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
-		ParameterTool parameters = (ParameterTool) getRuntimeContext().getExecutionConfig().getGlobalJobParameters();
-		parameters.getRequired("input");
-		// .. do more ..
+    @Override
+    public void flatMap(String value, Collector<Tuple2<String, Integer>> out) {
+	ParameterTool parameters = (ParameterTool)
+	    getRuntimeContext().getExecutionConfig().getGlobalJobParameters();
+	parameters.getRequired("input");
+	// .. do more ..
 {% endhighlight %}
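
For the lookup via `getGlobalJobParameters()` above to succeed, the parameters have to be registered on the execution environment first. A minimal sketch, assuming `env` is the job's execution environment:

{% highlight java %}
ParameterTool parameters = ParameterTool.fromArgs(args);

// make the parameters visible to all rich user functions and the web interface
env.getConfig().setGlobalJobParameters(parameters);
{% endhighlight %}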
 
 
@@ -198,8 +200,8 @@ import org.slf4j.Logger;
 import org.slf4j.LoggerFactory;
 
 public class MyClass implements MapFunction {
-	private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
-	// ...
+    private static final Logger LOG = LoggerFactory.getLogger(MyClass.class);
+    // ...
 {% endhighlight %}
 
 

http://git-wip-us.apache.org/repos/asf/flink/blob/8a8d95e3/docs/dev/migration.md
----------------------------------------------------------------------
diff --git a/docs/dev/migration.md b/docs/dev/migration.md
index a5910a8..11eb42c 100644
--- a/docs/dev/migration.md
+++ b/docs/dev/migration.md
@@ -51,69 +51,70 @@ As running examples for the remainder of this document we will use the `CountMap
 functions. The first is an example of a function with **keyed** state, while
 the second has **non-keyed** state. The code for the aforementioned two functions in Flink 1.1 is presented below:
 
-    public class CountMapper extends RichFlatMapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>> {
+{% highlight java %}
+public class CountMapper extends RichFlatMapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>> {
 
-        private transient ValueState<Integer> counter;
+    private transient ValueState<Integer> counter;
 
-        private final int numberElements;
+    private final int numberElements;
 
-        public CountMapper(int numberElements) {
-            this.numberElements = numberElements;
-        }
+    public CountMapper(int numberElements) {
+        this.numberElements = numberElements;
+    }
 
-        @Override
-        public void open(Configuration parameters) throws Exception {
-            counter = getRuntimeContext().getState(
-      	        new ValueStateDescriptor<>("counter", Integer.class, 0));
-        }
+    @Override
+    public void open(Configuration parameters) throws Exception {
+        counter = getRuntimeContext().getState(
+            new ValueStateDescriptor<>("counter", Integer.class, 0));
+    }
 
-        @Override
-        public void flatMap(Tuple2<String, Integer> value, Collector<Tuple2<String, Integer>> out) throws Exception {
-            int count = counter.value() + 1;
-      	    counter.update(count);
+    @Override
+    public void flatMap(Tuple2<String, Integer> value, Collector<Tuple2<String, Integer>> out) throws Exception {
+        int count = counter.value() + 1;
+        counter.update(count);
 
-      	    if (count % numberElements == 0) {
-     		    out.collect(Tuple2.of(value.f0, count));
-     		    counter.update(0); // reset to 0
-     	    }
+        if (count % numberElements == 0) {
+            out.collect(Tuple2.of(value.f0, count));
+            counter.update(0); // reset to 0
         }
     }
+}
 
+public class BufferingSink implements SinkFunction<Tuple2<String, Integer>>,
+    Checkpointed<ArrayList<Tuple2<String, Integer>>> {
 
-    public class BufferingSink implements SinkFunction<Tuple2<String, Integer>>,
-            Checkpointed<ArrayList<Tuple2<String, Integer>>> {
-
-	    private final int threshold;
+    private final int threshold;
 
-	    private ArrayList<Tuple2<String, Integer>> bufferedElements;
+    private ArrayList<Tuple2<String, Integer>> bufferedElements;
 
-	    BufferingSink(int threshold) {
-		    this.threshold = threshold;
-		    this.bufferedElements = new ArrayList<>();
-	    }
+    BufferingSink(int threshold) {
+        this.threshold = threshold;
+        this.bufferedElements = new ArrayList<>();
+    }
 
-    	@Override
-	    public void invoke(Tuple2<String, Integer> value) throws Exception {
-		    bufferedElements.add(value);
-		    if (bufferedElements.size() == threshold) {
-			    for (Tuple2<String, Integer> element: bufferedElements) {
-				    // send it to the sink
-			    }
-			    bufferedElements.clear();
-		    }
+    @Override
+    public void invoke(Tuple2<String, Integer> value) throws Exception {
+        bufferedElements.add(value);
+        if (bufferedElements.size() == threshold) {
+            for (Tuple2<String, Integer> element: bufferedElements) {
+	        // send it to the sink
 	    }
+	    bufferedElements.clear();
+	}
+    }
 
-	    @Override
-	    public ArrayList<Tuple2<String, Integer>> snapshotState(
-	            long checkpointId, long checkpointTimestamp) throws Exception {
-		    return bufferedElements;
-	    }
+    @Override
+    public ArrayList<Tuple2<String, Integer>> snapshotState(
+        long checkpointId, long checkpointTimestamp) throws Exception {
+	    return bufferedElements;
+    }
 
-	    @Override
-	    public void restoreState(ArrayList<Tuple2<String, Integer>> state) throws Exception {
-	    	bufferedElements.addAll(state);
-        }
+    @Override
+    public void restoreState(ArrayList<Tuple2<String, Integer>> state) throws Exception {
+        bufferedElements.addAll(state);
     }
+}
+{% endhighlight %}
 
 
 The `CountMapper` is a `RichFlatMapFunction` which assumes a grouped-by-key input stream of the form
@@ -160,9 +161,11 @@ the [State documentation]({{ site.baseurl }}/dev/stream/state.html).
 
 The `ListCheckpointed` interface requires the implementation of two methods:
 
-    List<T> snapshotState(long checkpointId, long timestamp) throws Exception;
+{% highlight java %}
+List<T> snapshotState(long checkpointId, long timestamp) throws Exception;
 
-    void restoreState(List<T> state) throws Exception;
+void restoreState(List<T> state) throws Exception;
+{% endhighlight %}
 
 Their semantics are the same as their counterparts in the old `Checkpointed` interface. The only difference
 is that now `snapshotState()` should return a list of objects to checkpoint, as stated earlier, and
@@ -170,53 +173,55 @@ is that now `snapshotState()` should return a list of objects to checkpoint, as
 return a `Collections.singletonList(MY_STATE)` in the `snapshotState()`. The updated code for `BufferingSink`
 is included below:
 
-    public class BufferingSinkListCheckpointed implements
-            SinkFunction<Tuple2<String, Integer>>,
-            ListCheckpointed<Tuple2<String, Integer>>,
-            CheckpointedRestoring<ArrayList<Tuple2<String, Integer>>> {
+{% highlight java %}
+public class BufferingSinkListCheckpointed implements
+        SinkFunction<Tuple2<String, Integer>>,
+        ListCheckpointed<Tuple2<String, Integer>>,
+        CheckpointedRestoring<ArrayList<Tuple2<String, Integer>>> {
 
-        private final int threshold;
+    private final int threshold;
 
-        private transient ListState<Tuple2<String, Integer>> checkpointedState;
+    private transient ListState<Tuple2<String, Integer>> checkpointedState;
 
-        private List<Tuple2<String, Integer>> bufferedElements;
+    private List<Tuple2<String, Integer>> bufferedElements;
 
-        public BufferingSinkListCheckpointed(int threshold) {
-            this.threshold = threshold;
-            this.bufferedElements = new ArrayList<>();
-        }
+    public BufferingSinkListCheckpointed(int threshold) {
+        this.threshold = threshold;
+        this.bufferedElements = new ArrayList<>();
+    }
 
-        @Override
-        public void invoke(Tuple2<String, Integer> value) throws Exception {
-            this.bufferedElements.add(value);
-            if (bufferedElements.size() == threshold) {
-                for (Tuple2<String, Integer> element: bufferedElements) {
-                    // send it to the sink
-                }
-                bufferedElements.clear();
+    @Override
+    public void invoke(Tuple2<String, Integer> value) throws Exception {
+        this.bufferedElements.add(value);
+        if (bufferedElements.size() == threshold) {
+            for (Tuple2<String, Integer> element: bufferedElements) {
+                // send it to the sink
             }
+            bufferedElements.clear();
         }
+    }
 
-        @Override
-        public List<Tuple2<String, Integer>> snapshotState(
-                long checkpointId, long timestamp) throws Exception {
-            return this.bufferedElements;
-        }
-
-        @Override
-        public void restoreState(List<Tuple2<String, Integer>> state) throws Exception {
-            if (!state.isEmpty()) {
-                this.bufferedElements.addAll(state);
-            }
-        }
+    @Override
+    public List<Tuple2<String, Integer>> snapshotState(
+            long checkpointId, long timestamp) throws Exception {
+        return this.bufferedElements;
+    }
 
-        @Override
-        public void restoreState(ArrayList<Tuple2<String, Integer>> state) throws Exception {
-            // this is from the CheckpointedRestoring interface.
+    @Override
+    public void restoreState(List<Tuple2<String, Integer>> state) throws Exception {
+        if (!state.isEmpty()) {
             this.bufferedElements.addAll(state);
         }
     }
 
+    @Override
+    public void restoreState(ArrayList<Tuple2<String, Integer>> state) throws Exception {
+        // this is from the CheckpointedRestoring interface.
+        this.bufferedElements.addAll(state);
+    }
+}
+{% endhighlight %}
+
 As shown in the code, the updated function also implements the `CheckpointedRestoring` interface. This is for backwards
 compatibility reasons and more details will be explained at the end of this section.
 
@@ -224,9 +229,11 @@ compatibility reasons and more details will be explained at the end of this sect
 
 The `CheckpointedFunction` interface requires again the implementation of two methods:
 
-    void snapshotState(FunctionSnapshotContext context) throws Exception;
+{% highlight java %}
+void snapshotState(FunctionSnapshotContext context) throws Exception;
 
-    void initializeState(FunctionInitializationContext context) throws Exception;
+void initializeState(FunctionInitializationContext context) throws Exception;
+{% endhighlight %}
 
 As in Flink 1.1, `snapshotState()` is called whenever a checkpoint is performed, but now `initializeState()` (which is
 the counterpart of the `restoreState()`) is called every time the user-defined function is initialized, rather than only
@@ -234,57 +241,59 @@ in the case that we are recovering from a failure. Given this, `initializeState(
 types of state are initialized, but also where state recovery logic is included. An implementation of the
 `CheckpointedFunction` interface for `BufferingSink` is presented below.
 
-    public class BufferingSink implements SinkFunction<Tuple2<String, Integer>>,
-            CheckpointedFunction, CheckpointedRestoring<ArrayList<Tuple2<String, Integer>>> {
+{% highlight java %}
+public class BufferingSink implements SinkFunction<Tuple2<String, Integer>>,
+        CheckpointedFunction, CheckpointedRestoring<ArrayList<Tuple2<String, Integer>>> {
 
-        private final int threshold;
+    private final int threshold;
 
-        private transient ListState<Tuple2<String, Integer>> checkpointedState;
+    private transient ListState<Tuple2<String, Integer>> checkpointedState;
 
-        private List<Tuple2<String, Integer>> bufferedElements;
+    private List<Tuple2<String, Integer>> bufferedElements;
 
-        public BufferingSink(int threshold) {
-            this.threshold = threshold;
-            this.bufferedElements = new ArrayList<>();
-        }
+    public BufferingSink(int threshold) {
+        this.threshold = threshold;
+        this.bufferedElements = new ArrayList<>();
+    }
 
-        @Override
-        public void invoke(Tuple2<String, Integer> value) throws Exception {
-            bufferedElements.add(value);
-            if (bufferedElements.size() == threshold) {
-                for (Tuple2<String, Integer> element: bufferedElements) {
-                    // send it to the sink
-                }
-                bufferedElements.clear();
+    @Override
+    public void invoke(Tuple2<String, Integer> value) throws Exception {
+        bufferedElements.add(value);
+        if (bufferedElements.size() == threshold) {
+            for (Tuple2<String, Integer> element: bufferedElements) {
+                // send it to the sink
             }
+            bufferedElements.clear();
         }
+    }
 
-        @Override
-        public void snapshotState(FunctionSnapshotContext context) throws Exception {
-            checkpointedState.clear();
-            for (Tuple2<String, Integer> element : bufferedElements) {
-                checkpointedState.add(element);
-            }
+    @Override
+    public void snapshotState(FunctionSnapshotContext context) throws Exception {
+        checkpointedState.clear();
+        for (Tuple2<String, Integer> element : bufferedElements) {
+            checkpointedState.add(element);
         }
+    }
 
-        @Override
-        public void initializeState(FunctionInitializationContext context) throws Exception {
-            checkpointedState = context.getOperatorStateStore().
-                getSerializableListState("buffered-elements");
+    @Override
+    public void initializeState(FunctionInitializationContext context) throws Exception {
+        checkpointedState = context.getOperatorStateStore().
+            getSerializableListState("buffered-elements");
 
-            if (context.isRestored()) {
-                for (Tuple2<String, Integer> element : checkpointedState.get()) {
-                    bufferedElements.add(element);
-                }
+        if (context.isRestored()) {
+            for (Tuple2<String, Integer> element : checkpointedState.get()) {
+                bufferedElements.add(element);
             }
         }
+    }
 
-        @Override
-        public void restoreState(ArrayList<Tuple2<String, Integer>> state) throws Exception {
-            // this is from the CheckpointedRestoring interface.
-            this.bufferedElements.addAll(state);
-        }
+    @Override
+    public void restoreState(ArrayList<Tuple2<String, Integer>> state) throws Exception {
+        // this is from the CheckpointedRestoring interface.
+        this.bufferedElements.addAll(state);
     }
+}
+{% endhighlight %}
 
 The `initializeState` takes as argument a `FunctionInitializationContext`. This is used to initialize
 the non-keyed state "container". This is a container of type `ListState` where the non-keyed state objects
@@ -305,40 +314,41 @@ for Flink 1.1. If the `CheckpointedFunction` interface was to be used in the `Co
 the old `open()` method could be removed and the new `snapshotState()` and `initializeState()` methods
 would look like this:
 
-    public class CountMapper extends RichFlatMapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>
-            implements CheckpointedFunction {
+{% highlight java %}
+public class CountMapper extends RichFlatMapFunction<Tuple2<String, Integer>, Tuple2<String, Integer>>
+        implements CheckpointedFunction {
 
-        private transient ValueState<Integer> counter;
+    private transient ValueState<Integer> counter;
 
-        private final int numberElements;
+    private final int numberElements;
 
-        public CountMapper(int numberElements) {
-            this.numberElements = numberElements;
-        }
+    public CountMapper(int numberElements) {
+        this.numberElements = numberElements;
+    }
 
-        @Override
-        public void flatMap(Tuple2<String, Integer> value, Collector<Tuple2<String, Integer>> out) throws Exception {
-            int count = counter.value() + 1;
-            counter.update(count);
+    @Override
+    public void flatMap(Tuple2<String, Integer> value, Collector<Tuple2<String, Integer>> out) throws Exception {
+        int count = counter.value() + 1;
+        counter.update(count);
 
-            if (count % numberElements == 0) {
-                out.collect(Tuple2.of(value.f0, count));
-             	counter.update(0); // reset to 0
-             	}
-            }
+        if (count % numberElements == 0) {
+            out.collect(Tuple2.of(value.f0, count));
+            counter.update(0); // reset to 0
         }
+    }
 
-        @Override
-        public void snapshotState(FunctionSnapshotContext context) throws Exception {
-            //all managed, nothing to do.
-        }
+    @Override
+    public void snapshotState(FunctionSnapshotContext context) throws Exception {
+        // all managed, nothing to do.
+    }
 
-        @Override
-        public void initializeState(FunctionInitializationContext context) throws Exception {
-            counter = context.getKeyedStateStore().getState(
-                new ValueStateDescriptor<>("counter", Integer.class, 0));
-        }
+    @Override
+    public void initializeState(FunctionInitializationContext context) throws Exception {
+        counter = context.getKeyedStateStore().getState(
+            new ValueStateDescriptor<>("counter", Integer.class, 0));
     }
+}
+{% endhighlight %}
 
 Notice that the `snapshotState()` method is empty as Flink itself takes care of snapshotting managed keyed state
 upon checkpointing.


[3/3] flink git commit: [FLINK-6513] [docs] cleaned up some typos and grammatical flaws

Posted by gr...@apache.org.
[FLINK-6513] [docs] cleaned up some typos and grammatical flaws

This closes #3858


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/71d76731
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/71d76731
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/71d76731

Branch: refs/heads/master
Commit: 71d76731dc6f611a6f8772cb06a59f5c642ec6cc
Parents: 8a8d95e
Author: David Anderson <da...@alpinegizmo.com>
Authored: Tue May 9 16:50:53 2017 +0200
Committer: Greg Hogan <co...@greghogan.com>
Committed: Wed May 10 14:26:48 2017 -0400

----------------------------------------------------------------------
 docs/dev/best_practices.md             | 34 +++++++++++++---------------
 docs/dev/stream/checkpointing.md       | 11 ++++-----
 docs/dev/stream/process_function.md    |  4 ++--
 docs/dev/stream/side_output.md         |  4 ++--
 docs/internals/stream_checkpointing.md | 35 ++++++++++++++++-------------
 5 files changed, 46 insertions(+), 42 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/71d76731/docs/dev/best_practices.md
----------------------------------------------------------------------
diff --git a/docs/dev/best_practices.md b/docs/dev/best_practices.md
index 4dfd7fd..6328d22 100644
--- a/docs/dev/best_practices.md
+++ b/docs/dev/best_practices.md
@@ -30,14 +30,12 @@ This page contains a collection of best practices for Flink programmers on how t
 
 ## Parsing command line arguments and passing them around in your Flink application
 
+Almost all Flink applications, both batch and streaming, rely on external configuration parameters.
+They are used to specify input and output sources (like paths or addresses), system parameters (parallelism, runtime configuration), and application specific parameters (typically used within user functions).
 
-Almost all Flink applications, both batch and streaming rely on external configuration parameters.
-For example for specifying input and output sources (like paths or addresses), also system parameters (parallelism, runtime configuration) and application specific parameters (often used within the user functions).
-
-Since version 0.9 we are providing a simple utility called `ParameterTool` to provide at least some basic tooling for solving these problems.
-
-Please note that you don't have to use the `ParameterTool` explained here. Other frameworks such as [Commons CLI](https://commons.apache.org/proper/commons-cli/),
-[argparse4j](http://argparse4j.sourceforge.net/) and others work well with Flink as well.
+Flink provides a simple utility called `ParameterTool` to provide some basic tooling for solving these problems.
+Please note that you don't have to use the `ParameterTool` described here. Other frameworks such as [Commons CLI](https://commons.apache.org/proper/commons-cli/) and
+[argparse4j](http://argparse4j.sourceforge.net/) also work well with Flink.
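
As a rough end-to-end sketch of how such parameters typically flow through a job (the `--input`/`--output` arguments and the `Tokenizer` function are illustrative, not a fixed API):

{% highlight java %}
public static void main(String[] args) throws Exception {
    ParameterTool params = ParameterTool.fromArgs(args);

    ExecutionEnvironment env = ExecutionEnvironment.getExecutionEnvironment();

    // input/output locations come from the command line
    DataSet<String> text = env.readTextFile(params.getRequired("input"));
    text.flatMap(new Tokenizer(params))
        .writeAsText(params.getRequired("output"));

    env.execute("parameterized job");
}
{% endhighlight %}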
 
 
 ### Getting your configuration values into the `ParameterTool`
@@ -89,8 +87,8 @@ parameter.getNumberOfParameters()
 // .. there are more methods available.
 {% endhighlight %}
 
-You can use the return values of these methods directly in the main() method (=the client submitting the application).
-For example you could set the parallelism of a operator like this:
+You can use the return values of these methods directly in the `main()` method of the client submitting the application.
+For example, you could set the parallelism of an operator like this:
 
 {% highlight java %}
 ParameterTool parameters = ParameterTool.fromArgs(args);
@@ -105,10 +103,10 @@ ParameterTool parameters = ParameterTool.fromArgs(args);
 DataSet<Tuple2<String, Integer>> counts = text.flatMap(new Tokenizer(parameters));
 {% endhighlight %}
 
-and then use them inside the function for getting values from the command line.
+and then use it inside the function for getting values from the command line.
 
 
-#### Passing it as a `Configuration` object to single functions
+#### Passing parameters as a `Configuration` object to single functions
 
 The example below shows how to pass the parameters as a `Configuration` object to a user defined function.
 
@@ -131,9 +129,9 @@ public static final class Tokenizer extends RichFlatMapFunction<String, Tuple2<S
 
 #### Register the parameters globally
 
-Parameters registered as a global job parameter at the `ExecutionConfig` allow you to access the configuration values from the JobManager web interface and all functions defined by the user.
+Parameters registered as global job parameters in the `ExecutionConfig` can be accessed as configuration values from the JobManager web interface and in all functions defined by the user.
 
-**Register the parameters globally**
+Register the parameters globally:
 
 {% highlight java %}
 ParameterTool parameters = ParameterTool.fromArgs(args);
@@ -286,14 +284,14 @@ Change your projects `pom.xml` file like this:
 
 The following changes were done in the `<dependencies>` section:
 
- * Exclude all `log4j` dependencies from all Flink dependencies: This causes Maven to ignore Flink's transitive dependencies to log4j.
- * Exclude the `slf4j-log4j12` artifact from Flink's dependencies: Since we are going to use the slf4j to logback binding, we have to remove the slf4j to log4j binding.
+ * Exclude all `log4j` dependencies from all Flink dependencies: this causes Maven to ignore Flink's transitive dependencies to log4j.
+ * Exclude the `slf4j-log4j12` artifact from Flink's dependencies: since we are going to use the slf4j to logback binding, we have to remove the slf4j to log4j binding.
  * Add the Logback dependencies: `logback-core` and `logback-classic`
  * Add dependencies for `log4j-over-slf4j`. `log4j-over-slf4j` is a tool which allows legacy applications which are directly using the Log4j APIs to use the Slf4j interface. Flink depends on Hadoop which is directly using Log4j for logging. Therefore, we need to redirect all logger calls from Log4j to Slf4j which is in turn logging to Logback.
 
 Please note that you need to manually add the exclusions to all new Flink dependencies you are adding to the pom file.
 
-You may also need to check if other dependencies (non Flink) are pulling in log4j bindings. You can analyze the dependencies of your project with `mvn dependency:tree`.
+You may also need to check if other (non-Flink) dependencies are pulling in log4j bindings. You can analyze the dependencies of your project with `mvn dependency:tree`.
 
 
 
@@ -301,7 +299,7 @@ You may also need to check if other dependencies (non Flink) are pulling in log4
 
 This tutorial is applicable when running Flink on YARN or as a standalone cluster.
 
-In order to use Logback instead of Log4j with Flink, you need to remove the `log4j-1.2.xx.jar` and `sfl4j-log4j12-xxx.jar` from the `lib/` directory.
+In order to use Logback instead of Log4j with Flink, you need to remove `log4j-1.2.xx.jar` and `slf4j-log4j12-xxx.jar` from the `lib/` directory.
 
 Next, you need to put the following jar files into the `lib/` folder:
 
@@ -309,7 +307,7 @@ Next, you need to put the following jar files into the `lib/` folder:
  * `logback-core.jar`
  * `log4j-over-slf4j.jar`: This bridge needs to be present in the classpath for redirecting logging calls from Hadoop (which is using Log4j) to Slf4j.
 
-Note that you need to explicitly set the `lib/` directory when using a per job YARN cluster.
+Note that you need to explicitly set the `lib/` directory when using a per-job YARN cluster.
 
 The command to submit Flink on YARN with a custom logger is: `./bin/flink run -yt $FLINK_HOME/lib <... remaining arguments ...>`
 

http://git-wip-us.apache.org/repos/asf/flink/blob/71d76731/docs/dev/stream/checkpointing.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/checkpointing.md b/docs/dev/stream/checkpointing.md
index 774d9ef..3a0a1ae 100644
--- a/docs/dev/stream/checkpointing.md
+++ b/docs/dev/stream/checkpointing.md
@@ -32,7 +32,7 @@ any type of more elaborate operation.
 In order to make state fault tolerant, Flink needs to **checkpoint** the state. Checkpoints allow Flink to recover state and positions
 in the streams to give the application the same semantics as a failure-free execution.
 
-The [documentation on streaming fault tolerance](../../internals/stream_checkpointing.html) describe in detail the technique behind Flink's streaming fault tolerance mechanism.
+The [documentation on streaming fault tolerance](../../internals/stream_checkpointing.html) describes in detail the technique behind Flink's streaming fault tolerance mechanism.
 
 
 ## Prerequisites
@@ -124,12 +124,13 @@ env.getCheckpointConfig.setMaxConcurrentCheckpoints(1)
 
 ## Selecting a State Backend
 
-The checkpointing mechanism stores the progress in the data sources and data sinks, the state of windows, as well as the [user-defined state](state.html) consistently to
-provide *exactly once* processing semantics. Where the checkpoints are stored (e.g., JobManager memory, file system, database) depends on the configured
+Flink's [checkpointing mechanism]({{ site.baseurl }}/internals/stream_checkpointing.html) stores consistent snapshots
+of all the state in timers and stateful operators, including connectors, windows, and any [user-defined state](state.html).
+Where the checkpoints are stored (e.g., JobManager memory, file system, database) depends on the configured
 **State Backend**. 
 
-By default state will be kept in memory, and checkpoints will be stored in-memory at the master node (the JobManager). For proper persistence of large state,
-Flink supports various forms of storing and checkpointing state in so called **State Backends**, which can be set via `StreamExecutionEnvironment.setStateBackend(…)`.
+By default, state is kept in memory in the TaskManagers and checkpoints are stored in memory in the JobManager. For proper persistence of large state,
+Flink supports various approaches for storing and checkpointing state in other state backends. The choice of state backend can be configured via `StreamExecutionEnvironment.setStateBackend(…)`.
 
 See [state backends](../../ops/state_backends.html) for more details on the available state backends and options for job-wide and cluster-wide configuration.
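
As a rough sketch, enabling checkpointing and switching to a file system backend might look like the following (the interval and checkpoint path are placeholders):

{% highlight java %}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// draw a checkpoint every 10 seconds
env.enableCheckpointing(10000);

// keep checkpoint data on a distributed file system instead of JobManager memory
env.setStateBackend(new FsStateBackend("hdfs:///flink/checkpoints"));
{% endhighlight %}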
 

http://git-wip-us.apache.org/repos/asf/flink/blob/71d76731/docs/dev/stream/process_function.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/process_function.md b/docs/dev/stream/process_function.md
index d682a89..fb5f39d 100644
--- a/docs/dev/stream/process_function.md
+++ b/docs/dev/stream/process_function.md
@@ -42,11 +42,11 @@ For fault-tolerant state, the `ProcessFunction` gives access to Flink's [keyed s
 `RuntimeContext`, similar to the way other stateful functions can access keyed state.
 
 The timers allow applications to react to changes in processing time and in [event time](../event_time.html).
-Every call to the function `processElement(...)` gets a `Context` object with gives access to the element's
+Every call to the function `processElement(...)` gets a `Context` object which gives access to the element's
 event time timestamp, and to the *TimerService*. The `TimerService` can be used to register callbacks for future
 event-/processing-time instants. When a timer's particular time is reached, the `onTimer(...)` method is
 called. During that call, all states are again scoped to the key with which the timer was created, allowing
-timers to perform keyed state manipulation as well.
+timers to manipulate keyed state.
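
A minimal sketch of this pattern (the one-minute delay and the pass-through output are illustrative, and the function has to run on a keyed stream, as noted below):

{% highlight java %}
public class TimeoutFunction extends ProcessFunction<String, String> {

    @Override
    public void processElement(String value, Context ctx, Collector<String> out) throws Exception {
        // register a callback one minute after this element's timestamp
        // (assumes event-time timestamps have been assigned)
        ctx.timerService().registerEventTimeTimer(ctx.timestamp() + 60000);
        out.collect(value);
    }

    @Override
    public void onTimer(long timestamp, OnTimerContext ctx, Collector<String> out) throws Exception {
        // state accessed here is scoped to the key for which the timer was registered
        out.collect("timer fired at " + timestamp);
    }
}
{% endhighlight %}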
 
 <span class="label label-info">Note</span> If you want to access keyed state and timers you have
 to apply the `ProcessFunction` on a keyed stream:

http://git-wip-us.apache.org/repos/asf/flink/blob/71d76731/docs/dev/stream/side_output.md
----------------------------------------------------------------------
diff --git a/docs/dev/stream/side_output.md b/docs/dev/stream/side_output.md
index e4c4c19..63b7172 100644
--- a/docs/dev/stream/side_output.md
+++ b/docs/dev/stream/side_output.md
@@ -55,8 +55,8 @@ val outputTag = OutputTag[String]("side-output")
 Notice how the `OutputTag` is typed according to the type of elements that the side output stream
 contains.
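
Because the tag carries the element type, the stream later retrieved for it is typed accordingly. A minimal Java sketch, assuming `mainStream` is the `SingleOutputStreamOperator` produced by a `ProcessFunction` that emits to the tag:

{% highlight java %}
// the anonymous subclass lets Flink capture the side output's element type
final OutputTag<String> outputTag = new OutputTag<String>("side-output") {};

// retrieve the side output as a typed stream
DataStream<String> sideOutput = mainStream.getSideOutput(outputTag);
{% endhighlight %}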
 
-Emitting data to a side output it only possible when using a
-[ProcessFunction]({{ site.baseurl }}/dev/stream/process_function.html). In the function, you can use the `Context` parameter
+Emitting data to a side output is only possible from within a
+[ProcessFunction]({{ site.baseurl }}/dev/stream/process_function.html). You can use the `Context` parameter
 to emit data to a side output identified by an `OutputTag`:
 
 <div class="codetabs" markdown="1">

http://git-wip-us.apache.org/repos/asf/flink/blob/71d76731/docs/internals/stream_checkpointing.md
----------------------------------------------------------------------
diff --git a/docs/internals/stream_checkpointing.md b/docs/internals/stream_checkpointing.md
index edc7967..d701c5e 100644
--- a/docs/internals/stream_checkpointing.md
+++ b/docs/internals/stream_checkpointing.md
@@ -37,17 +37,20 @@ record from the data stream **exactly once**. Note that there is a switch to *do
 (described below).
 
 The fault tolerance mechanism continuously draws snapshots of the distributed streaming data flow. For streaming applications
-with small state, these snapshots are very light-weight and can be drawn frequently without impacting the performance much.
+with small state, these snapshots are very light-weight and can be drawn frequently without much impact on performance.
 The state of the streaming applications is stored at a configurable place (such as the master node, or HDFS).
 
 In case of a program failure (due to machine-, network-, or software failure), Flink stops the distributed streaming dataflow.
 The system then restarts the operators and resets them to the latest successful checkpoint. The input streams are reset to the
 point of the state snapshot. Any records that are processed as part of the restarted parallel dataflow are guaranteed to not
-have been part of the checkpointed state before.
+have been part of the previously checkpointed state.
+
+*Note:* By default, checkpointing is disabled. See [Checkpointing]({{ site.baseurl }}/dev/stream/checkpointing.html) for details on how to enable and configure checkpointing.
 
 *Note:* For this mechanism to realize its full guarantees, the data stream source (such as message queue or broker) needs to be able
 to rewind the stream to a defined recent point. [Apache Kafka](http://kafka.apache.org) has this ability and Flink's connector to
-Kafka exploits this ability.
+Kafka exploits this ability. See [Fault Tolerance Guarantees of Data Sources and Sinks]({{ site.baseurl }}/dev/connectors/guarantees.html) for
+more information about the guarantees provided by Flink's connectors.
 
 *Note:* Because Flink's checkpoints are realized through distributed snapshots, we use the words *snapshot* and *checkpoint* interchangeably.
 
@@ -79,12 +82,12 @@ Stream barriers are injected into the parallel data flow at the stream sources.
 (let's call it <i>S<sub>n</sub></i>) is the position in the source stream up to which the snapshot covers the data. For example, in Apache Kafka, this
 position would be the last record's offset in the partition. This position <i>S<sub>n</sub></i> is reported to the *checkpoint coordinator* (Flink's JobManager).
 
-The barriers then flow downstream. When an intermediate operator has received a barrier for snapshot *n* from all of its input streams, it emits itself a barrier
+The barriers then flow downstream. When an intermediate operator has received a barrier for snapshot *n* from all of its input streams, it emits a barrier
 for snapshot *n* into all of its outgoing streams. Once a sink operator (the end of a streaming DAG) has received the barrier *n* from all of its
-input streams, it acknowledges that snapshot *n* to the checkpoint coordinator. After all sinks acknowledged a snapshot, it is considered completed.
+input streams, it acknowledges that snapshot *n* to the checkpoint coordinator. After all sinks have acknowledged a snapshot, it is considered completed.
 
-When snapshot *n* is completed, it is certain that no records from before <i>S<sub>n</sub></i> will be needed any more from the source, because these records (and
-their descendant records) have passed through the entire data flow topology.
+Once snapshot *n* has been completed, the job will never again ask the source for records from before <i>S<sub>n</sub></i>, since at that point these records (and
+their descendant records) will have passed through the entire data flow topology.
 
 <div style="text-align: center">
   <img src="{{ site.baseurl }}/fig/stream_aligning.svg" alt="Aligning data streams at operators with multiple inputs" style="width:100%; padding-top:10px; padding-bottom:10px;" />
@@ -92,8 +95,8 @@ their descendant records) have passed through the entire data flow topology.
 
 Operators that receive more than one input stream need to *align* the input streams on the snapshot barriers. The figure above illustrates this:
 
-  - As soon as the operator received snapshot barrier *n* from an incoming stream, it cannot process any further records from that stream until it has received
-the barrier *n* from the other inputs as well. Otherwise, it would have mixed records that belong to snapshot *n* and with records that belong to snapshot *n+1*.
+  - As soon as the operator receives snapshot barrier *n* from an incoming stream, it cannot process any further records from that stream until it has received
+the barrier *n* from the other inputs as well. Otherwise, it would mix records that belong to snapshot *n* with records that belong to snapshot *n+1*.
   - Streams that report barrier *n* are temporarily set aside. Records that are received from these streams are not processed, but put into an input buffer.
   - Once the last stream has received barrier *n*, the operator emits all pending outgoing records, and then emits snapshot *n* barriers itself.
   - After that, it resumes processing records from all input streams, processing records from the input buffers before processing the records from the streams.
@@ -103,10 +106,10 @@ the barrier *n* from the other inputs as well. Otherwise, it would have mixed re
 
 When operators contain any form of *state*, this state must be part of the snapshots as well. Operator state comes in different forms:
 
-  - *User-defined state*: This is state that is created and modified directly by the transformation functions (like `map()` or `filter()`). User-defined state can either be a simple variable in the function's java object, or the associated key/value state of a function (see [State in Streaming Applications]({{ site.baseurl }}/dev/stream/state.html) for details).
+  - *User-defined state*: This is state that is created and modified directly by the transformation functions (like `map()` or `filter()`). See [State in Streaming Applications]({{ site.baseurl }}/dev/stream/state.html) for details.
   - *System state*: This state refers to data buffers that are part of the operator's computation. A typical example for this state are the *window buffers*, inside which the system collects (and aggregates) records for windows until the window is evaluated and evicted.
 
-Operators snapshot their state at the point in time when they received all snapshot barriers from their input streams, before emitting the barriers to their output streams. At that point, all updates to the state from records before the barriers will have been made, and no updates that depend on records from after the barriers have been applied. Because the state of a snapshot may be potentially large, it is stored in a configurable *state backend*. By default, this is the JobManager's memory, but for serious setups, a distributed reliable storage should be configured (such as HDFS). After the state has been stored, the operator acknowledges the checkpoint, emits the snapshot barrier into the output streams, and proceeds.
+Operators snapshot their state at the point in time when they have received all snapshot barriers from their input streams, and before emitting the barriers to their output streams. At that point, all updates to the state from records before the barriers will have been made, and no updates that depend on records from after the barriers have been applied. Because the state of a snapshot may be large, it is stored in a configurable *[state backend]({{ site.baseurl }}/ops/state_backends.html)*. By default, this is the JobManager's memory, but for production use a distributed reliable storage should be configured (such as HDFS). After the state has been stored, the operator acknowledges the checkpoint, emits the snapshot barrier into the output streams, and proceeds.
 
 The resulting snapshot now contains:
 
@@ -120,7 +123,7 @@ The resulting snapshot now contains:
 
 ### Exactly Once vs. At Least Once
 
-The alignment step may add latency to the streaming program. Usually, this extra latency is in the order of a few milliseconds, but we have seen cases where the latency
+The alignment step may add latency to the streaming program. Usually, this extra latency is on the order of a few milliseconds, but we have seen cases where the latency
 of some outliers increased noticeably. For applications that require consistently super low latencies (few milliseconds) for all records, Flink has a switch to skip the
 stream alignment during a checkpoint. Checkpoint snapshots are still drawn as soon as an operator has seen the checkpoint barrier from each input.
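
In the DataStream API this switch corresponds to the checkpointing mode. A minimal sketch of selecting it (the interval is a placeholder):

{% highlight java %}
StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

// skip barrier alignment, trading exactly-once for at-least-once semantics
env.enableCheckpointing(10000, CheckpointingMode.AT_LEAST_ONCE);
{% endhighlight %}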
 
@@ -138,9 +141,9 @@ in *at least once* mode.
 
 Note that the above described mechanism implies that operators stop processing input records while they are storing a snapshot of their state in the *state backend*. This *synchronous* state snapshot introduces a delay every time a snapshot is taken.
 
-It is possible to let an operator continue processing while it stores its state snapshot, effectively letting the state snapshots happen *asynchronously* in the background. To do that, the operator must be able to produce a state object that should be stored in a way such that further modifications to the operator state do not affect that state object. An example for that are *copy-on-write* style data structures, such as used for example in RocksDB.
+It is possible to let an operator continue processing while it stores its state snapshot, effectively letting the state snapshots happen *asynchronously* in the background. To do that, the operator must be able to produce a state object that should be stored in a way such that further modifications to the operator state do not affect that state object. For example, *copy-on-write* data structures, such as are used in RocksDB, have this behavior.
 
-After receiving the checkpoint barriers on its inputs, the operator starts the asynchronous snapshot copying of its state. It immediately emits the barrier to its outputs and continues with the regular stream processing. Once the background copy process has completed, it acknowledges the checkpoint to the checkpoint coordinator (the JobManager). The checkpoint is now only complete after all sinks received the barriers and all stateful operators acknowledged their completed backup (which may be later than the barriers reaching the sinks).
+After receiving the checkpoint barriers on its inputs, the operator starts the asynchronous snapshot copying of its state. It immediately emits the barrier to its outputs and continues with the regular stream processing. Once the background copy process has completed, it acknowledges the checkpoint to the checkpoint coordinator (the JobManager). The checkpoint is now only complete after all sinks have received the barriers and all stateful operators have acknowledged their completed backup (which may be after the barriers reach the sinks).
 
 See [State Backends]({{ site.baseurl }}/ops/state_backends.html) for details on the state snapshots.
 
@@ -153,6 +156,8 @@ stream from position <i>S<sub>k</sub></i>. For example in Apache Kafka, that mea
 
 If state was snapshotted incrementally, the operators start with the state of the latest full snapshot and then apply a series of incremental snapshot updates to that state.
 
+See [Restart Strategies]({{ site.baseurl }}/dev/restart_strategies.html) for more information.
+
 ## Operator Snapshot Implementation
 
 When operator snapshots are taken, there are two parts: the **synchronous** and the **asynchronous** parts.
@@ -163,4 +168,4 @@ is completed and the *asynchronous* part is pending. The asynchronous part is th
 Operators that checkpoint purely synchronously return an already completed `FutureTask`.
 If an asynchronous operation needs to be performed, it is executed in the `run()` method of that `FutureTask`.
 
-The tasks are cancelable, in order to release streams and other resource consuming handles.
+The tasks are cancelable, so that streams and other resource consuming handles can be released.