Posted to commits@flink.apache.org by fh...@apache.org on 2016/11/21 20:45:26 UTC

[1/2] flink git commit: [hotfix] [streamExamples] Fix typo in comment.

Repository: flink
Updated Branches:
  refs/heads/master 5836f7edd -> fdb134cab


[hotfix] [streamExamples] Fix typo in comment.

This closes #2841.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/fb6ecd29
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/fb6ecd29
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/fb6ecd29

Branch: refs/heads/master
Commit: fb6ecd29d2864c39e6b0238632cf0b877682ccea
Parents: 5836f7e
Author: William-Sang <sa...@gmail.com>
Authored: Mon Nov 21 20:23:56 2016 +0800
Committer: Fabian Hueske <fh...@apache.org>
Committed: Mon Nov 21 21:36:41 2016 +0100

----------------------------------------------------------------------
 .../org/apache/flink/streaming/examples/kafka/ReadFromKafka.java   | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/fb6ecd29/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/kafka/ReadFromKafka.java
----------------------------------------------------------------------
diff --git a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/kafka/ReadFromKafka.java b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/kafka/ReadFromKafka.java
index 2a8536e..1e48739 100644
--- a/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/kafka/ReadFromKafka.java
+++ b/flink-examples/flink-examples-streaming/src/main/java/org/apache/flink/streaming/examples/kafka/ReadFromKafka.java
@@ -48,7 +48,7 @@ public class ReadFromKafka {
 		StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
 		env.getConfig().disableSysoutLogging();
 		env.getConfig().setRestartStrategy(RestartStrategies.fixedDelayRestart(4, 10000));
-		env.enableCheckpointing(5000); // create a checkpoint every 5 secodns
+		env.enableCheckpointing(5000); // create a checkpoint every 5 seconds
 		env.getConfig().setGlobalJobParameters(parameterTool); // make parameters available in the web interface
 
 		DataStream<String> messageStream = env


[2/2] flink git commit: [hotfix] [docs] Fix broken links, figures, and code examples.

Posted by fh...@apache.org.
[hotfix] [docs] Fix broken links, figures, and code examples.

This closes #2834.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/fdb134ca
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/fdb134ca
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/fdb134ca

Branch: refs/heads/master
Commit: fdb134cab84fc0a9455f2505ca03c4af9ac1e3e9
Parents: fb6ecd2
Author: Rohit Agarwal <mi...@gmail.com>
Authored: Fri Nov 18 19:27:26 2016 -0800
Committer: Fabian Hueske <fh...@apache.org>
Committed: Mon Nov 21 21:38:13 2016 +0100

----------------------------------------------------------------------
 docs/dev/api_concepts.md       |  4 ++--
 docs/dev/batch/iterations.md   | 10 +++++-----
 docs/dev/connectors/kafka.md   |  3 +--
 docs/dev/datastream_api.md     |  2 +-
 docs/dev/libs/ml/quickstart.md |  2 +-
 docs/dev/windows.md            |  4 ++--
 6 files changed, 12 insertions(+), 13 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/fdb134ca/docs/dev/api_concepts.md
----------------------------------------------------------------------
diff --git a/docs/dev/api_concepts.md b/docs/dev/api_concepts.md
index ac7d101..49d2ded 100644
--- a/docs/dev/api_concepts.md
+++ b/docs/dev/api_concepts.md
@@ -407,7 +407,7 @@ fields of the Tuple:
 <div data-lang="java" markdown="1">
 {% highlight java %}
 DataStream<Tuple3<Integer,String,Long>> input = // [...]
-KeyedStream<Tuple3<Integer,String,Long> keyed = input.keyBy(0)
+KeyedStream<Tuple3<Integer,String,Long>,Tuple> keyed = input.keyBy(0)
 {% endhighlight %}
 </div>
 <div data-lang="scala" markdown="1">
@@ -425,7 +425,7 @@ Integer type).
 <div data-lang="java" markdown="1">
 {% highlight java %}
 DataStream<Tuple3<Integer,String,Long>> input = // [...]
-KeyedStream<Tuple3<Integer,String,Long> keyed = input.keyBy(0,1)
+KeyedStream<Tuple3<Integer,String,Long>,Tuple> keyed = input.keyBy(0,1)
 {% endhighlight %}
 </div>
 <div data-lang="scala" markdown="1">
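The corrected signatures above add the second type parameter of `KeyedStream`: the key type, which is `Tuple` when keying a tuple stream by field positions. As a rough plain-JDK analogy (not the Flink API; class and method names here are illustrative only), keying by position `0`, or by the composite of positions `0` and `1`, amounts to grouping elements by those fields:

```java
import java.util.List;
import java.util.Map;
import java.util.stream.Collectors;

public class KeyByAnalogySketch {
    // Stand-in for Flink's Tuple3<Integer, String, Long>.
    record Tuple3(Integer f0, String f1, Long f2) {}

    // Analogy for keyBy(0): group elements by their first field.
    static Map<Integer, List<Tuple3>> keyByFirst(List<Tuple3> input) {
        return input.stream().collect(Collectors.groupingBy(Tuple3::f0));
    }

    // Analogy for keyBy(0, 1): group by a composite key of the first two fields.
    static Map<List<Object>, List<Tuple3>> keyByFirstTwo(List<Tuple3> input) {
        return input.stream()
                .collect(Collectors.groupingBy(t -> List.of(t.f0(), t.f1())));
    }

    public static void main(String[] args) {
        List<Tuple3> input = List.of(
                new Tuple3(1, "a", 10L), new Tuple3(1, "b", 20L), new Tuple3(2, "a", 30L));
        System.out.println(keyByFirst(input).get(1).size());                   // prints 2
        System.out.println(keyByFirstTwo(input).get(List.of(1, "a")).size());  // prints 1
    }
}
```

In Flink the grouping is logical (a partitioned stream, not a materialized map), but the key extraction shown here mirrors what the positional `keyBy` expresses.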

http://git-wip-us.apache.org/repos/asf/flink/blob/fdb134ca/docs/dev/batch/iterations.md
----------------------------------------------------------------------
diff --git a/docs/dev/batch/iterations.md b/docs/dev/batch/iterations.md
index 47910d0..cfeffe6 100644
--- a/docs/dev/batch/iterations.md
+++ b/docs/dev/batch/iterations.md
@@ -89,7 +89,7 @@ Iterate Operator
 The **iterate operator** covers the *simple form of iterations*: in each iteration, the **step function** consumes the **entire input** (the *result of the previous iteration*, or the *initial data set*), and computes the **next version of the partial solution** (e.g. `map`, `reduce`, `join`, etc.).
 
 <p class="text-center">
-    <img alt="Iterate Operator" width="60%" src="fig/iterations_iterate_operator.png" />
+    <img alt="Iterate Operator" width="60%" src="{{site.baseurl}}/fig/iterations_iterate_operator.png" />
 </p>
 
   1. **Iteration Input**: Initial input for the *first iteration* from a *data source* or *previous operators*.
@@ -124,7 +124,7 @@ setFinalState(state);
 In the following example, we **iteratively increment a set of numbers**:
 
 <p class="text-center">
-    <img alt="Iterate Operator Example" width="60%" src="fig/iterations_iterate_operator_example.png" />
+    <img alt="Iterate Operator Example" width="60%" src="{{site.baseurl}}/fig/iterations_iterate_operator_example.png" />
 </p>
 
  1. **Iteration Input**: The initial input is read from a data source and consists of five single-field records (integers `1` to `5`).
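The bulk-iteration semantics quoted above can be sketched without Flink: the step function consumes the entire partial solution and its result is fed back as the next iteration's input, for a fixed number of iterations. A minimal plain-JDK simulation (illustrative only, not the `IterativeDataSet` API; the ten-iteration count is an assumption matching the incrementing example):

```java
import java.util.Arrays;

public class IterateSketch {
    // Step function: consumes the entire partial solution and computes
    // the next version of it (here: increment every element).
    static int[] step(int[] partialSolution) {
        return Arrays.stream(partialSolution).map(n -> n + 1).toArray();
    }

    // Feed each result back as the next iteration's input, maxIterations times.
    static int[] iterate(int[] input, int maxIterations) {
        int[] state = input;
        for (int i = 0; i < maxIterations; i++) {
            state = step(state);
        }
        return state;
    }

    public static void main(String[] args) {
        // Integers 1..5, incremented once per iteration for ten iterations.
        int[] result = iterate(new int[]{1, 2, 3, 4, 5}, 10);
        System.out.println(Arrays.toString(result)); // prints [11, 12, 13, 14, 15]
    }
}
```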
@@ -152,7 +152,7 @@ The **delta iterate operator** covers the case of **incremental iterations**. In
 Where applicable, this leads to **more efficient algorithms**, because not every element in the solution set changes in each iteration. This allows the computation to **focus on the hot parts** of the solution and leave the **cold parts untouched**. Frequently, the majority of the solution cools down comparatively fast and the later iterations operate only on a small subset of the data.
 
 <p class="text-center">
-    <img alt="Delta Iterate Operator" width="60%" src="fig/iterations_delta_iterate_operator.png" />
+    <img alt="Delta Iterate Operator" width="60%" src="{{site.baseurl}}/fig/iterations_delta_iterate_operator.png" />
 </p>
 
   1. **Iteration Input**: The initial workset and solution set are read from *data sources* or *previous operators* as input to the first iteration.
@@ -187,7 +187,7 @@ setFinalState(solution);
 In the following example, every vertex has an **ID** and a **coloring**. Each vertex will propagate its vertex ID to neighboring vertices. The **goal** is to *assign the minimum ID to every vertex in a subgraph*. If a received ID is smaller than the current one, the vertex changes to the color of the vertex with the received ID. One application of this can be found in *community analysis* or *connected components* computation.
 
 <p class="text-center">
-    <img alt="Delta Iterate Operator Example" width="100%" src="fig/iterations_delta_iterate_operator_example.png" />
+    <img alt="Delta Iterate Operator Example" width="100%" src="{{site.baseurl}}/fig/iterations_delta_iterate_operator_example.png" />
 </p>
 
 The **initial input** is set as **both workset and solution set.** In the above figure, the colors visualize the **evolution of the solution set**. With each iteration, the color of the minimum ID is spreading in the respective subgraph. At the same time, the amount of work (exchanged and compared vertex IDs) decreases with each iteration. This corresponds to the **decreasing size of the workset**, which goes from all seven vertices to zero after three iterations, at which time the iteration terminates. The **important observation** is that *the lower subgraph converges before the upper half* does and the delta iteration is able to capture this with the workset abstraction.
@@ -208,5 +208,5 @@ Superstep Synchronization
 We referred to each execution of the step function of an iteration operator as *a single iteration*. In parallel setups, **multiple instances of the step function are evaluated in parallel** on different partitions of the iteration state. In many settings, one evaluation of the step function on all parallel instances forms a so-called **superstep**, which is also the granularity of synchronization. Therefore, *all* parallel tasks of an iteration need to complete the superstep before the next superstep can be initialized. **Termination criteria** are also evaluated at superstep barriers.
 
 <p class="text-center">
-    <img alt="Supersteps" width="50%" src="fig/iterations_supersteps.png" />
+    <img alt="Supersteps" width="50%" src="{{site.baseurl}}/fig/iterations_supersteps.png" />
 </p>
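The workset-driven minimum-ID propagation described in the quoted docs can be simulated with plain JDK collections (illustrative only, not the Flink delta-iterate API): only vertices whose value changed in the previous superstep stay in the workset, and the iteration terminates when the workset is empty.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

public class DeltaIterateSketch {
    // Propagate minimum IDs over an undirected graph (vertex -> neighbor list).
    static Map<Integer, Integer> minimumIds(Map<Integer, List<Integer>> edges) {
        // Solution set: every vertex initially keeps its own ID.
        Map<Integer, Integer> solution = new HashMap<>();
        for (Integer v : edges.keySet()) solution.put(v, v);
        // Workset: initially all vertices are active.
        Set<Integer> workset = new HashSet<>(edges.keySet());
        while (!workset.isEmpty()) {              // superstep loop
            Set<Integer> nextWorkset = new HashSet<>();
            for (Integer v : workset) {
                for (Integer neighbor : edges.get(v)) {
                    // A received ID smaller than the neighbor's current one
                    // updates the solution set and re-activates the neighbor.
                    if (solution.get(v) < solution.get(neighbor)) {
                        solution.put(neighbor, solution.get(v));
                        nextWorkset.add(neighbor);
                    }
                }
            }
            workset = nextWorkset; // shrinks as the solution "cools down"
        }
        return solution;
    }

    public static void main(String[] args) {
        // Two subgraphs: {1,2,3} and {7,8}; each converges to its minimum ID.
        Map<Integer, List<Integer>> edges = new HashMap<>();
        edges.put(1, List.of(2));
        edges.put(2, List.of(1, 3));
        edges.put(3, List.of(2));
        edges.put(7, List.of(8));
        edges.put(8, List.of(7));
        System.out.println(minimumIds(edges));
    }
}
```

As in the figure's narrative, smaller subgraphs drop out of the workset early while the rest keep iterating, which is exactly what the delta iteration's workset abstraction captures.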

http://git-wip-us.apache.org/repos/asf/flink/blob/fdb134ca/docs/dev/connectors/kafka.md
----------------------------------------------------------------------
diff --git a/docs/dev/connectors/kafka.md b/docs/dev/connectors/kafka.md
index 9a360d4..e3dc821 100644
--- a/docs/dev/connectors/kafka.md
+++ b/docs/dev/connectors/kafka.md
@@ -114,8 +114,7 @@ properties.setProperty("bootstrap.servers", "localhost:9092");
 properties.setProperty("zookeeper.connect", "localhost:2181");
 properties.setProperty("group.id", "test");
 DataStream<String> stream = env
-	.addSource(new FlinkKafkaConsumer08<>("topic", new SimpleStringSchema(), properties))
-	.print();
+	.addSource(new FlinkKafkaConsumer08<>("topic", new SimpleStringSchema(), properties));
 {% endhighlight %}
 </div>
 <div data-lang="scala" markdown="1">

http://git-wip-us.apache.org/repos/asf/flink/blob/fdb134ca/docs/dev/datastream_api.md
----------------------------------------------------------------------
diff --git a/docs/dev/datastream_api.md b/docs/dev/datastream_api.md
index 425dd6a..4c81d63 100644
--- a/docs/dev/datastream_api.md
+++ b/docs/dev/datastream_api.md
@@ -872,7 +872,7 @@ data.map {
   case (id, name, temperature) => // [...]
 }
 {% endhighlight %}
-is not supported by the API out-of-the-box. To use this feature, you should use a <a href="../scala_api_extensions.html">Scala API extension</a>.
+is not supported by the API out-of-the-box. To use this feature, you should use a <a href="./scala_api_extensions.html">Scala API extension</a>.
 
 
 </div>

http://git-wip-us.apache.org/repos/asf/flink/blob/fdb134ca/docs/dev/libs/ml/quickstart.md
----------------------------------------------------------------------
diff --git a/docs/dev/libs/ml/quickstart.md b/docs/dev/libs/ml/quickstart.md
index e4c6962..7ba3ed5 100644
--- a/docs/dev/libs/ml/quickstart.md
+++ b/docs/dev/libs/ml/quickstart.md
@@ -55,7 +55,7 @@ through [principal components analysis](https://en.wikipedia.org/wiki/Principal_
 ## Linking with FlinkML
 
 In order to use FlinkML in your project, first you have to
-[set up a Flink program]({{ site.baseurl }}}/dev/api_concepts.html#linking-with-flink).
+[set up a Flink program]({{ site.baseurl }}/dev/api_concepts.html#linking-with-flink).
 Next, you have to add the FlinkML dependency to the `pom.xml` of your project:
 
 {% highlight xml %}

http://git-wip-us.apache.org/repos/asf/flink/blob/fdb134ca/docs/dev/windows.md
----------------------------------------------------------------------
diff --git a/docs/dev/windows.md b/docs/dev/windows.md
index 2611870..4bce07b 100644
--- a/docs/dev/windows.md
+++ b/docs/dev/windows.md
@@ -631,8 +631,8 @@ input
 When working with event-time windowing it can happen that elements arrive late, i.e. the
 watermark that Flink uses to keep track of the progress of event-time is already past the
 end timestamp of a window to which an element belongs. Please
-see [event time](/apis/streaming/event_time.html) and especially
-[late elements](/apis/streaming/event_time.html#late-elements) for a more thorough discussion of
+see [event time](./event_time.html) and especially
+[late elements](./event_time.html#late-elements) for a more thorough discussion of
 how Flink deals with event time.
 
 You can specify how a windowed transformation should deal with late elements and how much lateness