Posted to commits@flink.apache.org by ch...@apache.org on 2017/10/20 06:35:55 UTC

[2/3] flink git commit: [hotfix][docs][javadocs] Remove double "the"

[hotfix][docs][javadocs] Remove double "the"

This closes #4865.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/19d484a7
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/19d484a7
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/19d484a7

Branch: refs/heads/master
Commit: 19d484a7877cc29952bf46856e56b30434ede79c
Parents: bc065cd
Author: yew1eb <ye...@gmail.com>
Authored: Fri Oct 20 10:45:11 2017 +0800
Committer: zentol <ch...@apache.org>
Committed: Fri Oct 20 08:31:49 2017 +0200

----------------------------------------------------------------------
 docs/dev/execution_configuration.md                              | 2 +-
 docs/dev/scala_shell.md                                          | 2 +-
 docs/dev/table/streaming.md                                      | 2 +-
 docs/dev/table/tableApi.md                                       | 2 +-
 docs/ops/deployment/aws.md                                       | 2 +-
 .../connectors/elasticsearch/ElasticsearchSinkBaseTest.java      | 2 +-
 .../java/org/apache/flink/api/common/state/KeyedStateStore.java  | 2 +-
 .../src/main/java/org/apache/flink/core/fs/FileSystem.java       | 2 +-
 .../src/main/java/org/apache/flink/api/java/utils/Option.java    | 2 +-
 .../java/org/apache/flink/graph/bipartite/BipartiteGraph.java    | 4 ++--
 .../org/apache/flink/table/catalog/ExternalCatalogSchema.scala   | 2 +-
 .../flink/optimizer/dataproperties/PartitioningProperty.java     | 2 +-
 .../java/org/apache/flink/optimizer/util/CompilerTestBase.java   | 2 +-
 .../flink/runtime/checkpoint/MasterTriggerRestoreHook.java       | 2 +-
 .../org/apache/flink/runtime/executiongraph/ExecutionGraph.java  | 4 ++--
 .../apache/flink/runtime/rpc/messages/RemoteRpcInvocation.java   | 2 +-
 .../main/java/org/apache/flink/runtime/state/StateObject.java    | 2 +-
 .../src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala | 4 ++--
 .../org/apache/flink/streaming/api/datastream/JoinedStreams.java | 2 +-
 .../api/functions/windowing/delta/extractor/FieldsFromTuple.java | 2 +-
 .../java/org/apache/flink/streaming/api/operators/Output.java    | 2 +-
 .../scala/org/apache/flink/streaming/api/scala/DataStream.scala  | 2 +-
 22 files changed, 25 insertions(+), 25 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/docs/dev/execution_configuration.md
----------------------------------------------------------------------
diff --git a/docs/dev/execution_configuration.md b/docs/dev/execution_configuration.md
index 2316450..8fe1b63 100644
--- a/docs/dev/execution_configuration.md
+++ b/docs/dev/execution_configuration.md
@@ -79,7 +79,7 @@ Note that types registered with `registerKryoType()` are not available to Flink'
 
 - `disableAutoTypeRegistration()` Automatic type registration is enabled by default. The automatic type registration is registering all types (including sub-types) used by usercode with Kryo and the POJO serializer.
 
-- `setTaskCancellationInterval(long interval)` Sets the the interval (in milliseconds) to wait between consecutive attempts to cancel a running task. When a task is canceled a new thread is created which periodically calls `interrupt()` on the task thread, if the task thread does not terminate within a certain time. This parameter refers to the time between consecutive calls to `interrupt()` and is set by default to **30000** milliseconds, or **30 seconds**.
+- `setTaskCancellationInterval(long interval)` Sets the interval (in milliseconds) to wait between consecutive attempts to cancel a running task. When a task is canceled a new thread is created which periodically calls `interrupt()` on the task thread, if the task thread does not terminate within a certain time. This parameter refers to the time between consecutive calls to `interrupt()` and is set by default to **30000** milliseconds, or **30 seconds**.
 
 The `RuntimeContext` which is accessible in `Rich*` functions through the `getRuntimeContext()` method also allows to access the `ExecutionConfig` in all user defined functions.
 

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/docs/dev/scala_shell.md
----------------------------------------------------------------------
diff --git a/docs/dev/scala_shell.md b/docs/dev/scala_shell.md
index bfd3133..b12060b 100644
--- a/docs/dev/scala_shell.md
+++ b/docs/dev/scala_shell.md
@@ -66,7 +66,7 @@ Scala-Flink> benv.execute("MyProgram")
 
 ### DataStream API
 
-Similar to the the batch program above, we can execute a streaming program through the DataStream API:
+Similar to the batch program above, we can execute a streaming program through the DataStream API:
 
 ~~~scala
 Scala-Flink> val textStreaming = senv.fromElements(

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/docs/dev/table/streaming.md
----------------------------------------------------------------------
diff --git a/docs/dev/table/streaming.md b/docs/dev/table/streaming.md
index 91915c7..310121e 100644
--- a/docs/dev/table/streaming.md
+++ b/docs/dev/table/streaming.md
@@ -549,7 +549,7 @@ val stream: DataStream[Row] = result.toAppendStream[Row](qConfig)
 </div>
 </div>
 
-In the the following we describe the parameters of the `QueryConfig` and how they affect the accuracy and resource consumption of a query.
+In the following we describe the parameters of the `QueryConfig` and how they affect the accuracy and resource consumption of a query.
 
 ### Idle State Retention Time
 

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/docs/dev/table/tableApi.md
----------------------------------------------------------------------
diff --git a/docs/dev/table/tableApi.md b/docs/dev/table/tableApi.md
index 625c599..9380025 100644
--- a/docs/dev/table/tableApi.md
+++ b/docs/dev/table/tableApi.md
@@ -1445,7 +1445,7 @@ The `OverWindow` defines a range of rows over which aggregates are computed. `Ov
 
         <ul>
           <li><code>CURRENT_ROW</code> sets the upper bound of the window to the current row.</li>
-          <li><code>CURRENT_RANGE</code> sets the upper bound of the window to sort key of the the current row, i.e., all rows with the same sort key as the current row are included in the window.</li>
+          <li><code>CURRENT_RANGE</code> sets the upper bound of the window to sort key of the current row, i.e., all rows with the same sort key as the current row are included in the window.</li>
         </ul>
 
         <p>If the <code>following</code> clause is omitted, the upper bound of a time interval window is defined as <code>CURRENT_RANGE</code> and the upper bound of a row-count interval window is defined as <code>CURRENT_ROW</code>.</p>

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/docs/ops/deployment/aws.md
----------------------------------------------------------------------
diff --git a/docs/ops/deployment/aws.md b/docs/ops/deployment/aws.md
index 1a05bfd..165530d 100644
--- a/docs/ops/deployment/aws.md
+++ b/docs/ops/deployment/aws.md
@@ -55,7 +55,7 @@ when creating an EMR cluster.
 
 After creating your cluster, you can [connect to the master node](http://docs.aws.amazon.com/ElasticMapReduce/latest/ManagementGuide/emr-connect-master-node.html) and install Flink:
 
-1. Go the the [Downloads Page]({{ download_url}}) and **download a binary version of Flink matching the Hadoop version** of your EMR cluster, e.g. Hadoop 2.7 for EMR releases 4.3.0, 4.4.0, or 4.5.0.
+1. Go the [Downloads Page]({{ download_url}}) and **download a binary version of Flink matching the Hadoop version** of your EMR cluster, e.g. Hadoop 2.7 for EMR releases 4.3.0, 4.4.0, or 4.5.0.
 2. Extract the Flink distribution and you are ready to deploy [Flink jobs via YARN](yarn_setup.html) after **setting the Hadoop config directory**:
 
 ```bash

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-connectors/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBaseTest.java
----------------------------------------------------------------------
diff --git a/flink-connectors/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBaseTest.java b/flink-connectors/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBaseTest.java
index 5e59785..37e7779 100644
--- a/flink-connectors/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBaseTest.java
+++ b/flink-connectors/flink-connector-elasticsearch-base/src/test/java/org/apache/flink/streaming/connectors/elasticsearch/ElasticsearchSinkBaseTest.java
@@ -420,7 +420,7 @@ public class ElasticsearchSinkBaseTest {
 		/**
 		 * On non-manual flushes, i.e. when flush is called in the snapshot method implementation,
 		 * usages need to explicitly call this to allow the flush to continue. This is useful
-		 * to make sure that specific requests get added to the the next bulk request for flushing.
+		 * to make sure that specific requests get added to the next bulk request for flushing.
 		 */
 		public void continueFlush() {
 			flushLatch.trigger();

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-core/src/main/java/org/apache/flink/api/common/state/KeyedStateStore.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/api/common/state/KeyedStateStore.java b/flink-core/src/main/java/org/apache/flink/api/common/state/KeyedStateStore.java
index ea9ac41..e0a044f 100644
--- a/flink-core/src/main/java/org/apache/flink/api/common/state/KeyedStateStore.java
+++ b/flink-core/src/main/java/org/apache/flink/api/common/state/KeyedStateStore.java
@@ -29,7 +29,7 @@ public interface KeyedStateStore {
 	/**
 	 * Gets a handle to the system's key/value state. The key/value state is only accessible
 	 * if the function is executed on a KeyedStream. On each access, the state exposes the value
-	 * for the the key of the element currently processed by the function.
+	 * for the key of the element currently processed by the function.
 	 * Each function may have multiple partitioned states, addressed with different names.
 	 *
 	 * <p>Because the scope of each value is the key of the currently processed element,

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java
----------------------------------------------------------------------
diff --git a/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java b/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java
index 643ca9b..d66a893 100644
--- a/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java
+++ b/flink-core/src/main/java/org/apache/flink/core/fs/FileSystem.java
@@ -906,7 +906,7 @@ public abstract class FileSystem {
 	private static HashMap<String, FileSystemFactory> loadFileSystems() {
 		final HashMap<String, FileSystemFactory> map = new HashMap<>();
 
-		// by default, we always have the the local file system factory
+		// by default, we always have the local file system factory
 		map.put("file", new LocalFileSystemFactory());
 
 		LOG.debug("Loading extension file systems via services");

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-java/src/main/java/org/apache/flink/api/java/utils/Option.java
----------------------------------------------------------------------
diff --git a/flink-java/src/main/java/org/apache/flink/api/java/utils/Option.java b/flink-java/src/main/java/org/apache/flink/api/java/utils/Option.java
index 818d5f0..c6f151f 100644
--- a/flink-java/src/main/java/org/apache/flink/api/java/utils/Option.java
+++ b/flink-java/src/main/java/org/apache/flink/api/java/utils/Option.java
@@ -61,7 +61,7 @@ public class Option {
 	/**
 	 * Define the type of the Option.
 	 *
-	 * @param type - the type which the the value of the Option can be casted to.
+	 * @param type - the type which the value of the Option can be casted to.
 	 * @return the updated Option
 	 */
 	public Option type(OptionType type) {

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/bipartite/BipartiteGraph.java
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/bipartite/BipartiteGraph.java b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/bipartite/BipartiteGraph.java
index 97c93e2..8965a48 100644
--- a/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/bipartite/BipartiteGraph.java
+++ b/flink-libraries/flink-gelly/src/main/java/org/apache/flink/graph/bipartite/BipartiteGraph.java
@@ -205,7 +205,7 @@ public class BipartiteGraph<KT, KB, VVT, VVB, EV> {
 	 * Convert a bipartite graph into a graph that contains only top vertices. An edge between two vertices in the new
 	 * graph will exist only if the original bipartite graph contains at least one bottom vertex they both connect to.
 	 *
-	 * <p>The full projection performs three joins and returns edges containing the the connecting vertex ID and value,
+	 * <p>The full projection performs three joins and returns edges containing the connecting vertex ID and value,
 	 * both top vertex values, and both bipartite edge values.
 	 *
 	 * <p>Note: KT must override .equals(). This requirement may be removed in a future release.
@@ -271,7 +271,7 @@ public class BipartiteGraph<KT, KB, VVT, VVB, EV> {
 	 * Convert a bipartite graph into a graph that contains only bottom vertices. An edge between two vertices in the
 	 * new graph will exist only if the original bipartite graph contains at least one top vertex they both connect to.
 	 *
-	 * <p>The full projection performs three joins and returns edges containing the the connecting vertex ID and value,
+	 * <p>The full projection performs three joins and returns edges containing the connecting vertex ID and value,
 	 * both bottom vertex values, and both bipartite edge values.
 	 *
 	 * <p>Note: KB must override .equals(). This requirement may be removed in a future release.

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala
----------------------------------------------------------------------
diff --git a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala
index c74066f..0f1efcf 100644
--- a/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala
+++ b/flink-libraries/flink-table/src/main/scala/org/apache/flink/table/catalog/ExternalCatalogSchema.scala
@@ -30,7 +30,7 @@ import scala.collection.JavaConverters._
 /**
   * This class is responsible to connect an external catalog to Calcite's catalog.
   * This enables to look-up and access tables in SQL queries without registering tables in advance.
-  * The the external catalog and all included sub-catalogs and tables is registered as
+  * The external catalog and all included sub-catalogs and tables is registered as
   * sub-schemas and tables in Calcite.
   *
   * @param catalogIdentifier external catalog name

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-optimizer/src/main/java/org/apache/flink/optimizer/dataproperties/PartitioningProperty.java
----------------------------------------------------------------------
diff --git a/flink-optimizer/src/main/java/org/apache/flink/optimizer/dataproperties/PartitioningProperty.java b/flink-optimizer/src/main/java/org/apache/flink/optimizer/dataproperties/PartitioningProperty.java
index 5e06dd3..6efffec 100644
--- a/flink-optimizer/src/main/java/org/apache/flink/optimizer/dataproperties/PartitioningProperty.java
+++ b/flink-optimizer/src/main/java/org/apache/flink/optimizer/dataproperties/PartitioningProperty.java
@@ -19,7 +19,7 @@
 package org.apache.flink.optimizer.dataproperties;
 
 /**
- * An enumeration of the the different types of distributing data across partitions or
+ * An enumeration of the different types of distributing data across partitions or
  * parallel workers.
  */
 public enum PartitioningProperty {

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-optimizer/src/test/java/org/apache/flink/optimizer/util/CompilerTestBase.java
----------------------------------------------------------------------
diff --git a/flink-optimizer/src/test/java/org/apache/flink/optimizer/util/CompilerTestBase.java b/flink-optimizer/src/test/java/org/apache/flink/optimizer/util/CompilerTestBase.java
index b4c39a5..63ef5cc 100644
--- a/flink-optimizer/src/test/java/org/apache/flink/optimizer/util/CompilerTestBase.java
+++ b/flink-optimizer/src/test/java/org/apache/flink/optimizer/util/CompilerTestBase.java
@@ -45,7 +45,7 @@ import org.junit.Before;
 /**
  * Base class for Optimizer tests. Offers utility methods to trigger optimization
  * of a program and to fetch the nodes in an optimizer plan that correspond
- * the the node in the program plan.
+ * the node in the program plan.
  */
 public abstract class CompilerTestBase extends TestLogger implements java.io.Serializable {
 

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/MasterTriggerRestoreHook.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/MasterTriggerRestoreHook.java b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/MasterTriggerRestoreHook.java
index 026046f..629ff9b 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/MasterTriggerRestoreHook.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/checkpoint/MasterTriggerRestoreHook.java
@@ -97,7 +97,7 @@ public interface MasterTriggerRestoreHook<T> {
 	 * This method is called by the checkpoint coordinator prior to restoring the state of a checkpoint.
 	 * If the checkpoint did store data from this hook, that data will be passed to this method. 
 	 * 
-	 * @param checkpointId The The ID (logical timestamp) of the restored checkpoint
+	 * @param checkpointId The ID (logical timestamp) of the restored checkpoint
 	 * @param checkpointData The data originally stored in the checkpoint by this hook, possibly null. 
 	 * 
 	 * @throws Exception Exceptions thrown while restoring the checkpoint will cause the restore

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
index 42cefc6..ea42724 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/executiongraph/ExecutionGraph.java
@@ -938,7 +938,7 @@ public class ExecutionGraph implements AccessExecutionGraph, Archiveable<Archive
 				},
 				futureExecutor);
 
-			// from now on, slots will be rescued by the the futures and their completion, or by the timeout
+			// from now on, slots will be rescued by the futures and their completion, or by the timeout
 			successful = true;
 		}
 		finally {
@@ -1211,7 +1211,7 @@ public class ExecutionGraph implements AccessExecutionGraph, Archiveable<Archive
 	 *
 	 * @param errorIfNoCheckpoint Fail if there is no checkpoint available
 	 * @param allowNonRestoredState Allow to skip checkpoint state that cannot be mapped
-	 * to the the ExecutionGraph vertices (if the checkpoint contains state for a
+	 * to the ExecutionGraph vertices (if the checkpoint contains state for a
 	 * job vertex that is not part of this ExecutionGraph).
 	 */
 	public void restoreLatestCheckpointedState(boolean errorIfNoCheckpoint, boolean allowNonRestoredState) throws Exception {

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/messages/RemoteRpcInvocation.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/messages/RemoteRpcInvocation.java b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/messages/RemoteRpcInvocation.java
index 779d5dd..7b9fb88 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/messages/RemoteRpcInvocation.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/rpc/messages/RemoteRpcInvocation.java
@@ -31,7 +31,7 @@ import java.io.Serializable;
  * message has to be serialized.
  * <p>
  * In order to fail fast and report an appropriate error message to the user, the method name, the
- * parameter types and the arguments are eagerly serialized. In case the the invocation call
+ * parameter types and the arguments are eagerly serialized. In case the invocation call
  * contains a non-serializable object, then an {@link IOException} is thrown.
  */
 public class RemoteRpcInvocation implements RpcInvocation, Serializable {

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java b/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java
index 3b49df7..b82abf3 100644
--- a/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java
+++ b/flink-runtime/src/main/java/org/apache/flink/runtime/state/StateObject.java
@@ -45,7 +45,7 @@ public interface StateObject extends Serializable {
 	void discardState() throws Exception;
 
 	/**
-	 * Returns the size of the state in bytes. If the the size is not known, this
+	 * Returns the size of the state in bytes. If the size is not known, this
 	 * method should return {@code 0}.
 	 * 
 	 * <p>The values produced by this method are only used for informational purposes and

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala
----------------------------------------------------------------------
diff --git a/flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala b/flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala
index c708362..952177c 100644
--- a/flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala
+++ b/flink-runtime/src/main/scala/org/apache/flink/runtime/akka/AkkaUtils.scala
@@ -697,8 +697,8 @@ object AkkaUtils {
     * @param fn The function to retry
     * @param stopCond Flag to signal termination
     * @param maxSleepBetweenRetries Max random sleep time between retries
-    * @tparam T Return type of the the function to retry
-    * @return Return value of the the function to retry
+    * @tparam T Return type of the function to retry
+    * @return Return value of the function to retry
     */
   @tailrec
   def retryOnBindException[T](

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/JoinedStreams.java
----------------------------------------------------------------------
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/JoinedStreams.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/JoinedStreams.java
index f23ebcf..088fab9 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/JoinedStreams.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/datastream/JoinedStreams.java
@@ -42,7 +42,7 @@ import static java.util.Objects.requireNonNull;
  * <p>To finalize the join operation you also need to specify a {@link KeySelector} for
  * both the first and second input and a {@link WindowAssigner}.
  *
- * <p>Note: Right now, the the join is being evaluated in memory so you need to ensure that the number
+ * <p>Note: Right now, the join is being evaluated in memory so you need to ensure that the number
  * of elements per key does not get too high. Otherwise the JVM might crash.
  *
  * <p>Example:

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldsFromTuple.java
----------------------------------------------------------------------
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldsFromTuple.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldsFromTuple.java
index e2ba284..8d9b84f 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldsFromTuple.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/functions/windowing/delta/extractor/FieldsFromTuple.java
@@ -31,7 +31,7 @@ public class FieldsFromTuple implements Extractor<Tuple, double[]> {
 	int[] indexes;
 
 	/**
-	 * Extracts one or more fields of the the type Double from a tuple and puts
+	 * Extracts one or more fields of the type Double from a tuple and puts
 	 * them into a new double[] (in the specified order).
 	 *
 	 * @param indexes

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/Output.java
----------------------------------------------------------------------
diff --git a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/Output.java b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/Output.java
index cabfebb..3f39152 100644
--- a/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/Output.java
+++ b/flink-streaming-java/src/main/java/org/apache/flink/streaming/api/operators/Output.java
@@ -45,7 +45,7 @@ public interface Output<T> extends Collector<T> {
 	void emitWatermark(Watermark mark);
 
 	/**
-	 * Emits a record the the side output identified by the given {@link OutputTag}.
+	 * Emits a record the side output identified by the given {@link OutputTag}.
 	 *
 	 * @param record The record to collect.
 	 */

http://git-wip-us.apache.org/repos/asf/flink/blob/19d484a7/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala
----------------------------------------------------------------------
diff --git a/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala b/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala
index f6fd074..b5a7cd6 100644
--- a/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala
+++ b/flink-streaming-scala/src/main/scala/org/apache/flink/streaming/api/scala/DataStream.scala
@@ -527,7 +527,7 @@ class DataStream[T](stream: JavaStream[T]) {
    * stream of the iterative part.
    *
    * The input stream of the iterate operator and the feedback stream will be treated
-   * as a ConnectedStreams where the the input is connected with the feedback stream.
+   * as a ConnectedStreams where the input is connected with the feedback stream.
    *
    * This allows the user to distinguish standard input from feedback inputs.
    *