Posted to commits@flink.apache.org by ch...@apache.org on 2018/06/11 13:03:00 UTC

flink git commit: [FLINK-9508][docs] Fix spelling/punctuation errors

Repository: flink
Updated Branches:
  refs/heads/master ed3890af0 -> dff03cac0


[FLINK-9508][docs] Fix spelling/punctuation errors

This closes #6112.


Project: http://git-wip-us.apache.org/repos/asf/flink/repo
Commit: http://git-wip-us.apache.org/repos/asf/flink/commit/dff03cac
Tree: http://git-wip-us.apache.org/repos/asf/flink/tree/dff03cac
Diff: http://git-wip-us.apache.org/repos/asf/flink/diff/dff03cac

Branch: refs/heads/master
Commit: dff03cac034de16ea24587819a73cd286b7b6945
Parents: ed3890a
Author: Yadan.JS <y_...@yahoo.com>
Authored: Mon May 28 23:13:59 2018 -0400
Committer: zentol <ch...@apache.org>
Committed: Mon Jun 11 15:02:48 2018 +0200

----------------------------------------------------------------------
 docs/dev/execution_configuration.md |  2 +-
 docs/ops/cli.md                     | 18 ++++++++--------
 docs/ops/filesystems.md             |  8 +++----
 docs/ops/security-ssl.md            | 36 ++++++++++++++++----------------
 docs/ops/upgrading.md               |  4 ++--
 5 files changed, 34 insertions(+), 34 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/flink/blob/dff03cac/docs/dev/execution_configuration.md
----------------------------------------------------------------------
diff --git a/docs/dev/execution_configuration.md b/docs/dev/execution_configuration.md
index 8fe1b63..f0103b0 100644
--- a/docs/dev/execution_configuration.md
+++ b/docs/dev/execution_configuration.md
@@ -55,7 +55,7 @@ With the closure cleaner disabled, it might happen that an anonymous user functi
 
 - `getExecutionMode()` / `setExecutionMode()`. The default execution mode is PIPELINED. Sets the execution mode to execute the program. The execution mode defines whether data exchanges are performed in a batch or in a pipelined manner.
 
-- `enableForceKryo()` / **`disableForceKryo`**. Kryo is not forced by default. Forces the GenericTypeInformation to use the Kryo serializer for POJOS even though we could analyze them as a POJO. In some cases this might be preferable. For example, when Flink's internal serializers fail to handle a POJO properly.
+- `enableForceKryo()` / **`disableForceKryo()`**. Kryo is not forced by default. Forces the GenericTypeInformation to use the Kryo serializer for POJOs even though we could analyze them as a POJO. In some cases this might be preferable. For example, when Flink's internal serializers fail to handle a POJO properly.
 
 - `enableForceAvro()` / **`disableForceAvro()`**. Avro is not forced by default. Forces the Flink AvroTypeInformation to use the Avro serializer instead of Kryo for serializing Avro POJOs.
 
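For context, the switches in this hunk live on Flink's `ExecutionConfig`. A minimal sketch of toggling them (assuming the Scala `ExecutionEnvironment`; the method names are the ones documented above):

{% highlight scala %}
import org.apache.flink.api.common.ExecutionMode
import org.apache.flink.api.scala._

val env = ExecutionEnvironment.getExecutionEnvironment
val config = env.getConfig

// Default is PIPELINED; perform data exchanges in batch instead.
config.setExecutionMode(ExecutionMode.BATCH)

// Force Kryo even for types Flink could analyze as POJOs.
config.enableForceKryo()

// Force the Avro serializer (instead of Kryo) for Avro POJOs.
config.enableForceAvro()
{% endhighlight %}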

http://git-wip-us.apache.org/repos/asf/flink/blob/dff03cac/docs/ops/cli.md
----------------------------------------------------------------------
diff --git a/docs/ops/cli.md b/docs/ops/cli.md
index 0f2c685..95c3c45 100644
--- a/docs/ops/cli.md
+++ b/docs/ops/cli.md
@@ -23,8 +23,8 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-Flink provides a command-line interface to run programs that are packaged
-as JAR files, and control their execution.  The command line interface is part
+Flink provides a Command-Line Interface (CLI) to run programs that are packaged
+as JAR files, and control their execution.  The CLI is part
 of any Flink setup, available in local single node setups and in
 distributed setups. It is located under `<flink-home>/bin/flink`
 and connects by default to the running Flink master (JobManager) that was
@@ -49,25 +49,25 @@ The command line can be used to
 
 ## Examples
 
--   Run example program with no arguments.
+-   Run example program with no arguments:
 
         ./bin/flink run ./examples/batch/WordCount.jar
 
--   Run example program with arguments for input and result files
+-   Run example program with arguments for input and result files:
 
         ./bin/flink run ./examples/batch/WordCount.jar \
                              --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
 
--   Run example program with parallelism 16 and arguments for input and result files
+-   Run example program with parallelism 16 and arguments for input and result files:
 
         ./bin/flink run -p 16 ./examples/batch/WordCount.jar \
                              --input file:///home/user/hamlet.txt --output file:///home/user/wordcount_out
 
--   Run example program with flink log output disabled
+-   Run example program with flink log output disabled:
 
             ./bin/flink run -q ./examples/batch/WordCount.jar
 
--   Run example program in detached mode
+-   Run example program in detached mode:
 
             ./bin/flink run -d ./examples/batch/WordCount.jar
 
@@ -126,7 +126,7 @@ The command line can be used to
         ./bin/flink modify <jobID> -p <newParallelism>
 
 
-The difference between cancelling and stopping a (streaming) job is the following:
+**NOTE**: The difference between cancelling and stopping a (streaming) job is the following:
 
 On a cancel call, the operators in a job immediately receive a `cancel()` method call to cancel them as
 soon as possible.
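As a quick sketch of the two commands this note contrasts (`<jobID>` is a placeholder, as elsewhere on this page):

{% highlight bash %}
# Cancel: operators immediately receive a cancel() call.
./bin/flink cancel <jobID>

# Stop: a more graceful way to end a streaming job (stoppable sources only).
./bin/flink stop <jobID>
{% endhighlight %}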
@@ -170,7 +170,7 @@ Everything else is the same as described in the above **Trigger a Savepoint** se
 You can atomically trigger a savepoint and cancel a job.
 
 {% highlight bash %}
-./bin/flink cancel -s  [savepointDirectory] <jobID>
+./bin/flink cancel -s [savepointDirectory] <jobID>
 {% endhighlight %}
 
 If no savepoint directory is configured, you need to configure a default savepoint directory for the Flink installation (see [Savepoints]({{site.baseurl}}/ops/state/savepoints.html#configuration)).
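Putting the savepoint commands together, a sketch of the full cancel-with-savepoint workflow (placeholders follow the conventions above; `run -s` resumes a job from a savepoint path):

{% highlight bash %}
# Trigger a savepoint for a running job; the CLI prints the savepoint path.
./bin/flink savepoint <jobID> [savepointDirectory]

# Or atomically trigger a savepoint and cancel the job in one step.
./bin/flink cancel -s [savepointDirectory] <jobID>

# Resume the (possibly upgraded) job from the savepoint that was taken.
./bin/flink run -s <savepointPath> <jarFile>
{% endhighlight %}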

http://git-wip-us.apache.org/repos/asf/flink/blob/dff03cac/docs/ops/filesystems.md
----------------------------------------------------------------------
diff --git a/docs/ops/filesystems.md b/docs/ops/filesystems.md
index 50e3d24..dab3817 100644
--- a/docs/ops/filesystems.md
+++ b/docs/ops/filesystems.md
@@ -73,7 +73,7 @@ That way, Flink seamlessly supports all of Hadoop file systems, and all Hadoop-c
 
 ## Common File System configurations
 
-The following configuration settings exist across different file systems
+The following configuration settings exist across different file systems.
 
 #### Default File System
 
@@ -83,8 +83,8 @@ If paths to files do not explicitly specify a file system scheme (and authority)
 fs.default-scheme: <default-fs>
 {% endhighlight %}
 
-For example, if the default file system configured as `fs.default-scheme: hdfs://localhost:9000/`, then a a file path of
-`/user/hugo/in.txt'` is interpreted as `hdfs://localhost:9000/user/hugo/in.txt'`
+For example, if the default file system is configured as `fs.default-scheme: hdfs://localhost:9000/`, then a file path of
+`/user/hugo/in.txt` is interpreted as `hdfs://localhost:9000/user/hugo/in.txt`.
 
 #### Connection limiting
 
@@ -112,7 +112,7 @@ To prevent inactive streams from taking up the complete pool (preventing new con
 `fs.<scheme>.limit.stream-timeout`. If a stream does not read/write any bytes for at least that amount of time, it is forcibly closed.
 
 These limits are enforced per TaskManager, so each TaskManager in a Flink application or cluster will open up to that number of connections.
-In addition, the The limit are also enforced only per FileSystem instance. Because File Systems are created per scheme and authority, different
+In addition, the limits are also only enforced per FileSystem instance. Because File Systems are created per scheme and authority, different
 authorities will have their own connection pool. For example `hdfs://myhdfs:50010/` and `hdfs://anotherhdfs:4399/` will have separate pools.
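As an illustration for a single scheme, a hedged flink-conf.yaml fragment (the `stream-timeout` key is the one described above; the pool-size key and both values are assumptions following the same `fs.<scheme>.limit.*` pattern):

{% highlight yaml %}
# Assumed companion key: cap HDFS connections per TaskManager.
fs.hdfs.limit.total: 128
# Force-close streams that read/write no bytes for 60000 ms.
fs.hdfs.limit.stream-timeout: 60000
{% endhighlight %}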
 
 

http://git-wip-us.apache.org/repos/asf/flink/blob/dff03cac/docs/ops/security-ssl.md
----------------------------------------------------------------------
diff --git a/docs/ops/security-ssl.md b/docs/ops/security-ssl.md
index b70d867..c2ba7df 100644
--- a/docs/ops/security-ssl.md
+++ b/docs/ops/security-ssl.md
@@ -22,24 +22,24 @@ specific language governing permissions and limitations
 under the License.
 -->
 
-This page provides instructions on how to enable SSL for the network communication between different flink components.
+This page provides instructions on how to enable SSL for the network communication between different Flink components.
 
 ## SSL Configuration
 
-SSL can be enabled for all network communication between flink components. SSL keystores and truststore has to be deployed on each flink node and configured (conf/flink-conf.yaml) using keys in the security.ssl.* namespace (Please see the [configuration page](config.html) for details). SSL can be selectively enabled/disabled for different transports using the following flags. These flags are only applicable when security.ssl.enabled is set to true.
+SSL can be enabled for all network communication between Flink components. SSL keystores and a truststore have to be deployed on each Flink node and configured (conf/flink-conf.yaml) using keys in the security.ssl.* namespace (please see the [configuration page](config.html) for details). SSL can be selectively enabled/disabled for different transports using the following flags. These flags are only applicable when security.ssl.enabled is set to true.
 
 * **taskmanager.data.ssl.enabled**: SSL flag for data communication between task managers
 * **blob.service.ssl.enabled**: SSL flag for blob service client/server communication
-* **akka.ssl.enabled**: SSL flag for the akka based control connection between the flink client, jobmanager and taskmanager 
+* **akka.ssl.enabled**: SSL flag for the akka-based control connection between the Flink client, jobmanager and taskmanager
 * **jobmanager.web.ssl.enabled**: Flag to enable https access to the jobmanager's web frontend
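Collected into conf/flink-conf.yaml, a sketch with every transport enabled (the keys are exactly the flags listed above; the `true` values are illustrative):

{% highlight yaml %}
security.ssl.enabled: true
taskmanager.data.ssl.enabled: true
blob.service.ssl.enabled: true
akka.ssl.enabled: true
jobmanager.web.ssl.enabled: true
{% endhighlight %}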
 
 ## Deploying Keystores and Truststores
 
-You need to have a Java Keystore generated and copied to each node in the flink cluster. The common name or subject alternative names in the certificate should match the node's hostname and IP address. Keystores and truststores can be generated using the [keytool utility](https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html). All flink components should have read access to the keystore and truststore files.
+You need to have a Java Keystore generated and copied to each node in the Flink cluster. The common name or subject alternative names in the certificate should match the node's hostname and IP address. Keystores and truststores can be generated using the [keytool utility](https://docs.oracle.com/javase/8/docs/technotes/tools/unix/keytool.html). All Flink components should have read access to the keystore and truststore files.
 
-### Example: Creating self signed CA and keystores for a 2 node cluster
+### Example: Creating a self-signed CA and keystores for a two-node cluster
 
-Execute the following keytool commands to create a truststore with a self signed CA
+Execute the following keytool commands to create a truststore with a self-signed CA.
 
 {% highlight bash %}
 keytool -genkeypair -alias ca -keystore ca.keystore -dname "CN=Sample CA" -storepass password -keypass password -keyalg RSA -ext bc=ca:true
@@ -70,10 +70,10 @@ keytool -importcert -keystore node2.keystore -storepass password -file node2.cer
 ## Standalone Deployment
 Configure each node in the standalone cluster to pick up the keystore and truststore files present in the local file system.
 
-### Example: 2 node cluster
+### Example: Two-node cluster
 
-* Generate 2 keystores, one for each node, and copy them to the filesystem on the respective node. Also copy the pulic key of the CA (which was used to sign the certificates in the keystore) as a Java truststore on both the nodes
-* Configure conf/flink-conf.yaml to pick up these files
+* Generate two keystores, one for each node, and copy them to the filesystem on the respective node. Also copy the public key of the CA (which was used to sign the certificates in the keystore) as a Java truststore to both nodes.
+* Configure conf/flink-conf.yaml to pick up these files.
 
 #### Node 1
 {% highlight yaml %}
@@ -95,15 +95,15 @@ security.ssl.truststore: /usr/local/ca.truststore
 security.ssl.truststore-password: password
 {% endhighlight %}
 
-* Restart the flink components to enable SSL for all of flink's internal communication
-* Verify by accessing the jobmanager's UI using https url. The task manager's path in the UI should show akka.ssl.tcp:// as the protocol
-* The blob server and task manager's data communication can be verified from the log files
+* Restart the Flink components to enable SSL for all of Flink's internal communication
+* Verify by accessing the jobmanager's UI using the https URL. The taskmanager's path in the UI should show akka.ssl.tcp:// as the protocol
+* The blob server and taskmanager's data communication can be verified from the log files
 
 ## YARN Deployment
-The keystores and truststore can be deployed in a YARN setup in multiple ways depending on the cluster setup. Following are 2 ways to achieve this
+The keystores and truststore can be deployed in a YARN setup in multiple ways depending on the cluster setup. The following are two ways to achieve this.
 
 ### 1. Deploy keystores before starting the YARN session
-The keystores and truststore should be generated and deployed on all nodes in the YARN setup where flink components can potentially be executed. The same flink config file from the flink YARN client is used for all the flink components running in the YARN cluster. Therefore we need to ensure the keystore is deployed and accessible using the same filepath in all the YARN nodes.
+The keystores and truststore should be generated and deployed on all nodes in the YARN setup where Flink components can potentially be executed. The same Flink config file from the Flink YARN client is used for all the Flink components running in the YARN cluster. Therefore, we need to ensure the keystore is deployed and accessible under the same file path on all the YARN nodes.
 
 #### Example config
 {% highlight yaml %}
@@ -117,12 +117,12 @@ security.ssl.truststore-password: password
 
 Now you can start the YARN session from the CLI like you would normally do.
 
-### 2. Use YARN cli to deploy the keystores and truststore
-We can use the YARN client's ship files option (-yt) to distribute the keystores and truststore. Since the same keystore will be deployed at all nodes, we need to ensure a single certificate in the keystore can be served for all nodes. This can be done by either using the Subject Alternative Name(SAN) extension in the certificate and setting it to cover all nodes (hostname and ip addresses) in the cluster or by using wildcard subdomain names (if the cluster is setup accordingly). 
+### 2. Use YARN CLI to deploy the keystores and truststore
+We can use the YARN client's ship files option (-yt) to distribute the keystores and truststore. Since the same keystore will be deployed on all nodes, we need to ensure a single certificate in the keystore can be served for all nodes. This can be done by either using the Subject Alternative Name (SAN) extension in the certificate and setting it to cover all nodes (hostnames and IP addresses) in the cluster or by using wildcard subdomain names (if the cluster is set up accordingly).
 
 #### Example
 * Supply the following parameters to the keytool command when generating the keystore: -ext SAN=dns:node1.company.org,ip:192.168.1.1,dns:node2.company.org,ip:192.168.1.2
-* Copy the keystore and the CA's truststore into a local directory (at the cli's working directory), say deploy-keys/
+* Copy the keystore and the CA's truststore into a local directory (at the CLI's working directory), say deploy-keys/
 * Update the configuration to pick up the files from a relative path
 
 {% highlight yaml %}
@@ -140,6 +140,6 @@ security.ssl.truststore-password: password
 flink run -m yarn-cluster -yt deploy-keys/ TestJob.jar
 {% endhighlight %}
 
-When deployed using YARN, flink's web dashboard is accessible through YARN proxy's Tracking URL. To ensure that the YARN proxy is able to access flink's https url you need to configure YARN proxy to accept flink's SSL certificates. Add the custom CA certificate into Java's default truststore on the YARN Proxy node.
+When deployed using YARN, Flink's web dashboard is accessible through the YARN proxy's Tracking URL. To ensure that the YARN proxy is able to access Flink's https URL, you need to configure the YARN proxy to accept Flink's SSL certificates. Add the custom CA certificate into Java's default truststore on the YARN proxy node.
 
 {% top %}
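For completeness, the SAN-based keystore from option 2 could be generated with a single keytool command along these lines (a sketch: alias, dname, and passwords are placeholders; the -ext SAN value is the one from the example above):

{% highlight bash %}
keytool -genkeypair -alias flink -keystore deploy-keys/flink.keystore \
  -dname "CN=flink.company.org" -storepass password -keypass password -keyalg RSA \
  -ext SAN=dns:node1.company.org,ip:192.168.1.1,dns:node2.company.org,ip:192.168.1.2
{% endhighlight %}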

http://git-wip-us.apache.org/repos/asf/flink/blob/dff03cac/docs/ops/upgrading.md
----------------------------------------------------------------------
diff --git a/docs/ops/upgrading.md b/docs/ops/upgrading.md
index c9a7969..475dd40 100644
--- a/docs/ops/upgrading.md
+++ b/docs/ops/upgrading.md
@@ -68,7 +68,7 @@ When an application is restarted from a savepoint, Flink matches the operator st
 
 {% highlight scala %}
 val mappedEvents: DataStream[(Int, Long)] = events
-  .map(new MyStatefulMapFunc()).uid(“mapper-1”)
+  .map(new MyStatefulMapFunc()).uid("mapper-1")
 {% endhighlight %}
 
 **Note:** Since the operator IDs stored in a savepoint and IDs of operators in the application to start must be equal, it is highly recommended to assign unique IDs to all operators of an application that might be upgraded in the future. This advice applies to all operators, i.e., operators with and without explicitly declared operator state, because some operators have internal state that is not visible to the user. Upgrading an application without assigned operator IDs is significantly more difficult and may only be possible via a low-level workaround using the `setUidHash()` method.
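Extending the snippet above, a sketch of giving every stateful operator in a small pipeline its own ID (the keyed aggregation is hypothetical):

{% highlight scala %}
val counts: DataStream[(Int, Long)] = events
  .map(new MyStatefulMapFunc()).uid("mapper-1")
  .keyBy(_._1)
  .reduce((a, b) => (a._1, a._2 + b._2)).uid("reducer-1")
{% endhighlight %}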
@@ -141,7 +141,7 @@ about the steps that we outlined before.
 ### Preconditions
 
 Before starting the migration, please check that the jobs you are trying to migrate are following the
-best practises for [savepoints]({{ site.baseurl }}/ops/state/savepoints.html). Also, check out the 
+best practices for [savepoints]({{ site.baseurl }}/ops/state/savepoints.html). Also, check out the 
 [API Migration Guides]({{ site.baseurl }}/dev/migration.html) to see if there are any API changes related to migrating
 savepoints to newer versions.