Posted to commits@flink.apache.org by se...@apache.org on 2020/06/16 08:06:03 UTC

[flink] branch release-1.11 updated (030df18 -> 81c2511)

This is an automated email from the ASF dual-hosted git repository.

sewen pushed a change to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git.


    from 030df18  [FLINK-17976][docs][k8s/docker] Improvements about custom docker images
     new 3e3ea63  [hotfix] Remove obsolete .gitattributes file
     new e3dce3e  [FLINK-18307][scripts] Rename 'slaves' file to 'workers'
     new 81c2511  [hotfix][docs] Remove outdated confusing HDFS reference in cluster setup.

The 3 revisions listed above as "new" are entirely new to this
repository and will be described in separate emails.  The revisions
listed as "add" were already present in the repository and have only
been added to this reference.


Summary of changes:
 .gitattributes                                     |  3 --
 docs/ops/deployment/cluster_setup.md               |  6 +--
 docs/ops/deployment/cluster_setup.zh.md            |  6 +--
 flink-dist/src/main/flink-bin/bin/config.sh        | 46 +++++++++++-----------
 flink-dist/src/main/flink-bin/bin/start-cluster.sh |  2 +-
 flink-dist/src/main/flink-bin/bin/stop-cluster.sh  |  2 +-
 .../src/main/flink-bin/conf/{slaves => workers}    |  0
 .../flink/tests/util/flink/FlinkDistribution.java  |  2 +-
 pom.xml                                            |  2 +-
 9 files changed, 33 insertions(+), 36 deletions(-)
 delete mode 100644 .gitattributes
 rename flink-dist/src/main/flink-bin/conf/{slaves => workers} (100%)


[flink] 02/03: [FLINK-18307][scripts] Rename 'slaves' file to 'workers'

Posted by se...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git

commit e3dce3e39c1ea212ab577217aa4b7f394c110332
Author: Stephan Ewen <se...@apache.org>
AuthorDate: Mon Jun 15 18:59:39 2020 +0200

    [FLINK-18307][scripts] Rename 'slaves' file to 'workers'
---
 docs/ops/deployment/cluster_setup.md               |  6 +--
 docs/ops/deployment/cluster_setup.zh.md            |  6 +--
 flink-dist/src/main/flink-bin/bin/config.sh        | 46 +++++++++++-----------
 flink-dist/src/main/flink-bin/bin/start-cluster.sh |  2 +-
 flink-dist/src/main/flink-bin/bin/stop-cluster.sh  |  2 +-
 .../src/main/flink-bin/conf/{slaves => workers}    |  0
 .../flink/tests/util/flink/FlinkDistribution.java  |  2 +-
 pom.xml                                            |  2 +-
 8 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/docs/ops/deployment/cluster_setup.md b/docs/ops/deployment/cluster_setup.md
index 75455d2..076c0d8 100644
--- a/docs/ops/deployment/cluster_setup.md
+++ b/docs/ops/deployment/cluster_setup.md
@@ -72,7 +72,7 @@ Set the `jobmanager.rpc.address` key to point to your master node. You should al
 
 These values are given in MB. If some worker nodes have more main memory which you want to allocate to the Flink system you can overwrite the default value by setting `taskmanager.memory.process.size` or `taskmanager.memory.flink.size` in *conf/flink-conf.yaml* on those specific nodes.
 
-Finally, you must provide a list of all nodes in your cluster which shall be used as worker nodes. Therefore, similar to the HDFS configuration, edit the file *conf/slaves* and enter the IP/host name of each worker node. Each worker node will later run a TaskManager.
+Finally, you must provide a list of all nodes in your cluster which shall be used as worker nodes. Therefore, similar to the HDFS configuration, edit the file *conf/workers* and enter the IP/host name of each worker node. Each worker node will later run a TaskManager.
 
 The following example illustrates the setup with three nodes (with IP addresses from _10.0.0.1_
 to _10.0.0.3_ and hostnames _master_, _worker1_, _worker2_) and shows the contents of the
@@ -91,7 +91,7 @@ configuration files (which need to be accessible at the same path on all machine
   </div>
 <div class="row" style="margin-top: 1em;">
   <p class="lead text-center">
-    /path/to/<strong>flink/<br>conf/slaves</strong>
+    /path/to/<strong>flink/<br>conf/workers</strong>
   <pre>
 10.0.0.2
 10.0.0.3</pre>
@@ -118,7 +118,7 @@ are very important configuration values.
 
 ### Starting Flink
 
-The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the *slaves* file to start the TaskManager on each node. Now your Flink system is up and running. The JobManager running on the local node will now accept jobs at the configured RPC port.
+The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the *workers* file to start the TaskManager on each node. Now your Flink system is up and running. The JobManager running on the local node will now accept jobs at the configured RPC port.
 
 Assuming that you are on the master node and inside the Flink directory:
 
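
For illustration, the renamed file keeps the format the old *slaves* file had: one worker host per line, with everything after a '#' treated as a comment and an optional /topology/prefix stripped by config.sh. A minimal sketch of the workflow described above, reusing the example hosts from the docs:

    # conf/workers -- one TaskManager host per line; '#' starts a comment
    10.0.0.2    # worker1
    10.0.0.3    # worker2

    # From the master node, inside the Flink directory:
    bin/start-cluster.sh    # connects via SSH to every host above and starts a TaskManager
    bin/stop-cluster.sh     # stops the TaskManagers and the JobManager again
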
diff --git a/docs/ops/deployment/cluster_setup.zh.md b/docs/ops/deployment/cluster_setup.zh.md
index f9d6356..bc1788f 100644
--- a/docs/ops/deployment/cluster_setup.zh.md
+++ b/docs/ops/deployment/cluster_setup.zh.md
@@ -72,7 +72,7 @@ Set the `jobmanager.rpc.address` key to point to your master node. You should al
 
 These values are given in MB. If some worker nodes have more main memory which you want to allocate to the Flink system you can overwrite the default value by setting setting `taskmanager.memory.process.size` or `taskmanager.memory.flink.size` in *conf/flink-conf.yaml* on those specific nodes.
 
-Finally, you must provide a list of all nodes in your cluster which shall be used as worker nodes. Therefore, similar to the HDFS configuration, edit the file *conf/slaves* and enter the IP/host name of each worker node. Each worker node will later run a TaskManager.
+Finally, you must provide a list of all nodes in your cluster which shall be used as worker nodes. Therefore, similar to the HDFS configuration, edit the file *conf/workers* and enter the IP/host name of each worker node. Each worker node will later run a TaskManager.
 
 The following example illustrates the setup with three nodes (with IP addresses from _10.0.0.1_
 to _10.0.0.3_ and hostnames _master_, _worker1_, _worker2_) and shows the contents of the
@@ -91,7 +91,7 @@ configuration files (which need to be accessible at the same path on all machine
   </div>
 <div class="row" style="margin-top: 1em;">
   <p class="lead text-center">
-    /path/to/<strong>flink/<br>conf/slaves</strong>
+    /path/to/<strong>flink/<br>conf/workers</strong>
   <pre>
 10.0.0.2
 10.0.0.3</pre>
@@ -118,7 +118,7 @@ are very important configuration values.
 
 ### Starting Flink
 
-The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the *slaves* file to start the TaskManager on each node. Now your Flink system is up and running. The JobManager running on the local node will now accept jobs at the configured RPC port.
+The following script starts a JobManager on the local node and connects via SSH to all worker nodes listed in the *workers* file to start the TaskManager on each node. Now your Flink system is up and running. The JobManager running on the local node will now accept jobs at the configured RPC port.
 
 Assuming that you are on the master node and inside the Flink directory:
 
diff --git a/flink-dist/src/main/flink-bin/bin/config.sh b/flink-dist/src/main/flink-bin/bin/config.sh
index a42863f..99ef7c0 100755
--- a/flink-dist/src/main/flink-bin/bin/config.sh
+++ b/flink-dist/src/main/flink-bin/bin/config.sh
@@ -354,14 +354,14 @@ fi
 # also potentially includes topology information and the taskManager type
 extractHostName() {
     # handle comments: extract first part of string (before first # character)
-    SLAVE=`echo $1 | cut -d'#' -f 1`
+    WORKER=`echo $1 | cut -d'#' -f 1`
 
     # Extract the hostname from the network hierarchy
-    if [[ "$SLAVE" =~ ^.*/([0-9a-zA-Z.-]+)$ ]]; then
-            SLAVE=${BASH_REMATCH[1]}
+    if [[ "$WORKER" =~ ^.*/([0-9a-zA-Z.-]+)$ ]]; then
+            WORKER=${BASH_REMATCH[1]}
     fi
 
-    echo $SLAVE
+    echo $WORKER
 }
 
 # Auxilliary functions for log file rotation
@@ -422,52 +422,52 @@ readMasters() {
     done < "$MASTERS_FILE"
 }
 
-readSlaves() {
-    SLAVES_FILE="${FLINK_CONF_DIR}/slaves"
+readWorkers() {
+    WORKERS_FILE="${FLINK_CONF_DIR}/workers"
 
-    if [[ ! -f "$SLAVES_FILE" ]]; then
-        echo "No slaves file. Please specify slaves in 'conf/slaves'."
+    if [[ ! -f "$WORKERS_FILE" ]]; then
+        echo "No workers file. Please specify workers in 'conf/workers'."
         exit 1
     fi
 
-    SLAVES=()
+    WORKERS=()
 
-    SLAVES_ALL_LOCALHOST=true
+    WORKERS_ALL_LOCALHOST=true
     GOON=true
     while $GOON; do
         read line || GOON=false
         HOST=$( extractHostName $line)
         if [ -n "$HOST" ] ; then
-            SLAVES+=(${HOST})
+            WORKERS+=(${HOST})
             if [ "${HOST}" != "localhost" ] && [ "${HOST}" != "127.0.0.1" ] ; then
-                SLAVES_ALL_LOCALHOST=false
+                WORKERS_ALL_LOCALHOST=false
             fi
         fi
-    done < "$SLAVES_FILE"
+    done < "$WORKERS_FILE"
 }
 
-# starts or stops TMs on all slaves
-# TMSlaves start|stop
-TMSlaves() {
+# starts or stops TMs on all workers
+# TMWorkers start|stop
+TMWorkers() {
     CMD=$1
 
-    readSlaves
+    readWorkers
 
-    if [ ${SLAVES_ALL_LOCALHOST} = true ] ; then
+    if [ ${WORKERS_ALL_LOCALHOST} = true ] ; then
         # all-local setup
-        for slave in ${SLAVES[@]}; do
+        for worker in ${WORKERS[@]}; do
             "${FLINK_BIN_DIR}"/taskmanager.sh "${CMD}"
         done
     else
         # non-local setup
-        # Stop TaskManager instance(s) using pdsh (Parallel Distributed Shell) when available
+        # start/stop TaskManager instance(s) using pdsh (Parallel Distributed Shell) when available
         command -v pdsh >/dev/null 2>&1
         if [[ $? -ne 0 ]]; then
-            for slave in ${SLAVES[@]}; do
-                ssh -n $FLINK_SSH_OPTS $slave -- "nohup /bin/bash -l \"${FLINK_BIN_DIR}/taskmanager.sh\" \"${CMD}\" &"
+            for worker in ${WORKERS[@]}; do
+                ssh -n $FLINK_SSH_OPTS $worker -- "nohup /bin/bash -l \"${FLINK_BIN_DIR}/taskmanager.sh\" \"${CMD}\" &"
             done
         else
-            PDSH_SSH_ARGS="" PDSH_SSH_ARGS_APPEND=$FLINK_SSH_OPTS pdsh -w $(IFS=, ; echo "${SLAVES[*]}") \
+            PDSH_SSH_ARGS="" PDSH_SSH_ARGS_APPEND=$FLINK_SSH_OPTS pdsh -w $(IFS=, ; echo "${WORKERS[*]}") \
                 "nohup /bin/bash -l \"${FLINK_BIN_DIR}/taskmanager.sh\" \"${CMD}\""
         fi
     fi
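
The renamed helpers keep the previous semantics: extractHostName drops trailing '#' comments and an optional topology prefix, readWorkers collects the resulting hostnames from conf/workers, and TMWorkers dispatches taskmanager.sh over plain ssh when pdsh is not installed. A small standalone sketch of the host parsing, with made-up example inputs:

    # Standalone copy of the parsing helper above, annotated (inputs are illustrative)
    extractHostName() {
        # keep only the part before the first '#' (comment handling)
        WORKER=`echo $1 | cut -d'#' -f 1`
        # if the entry carries a /topology/prefix, keep only the trailing hostname
        if [[ "$WORKER" =~ ^.*/([0-9a-zA-Z.-]+)$ ]]; then
            WORKER=${BASH_REMATCH[1]}
        fi
        echo $WORKER
    }

    extractHostName "worker1 # rack A"     # prints: worker1
    extractHostName "/rack1/10.0.0.2"      # prints: 10.0.0.2
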
diff --git a/flink-dist/src/main/flink-bin/bin/start-cluster.sh b/flink-dist/src/main/flink-bin/bin/start-cluster.sh
index 068577b..720b33c 100755
--- a/flink-dist/src/main/flink-bin/bin/start-cluster.sh
+++ b/flink-dist/src/main/flink-bin/bin/start-cluster.sh
@@ -50,4 +50,4 @@ fi
 shopt -u nocasematch
 
 # Start TaskManager instance(s)
-TMSlaves start
+TMWorkers start
diff --git a/flink-dist/src/main/flink-bin/bin/stop-cluster.sh b/flink-dist/src/main/flink-bin/bin/stop-cluster.sh
index 7eb58be..d29b4f3 100755
--- a/flink-dist/src/main/flink-bin/bin/stop-cluster.sh
+++ b/flink-dist/src/main/flink-bin/bin/stop-cluster.sh
@@ -23,7 +23,7 @@ bin=`cd "$bin"; pwd`
 . "$bin"/config.sh
 
 # Stop TaskManager instance(s)
-TMSlaves stop
+TMWorkers stop
 
 # Stop JobManager instance(s)
 shopt -s nocasematch
diff --git a/flink-dist/src/main/flink-bin/conf/slaves b/flink-dist/src/main/flink-bin/conf/workers
similarity index 100%
rename from flink-dist/src/main/flink-bin/conf/slaves
rename to flink-dist/src/main/flink-bin/conf/workers
diff --git a/flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/FlinkDistribution.java b/flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/FlinkDistribution.java
index b722e25..45a2afe 100644
--- a/flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/FlinkDistribution.java
+++ b/flink-end-to-end-tests/flink-end-to-end-tests-common/src/main/java/org/apache/flink/tests/util/flink/FlinkDistribution.java
@@ -268,7 +268,7 @@ final class FlinkDistribution {
 	}
 
 	public void setTaskExecutorHosts(Collection<String> taskExecutorHosts) throws IOException {
-		Files.write(conf.resolve("slaves"), taskExecutorHosts);
+		Files.write(conf.resolve("workers"), taskExecutorHosts);
 	}
 
 	public Stream<String> searchAllLogs(Pattern pattern, Function<Matcher, String> matchProcessor) throws IOException {
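
For the end-to-end tests, setTaskExecutorHosts simply writes the given hosts, one per line, into the renamed conf/workers file. A rough shell equivalent, with placeholder hosts:

    # Rough shell equivalent of FlinkDistribution#setTaskExecutorHosts (hosts are placeholders)
    printf '%s\n' 10.0.0.2 10.0.0.3 > "${FLINK_CONF_DIR}/workers"
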
diff --git a/pom.xml b/pom.xml
index 5d76464..d2dd4ad 100644
--- a/pom.xml
+++ b/pom.xml
@@ -1476,7 +1476,7 @@ under the License.
 						<!-- netty test file, still Apache License 2.0 but with a different header -->
 						<exclude>flink-runtime/src/test/java/org/apache/flink/runtime/io/network/buffer/AbstractByteBufTest.java</exclude>
 						<!-- Configuration Files. -->
-						<exclude>**/flink-bin/conf/slaves</exclude>
+						<exclude>**/flink-bin/conf/workers</exclude>
 						<exclude>**/flink-bin/conf/masters</exclude>
 						<!-- Administrative files in the main trunk. -->
 						<exclude>**/README.md</exclude>


[flink] 03/03: [hotfix][docs] Remove outdated confusing HDFS reference in cluster setup.

Posted by se...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 81c2511a5bfe8e73bab3c559a5354eea980fb4bc
Author: Stephan Ewen <se...@apache.org>
AuthorDate: Mon Jun 15 23:31:43 2020 +0200

    [hotfix][docs] Remove outdated confusing HDFS reference in cluster setup.
---
 docs/ops/deployment/cluster_setup.md    | 2 +-
 docs/ops/deployment/cluster_setup.zh.md | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/docs/ops/deployment/cluster_setup.md b/docs/ops/deployment/cluster_setup.md
index 076c0d8..7a5313d 100644
--- a/docs/ops/deployment/cluster_setup.md
+++ b/docs/ops/deployment/cluster_setup.md
@@ -72,7 +72,7 @@ Set the `jobmanager.rpc.address` key to point to your master node. You should al
 
 These values are given in MB. If some worker nodes have more main memory which you want to allocate to the Flink system you can overwrite the default value by setting `taskmanager.memory.process.size` or `taskmanager.memory.flink.size` in *conf/flink-conf.yaml* on those specific nodes.
 
-Finally, you must provide a list of all nodes in your cluster which shall be used as worker nodes. Therefore, similar to the HDFS configuration, edit the file *conf/workers* and enter the IP/host name of each worker node. Each worker node will later run a TaskManager.
+Finally, you must provide a list of all nodes in your cluster that shall be used as worker nodes, i.e., nodes running a TaskManager. Edit the file *conf/workers* and enter the IP/host name of each worker node.
 
 The following example illustrates the setup with three nodes (with IP addresses from _10.0.0.1_
 to _10.0.0.3_ and hostnames _master_, _worker1_, _worker2_) and shows the contents of the
diff --git a/docs/ops/deployment/cluster_setup.zh.md b/docs/ops/deployment/cluster_setup.zh.md
index bc1788f..1b93c4e 100644
--- a/docs/ops/deployment/cluster_setup.zh.md
+++ b/docs/ops/deployment/cluster_setup.zh.md
@@ -72,7 +72,7 @@ Set the `jobmanager.rpc.address` key to point to your master node. You should al
 
 These values are given in MB. If some worker nodes have more main memory which you want to allocate to the Flink system you can overwrite the default value by setting setting `taskmanager.memory.process.size` or `taskmanager.memory.flink.size` in *conf/flink-conf.yaml* on those specific nodes.
 
-Finally, you must provide a list of all nodes in your cluster which shall be used as worker nodes. Therefore, similar to the HDFS configuration, edit the file *conf/workers* and enter the IP/host name of each worker node. Each worker node will later run a TaskManager.
+Finally, you must provide a list of all nodes in your cluster that shall be used as worker nodes, i.e., nodes running a TaskManager. Edit the file *conf/workers* and enter the IP/host name of each worker node.
 
 The following example illustrates the setup with three nodes (with IP addresses from _10.0.0.1_
 to _10.0.0.3_ and hostnames _master_, _worker1_, _worker2_) and shows the contents of the


[flink] 01/03: [hotfix] Remove obsolete .gitattributes file

Posted by se...@apache.org.
This is an automated email from the ASF dual-hosted git repository.

sewen pushed a commit to branch release-1.11
in repository https://gitbox.apache.org/repos/asf/flink.git

commit 3e3ea633dfad20a5552282a8326161cb4a2f0634
Author: Stephan Ewen <se...@apache.org>
AuthorDate: Mon Jun 15 23:55:53 2020 +0200

    [hotfix] Remove obsolete .gitattributes file
    
    This contained entries about bat scripts and vendored files from the old web UI.
    Both are not part of Flink any more.
---
 .gitattributes | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/.gitattributes b/.gitattributes
deleted file mode 100644
index ecc9cf2..0000000
--- a/.gitattributes
+++ /dev/null
@@ -1,3 +0,0 @@
-*.bat text eol=crlf
-flink-runtime-web/web-dashboard/web/* linguist-vendored -diff
-
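
Both removed attributes refer to files that are no longer part of the repository, which is why the whole .gitattributes file could go. A quick way to double-check that from a checkout, using standard git commands and the paths from the removed entries:

    # No Windows .bat scripts are tracked any more, so the eol=crlf rule has no effect
    git ls-files -- '*.bat'

    # The vendored files of the old web UI are gone as well
    git ls-files -- 'flink-runtime-web/web-dashboard/web/'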