Posted to commits@spark.apache.org by sr...@apache.org on 2019/03/06 15:12:45 UTC

[spark] branch master updated: [SPARK-27047] Document stop-slave.sh in spark-standalone

This is an automated email from the ASF dual-hosted git repository.

srowen pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git


The following commit(s) were added to refs/heads/master by this push:
     new 190a3a4  [SPARK-27047] Document stop-slave.sh in spark-standalone
190a3a4 is described below

commit 190a3a4ad8e648d4ed4b38c5189b3baf75b1fc52
Author: Ajith <aj...@gmail.com>
AuthorDate: Wed Mar 6 09:12:24 2019 -0600

    [SPARK-27047] Document stop-slave.sh in spark-standalone
    
    ## What changes were proposed in this pull request?
    
    The spark-standalone documentation does not mention the stop-slave.sh script.
    
    ## How was this patch tested?
    
    Manually tested the changes
    
    Closes #23960 from ajithme/slavedoc.
    
    Authored-by: Ajith <aj...@gmail.com>
    Signed-off-by: Sean Owen <se...@databricks.com>
---
 docs/spark-standalone.md | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/docs/spark-standalone.md b/docs/spark-standalone.md
index 672a4d0..60b84d3 100644
--- a/docs/spark-standalone.md
+++ b/docs/spark-standalone.md
@@ -85,12 +85,13 @@ If you do not have a password-less setup, you can set the environment variable S
 Once you've set up this file, you can launch or stop your cluster with the following shell scripts, based on Hadoop's deploy scripts, and available in `SPARK_HOME/sbin`:
 
 - `sbin/start-master.sh` - Starts a master instance on the machine the script is executed on.
-- `sbin/start-slaves.sh` - Starts a slave instance on each machine specified in the `conf/slaves` file.
-- `sbin/start-slave.sh` - Starts a slave instance on the machine the script is executed on.
-- `sbin/start-all.sh` - Starts both a master and a number of slaves as described above.
+- `sbin/start-slaves.sh` - Starts a worker instance on each machine specified in the `conf/slaves` file.
+- `sbin/start-slave.sh` - Starts a worker instance on the machine the script is executed on.
+- `sbin/start-all.sh` - Starts both a master and a number of workers as described above.
 - `sbin/stop-master.sh` - Stops the master that was started via the `sbin/start-master.sh` script.
-- `sbin/stop-slaves.sh` - Stops all slave instances on the machines specified in the `conf/slaves` file.
-- `sbin/stop-all.sh` - Stops both the master and the slaves as described above.
+- `sbin/stop-slave.sh` - Stops all worker instances on the machine the script is executed on.
+- `sbin/stop-slaves.sh` - Stops all worker instances on the machines specified in the `conf/slaves` file.
+- `sbin/stop-all.sh` - Stops both the master and the workers as described above.
 
 Note that these scripts must be executed on the machine you want to run the Spark master on, not your local machine.
 

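For readers following the updated docs, a typical cluster lifecycle with these scripts might look like the sketch below. This is illustrative only: the `/opt/spark` install path and the default master port 7077 are assumptions about a local setup, not part of the commit.

```shell
# Sketch of a standalone-cluster lifecycle using the sbin scripts the
# patch documents. /opt/spark is an assumed install path; adjust it to
# wherever your Spark distribution is unpacked.
SPARK_HOME="${SPARK_HOME:-/opt/spark}"

if [ -x "$SPARK_HOME/sbin/start-all.sh" ]; then
  # Start a master on this machine plus one worker per conf/slaves entry.
  "$SPARK_HOME/sbin/start-all.sh"
  # ... submit applications against spark://<master-host>:7077 here ...
  # Stop the master and all workers when finished.
  "$SPARK_HOME/sbin/stop-all.sh"
else
  # No Spark install found; just report the path that was checked.
  echo "No Spark install at $SPARK_HOME; showing the intended sequence only."
fi
```

Note that `stop-slave.sh` (the script this commit documents) is the per-machine counterpart: run it on an individual host to stop only that host's worker instances, rather than stopping the whole cluster with `stop-all.sh`.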
