Posted to commits@spark.apache.org by rx...@apache.org on 2014/09/27 08:00:44 UTC

git commit: stop, start and destroy require the EC2_REGION

Repository: spark
Updated Branches:
  refs/heads/master d8a9d1d44 -> 9e8ced784


stop, start and destroy require the EC2_REGION

e.g.
./spark-ec2 --region=us-west-1 stop yourclustername
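and likewise for start and destroy (the key file name below is an
illustrative placeholder; per the updated docs, start also takes -i <key-file>):
./spark-ec2 -i yourkey.pem --region=us-west-1 start yourclustername
./spark-ec2 --region=us-west-1 destroy yourclustername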

Author: Jeff Steinmetz <je...@gmail.com>

Closes #2473 from jeffsteinmetz/master and squashes the following commits:

7491f2c [Jeff Steinmetz] fix case in EC2 cluster setup documentation
bd3d777 [Jeff Steinmetz] standardized ec2 documentation to use <lower-case> sample args
2bf4a57 [Jeff Steinmetz] standardized ec2 documentation to use <lower-case> sample args
68d8372 [Jeff Steinmetz] standardized ec2 documentation to use <lower-case> sample args
d2ab6e2 [Jeff Steinmetz] standardized ec2 documentation to use <lower-case> sample args
520e6dc [Jeff Steinmetz] standardized ec2 documentation to use <lower-case> sample args
37fc876 [Jeff Steinmetz] stop, start and destroy require the EC2_REGION


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/9e8ced78
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/9e8ced78
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/9e8ced78

Branch: refs/heads/master
Commit: 9e8ced7847d84d63f0da08b15623d558a2407583
Parents: d8a9d1d
Author: Jeff Steinmetz <je...@gmail.com>
Authored: Fri Sep 26 23:00:40 2014 -0700
Committer: Reynold Xin <rx...@apache.org>
Committed: Fri Sep 26 23:00:40 2014 -0700

----------------------------------------------------------------------
 docs/ec2-scripts.md | 29 +++++++++++++++++++----------
 1 file changed, 19 insertions(+), 10 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/9e8ced78/docs/ec2-scripts.md
----------------------------------------------------------------------
diff --git a/docs/ec2-scripts.md b/docs/ec2-scripts.md
index b2ca6a9..530798f 100644
--- a/docs/ec2-scripts.md
+++ b/docs/ec2-scripts.md
@@ -48,6 +48,15 @@ by looking for the "Name" tag of the instance in the Amazon EC2 Console.
     key pair, `<num-slaves>` is the number of slave nodes to launch (try
     1 at first), and `<cluster-name>` is the name to give to your
     cluster.
+
+    For example:
+
+    ```bash
+    export AWS_SECRET_ACCESS_KEY=AaBbCcDdEeFGgHhIiJjKkLlMmNnOoPpQqRrSsTtU
+    export AWS_ACCESS_KEY_ID=ABCDEFG1234567890123
+    ./spark-ec2 --key-pair=awskey --identity-file=awskey.pem --region=us-west-1 --zone=us-west-1a --spark-version=1.1.0 launch my-spark-cluster
+    ```
+
 -   After everything launches, check that the cluster scheduler is up and sees
     all the slaves by going to its web UI, which will be printed at the end of
     the script (typically `http://<master-hostname>:8080`).
@@ -55,27 +64,27 @@ by looking for the "Name" tag of the instance in the Amazon EC2 Console.
 You can also run `./spark-ec2 --help` to see more usage options. The
 following options are worth pointing out:
 
--   `--instance-type=<INSTANCE_TYPE>` can be used to specify an EC2
+-   `--instance-type=<instance-type>` can be used to specify an EC2
 instance type to use. For now, the script only supports 64-bit instance
 types, and the default type is `m1.large` (which has 2 cores and 7.5 GB
 RAM). Refer to the Amazon pages about [EC2 instance
 types](http://aws.amazon.com/ec2/instance-types) and [EC2
 pricing](http://aws.amazon.com/ec2/#pricing) for information about other
 instance types. 
--    `--region=<EC2_REGION>` specifies an EC2 region in which to launch
+-    `--region=<ec2-region>` specifies an EC2 region in which to launch
 instances. The default region is `us-east-1`.
--    `--zone=<EC2_ZONE>` can be used to specify an EC2 availability zone
+-    `--zone=<ec2-zone>` can be used to specify an EC2 availability zone
 to launch instances in. Sometimes, you will get an error because there
 is not enough capacity in one zone, and you should try to launch in
 another.
--    `--ebs-vol-size=GB` will attach an EBS volume with a given amount
+-    `--ebs-vol-size=<GB>` will attach an EBS volume with a given amount
      of space to each node so that you can have a persistent HDFS cluster
      on your nodes across cluster restarts (see below).
--    `--spot-price=PRICE` will launch the worker nodes as
+-    `--spot-price=<price>` will launch the worker nodes as
      [Spot Instances](http://aws.amazon.com/ec2/spot-instances/),
      bidding for the given maximum price (in dollars).
--    `--spark-version=VERSION` will pre-load the cluster with the
-     specified version of Spark. VERSION can be a version number
+-    `--spark-version=<version>` will pre-load the cluster with the
+     specified version of Spark. The `<version>` can be a version number
      (e.g. "0.7.3") or a specific git hash. By default, a recent
      version will be used.
 -    If one of your launches fails due to e.g. not having the right
@@ -137,11 +146,11 @@ cost you any EC2 cycles, but ***will*** continue to cost money for EBS
 storage.
 
 - To stop one of your clusters, go into the `ec2` directory and run
-`./spark-ec2 stop <cluster-name>`.
+`./spark-ec2 --region=<ec2-region> stop <cluster-name>`.
 - To restart it later, run
-`./spark-ec2 -i <key-file> start <cluster-name>`.
+`./spark-ec2 -i <key-file> --region=<ec2-region> start <cluster-name>`.
 - To ultimately destroy the cluster and stop consuming EBS space, run
-`./spark-ec2 destroy <cluster-name>` as described in the previous
+`./spark-ec2 --region=<ec2-region> destroy <cluster-name>` as described in the previous
 section.
 
 # Limitations


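Taken together, the updated page documents an end-to-end cluster lifecycle.
A minimal sketch, reusing the illustrative credentials, key pair, and
cluster name from the patch above (every command names the same region):

  # one-time credentials (illustrative values from the docs example)
  export AWS_ACCESS_KEY_ID=ABCDEFG1234567890123
  export AWS_SECRET_ACCESS_KEY=AaBbCcDdEeFGgHhIiJjKkLlMmNnOoPpQqRrSsTtU

  # launch, then manage the cluster
  ./spark-ec2 --key-pair=awskey --identity-file=awskey.pem --region=us-west-1 --zone=us-west-1a --spark-version=1.1.0 launch my-spark-cluster
  ./spark-ec2 --region=us-west-1 stop my-spark-cluster
  ./spark-ec2 -i awskey.pem --region=us-west-1 start my-spark-cluster
  ./spark-ec2 --region=us-west-1 destroy my-spark-cluster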