Posted to commits@spark.apache.org by sr...@apache.org on 2016/03/19 14:23:07 UTC

spark git commit: [MINOR][DOCS] Use `spark-submit` instead of `sparkR` to submit R script.

Repository: spark
Updated Branches:
  refs/heads/master 1970d911d -> 2082a4956


[MINOR][DOCS] Use `spark-submit` instead of `sparkR` to submit R script.

## What changes were proposed in this pull request?

Since `sparkR` can no longer be used to submit R scripts as of Spark 2.0, a user who follows the instructions in `R/README.md` sees the following error message. This PR updates `R/README.md` accordingly.
```bash
$ ./bin/sparkR examples/src/main/r/dataframe.R
Running R applications through 'sparkR' is not supported as of Spark 2.0.
Use ./bin/spark-submit <R file>
```
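As the message indicates, the supported invocation is `spark-submit`; with the updated README, the same example is run as:
```bash
$ ./bin/spark-submit examples/src/main/r/dataframe.R
```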

## How was this patch tested?

Manual.

Author: Dongjoon Hyun <do...@apache.org>

Closes #11842 from dongjoon-hyun/update_r_readme.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/2082a495
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/2082a495
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/2082a495

Branch: refs/heads/master
Commit: 2082a49569cb5d900e318af9da1027821dfe93bc
Parents: 1970d91
Author: Dongjoon Hyun <do...@apache.org>
Authored: Sat Mar 19 13:23:34 2016 +0000
Committer: Sean Owen <so...@cloudera.com>
Committed: Sat Mar 19 13:23:34 2016 +0000

----------------------------------------------------------------------
 R/README.md | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/2082a495/R/README.md
----------------------------------------------------------------------
diff --git a/R/README.md b/R/README.md
index bb3464b..810bfc1 100644
--- a/R/README.md
+++ b/R/README.md
@@ -40,7 +40,7 @@ To set other options like driver memory, executor memory etc. you can pass in th
 If you wish to use SparkR from RStudio or other R frontends you will need to set some environment variables which point SparkR to your Spark installation. For example 
 ```
 # Set this to where Spark is installed
-Sys.setenv(SPARK_HOME="/Users/shivaram/spark")
+Sys.setenv(SPARK_HOME="/Users/username/spark")
 # This line loads SparkR from the installed directory
 .libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
 library(SparkR)
@@ -51,7 +51,7 @@ sc <- sparkR.init(master="local")
 
 The [instructions](https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark) for making contributions to Spark also apply to SparkR.
 If you only make R file changes (i.e. no Scala changes) then you can just re-install the R package using `R/install-dev.sh` and test your changes.
-Once you have made your changes, please include unit tests for them and run existing unit tests using the `run-tests.sh` script as described below. 
+Once you have made your changes, please include unit tests for them and run existing unit tests using the `R/run-tests.sh` script as described below.
     
 #### Generating documentation
 
@@ -60,9 +60,9 @@ The SparkR documentation (Rd files and HTML files) are not a part of the source
 ### Examples, Unit tests
 
 SparkR comes with several sample programs in the `examples/src/main/r` directory.
-To run one of them, use `./bin/sparkR <filename> <args>`. For example:
+To run one of them, use `./bin/spark-submit <filename> <args>`. For example:
 
-    ./bin/sparkR examples/src/main/r/dataframe.R
+    ./bin/spark-submit examples/src/main/r/dataframe.R
 
 You can also run the unit-tests for SparkR by running (you need to install the [testthat](http://cran.r-project.org/web/packages/testthat/index.html) package first):
 
@@ -70,7 +70,7 @@ You can also run the unit-tests for SparkR by running (you need to install the [
     ./R/run-tests.sh
 
 ### Running on YARN
-The `./bin/spark-submit` and `./bin/sparkR` can also be used to submit jobs to YARN clusters. You will need to set YARN conf dir before doing so. For example on CDH you can run
+The `./bin/spark-submit` can also be used to submit jobs to YARN clusters. You will need to set YARN conf dir before doing so. For example on CDH you can run
 ```
 export YARN_CONF_DIR=/etc/hadoop/conf
 ./bin/spark-submit --master yarn examples/src/main/r/dataframe.R

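As a quick sketch of the RStudio setup described in the README hunk above (the `SPARK_HOME` path is a placeholder; adjust it to your own installation):

```r
# Point SparkR at a local Spark installation (placeholder path)
Sys.setenv(SPARK_HOME = "/Users/username/spark")
# Put the bundled SparkR package on the library path for this session
.libPaths(c(file.path(Sys.getenv("SPARK_HOME"), "R", "lib"), .libPaths()))
library(SparkR)
# Start a local SparkContext (the sparkR.init API shown in this README)
sc <- sparkR.init(master = "local")
```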
