Posted to commits@spark.apache.org by gu...@apache.org on 2021/04/22 11:56:57 UTC
[spark] branch master updated: [SPARK-35180][BUILD] Allow to build SparkR with SBT
This is an automated email from the ASF dual-hosted git repository.
gurwls223 pushed a commit to branch master
in repository https://gitbox.apache.org/repos/asf/spark.git
The following commit(s) were added to refs/heads/master by this push:
new c0972de [SPARK-35180][BUILD] Allow to build SparkR with SBT
c0972de is described below
commit c0972dec1d49417aa2ee2e18c638be8b976833c5
Author: Kousuke Saruta <sa...@oss.nttdata.com>
AuthorDate: Thu Apr 22 20:56:33 2021 +0900
[SPARK-35180][BUILD] Allow to build SparkR with SBT
### What changes were proposed in this pull request?
This PR proposes a change that allows us to build SparkR with SBT.
### Why are the changes needed?
In the current master, SparkR can be built only with Maven.
It would be helpful if we could also build it with SBT.
### Does this PR introduce _any_ user-facing change?
No.
### How was this patch tested?
I confirmed that I can build SparkR on Ubuntu 20.04 with the following command.
```
build/sbt -Psparkr package
```
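A quick sanity check that the R build actually ran (assuming the default output location of `R/install-dev.sh`) is to look for the installed SparkR package under `R/lib/` after the SBT build finishes.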
Closes #32285 from sarutak/sbt-sparkr.
Authored-by: Kousuke Saruta <sa...@oss.nttdata.com>
Signed-off-by: hyukjinkwon <gu...@apache.org>
---
R/README.md | 6 +++++-
project/SparkBuild.scala | 23 +++++++++++++++++++++++
2 files changed, 28 insertions(+), 1 deletion(-)
diff --git a/R/README.md b/R/README.md
index 31174c7..da9f042 100644
--- a/R/README.md
+++ b/R/README.md
@@ -17,10 +17,14 @@ export R_HOME=/home/username/R
#### Build Spark
-Build Spark with [Maven](https://spark.apache.org/docs/latest/building-spark.html#buildmvn) and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
+Build Spark with [Maven](https://spark.apache.org/docs/latest/building-spark.html#buildmvn) or [SBT](https://spark.apache.org/docs/latest/building-spark.html#building-with-sbt), and include the `-Psparkr` profile to build the R package. For example to use the default Hadoop versions you can run
```bash
+# Maven
./build/mvn -DskipTests -Psparkr package
+
+# SBT
+./build/sbt -Psparkr package
```
#### Running sparkR
diff --git a/project/SparkBuild.scala b/project/SparkBuild.scala
index 54ac3c1..b872668 100644
--- a/project/SparkBuild.scala
+++ b/project/SparkBuild.scala
@@ -414,6 +414,10 @@ object SparkBuild extends PomBuild {
enable(YARN.settings)(yarn)
+ if (profiles.contains("sparkr")) {
+ enable(SparkR.settings)(core)
+ }
+
/**
* Adds the ability to run the spark shell directly from SBT without building an assembly
* jar.
@@ -888,6 +892,25 @@ object PySparkAssembly {
}
+object SparkR {
+ import scala.sys.process.Process
+
+ val buildRPackage = taskKey[Unit]("Build the R package")
+ lazy val settings = Seq(
+ buildRPackage := {
+ val command = baseDirectory.value / ".." / "R" / "install-dev.sh"
+ Process(command.toString).!!
+ },
+ (Compile / compile) := (Def.taskDyn {
+ val c = (Compile / compile).value
+ Def.task {
+ (Compile / buildRPackage).value
+ c
+ }
+ }).value
+ )
+}
+
object Unidoc {
import BuildCommons._
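For readers less familiar with sbt, the `Def.taskDyn` block in the new `SparkR` object is what sequences the R package build after a successful `Compile / compile` while still returning the original compile result to downstream tasks. A minimal, self-contained sketch of that pattern (a hypothetical `build.sbt` with an illustrative project name, not part of this commit) could look like:
```scala
// build.sbt -- minimal sketch of the "run a side task after compile" pattern
// used by SparkR.settings above; project/task layout here is illustrative only.
import scala.sys.process.Process

// Task that shells out to the SparkR build script.
val buildRPackage = taskKey[Unit]("Build the R package")

lazy val core = (project in file("core"))
  .settings(
    buildRPackage := {
      // `!!` runs the command and throws if it exits with a non-zero status.
      val script = baseDirectory.value / ".." / "R" / "install-dev.sh"
      Process(script.toString).!!
    },
    // Redefine `compile` dynamically: run the original compile first, then the
    // R package build, and finally return the original compile analysis so
    // tasks that depend on `compile` are unaffected.
    Compile / compile := (Def.taskDyn {
      val analysis = (Compile / compile).value
      Def.task {
        buildRPackage.value
        analysis
      }
    }).value
  )
```
Gating `enable(SparkR.settings)(core)` behind `profiles.contains("sparkr")` keeps the R toolchain out of the default SBT build, mirroring how the Maven `-Psparkr` profile works.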