Posted to commits@systemml.apache.org by de...@apache.org on 2016/09/23 19:59:43 UTC

incubator-systemml git commit: [DOCS] Add implicits import for Spark <= 1.6.1

Repository: incubator-systemml
Updated Branches:
  refs/heads/master f75be2008 -> 410f4179c


[DOCS] Add implicits import for Spark <= 1.6.1

Update spark-mlcontext-programming-guide.md example to include import
of sqlContext.implicits._

Closes #248.


Project: http://git-wip-us.apache.org/repos/asf/incubator-systemml/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-systemml/commit/410f4179
Tree: http://git-wip-us.apache.org/repos/asf/incubator-systemml/tree/410f4179
Diff: http://git-wip-us.apache.org/repos/asf/incubator-systemml/diff/410f4179

Branch: refs/heads/master
Commit: 410f4179cd637da6b70d31b0b89ebc06f1714a71
Parents: f75be20
Author: Romeo Kienzler <ro...@gmail.com>
Authored: Fri Sep 23 12:55:46 2016 -0700
Committer: Deron Eriksson <de...@us.ibm.com>
Committed: Fri Sep 23 12:55:46 2016 -0700

----------------------------------------------------------------------
 docs/spark-mlcontext-programming-guide.md | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/410f4179/docs/spark-mlcontext-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/spark-mlcontext-programming-guide.md b/docs/spark-mlcontext-programming-guide.md
index c446d1e..fbc8f5b 100644
--- a/docs/spark-mlcontext-programming-guide.md
+++ b/docs/spark-mlcontext-programming-guide.md
@@ -2279,14 +2279,12 @@ The Spark `LinearDataGenerator` is used to generate test data for the Spark ML a
 {% highlight scala %}
 // Generate data
 import org.apache.spark.mllib.util.LinearDataGenerator
+import sqlContext.implicits._
 
 val numRows = 10000
 val numCols = 1000
 val rawData = LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1).toDF()
 
-//For Spark version <= 1.6.0 you can use createDataFrame() (comment the line above and uncomment the line below), and for Spark version >= 1.6.1 use .toDF()
-//val rawData = sqlContext.createDataFrame(LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1))
-
 // Repartition into a more parallelism-friendly number of partitions
 val data = rawData.repartition(64).cache()
 {% endhighlight %}
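
For context on why the added `import sqlContext.implicits._` line matters: `toDF()` is not a method defined on `RDD` itself; it is attached through an implicit conversion that the import brings into scope. A minimal pure-Scala sketch of that same "enrich my library" pattern, with illustrative names (`Frame`, `toFrame` are not Spark's actual internals):

```scala
// Sketch of the implicit-conversion pattern behind `import sqlContext.implicits._`:
// importing an implicit class into scope adds a new method to an existing type.
object ImplicitsSketch {

  // Stand-in for a DataFrame-like result (illustrative only).
  final case class Frame(rows: Seq[(Double, Double)]) {
    def count: Int = rows.length
  }

  object Implicits {
    // Until this implicit class is imported, Seq has no `toFrame` method.
    implicit class RichRows(val rows: Seq[(Double, Double)]) extends AnyVal {
      def toFrame: Frame = Frame(rows)
    }
  }

  def main(args: Array[String]): Unit = {
    import Implicits._ // analogous to `import sqlContext.implicits._`
    val data = Seq((1.0, 2.0), (3.0, 4.0))
    println(data.toFrame.count) // prints 2
  }
}
```

Without the import, `data.toFrame` fails to compile, which mirrors the `toDF()` compile error the doc fix addresses.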