Posted to commits@systemml.apache.org by de...@apache.org on 2016/09/16 19:33:26 UTC

incubator-systemml git commit: Update spark-mlcontext-programming-guide.md

Repository: incubator-systemml
Updated Branches:
  refs/heads/master b2f3fd8e0 -> c9eda508d


Update spark-mlcontext-programming-guide.md

Add code for Spark 1.6 since the example breaks on Spark 1.6

Closes #245, #246.


Project: http://git-wip-us.apache.org/repos/asf/incubator-systemml/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-systemml/commit/c9eda508
Tree: http://git-wip-us.apache.org/repos/asf/incubator-systemml/tree/c9eda508
Diff: http://git-wip-us.apache.org/repos/asf/incubator-systemml/diff/c9eda508

Branch: refs/heads/master
Commit: c9eda508ddaec55e746875fd54a6e13e4ad647aa
Parents: b2f3fd8
Author: Romeo Kienzler <ro...@gmail.com>
Authored: Fri Sep 16 12:29:43 2016 -0700
Committer: Deron Eriksson <de...@us.ibm.com>
Committed: Fri Sep 16 12:29:43 2016 -0700

----------------------------------------------------------------------
 docs/spark-mlcontext-programming-guide.md | 3 +++
 1 file changed, 3 insertions(+)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/incubator-systemml/blob/c9eda508/docs/spark-mlcontext-programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/spark-mlcontext-programming-guide.md b/docs/spark-mlcontext-programming-guide.md
index c7b2bb6..c446d1e 100644
--- a/docs/spark-mlcontext-programming-guide.md
+++ b/docs/spark-mlcontext-programming-guide.md
@@ -2284,6 +2284,9 @@ val numRows = 10000
 val numCols = 1000
 val rawData = LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1).toDF()
 
+// For Spark 1.6.0 and earlier, use createDataFrame() instead: comment out the toDF() line above and uncomment the line below. For Spark 1.6.1 and later, use toDF() as shown above.
+//val rawData = sqlContext.createDataFrame(LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1))
+
 // Repartition into a more parallelism-friendly number of partitions
 val data = rawData.repartition(64).cache()
 {% endhighlight %}
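The manual comment/uncomment step in the patch above could also be expressed as a single runtime check on the Spark version. The following is only a sketch, not part of the committed change: it assumes a live `sc` (SparkContext) and `sqlContext` (SQLContext) as in the guide's spark-shell session, and that `sc.version` begins with three dot-separated numeric components.

```scala
import org.apache.spark.mllib.util.LinearDataGenerator

val numRows = 10000
val numCols = 1000
val linearRDD = LinearDataGenerator.generateLinearRDD(sc, numRows, numCols, 1)

// Parse the first three numeric components of the version string,
// e.g. "1.6.1" -> Array(1, 6, 1). Plain string comparison would mis-order
// versions such as "1.10.0" vs "1.6.1", so compare components numerically.
val v = sc.version.split("[.-]").take(3).map(_.toInt)

// toDF() on this RDD works from Spark 1.6.1 onward; fall back to
// sqlContext.createDataFrame() on 1.6.0 and earlier.
val rawData =
  if (v(0) > 1 || (v(0) == 1 && (v(1) > 6 || (v(1) == 6 && v(2) >= 1))))
    linearRDD.toDF()
  else
    sqlContext.createDataFrame(linearRDD)
```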