Posted to commits@spark.apache.org by jo...@apache.org on 2014/07/26 22:56:01 UTC

git commit: [SPARK-2547]: The clustering documentation example provided for spark 0.9....

Repository: spark
Updated Branches:
  refs/heads/branch-0.9 c37db1537 -> 7e4a0e1a0


[SPARK-2547]: The clustering documentation example provided for spark 0.9....

I fixed a trivial mistake in the MLlib documentation.
I verified that the Python sample code for k-means clustering runs correctly on an EC2 instance.

Author: Yuu ISHIKAWA <yu...@gmail.com>

Closes #1590 from yu-iskw/branch-0.9 and squashes the following commits:

06eeb94 [Yuu ISHIKAWA] [SPARK-2547]: The clustering documentation example provided for spark 0.9.1/docs has an error


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/7e4a0e1a
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/7e4a0e1a
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/7e4a0e1a

Branch: refs/heads/branch-0.9
Commit: 7e4a0e1a0681c9bd6984d040ddd3aef3fe291c43
Parents: c37db15
Author: Yuu ISHIKAWA <yu...@gmail.com>
Authored: Sat Jul 26 13:55:53 2014 -0700
Committer: Josh Rosen <jo...@apache.org>
Committed: Sat Jul 26 13:55:53 2014 -0700

----------------------------------------------------------------------
 docs/mllib-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/7e4a0e1a/docs/mllib-guide.md
----------------------------------------------------------------------
diff --git a/docs/mllib-guide.md b/docs/mllib-guide.md
index ff5d39e..e8457bb 100644
--- a/docs/mllib-guide.md
+++ b/docs/mllib-guide.md
@@ -392,7 +392,7 @@ parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))
 
 # Build the model (cluster the data)
 clusters = KMeans.train(parsedData, 2, maxIterations=10,
-        runs=30, initialization_mode="random")
+        runs=30, initializationMode="random")
 
 # Evaluate clustering by computing Within Set Sum of Squared Errors
 def error(point):
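The one-line fix above corrects the keyword argument name in the documented call: PySpark's MLlib API expects `initializationMode="random"`, not `initialization_mode`. For readers without a Spark cluster at hand, the following is a minimal pure-Python sketch of the k-means (Lloyd's) algorithm that `KMeans.train` runs under the hood, using "random" initialization as in the example. The data, function name, and seed here are illustrative only and are not part of the Spark docs.

```python
import random

def kmeans(points, k, max_iterations=10, seed=0):
    """Toy k-means: points are tuples of floats; returns the k centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # "random" initialization mode
    for _ in range(max_iterations):
        # Assignment step: attach each point to its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(
                range(k),
                key=lambda i: sum((a - b) ** 2 for a, b in zip(p, centers[i])),
            )
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its assigned points.
        new_centers = []
        for i, cluster in enumerate(clusters):
            if cluster:
                new_centers.append(
                    tuple(sum(coord) / len(cluster) for coord in zip(*cluster))
                )
            else:
                new_centers.append(centers[i])  # keep an empty cluster's center
        if new_centers == centers:  # converged before max_iterations
            break
        centers = new_centers
    return centers

# Two well-separated groups should yield one center near each group.
data = [(0.0, 0.0), (0.1, 0.1), (9.0, 9.0), (9.1, 9.1)]
centers = sorted(kmeans(data, 2))
```

In the real API, `runs=30` restarts this whole procedure 30 times from different random initializations and keeps the best clustering; the sketch above performs a single run.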