Posted to commits@spark.apache.org by sr...@apache.org on 2016/11/15 14:45:09 UTC

spark git commit: [SPARK-18427][DOC] Update docs of mllib.KMeans

Repository: spark
Updated Branches:
  refs/heads/master d89bfc923 -> 33be4da53


[SPARK-18427][DOC] Update docs of mllib.KMeans

## What changes were proposed in this pull request?
1. Remove `runs` from the docs of mllib.KMeans.
2. Add a note for `k` based on the comments in the source code.
## How was this patch tested?
Existing tests.

Author: Zheng RuiFeng <ru...@foxmail.com>

Closes #15873 from zhengruifeng/update_doc_mllib_kmeans.


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/33be4da5
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/33be4da5
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/33be4da5

Branch: refs/heads/master
Commit: 33be4da5391b884191c405ffbce7d382ea8a2f66
Parents: d89bfc9
Author: Zheng RuiFeng <ru...@foxmail.com>
Authored: Tue Nov 15 15:44:50 2016 +0100
Committer: Sean Owen <so...@cloudera.com>
Committed: Tue Nov 15 15:44:50 2016 +0100

----------------------------------------------------------------------
 docs/mllib-clustering.md                          | 6 ++----
 examples/src/main/python/mllib/k_means_example.py | 3 +--
 2 files changed, 3 insertions(+), 6 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/33be4da5/docs/mllib-clustering.md
----------------------------------------------------------------------
diff --git a/docs/mllib-clustering.md b/docs/mllib-clustering.md
index d5f6ae3..8990e95 100644
--- a/docs/mllib-clustering.md
+++ b/docs/mllib-clustering.md
@@ -24,13 +24,11 @@ variant of the [k-means++](http://en.wikipedia.org/wiki/K-means%2B%2B) method
 called [kmeans||](http://theory.stanford.edu/~sergei/papers/vldb12-kmpar.pdf).
 The implementation in `spark.mllib` has the following parameters:
 
-* *k* is the number of desired clusters.
+* *k* is the number of desired clusters. Note that it is possible for fewer than k clusters to be returned, for example, if there are fewer than k distinct points to cluster.
 * *maxIterations* is the maximum number of iterations to run.
 * *initializationMode* specifies either random initialization or
 initialization via k-means\|\|.
-* *runs* is the number of times to run the k-means algorithm (k-means is not
-guaranteed to find a globally optimal solution, and when run multiple times on
-a given dataset, the algorithm returns the best clustering result).
+* *runs* This param has no effect since Spark 2.0.0.
 * *initializationSteps* determines the number of steps in the k-means\|\| algorithm.
 * *epsilon* determines the distance threshold within which we consider k-means to have converged.
 * *initialModel* is an optional set of cluster centers used for initialization. If this parameter is supplied, only one run is performed.

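As a quick illustration of the parameter list in the hunk above (not part of this diff), a minimal PySpark sketch of calling `KMeans.train` without `runs` might look like the following. The toy data, seed, and app name are assumptions made for the example; `runs` is simply omitted because it has had no effect since Spark 2.0.0.

    from numpy import array
    from pyspark import SparkContext
    from pyspark.mllib.clustering import KMeans

    sc = SparkContext(appName="KMeansParamsSketch")

    # Toy data: two well-separated groups of 2-D points (illustrative only).
    points = sc.parallelize([
        array([0.0, 0.0]), array([0.1, 0.1]),
        array([9.0, 9.0]), array([9.1, 9.1]),
    ])

    # The remaining documented parameters are passed as keyword arguments;
    # note that `runs` does not appear here.
    model = KMeans.train(points, 2,
                         maxIterations=10,
                         initializationMode="k-means||",
                         initializationSteps=5,
                         epsilon=1e-4,
                         seed=42)

    print(model.clusterCenters)
    sc.stop()
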
http://git-wip-us.apache.org/repos/asf/spark/blob/33be4da5/examples/src/main/python/mllib/k_means_example.py
----------------------------------------------------------------------
diff --git a/examples/src/main/python/mllib/k_means_example.py b/examples/src/main/python/mllib/k_means_example.py
index 5c397e6..d6058f4 100644
--- a/examples/src/main/python/mllib/k_means_example.py
+++ b/examples/src/main/python/mllib/k_means_example.py
@@ -36,8 +36,7 @@ if __name__ == "__main__":
     parsedData = data.map(lambda line: array([float(x) for x in line.split(' ')]))
 
     # Build the model (cluster the data)
-    clusters = KMeans.train(parsedData, 2, maxIterations=10,
-                            runs=10, initializationMode="random")
+    clusters = KMeans.train(parsedData, 2, maxIterations=10, initializationMode="random")
 
     # Evaluate clustering by computing Within Set Sum of Squared Errors
     def error(point):

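The Python hunk above is truncated at `def error(point):`. For readers following along, the surrounding example evaluates the trained model by computing the Within Set Sum of Squared Errors, roughly as sketched below; this is a reconstruction from the shown context (using the documented `clusterCenters` property), not part of this diff.

    from math import sqrt

    # Distance from a point to its assigned cluster center.
    def error(point):
        center = clusters.clusterCenters[clusters.predict(point)]
        return sqrt(sum([x ** 2 for x in (point - center)]))

    # Sum the per-point errors over the whole dataset.
    WSSSE = parsedData.map(lambda point: error(point)).reduce(lambda x, y: x + y)
    print("Within Set Sum of Squared Error = " + str(WSSSE))
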
