Posted to commits@mahout.apache.org by ap...@apache.org on 2015/03/19 22:21:29 UTC

svn commit: r1667878 [2/4] - in /mahout/site/mahout_cms/trunk: content/users/algorithms/ content/users/environment/ content/users/mapreduce/ content/users/mapreduce/classification/ content/users/mapreduce/clustering/ content/users/mapreduce/recommender...

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/canopy-clustering.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/canopy-clustering.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/canopy-clustering.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/canopy-clustering.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,183 @@
+Title: Canopy Clustering
+
+<a name="CanopyClustering-CanopyClustering"></a>
+# Canopy Clustering
+
+[Canopy Clustering](http://www.kamalnigam.com/papers/canopy-kdd00.pdf)
+is a simple, fast and surprisingly accurate method for grouping objects
+into clusters. All objects are represented as points in a multidimensional
+feature space. The algorithm uses a fast approximate distance metric and
+two distance thresholds T1 > T2 for processing. It begins with a set of
+points, removes one at random, and creates a Canopy containing this point.
+It then iterates through the remainder of the point set: each point whose
+distance from the first point is < T1 is added to the canopy, and if, in
+addition, the distance is < T2, the point is removed from the set. This way
+points that are very close to the original avoid all further processing.
+The algorithm loops until the initial set is empty, accumulating a set of
+Canopies, each containing one or more points. A given point may occur in
+more than one Canopy.
+
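+For illustration, here is a minimal sequential sketch of that loop in Java.
+This is a hedged sketch only: the double[] point representation and the
+Euclidean distance are assumptions made for brevity, not Mahout's actual
+classes.
+
+    import java.util.ArrayList;
+    import java.util.LinkedList;
+    import java.util.List;
+
+    // Hedged sketch of sequential canopy generation (requires T1 > T2).
+    public class CanopySketch {
+
+      public static List<List<double[]>> createCanopies(List<double[]> points,
+                                                        double t1, double t2) {
+        List<List<double[]>> canopies = new ArrayList<>();
+        LinkedList<double[]> remaining = new LinkedList<>(points);
+        while (!remaining.isEmpty()) {
+          final double[] origin = remaining.removeFirst(); // take one point
+          final List<double[]> canopy = new ArrayList<>();
+          canopy.add(origin);
+          remaining.removeIf(p -> {
+            double d = distance(origin, p);
+            if (d < t1) {
+              canopy.add(p);   // within T1: the point joins this canopy
+            }
+            return d < t2;     // within T2: drop it from further processing
+          });
+          canopies.add(canopy);
+        }
+        return canopies;
+      }
+
+      private static double distance(double[] a, double[] b) {
+        double sum = 0.0;
+        for (int i = 0; i < a.length; i++) {
+          sum += (a[i] - b[i]) * (a[i] - b[i]);
+        }
+        return Math.sqrt(sum);
+      }
+    }
+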
+Canopy Clustering is often used as an initial step in more rigorous
+clustering techniques, such as [K-Means Clustering](k-means-clustering.html).
+Starting from an initial canopy clustering can significantly reduce the
+number of more expensive distance measurements, since points outside the
+initial canopies can be ignored.
+
+**WARNING**: Canopy is deprecated in the latest release and will be removed once streaming k-means becomes stable enough.
+ 
+<a name="CanopyClustering-Strategyforparallelization"></a>
+## Strategy for parallelization
+
+Looking at the sample Hadoop implementation in [http://code.google.com/p/canopy-clustering/](http://code.google.com/p/canopy-clustering/),
+the processing is done in the following steps:
+1. The data is massaged into a suitable input format
+1. Each mapper performs canopy clustering on the points in its input set and
+outputs its canopies' centers
+1. The reducer clusters the canopy centers to produce the final canopy
+centers
+1. The points are then clustered into these final canopies
+
+Some ideas can be found in the [Cluster computing and MapReduce](https://www.youtube.com/watch?v=yjPBkvYh-ss&list=PLEFAB97242917704A)
+lecture video series by Google; Canopy Clustering is discussed in
+[lecture #4](https://www.youtube.com/watch?v=1ZDybXl212Q). Finally, see the
+[Wikipedia page](http://en.wikipedia.org/wiki/Canopy_clustering_algorithm).
+
+<a name="CanopyClustering-Designofimplementation"></a>
+## Design of implementation
+
+The implementation accepts as input Hadoop SequenceFiles containing
+multidimensional points (VectorWritable). Points may be expressed either as
+dense or sparse Vectors and processing is done in two phases: Canopy
+generation and, optionally, Clustering.
+
+<a name="CanopyClustering-Canopygenerationphase"></a>
+### Canopy generation phase
+
+During the map step, each mapper processes a subset of the total points,
+applying the chosen distance measure and thresholds to generate canopies.
+Each input point is added to any existing canopy it falls within; a point
+not covered by any canopy starts a new one in the mapper's internal canopy
+list. After observing all its input vectors, the mapper updates all of its
+canopies, normalizing their totals to produce canopy centroids which are
+output, using a constant key ("centroid"), to a single reducer. The reducer
+receives all of the initial centroids and again applies the canopy measure
+and thresholds to produce a final set of canopy centroids (i.e. it clusters
+the cluster centroids). The reducer output format is SequenceFile(Text,
+Canopy), with the _key_ encoding the canopy identifier.
+
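+Schematically, the map side can be pictured as in the following hedged
+sketch. This is an illustration only: the class name, the in-memory lists
+and the hard-coded thresholds are placeholders, not Mahout's actual
+CanopyMapper.
+
+    import java.io.IOException;
+    import java.util.ArrayList;
+    import java.util.List;
+
+    import org.apache.hadoop.io.Text;
+    import org.apache.hadoop.io.WritableComparable;
+    import org.apache.hadoop.mapreduce.Mapper;
+    import org.apache.mahout.math.Vector;
+    import org.apache.mahout.math.VectorWritable;
+
+    // Hedged sketch of canopy generation in a mapper; not Mahout's code.
+    public class CanopyGenMapperSketch
+        extends Mapper<WritableComparable<?>, VectorWritable, Text, VectorWritable> {
+
+      private static final double T1 = 3.0; // assumed thresholds; the real
+      private static final double T2 = 2.0; // job reads them from its config
+
+      private final List<Vector> centers = new ArrayList<>();
+      private final List<Vector> sums = new ArrayList<>();
+      private final List<Integer> counts = new ArrayList<>();
+
+      @Override
+      protected void map(WritableComparable<?> key, VectorWritable value,
+                         Context context) {
+        Vector point = value.get();
+        boolean stronglyBound = false;
+        for (int i = 0; i < centers.size(); i++) {
+          double d = Math.sqrt(centers.get(i).getDistanceSquared(point));
+          if (d < T1) {                      // loosely bound: update totals
+            sums.set(i, sums.get(i).plus(point));
+            counts.set(i, counts.get(i) + 1);
+          }
+          stronglyBound |= d < T2;
+        }
+        if (!stronglyBound) {                // uncovered: start a new canopy
+          centers.add(point.clone());
+          sums.add(point.clone());
+          counts.add(1);
+        }
+      }
+
+      @Override
+      protected void cleanup(Context context)
+          throws IOException, InterruptedException {
+        for (int i = 0; i < centers.size(); i++) {
+          Vector centroid = sums.get(i).divide(counts.get(i)); // normalize
+          context.write(new Text("centroid"), new VectorWritable(centroid));
+        }
+      }
+    }
+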
+<a name="CanopyClustering-Clusteringphase"></a>
+### Clustering phase
+
+During the clustering phase, each mapper reads the Canopies produced by the
+first phase. Since all mappers have the same canopy definitions, their
+outputs will be combined during the shuffle so that each reducer (many are
+allowed here) will see all of the points assigned to one or more canopies.
+The output format will then be: SequenceFile(IntWritable,
+WeightedVectorWritable) with the _key_ encoding the canopyId. The
+WeightedVectorWritable has two fields: a double weight and a VectorWritable
+vector. Together they encode the probability that each vector is a member
+of the given canopy.
+
+<a name="CanopyClustering-RunningCanopyClustering"></a>
+## Running Canopy Clustering
+
+The canopy clustering algorithm may be run using a command-line invocation
+on CanopyDriver.main or by making a Java call to CanopyDriver.run(...).
+Both require several arguments:
+
+Invocation using the command line takes the form:
+
+
+    bin/mahout canopy \
+        -i <input vectors directory> \
+        -o <output working directory> \
+        -dm <DistanceMeasure> \
+        -t1 <T1 threshold> \
+        -t2 <T2 threshold> \
+        -t3 <optional reducer T1 threshold> \
+        -t4 <optional reducer T2 threshold> \
+        -cf <optional cluster filter size (default: 0)> \
+        -ow <overwrite output directory if present> \
+        -cl <run input vector clustering after computing Canopies> \
+        -xm <execution method: sequential or mapreduce>
+
+
+Invocation using Java involves supplying the following arguments (a hedged
+example invocation follows this list):
+
+1. input: a file path string to a directory containing the input data set a
+SequenceFile(WritableComparable, VectorWritable). The sequence file _key_
+is not used.
+1. output: a file path string to an empty directory which is used for all
+output from the algorithm.
+1. measure: the fully-qualified class name of an instance of DistanceMeasure
+which will be used for the clustering.
+1. t1: the T1 distance threshold used for clustering.
+1. t2: the T2 distance threshold used for clustering.
+1. t3: the optional T1 distance threshold used by the reducer for
+clustering. If not specified, T1 is used by the reducer.
+1. t4: the optional T2 distance threshold used by the reducer for
+clustering. If not specified, T2 is used by the reducer.
+1. clusterFilter: the minimum size for canopies to be output by the
+algorithm. Affects both sequential and mapreduce execution modes, and
+mapper and reducer outputs.
+1. runClustering: a boolean indicating, if true, that the clustering step is
+to be executed after clusters have been determined.
+1. runSequential: a boolean indicating, if true, that the computation is to
+be run in memory using the reference Canopy implementation. Note that the
+sequential implementation performs a single pass through the input vectors
+whereas the MapReduce implementation performs two passes (once in the
+mapper and again in the reducer). As a result, the MapReduce implementation
+will typically produce fewer clusters than the sequential implementation.
+
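+For illustration, a hedged Java invocation might look like the following.
+The exact CanopyDriver.run(...) signature (in particular the
+clusterClassificationThreshold argument) varies between Mahout releases, so
+check the Javadoc of your version:
+
+    import org.apache.hadoop.conf.Configuration;
+    import org.apache.hadoop.fs.Path;
+    import org.apache.mahout.clustering.canopy.CanopyDriver;
+    import org.apache.mahout.common.distance.ManhattanDistanceMeasure;
+
+    // Hedged example; the signature is assumed from a 0.x-era API.
+    Configuration conf = new Configuration();
+    CanopyDriver.run(conf,
+        new Path("testdata"),            // input vectors
+        new Path("output"),              // output working directory
+        new ManhattanDistanceMeasure(),  // distance measure
+        3.1,                             // t1
+        2.1,                             // t2
+        true,                            // runClustering
+        0.0,                             // clusterClassificationThreshold
+        false);                          // runSequential (false = mapreduce)
+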
+After running the algorithm, the output directory will contain:
+1. clusters-0: a directory containing SequenceFiles(Text, Canopy) produced
+by the algorithm. The Text _key_ contains the cluster identifier of the
+Canopy.
+1. clusteredPoints: (if runClustering enabled) a directory containing
+SequenceFile(IntWritable, WeightedVectorWritable). The IntWritable _key_ is
+the canopyId. The WeightedVectorWritable _value_ is a bean containing a
+double _weight_ and a VectorWritable _vector_ where the weight indicates
+the probability that the vector is a member of the canopy. For canopy
+clustering, the weights are computed as 1/(1+distance) where the distance
+is between the cluster center and the vector using the chosen
+DistanceMeasure.
+
+<a name="CanopyClustering-Examples"></a>
+# Examples
+
+The following images illustrate Canopy clustering applied to a set of
+randomly-generated 2-d data points. The points are generated using a normal
+distribution centered at a mean location and with a constant standard
+deviation. See the README file at [examples/src/main/java/org/apache/mahout/clustering/display/README.txt](https://github.com/apache/mahout/blob/master/examples/src/main/java/org/apache/mahout/clustering/display/README.txt)
+for details on running similar examples.
+
+The points are generated as follows:
+
+* 500 samples m=[1.0, 1.0] sd=3.0
+* 300 samples m=[1.0, 0.0] sd=0.5
+* 300 samples m=[0.0, 2.0] sd=0.1
+
+In the first image, the points are plotted and the 3-sigma boundaries of
+their generator are superimposed. 
+
+![sample data](../../images/SampleData.png)
+
+In the second image, the resulting canopies are shown superimposed upon the
+sample data. Each canopy is represented by two circles, with radius T1 and
+radius T2.
+
+![canopy](../../images/Canopy.png)
+
+The third image uses the same values of T1 and T2 but only superimposes
+canopies covering more than 10% of the population. This is a somewhat
+better representation of the data, but it still leaves room for
+improvement. The advantage of Canopy clustering is that it is single-pass
+and fast enough to iterate runs using different T1, T2 parameters and
+display thresholds.
+
+![canopy](../../images/Canopy10.png)
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/canopy-commandline.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/canopy-commandline.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/canopy-commandline.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/canopy-commandline.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,65 @@
+Title: canopy-commandline
+
+<a name="canopy-commandline-RunningCanopyClusteringfromtheCommandLine"></a>
+# Running Canopy Clustering from the Command Line
+Mahout's Canopy clustering can be launched from the same command line
+invocation whether you are running on a single machine in stand-alone mode
+or on a larger Hadoop cluster. The difference is determined by the
+$HADOOP_HOME and $HADOOP_CONF_DIR environment variables. If both are set to
+an operating Hadoop cluster on the target machine then the invocation will
+run Canopy on that cluster. If either of the environment variables is
+missing then the stand-alone Hadoop configuration will be invoked instead.
+
+
+    ./bin/mahout canopy <OPTIONS>
+
+
+* In $MAHOUT_HOME/, build the jar containing the job (mvn install). The job
+will be generated in $MAHOUT_HOME/core/target/ and its name will contain
+the Mahout version number. For example, when using the Mahout 0.3 release,
+the job will be mahout-core-0.3.job
+
+
+<a name="canopy-commandline-Testingitononesinglemachinew/ocluster"></a>
+## Testing it on one single machine w/o cluster
+
+* Put the data: cp <PATH TO DATA> testdata
+* Run the Job: 
+
+    ./bin/mahout canopy -i testdata -o output \
+        -dm org.apache.mahout.common.distance.CosineDistanceMeasure \
+        -ow -t1 5 -t2 2
+
+
+<a name="canopy-commandline-Runningitonthecluster"></a>
+## Running it on the cluster
+
+* (As needed) Start up Hadoop: $HADOOP_HOME/bin/start-all.sh
+* Put the data: $HADOOP_HOME/bin/hadoop fs -put <PATH TO DATA> testdata
+* Run the Job: 
+
+    export HADOOP_HOME=<Hadoop Home Directory>
+    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
+    ./bin/mahout canopy -i testdata -o output \
+        -dm org.apache.mahout.common.distance.CosineDistanceMeasure \
+        -ow -t1 5 -t2 2
+
+* Get the data out of HDFS and have a look. Use bin/hadoop fs -lsr output
+to view all outputs.
+
+<a name="canopy-commandline-Commandlineoptions"></a>
+# Command line options
+
+      --input (-i) input                        Path to job input directory.
+                                                Must be a SequenceFile of
+                                                VectorWritable
+      --output (-o) output                      The directory pathname for output.
+      --overwrite (-ow)                         If present, overwrite the output
+                                                directory before running the job
+      --distanceMeasure (-dm) distanceMeasure   The classname of the
+                                                DistanceMeasure. Default is
+                                                SquaredEuclidean
+      --t1 (-t1) t1                             T1 threshold value
+      --t2 (-t2) t2                             T2 threshold value
+      --clustering (-cl)                        If present, run clustering after
+                                                the iterations have taken place
+      --help (-h)                               Print out help
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/cluster-dumper.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/cluster-dumper.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/cluster-dumper.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/cluster-dumper.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,82 @@
+Title: Cluster Dumper
+
+<a name="ClusterDumper-Introduction"></a>
+# Cluster Dumper - Introduction
+
+Clustering tasks in Mahout will output data in the format of a
+SequenceFile(Text, Cluster), where the Text is a cluster identifier string.
+To analyze this output we need to convert the sequence files to a
+human-readable format, and this is achieved using the clusterdump utility.
+
+<a name="ClusterDumper-Stepsforanalyzingclusteroutputusingclusterdumputility"></a>
+# Steps for analyzing cluster output using clusterdump utility
+
+After you've executed a clustering task (either an example or a real-world
+one), you can run clusterdump in 2 modes:
+
+1. [Hadoop Environment](#ClusterDumper-HadoopEnvironment)
+1. [Standalone Java Program](#ClusterDumper-StandaloneJavaProgram)
+
+<a name="ClusterDumper-HadoopEnvironment{anchor:HadoopEnvironment}"></a>
+### Hadoop Environment
+
+If you have setup your HADOOP_HOME environment variable, you can use the
+command line utility "mahout" to execute the ClusterDumper on Hadoop. In
+this case we wont need to get the output clusters to our local machines.
+The utility will read the output clusters present in HDFS and output the
+human-readable cluster values into our local file system. Say you've just
+executed the [synthetic control example ](clustering-of-synthetic-control-data.html)
+ and want to analyze the output, you can execute
+
+    
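+    ./bin/mahout clusterdump --seqFileDir output/clusters-10 \
+        --pointsDir output/clusteredPoints --output clusteranalyze.txt
+
+(This invocation is inferred from the argument list shown below; substitute
+the clusters-N directory that your run actually produced.)
+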
+<a name="ClusterDumper-StandaloneJavaProgram"></a>
+### Standalone Java Program
+    
+ClusterDumper can also be run as a standalone Java program via the CLI. If
+your HADOOP_HOME environment variable is not set, you can execute
+ClusterDumper using the "mahout" command line utility.
+
+First, get the output data from Hadoop onto your local machine. For
+example, in the case where you've executed a clustering example, use
+
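+    # hedged example: copy the HDFS output directory into the local
+    # examples folder; adjust paths to match your own run
+    $HADOOP_HOME/bin/hadoop fs -get output $MAHOUT_HOME/examples
+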
+This will create a folder called output inside your $MAHOUT_HOME/examples
+and will have sub-folders for each cluster outputs and ClusteredPoints
+
+Alternatively, run the clusterdump utility as a standalone Java program
+through Eclipse: set up mahout-utils as a project as specified in
+[Working with Maven in Eclipse](../developers/buildingmahout.html).
+To execute ClusterDumper.java:
+
+* Under mahout-utils, right-click on ClusterDumper.java
+* Choose Run-As, Run Configurations
+* On the left menu, click on Java Application
+* On the top bar click on "New Launch Configuration"
+* A new launch configuration should be created automatically, with the
+project set to "mahout-utils" and the Main Class set to
+"org.apache.mahout.utils.clustering.ClusterDumper"
+
+In the arguments tab, specify the arguments below, replacing <MAHOUT_HOME>
+with the actual path of your $MAHOUT_HOME:
+
+
+    --seqFileDir <MAHOUT_HOME>/examples/output/clusters-10
+    --pointsDir <MAHOUT_HOME>/examples/output/clusteredPoints
+    --output <MAHOUT_HOME>/examples/output/clusteranalyze.txt
+
+* Hit run to execute the ClusterDumper using Eclipse. Setting breakpoints
+etc. should work fine.
+
+Reading the output file:
+
+This will output the clusters into a file called clusteranalyze.txt inside
+$MAHOUT_HOME/examples/output. Sample data will look like:
+
+    CL-0 { n=116 c=[29.922, 30.407, 30.373, 30.094, 29.886, 29.937, 29.751, 30.054, 30.039, 30.126, 29.764, 29.835, 30.503, 29.876, 29.990, 29.605, 29.379, 30.120, 29.882, 30.161, 29.825, 30.074, 30.001, 30.421, 29.867, 29.736, 29.760, 30.192, 30.134, 30.082, 29.962, 29.512, 29.736, 29.594, 29.493, 29.761, 29.183, 29.517, 29.273, 29.161, 29.215, 29.731, 29.154, 29.113, 29.348, 28.981, 29.543, 29.192, 29.479, 29.406, 29.715, 29.344, 29.628, 29.074, 29.347, 29.812, 29.058, 29.177, 29.063, 29.607]
+      r=[3.463, 3.351, 3.452, 3.438, 3.371, 3.569, 3.253, 3.531, 3.439, 3.472, 3.402, 3.459, 3.320, 3.260, 3.430, 3.452, 3.320, 3.499, 3.302, 3.511, 3.520, 3.447, 3.516, 3.485, 3.345, 3.178, 3.492, 3.434, 3.619, 3.483, 3.651, 3.833, 3.812, 3.433, 4.133, 3.855, 4.123, 3.999, 4.467, 4.731, 4.539, 4.956, 4.644, 4.382, 4.277, 4.918, 4.784, 4.582, 4.915, 4.607, 4.672, 4.577, 5.035, 5.241, 4.731, 4.688, 4.685, 4.657, 4.912, 4.300] }
+
+and so on,
+
+where CL-0 is cluster 0; n=116 is the number of points observed by this
+cluster; c = [29.922, ...] is the center of the cluster as a vector; and
+r = [3.463, ...] is the radius of the cluster as a vector.
\ No newline at end of file

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clustering-of-synthetic-control-data.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clustering-of-synthetic-control-data.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clustering-of-synthetic-control-data.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clustering-of-synthetic-control-data.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,48 @@
+Title: Clustering of synthetic control data
+
+# Clustering synthetic control data
+
+## Introduction
+
+This example will demonstrate clustering of time series data, specifically control charts. [Control charts](http://en.wikipedia.org/wiki/Control_chart) are tools used to determine whether a manufacturing or business process is in a state of statistical control. Such control charts are generated / simulated repeatedly at equal time intervals. A [simulated dataset](http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data.html) is available in the UCI machine learning repository.
+
+The time series of control charts needs to be clustered into close-knit groups. The data set we use is synthetic and is meant to resemble real-world information in an anonymized format. It contains six different classes: Normal, Cyclic, Increasing trend, Decreasing trend, Upward shift, Downward shift. In this example we will use Mahout to cluster the data into the corresponding class buckets.
+
+*For the sake of simplicity, we won't use a cluster in this example, but instead show you the commands to run the clustering examples locally with Hadoop*.
+
+## Setup
+
+We need to do some initial setup before we are able to run the example. 
+
+
+  1. Start out by downloading the dataset to be clustered from the UCI Machine Learning Repository: [http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data](http://archive.ics.uci.edu/ml/databases/synthetic_control/synthetic_control.data).
+
+  2. Download the [latest release of Mahout](/general/downloads.html).
+
+  3. Unpack the release binary and switch to the *mahout-distribution-0.x* folder
+
+  4. Make sure that the *JAVA_HOME* environment variable points to your local java installation
+
+  5. Create a folder called *testdata* in the current directory and copy the dataset into this folder.
+
+
+## Clustering Examples
+
+Depending on the clustering algorithm you want to run, the following commands can be used:
+
+
+   * [Canopy Clustering](/users/clustering/canopy-clustering.html)
+
+    bin/mahout org.apache.mahout.clustering.syntheticcontrol.canopy.Job
+
+   * [k-Means Clustering](/users/clustering/k-means-clustering.html)
+
+    bin/mahout org.apache.mahout.clustering.syntheticcontrol.kmeans.Job
+
+
+   * [Fuzzy k-Means Clustering](/users/clustering/fuzzy-k-means.html)
+
+    bin/mahout org.apache.mahout.clustering.syntheticcontrol.fuzzykmeans.Job
+
+The clustering output will be produced in the *output* directory. The output data points are in vector format. In order to read/analyze the output, you can use the [clusterdump](/users/clustering/cluster-dumper.html) utility provided by Mahout.
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clustering-seinfeld-episodes.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clustering-seinfeld-episodes.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clustering-seinfeld-episodes.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clustering-seinfeld-episodes.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,5 @@
+Title: Clustering Seinfeld Episodes
+Below is a short tutorial on how to cluster Seinfeld episode transcripts
+with Mahout.
+
+http://blog.jteam.nl/2011/04/04/how-to-cluster-seinfeld-episodes-with-mahout/

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clusteringyourdata.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clusteringyourdata.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clusteringyourdata.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/clusteringyourdata.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,121 @@
+Title: ClusteringYourData
+
+# Clustering your data
+
+After you've done the [Quickstart](quickstart.html) and are familiar with the basics of Mahout, it is time to cluster your own
+data. See also [Wikipedia on cluster analysis](http://en.wikipedia.org/wiki/Cluster_analysis) for more background.
+
+The following pieces *may* be useful in getting started:
+
+<a name="ClusteringYourData-Input"></a>
+# Input
+
+For starters, you will need your data in an appropriate Vector format, see [Creating Vectors](../basics/creating-vectors.html).
+In particular for text preparation check out [Creating Vectors from Text](../basics/creating-vectors-from-text.html).
+
+
+<a name="ClusteringYourData-RunningtheProcess"></a>
+# Running the Process
+
+* [Canopy background](canopy-clustering.html) and [canopy-commandline](canopy-commandline.html).
+
+* [K-Means background](k-means-clustering.html), [k-means-commandline](k-means-commandline.html), and
+[fuzzy-k-means-commandline](fuzzy-k-means-commandline.html).
+
+* [Dirichlet background](dirichlet-process-clustering.html) and [dirichlet-commandline](dirichlet-commandline.html).
+
+* [Meanshift background](mean-shift-clustering.html) and [mean-shift-commandline](mean-shift-commandline.html).
+
+* [LDA (Latent Dirichlet Allocation) background](latent-dirichlet-allocation.html) and [lda-commandline](lda-commandline.html).
+
+* TODO: kmeans++/ streaming kMeans documentation
+
+
+<a name="ClusteringYourData-RetrievingtheOutput"></a>
+# Retrieving the Output
+
+Mahout has a cluster dumper utility that can be used to retrieve and evaluate your clustering data.
+
+    ./bin/mahout clusterdump <OPTIONS>
+
+
+<a name="ClusteringYourData-Theclusterdumperoptionsare:"></a>
+## The cluster dumper options are:
+
+      --help (-h)                              Print out help
+
+      --input (-i) input                       The directory containing Sequence
+                                               Files for the Clusters
+
+      --output (-o) output                     The output file. If not specified,
+                                               dumps to the console.
+
+      --outputFormat (-of) outputFormat        The optional output format to write
+                                               the results as. Options: TEXT, CSV,
+                                               or GRAPH_ML
+
+      --substring (-b) substring               The number of chars of the
+                                               asFormatString() to print
+
+      --pointsDir (-p) pointsDir               The directory containing points
+                                               sequence files mapping input vectors
+                                               to their cluster. If specified, then
+                                               the program will output the points
+                                               associated with a cluster
+
+      --dictionary (-d) dictionary             The dictionary file.
+
+      --dictionaryType (-dt) dictionaryType    The dictionary file type
+                                               (text|sequencefile)
+
+      --distanceMeasure (-dm) distanceMeasure  The classname of the DistanceMeasure.
+                                               Default is SquaredEuclidean.
+
+      --numWords (-n) numWords                 The number of top terms to print
+
+      --tempDir tempDir                        Intermediate output directory
+
+      --startPhase startPhase                  First phase to run
+
+      --endPhase endPhase                      Last phase to run
+
+      --evaluate (-e)                          Run ClusterEvaluator and CDbwEvaluator
+                                               over the input. The output will be
+                                               appended to the rest of the output
+                                               at the end.
+
+
+More information on using the clusterdump utility can be found [here](cluster-dumper.html).
+
+<a name="ClusteringYourData-ValidatingtheOutput"></a>
+# Validating the Output
+
+Ted Dunning: "A principled approach to cluster evaluation is to measure how well the
+cluster membership captures the structure of unseen data.  A natural
+measure for this is to measure how much of the entropy of the data is
+captured by cluster membership.  For k-means and its natural L_2 metric,
+the natural cluster quality metric is the squared distance from the nearest
+centroid adjusted by the log_2 of the number of clusters.  This can be
+compared to the squared magnitude of the original data or the squared
+deviation from the centroid for all of the data.  The idea is that you are
+changing the representation of the data by allocating some of the bits in
+your original representation to represent which cluster each point is in. 
+If those bits aren't made up by the residue being small then your
+clustering is making a bad trade-off.
+
+In the past, I have used other more heuristic measures as well.  One of the
+key characteristics that I would like to see out of a clustering is a
+degree of stability.  Thus, I look at the fractions of points that are
+assigned to each cluster or the distribution of distances from the cluster
+centroid. These values should be relatively stable when applied to held-out
+data.
+
+For text, you can actually compute perplexity which measures how well
+cluster membership predicts what words are used.  This is nice because you
+don't have to worry about the entropy of real valued numbers.
+
+Manual inspection and the so-called laugh test is also important.  The idea
+is that the results should not be so ludicrous as to make you laugh.
+Unfortunately, it is pretty easy to kid yourself into thinking your system
+is working using this kind of inspection.  The problem is that we are too
+good at seeing (making up) patterns."
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/expectation-maximization.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/expectation-maximization.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/expectation-maximization.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/expectation-maximization.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,57 @@
+Title: Expectation Maximization
+<a name="ExpectationMaximization-ExpectationMaximization"></a>
+# Expectation Maximization
+
+The principle of EM can be applied to several learning settings, but it is
+most commonly associated with clustering. The main principle of the
+algorithm is comparable to k-Means. Yet in contrast to hard cluster
+assignments, each object is given some probability of belonging to each
+cluster. Accordingly, cluster centers are recomputed based on the average
+of all objects, weighted by their probability of belonging to the cluster
+at hand.
+
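+As a small illustration of that E/M cycle, here is a hedged Java sketch of
+one iteration. It assumes spherical Gaussians with fixed unit variance and
+equal priors, purely for brevity; it is not Mahout's implementation.
+
+    // Hedged sketch: one EM iteration over double[] points.
+    public static double[][] emIteration(double[][] points, double[][] centers) {
+      int n = points.length, k = centers.length, dim = points[0].length;
+      double[][] resp = new double[n][k];
+      for (int i = 0; i < n; i++) {          // E-step: soft memberships
+        double norm = 0.0;
+        for (int j = 0; j < k; j++) {
+          resp[i][j] = Math.exp(-0.5 * sqDist(points[i], centers[j]));
+          norm += resp[i][j];
+        }
+        for (int j = 0; j < k; j++) {
+          resp[i][j] /= norm;
+        }
+      }
+      double[][] next = new double[k][dim];  // M-step: weighted averages
+      double[] weight = new double[k];
+      for (int i = 0; i < n; i++) {
+        for (int j = 0; j < k; j++) {
+          weight[j] += resp[i][j];
+          for (int d = 0; d < dim; d++) {
+            next[j][d] += resp[i][j] * points[i][d];
+          }
+        }
+      }
+      for (int j = 0; j < k; j++) {
+        for (int d = 0; d < dim; d++) {
+          next[j][d] /= weight[j];
+        }
+      }
+      return next;
+    }
+
+    private static double sqDist(double[] a, double[] b) {
+      double s = 0.0;
+      for (int i = 0; i < a.length; i++) {
+        s += (a[i] - b[i]) * (a[i] - b[i]);
+      }
+      return s;
+    }
+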
+<a name="ExpectationMaximization-Canopy-modifiedEM"></a>
+## Canopy-modified EM
+
+One can also use the canopies idea to speed up prototype-based clustering
+methods like k-Means and Expectation-Maximization (EM). In general, neither
+k-Means nor EM specify how many clusters to use; the canopies technique
+does not help with this choice either.
+
+Prototypes (our estimates of the cluster centroids) are associated with the
+canopies that contain them, and the prototypes are only influenced by data
+that are inside their associated canopies. After creating the canopies, we
+decide how many prototypes will be created for each canopy. This could be
+done, for example, using the number of data points in a canopy and AIC or
+BIC, where points that occur in more than one canopy are counted
+fractionally. Then we place prototypes into each canopy. This initial
+placement can be random, as long as it is within the canopy in question, as
+determined by the inexpensive distance metric.
+
+Then, instead of calculating the distance from each prototype to every
+point (as is traditional, an O(nk) operation), the E-step instead calculates
+the distance from each prototype to a much smaller number of points. For
+each prototype, we find the canopies that contain it (using the cheap
+distance metric), and only calculate distances (using the expensive
+distance metric) from that prototype to points within those canopies.
+
+Note that by this procedure prototypes may move across canopy boundaries
+when canopies overlap. Prototypes may move to cover the data in the
+overlapping region, and then move entirely into another canopy in order to
+cover data there.
+
+The canopy-modified EM algorithm behaves very similarly to traditional EM,
+with the slight difference that points outside the canopy have no influence
+on points in the canopy, rather than a minute influence. If the canopy
+property holds, and points in the same cluster fall in the same canopy,
+then the canopy-modified EM will almost always converge to the same maximum
+in likelihood as the traditional EM. In fact, the difference in each
+iterative step (apart from the enormous computational savings of computing
+fewer terms) will be negligible since points outside the canopy will have
+exponentially small influence.
+
+<a name="ExpectationMaximization-StrategyforParallelization"></a>
+## Strategy for Parallelization
+
+<a name="ExpectationMaximization-Map/ReduceImplementation"></a>
+## Map/Reduce Implementation
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/fuzzy-k-means-commandline.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/fuzzy-k-means-commandline.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/fuzzy-k-means-commandline.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/fuzzy-k-means-commandline.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,92 @@
+Title: fuzzy-k-means-commandline
+
+<a name="fuzzy-k-means-commandline-RunningFuzzyk-MeansClusteringfromtheCommandLine"></a>
+# Running Fuzzy k-Means Clustering from the Command Line
+Mahout's Fuzzy k-Means clustering can be launched from the same command
+line invocation whether you are running on a single machine in stand-alone
+mode or on a larger Hadoop cluster. The difference is determined by the
+$HADOOP_HOME and $HADOOP_CONF_DIR environment variables. If both are set to
+an operating Hadoop cluster on the target machine then the invocation will
+run Fuzzy k-Means on that cluster. If either of the environment variables is
+missing then the stand-alone Hadoop configuration will be invoked instead.
+
+
+    ./bin/mahout fkmeans <OPTIONS>
+
+
+* In $MAHOUT_HOME/, build the jar containing the job (mvn install). The job
+will be generated in $MAHOUT_HOME/core/target/ and its name will contain
+the Mahout version number. For example, when using the Mahout 0.3 release,
+the job will be mahout-core-0.3.job
+
+
+<a name="fuzzy-k-means-commandline-Testingitononesinglemachinew/ocluster"></a>
+## Testing it on one single machine w/o cluster
+
+* Put the data: cp <PATH TO DATA> testdata
+* Run the Job: 
+
+    ./bin/mahout fkmeans -i testdata <OPTIONS>
+
+
+<a name="fuzzy-k-means-commandline-Runningitonthecluster"></a>
+## Running it on the cluster
+
+* (As needed) Start up Hadoop: $HADOOP_HOME/bin/start-all.sh
+* Put the data: $HADOOP_HOME/bin/hadoop fs -put <PATH TO DATA> testdata
+* Run the Job: 
+
+    export HADOOP_HOME=<Hadoop Home Directory>
+    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
+    ./bin/mahout fkmeans -i testdata <OPTIONS>
+
+* Get the data out of HDFS and have a look. Use bin/hadoop fs -lsr output
+to view all outputs.
+
+<a name="fuzzy-k-means-commandline-Commandlineoptions"></a>
+# Command line options
+
+      --input (-i) input			       Path to job input directory. 
+    					       Must be a SequenceFile of    
+    					       VectorWritable		    
+      --clusters (-c) clusters		       The input centroids, as Vectors. 
+    					       Must be a SequenceFile of    
+    					       Writable, Cluster/Canopy. If k  
+    					       is also specified, then a random 
+    					       set of vectors will be selected  
+    					       and written out to this path 
+    					       first			    
+      --output (-o) output			       The directory pathname for   
+    					       output.			    
+      --distanceMeasure (-dm) distanceMeasure      The classname of the	    
+    					       DistanceMeasure. Default is  
+    					       SquaredEuclidean 	    
+      --convergenceDelta (-cd) convergenceDelta    The convergence delta value. 
+    					       Default is 0.5		    
+      --maxIter (-x) maxIter		       The maximum number of	    
+    					       iterations.		    
+      --k (-k) k				       The k in k-Means. If specified,
+    					       then a random selection of k
+    					       Vectors will be chosen as the
+    					       Centroid and written to the
+    					       clusters input path.
+      --m (-m) m				       Coefficient normalization
+    					       factor; must be greater than 1
+      --overwrite (-ow)			       If present, overwrite the output 
+    					       directory before running job 
+      --help (-h)				       Print out help		    
+      --numMap (-u) numMap			       The number of map tasks.     
+    					       Defaults to 10		    
+      --maxRed (-r) maxRed			       The number of reduce tasks.  
+    					       Defaults to 2		    
+      --emitMostLikely (-e) emitMostLikely	       True if clustering should emit   
+    					       the most likely point only,  
+    					       false for threshold clustering.  
+    					       Default is true		    
+      --threshold (-t) threshold		       The pdf threshold used for   
+    					       cluster determination. Default   
+    					       is 0 
+      --clustering (-cl)			       If present, run clustering after 
+    					       the iterations have taken place  
+                                
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/fuzzy-k-means.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/fuzzy-k-means.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/fuzzy-k-means.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/fuzzy-k-means.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,181 @@
+Title: Fuzzy K-Means
+
+# Fuzzy K-Means
+
+Fuzzy K-Means (also called Fuzzy C-Means) is an extension of [K-Means](http://mahout.apache.org/users/clustering/k-means-clustering.html),
+the popular simple clustering technique. While K-Means discovers hard
+clusters (each point belongs to only one cluster), Fuzzy K-Means is a more
+statistically formalized method and discovers soft clusters, where a
+particular point can belong to more than one cluster with a certain
+probability.
+
+<a name="FuzzyK-Means-Algorithm"></a>
+#### Algorithm
+
+Like K-Means, Fuzzy K-Means works on objects that can be represented in an
+n-dimensional vector space with a defined distance measure.
+The algorithm is similar to k-means (a sketch of the membership update
+follows this list):
+
+* Initialize k clusters
+* Until converged
+    * Compute the probability of a point belonging to a cluster for every <point,cluster> pair
+    * Recompute the cluster centers using the above probability membership values of points to clusters
+
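+The membership computation can be sketched as follows. This is the standard
+fuzzy c-means update, shown as a hedged illustration rather than Mahout's
+code; the distances to each cluster center are assumed to be nonzero.
+
+    // Hedged sketch: memberships u_i = 1 / sum_k (d_i / d_k)^(2/(m-1)),
+    // where d_i is the point's distance to cluster i and m > 1.
+    public static double[] memberships(double[] distances, double m) {
+      double[] u = new double[distances.length];
+      double exponent = 2.0 / (m - 1.0);
+      for (int i = 0; i < distances.length; i++) {
+        double sum = 0.0;
+        for (int k = 0; k < distances.length; k++) {
+          sum += Math.pow(distances[i] / distances[k], exponent);
+        }
+        u[i] = 1.0 / sum;   // memberships over all clusters sum to 1
+      }
+      return u;
+    }
+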
+<a name="FuzzyK-Means-DesignImplementation"></a>
+#### Design Implementation
+
+The design is similar to that of K-Means in Mahout. It accepts an input
+file containing vector points. The user can either provide the cluster
+centers as input or allow the canopy algorithm to run and create the
+initial clusters.
+
+Similar to K-Means, the program doesn't modify the input directories, and
+for every iteration the cluster output is stored in a directory cluster-N.
+The code sets the number of reduce tasks equal to the number of map tasks,
+so that many part files are created in the cluster-N directory. The code
+uses a driver/mapper/combiner/reducer as follows:
+
+FuzzyKMeansDriver - This is similar to KMeansDriver. It iterates over the
+input points and cluster points for the specified number of iterations or
+until convergence. During every iteration i, a new cluster-i directory is
+created which contains the modified cluster centers obtained during the
+FuzzyKMeans iteration. This will be fed in as the input clusters for the
+next iteration. Once Fuzzy KMeans has run for the specified number of
+iterations or has converged, a map task is run to output "the point
+and the cluster membership to each cluster" pairs as final output to a
+directory named "points".
+
+FuzzyKMeansMapper - reads the input clusters during its configure() method,
+then computes the cluster membership probability of a point to each
+cluster. Cluster membership is inversely proportional to the distance,
+which is computed using the user-supplied distance measure. The output key
+is the encoded clusterId. The output values are ClusterObservations
+containing observation statistics.
+
+FuzzyKMeansCombiner - receives all key:value pairs from the mapper and
+produces partial sums of the cluster membership probability times input
+vectors for each cluster. The output key is the encoded cluster identifier;
+the output values are ClusterObservations containing observation statistics.
+
+FuzzyKMeansReducer - Multiple reducers receive certain keys and all values
+associated with those keys. The reducer sums the values to produce a new
+centroid for the cluster, which is output. The output key is the encoded
+cluster identifier (e.g. "C14") and the output value is the formatted
+cluster identifier (e.g. "C14"). The reducer encodes unconverged clusters
+with a 'Cn' cluster Id and converged clusters with a 'Vn' clusterId.
+
+<a name="FuzzyK-Means-RunningFuzzyk-MeansClustering"></a>
+## Running Fuzzy k-Means Clustering
+
+The Fuzzy k-Means clustering algorithm may be run using a command-line
+invocation on FuzzyKMeansDriver.main or by making a Java call to
+FuzzyKMeansDriver.run(). 
+
+Invocation using the command line takes the form:
+
+
+    bin/mahout fkmeans \
+        -i <input vectors directory> \
+        -c <input clusters directory> \
+        -o <output working directory> \
+        -dm <DistanceMeasure> \
+        -m <fuzziness argument >1> \
+        -x <maximum number of iterations> \
+        -k <optional number of initial clusters to sample from input vectors> \
+        -cd <optional convergence delta. Default is 0.5> \
+        -ow <overwrite output directory if present> \
+        -cl <run input vector clustering after computing Clusters> \
+        -e <emit vectors to most likely cluster during clustering> \
+        -t <threshold to use for clustering if -e is false> \
+        -xm <execution method: sequential or mapreduce>
+
+
+*Note:* if the -k argument is supplied, any clusters in the -c directory
+will be overwritten and -k random points will be sampled from the input
+vectors to become the initial cluster centers.
+
+Invocation using Java involves supplying the following arguments:
+
+1. input: a file path string to a directory containing the input data set a
+SequenceFile(WritableComparable, VectorWritable). The sequence file _key_
+is not used.
+1. clustersIn: a file path string to a directory containing the initial
+clusters, a SequenceFile(key, SoftCluster | Cluster | Canopy). Fuzzy
+k-Means SoftClusters, k-Means Clusters and Canopy Canopies may be used for
+the initial clusters.
+1. output: a file path string to an empty directory which is used for all
+output from the algorithm.
+1. measure: the fully-qualified class name of an instance of DistanceMeasure
+which will be used for the clustering.
+1. convergence: a double value used to determine if the algorithm has
+converged (clusters have not moved more than the value in the last
+iteration)
+1. max-iterations: the maximum number of iterations to run, independent of
+the convergence specified
+1. m: the "fuzzyness" argument, a double > 1. For m equal to 2, this is
+equivalent to normalising the coefficient linearly to make their sum 1.
+When m is close to 1, then the cluster center closest to the point is given
+much more weight than the others, and the algorithm is similar to k-means.
+1. runClustering: a boolean indicating, if true, that the clustering step is
+to be executed after clusters have been determined.
+1. emitMostLikely: a boolean indicating, if true, that the clustering step
+should only emit the most likely cluster for each clustered point.
+1. threshold: a double indicating, if emitMostLikely is false, the cluster
+probability threshold used for emitting multiple clusters for each point. A
+value of 0 will emit all clusters with their associated probabilities for
+each vector.
+1. runSequential: a boolean indicating, if true, that the algorithm is to
+use the sequential reference implementation running in memory.
+
+After running the algorithm, the output directory will contain:
+1. clusters-N: directories containing SequenceFiles(Text, SoftCluster)
+produced by the algorithm for each iteration. The Text _key_ is a cluster
+identifier string.
+1. clusteredPoints: (if runClustering enabled) a directory containing
+SequenceFile(IntWritable, WeightedVectorWritable). The IntWritable _key_ is
+the clusterId. The WeightedVectorWritable _value_ is a bean containing a
+double _weight_ and a VectorWritable _vector_ where the weights are
+computed as 1/(1+distance) where the distance is between the cluster center
+and the vector using the chosen DistanceMeasure. 
+
+<a name="FuzzyK-Means-Examples"></a>
+# Examples
+
+The following images illustrate Fuzzy k-Means clustering applied to a set
+of randomly-generated 2-d data points. The points are generated using a
+normal distribution centered at a mean location and with a constant
+standard deviation. See the README file at [examples/src/main/java/org/apache/mahout/clustering/display/README.txt](https://github.com/apache/mahout/blob/master/examples/src/main/java/org/apache/mahout/clustering/display/README.txt)
+for details on running similar examples.
+
+The points are generated as follows:
+
+* 500 samples m=[1.0, 1.0] sd=3.0
+* 300 samples m=[1.0, 0.0] sd=0.5
+* 300 samples m=[0.0, 2.0] sd=0.1
+
+In the first image, the points are plotted and the 3-sigma boundaries of
+their generator are superimposed. 
+
+![fuzzy](../../images/SampleData.png)
+
+In the second image, the resulting clusters (k=3) are shown superimposed upon the sample data. As Fuzzy k-Means is an iterative algorithm, the centers of the clusters in each iteration are shown using different colors. Bold red is the final clustering, and previous iterations are shown in orange, yellow, green, blue, violet and gray.
+Although it misses a lot of the points and cannot capture the original,
+superimposed cluster centers, it does a decent job of clustering this data.
+
+![fuzzy](../../images/FuzzyKMeans.png)
+
+The third image shows the results of running Fuzzy k-Means on a different
+data set which is generated using asymmetrical standard deviations.
+Fuzzy k-Means does a fair job handling this data set as well.
+
+![fuzzy](../../images/2dFuzzyKMeans.png)
+
+<a name="FuzzyK-Means-References&nbsp;"></a>
+#### References&nbsp;
+
+* [http://en.wikipedia.org/wiki/Fuzzy_clustering](http://en.wikipedia.org/wiki/Fuzzy_clustering)
\ No newline at end of file

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/hierarchical-clustering.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/hierarchical-clustering.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/hierarchical-clustering.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/hierarchical-clustering.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,10 @@
+Title: Hierarchical Clustering
+Hierarchical clustering is the process of finding bigger clusters, and also
+the smaller clusters inside the bigger clusters.
+
+In Apache Mahout, separate algorithms can be used for finding clusters at
+different levels. 
+
+See [Top Down Clustering](https://cwiki.apache.org/confluence/display/MAHOUT/Top+Down+Clustering).
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/k-means-clustering.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/k-means-clustering.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/k-means-clustering.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/k-means-clustering.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,177 @@
+Title: K-Means Clustering
+
+# k-Means clustering - basics
+
+[k-Means](http://en.wikipedia.org/wiki/Kmeans) is a simple but well-known algorithm for grouping objects (clustering). All objects need to be represented
+as a set of numerical features. In addition, the user has to specify the
+number of groups (referred to as *k*) she wishes to identify.
+
+Each object can be thought of as being represented by some feature vector
+in an _n_-dimensional space, _n_ being the number of all features used to
+describe the objects to cluster. The algorithm then randomly chooses _k_
+points in that vector space; these points serve as the initial centers of
+the clusters. Afterwards all objects are each assigned to the center they
+are closest to. Usually the distance measure is chosen by the user and
+determined by the learning task.
+
+After that, for each cluster a new center is computed by averaging the
+feature vectors of all objects assigned to it. The process of assigning
+objects and recomputing centers is repeated until the process converges.
+The algorithm can be proven to converge after a finite number of
+iterations.
+
+Several tweaks concerning distance measure, initial center choice and
+computation of new average centers have been explored, as well as the
+estimation of the number of clusters _k_. Yet the main principle always
+remains the same.
+
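+As a compact illustration of one assign-and-recompute pass, here is a
+hedged Java sketch over plain double[] points; it is not Mahout's
+implementation, which operates on Vectors and distance measures.
+
+    // Hedged sketch: one k-means (Lloyd) iteration.
+    public static double[][] kmeansIteration(double[][] points, double[][] centers) {
+      int k = centers.length, dim = points[0].length;
+      double[][] sums = new double[k][dim];
+      int[] counts = new int[k];
+      for (double[] p : points) {
+        int best = 0;                  // assign p to its nearest center
+        double bestD = Double.MAX_VALUE;
+        for (int j = 0; j < k; j++) {
+          double d = sqDist(p, centers[j]);
+          if (d < bestD) { bestD = d; best = j; }
+        }
+        counts[best]++;
+        for (int d = 0; d < dim; d++) {
+          sums[best][d] += p[d];
+        }
+      }
+      for (int j = 0; j < k; j++) {    // recompute each center as the mean
+        for (int d = 0; d < dim; d++) {
+          if (counts[j] > 0) {
+            centers[j][d] = sums[j][d] / counts[j];
+          }
+        }
+      }
+      return centers;
+    }
+
+    private static double sqDist(double[] a, double[] b) {
+      double s = 0.0;
+      for (int i = 0; i < a.length; i++) {
+        s += (a[i] - b[i]) * (a[i] - b[i]);
+      }
+      return s;
+    }
+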
+
+
+<a name="K-MeansClustering-Quickstart"></a>
+## Quickstart
+
+[Here](https://github.com/apache/mahout/blob/master/examples/bin/cluster-reuters.sh)
+ is a short shell script outline that will get you started quickly with
+k-means. This does the following:
+
+* Accepts clustering type: *kmeans*, *fuzzykmeans*, *lda*, or *streamingkmeans*
+* Gets the Reuters dataset
+* Runs org.apache.lucene.benchmark.utils.ExtractReuters to generate
+reuters-out from reuters-sgm (the downloaded archive)
+* Runs seqdirectory to convert reuters-out to SequenceFile format
+* Runs seq2sparse to convert SequenceFiles to sparse vector format
+* Runs k-means with 20 clusters
+* Runs clusterdump to show results
+
+After following through the output that scrolls past, reading the code will
+offer you a better understanding.
+
+
+<a name="K-MeansClustering-Designofimplementation"></a>
+## Implementation
+
+The implementation accepts two input directories: one for the data points
+and one for the initial clusters. The data directory contains multiple
+input files of SequenceFile(Key, VectorWritable), while the clusters
+directory contains one or more SequenceFiles(Text, Cluster)
+containing _k_ initial clusters or canopies. None of the input directories
+are modified by the implementation, allowing experimentation with initial
+clustering and convergence values.
+
+Canopy clustering can be used to compute the initial clusters for k-Means:
+
+    // run the CanopyDriver job
+    CanopyDriver.runJob("testdata", "output",
+        ManhattanDistanceMeasure.class.getName(), (float) 3.1, (float) 2.1, false);
+
+    // now run the KMeansDriver job
+    KMeansDriver.runJob("testdata", "output/clusters-0", "output",
+        EuclideanDistanceMeasure.class.getName(), "0.001", "10", true);
+
+
+In the above example, the input data points are stored in 'testdata' and
+the CanopyDriver is configured to output to the 'output/clusters-0'
+directory. Once the driver executes, that directory will contain the canopy
+definition files. Upon running the KMeansDriver, the output directory will
+have two or more new directories: 'clusters-N', containing the clusters for
+each iteration, and 'clusteredPoints', containing the clustered data points.
+
+This diagram shows the exemplary dataflow of the k-Means example
+implementation provided by Mahout:
+<img src="../../images/Example implementation of k-Means provided with Mahout.png">
+
+
+<a name="K-MeansClustering-Runningk-MeansClustering"></a>
+## Running k-Means Clustering
+
+The k-Means clustering algorithm may be run using a command-line invocation
+on KMeansDriver.main or by making a Java call to KMeansDriver.runJob().
+
+Invocation using the command line takes the form:
+
+
+    bin/mahout kmeans \
+        -i <input vectors directory> \
+        -c <input clusters directory> \
+        -o <output working directory> \
+        -k <optional number of initial clusters to sample from input vectors> \
+        -dm <DistanceMeasure> \
+        -x <maximum number of iterations> \
+        -cd <optional convergence delta. Default is 0.5> \
+        -ow <overwrite output directory if present> \
+        -cl <run input vector clustering after computing Canopies> \
+        -xm <execution method: sequential or mapreduce>
+
+
+Note: if the -k argument is supplied, any clusters in the -c directory
+will be overwritten and -k random points will be sampled from the input
+vectors to become the initial cluster centers.
+
+Invocation using Java involves supplying the following arguments:
+
+1. input: a file path string to a directory containing the input data set a
+SequenceFile(WritableComparable, VectorWritable). The sequence file _key_
+is not used.
+1. clusters: a file path string to a directory containing the initial
+clusters, a SequenceFile(key, Cluster \| Canopy). Both KMeans clusters and
+Canopy canopies may be used for the initial clusters.
+1. output: a file path string to an empty directory which is used for all
+output from the algorithm.
+1. distanceMeasure: the fully-qualified class name of an instance of
+DistanceMeasure which will be used for the clustering.
+1. convergenceDelta: a double value used to determine if the algorithm has
+converged (clusters have not moved more than the value in the last
+iteration)
+1. maxIter: the maximum number of iterations to run, independent of the
+convergence specified
+1. runClustering: a boolean indicating, if true, that the clustering step is
+to be executed after clusters have been determined.
+1. runSequential: a boolean indicating, if true, that the k-means sequential
+implementation is to be used to process the input data.
+
+After running the algorithm, the output directory will contain:
+1. clusters-N: directories containing SequenceFiles(Text, Cluster) produced
+by the algorithm for each iteration. The Text _key_ is a cluster identifier
+string.
+1. clusteredPoints: (if \--clustering enabled) a directory containing
+SequenceFile(IntWritable, WeightedVectorWritable). The IntWritable _key_ is
+the clusterId. The WeightedVectorWritable _value_ is a bean containing a
+double _weight_ and a VectorWritable _vector_ where the weight indicates
+the probability that the vector is a member of the cluster. For k-Means
+clustering, the weights are computed as 1/(1+distance) where the distance
+is between the cluster center and the vector using the chosen
+DistanceMeasure.
+
+<a name="K-MeansClustering-Examples"></a>
+# Examples
+
+The following images illustrate k-Means clustering applied to a set of
+randomly-generated 2-d data points. The points are generated using a normal
+distribution centered at a mean location and with a constant standard
+deviation. See the README file at [examples/src/main/java/org/apache/mahout/clustering/display/README.txt](https://github.com/apache/mahout/blob/master/examples/src/main/java/org/apache/mahout/clustering/display/README.txt)
+for details on running similar examples.
+
+The points are generated as follows:
+
+* 500 samples m=[1.0, 1.0] sd=3.0
+* 300 samples m=[1.0, 0.0] sd=0.5
+* 300 samples m=[0.0, 2.0] sd=0.1
+
+In the first image, the points are plotted and the 3-sigma boundaries of
+their generator are superimposed.
+
+![Sample data graph](../../images/SampleData.png)
+
+In the second image, the resulting clusters (k=3) are shown superimposed upon the sample data. As k-Means is an iterative algorithm, the centers of the clusters in each iteration are shown using different colors. Bold red is the final clustering, and previous iterations are shown in orange, yellow, green, blue, violet and gray.
+Although it misses a lot of the points and cannot capture the original,
+superimposed cluster centers, it does a decent job of clustering this data.
+
+![kmeans](../../images/KMeans.png)
+
+The third image shows the results of running k-Means on a different dataset, which is generated using asymmetrical standard deviations.
+K-Means does a fair job handling this data set as well.
+
+![2d kmeans](../../images/2dKMeans.png)
\ No newline at end of file

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/k-means-commandline.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/k-means-commandline.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/k-means-commandline.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/k-means-commandline.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,89 @@
+Title: k-means-commandline
+
+<a name="k-means-commandline-Introduction"></a>
+# kMeans commandline introduction
+
+This quick start page describes how to run the kMeans clustering algorithm
+on a Hadoop cluster. 
+
+<a name="k-means-commandline-Steps"></a>
+# Steps
+
+Mahout's k-Means clustering can be launched from the same command line
+invocation whether you are running on a single machine in stand-alone mode
+or on a larger Hadoop cluster. The difference is determined by the
+$HADOOP_HOME and $HADOOP_CONF_DIR environment variables. If both are set to
+an operating Hadoop cluster on the target machine then the invocation will
+run k-Means on that cluster. If either of the environment variables is
+missing then the stand-alone Hadoop configuration will be invoked instead.
+
+
+    ./bin/mahout kmeans <OPTIONS>
+
+
+In $MAHOUT_HOME/, build the jar containing the job (mvn install). The job
+will be generated in $MAHOUT_HOME/core/target/ and its name will contain
+the Mahout version number. For example, when using the Mahout 0.3 release,
+the job will be mahout-core-0.3.job.
+
+
+<a name="k-means-commandline-Testingitononesinglemachinew/ocluster"></a>
+## Testing it on one single machine w/o cluster
+
+* Put the data: cp <PATH TO DATA> testdata
+* Run the Job: 
+
+    ./bin/mahout kmeans -i testdata -o output -c clusters \
+        -dm org.apache.mahout.common.distance.CosineDistanceMeasure \
+        -x 5 -ow -cd 1 -k 25
+
+
+<a name="k-means-commandline-Runningitonthecluster"></a>
+## Running it on the cluster
+
+* (As needed) Start up Hadoop: $HADOOP_HOME/bin/start-all.sh
+* Put the data: $HADOOP_HOME/bin/hadoop fs -put <PATH TO DATA> testdata
+* Run the Job: 
+
+    export HADOOP_HOME=<Hadoop Home Directory>
+    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
+    ./bin/mahout kmeans -i testdata -o output -c clusters -dm org.apache.mahout.common.distance.CosineDistanceMeasure -x 5 -ow -cd 1 -k 25
+
+* Get the data out of HDFS and have a look. Use bin/hadoop fs -lsr output
+to view all outputs.
+
+<a name="k-means-commandline-Commandlineoptions"></a>
+# Command line options
+
+      --input (-i) input                         Path to job input directory.
+                                                 Must be a SequenceFile of
+                                                 VectorWritable
+      --clusters (-c) clusters                   The input centroids, as Vectors.
+                                                 Must be a SequenceFile of
+                                                 Writable, Cluster/Canopy. If k
+                                                 is also specified, then a random
+                                                 set of vectors will be selected
+                                                 and written out to this path
+                                                 first
+      --output (-o) output                       The directory pathname for
+                                                 output.
+      --distanceMeasure (-dm) distanceMeasure    The classname of the
+                                                 DistanceMeasure. Default is
+                                                 SquaredEuclidean
+      --convergenceDelta (-cd) convergenceDelta  The convergence delta value.
+                                                 Default is 0.5
+      --maxIter (-x) maxIter                     The maximum number of
+                                                 iterations.
+      --maxRed (-r) maxRed                       The number of reduce tasks.
+                                                 Defaults to 2
+      --k (-k) k                                 The k in k-Means. If specified,
+                                                 then a random selection of k
+                                                 Vectors will be chosen as the
+                                                 Centroid and written to the
+                                                 clusters input path.
+      --overwrite (-ow)                          If present, overwrite the output
+                                                 directory before running job
+      --help (-h)                                Print out help
+      --clustering (-cl)                         If present, run clustering after
+                                                 the iterations have taken place
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/latent-dirichlet-allocation.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/latent-dirichlet-allocation.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/latent-dirichlet-allocation.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/latent-dirichlet-allocation.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,150 @@
+Title: Latent Dirichlet Allocation
+
+<a name="LatentDirichletAllocation-Overview"></a>
+# Overview
+
+Latent Dirichlet Allocation (Blei et al, 2003) is a powerful learning
+algorithm for automatically and jointly clustering words into "topics" and
+documents into mixtures of topics. It has been successfully applied to
+model change in scientific fields over time (Griffiths and Steyvers, 2004;
+Hall, et al. 2008). 
+
+A topic model is, roughly, a hierarchical Bayesian model that associates
+with each document a probability distribution over "topics", which are in
+turn distributions over words. For instance, a topic in a collection of
+newswire articles might include words about "sports", such as "baseball", "home
+run", "player", and a document about steroid use in baseball might include
+"sports", "drugs", and "politics". Note that the labels "sports", "drugs",
+and "politics", are post-hoc labels assigned by a human, and that the
+algorithm itself only associates words with probabilities. The task
+of parameter estimation in these models is to learn both what the topics
+are, and which documents employ them in what proportions.
+
+Another way to view a topic model is as a generalization of a mixture model
+like [Dirichlet Process Clustering](http://en.wikipedia.org/wiki/Dirichlet_process)
+. Starting from a normal mixture model, in which we have a single global
+mixture of several distributions, we instead say that _each_ document has
+its own mixture distribution over the globally shared mixture components.
+Operationally in Dirichlet Process Clustering, each document has its own
+latent variable drawn from a global mixture that specifies which model it
+belongs to, while in LDA each word in each document has its own parameter
+drawn from a document-wide mixture.
+
+The idea is that we use a probabilistic mixture of a number of models that
+we use to explain some observed data. Each observed data point is assumed
+to have come from one of the models in the mixture, but we don't know
+which.	The way we deal with that is to use a so-called latent parameter
+which specifies which model each data point came from.
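+
+To make this generative story concrete, here is a minimal, hypothetical
+sketch in plain Java (not Mahout code): the topic and document
+distributions are fixed by hand rather than drawn from Dirichlet priors,
+and each word is generated by first drawing its latent topic.
+
+    import java.util.Random;
+
+    public class LdaGenerativeSketch {
+      // draw an index from a discrete probability distribution
+      static int sample(double[] p, Random rng) {
+        double u = rng.nextDouble();
+        double cumulative = 0.0;
+        for (int i = 0; i < p.length; i++) {
+          cumulative += p[i];
+          if (u < cumulative) {
+            return i;
+          }
+        }
+        return p.length - 1;
+      }
+
+      public static void main(String[] args) {
+        Random rng = new Random(42);
+        // topic-word distributions: 2 topics over a tiny vocabulary
+        double[][] topicWord = { {0.5, 0.3, 0.1, 0.1},
+                                 {0.1, 0.1, 0.4, 0.4} };
+        String[] vocabulary = {"baseball", "player", "drugs", "politics"};
+        // this document's own mixture over the shared topics
+        double[] docTopic = {0.7, 0.3};
+        for (int n = 0; n < 10; n++) {
+          int z = sample(docTopic, rng);      // latent topic for this word
+          int w = sample(topicWord[z], rng);  // word drawn from that topic
+          System.out.println("topic " + z + " -> " + vocabulary[w]);
+        }
+      }
+    }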
+
+<a name="LatentDirichletAllocation-CollapsedVariationalBayes"></a>
+# Collapsed Variational Bayes
+The CVB algorithm, which is implemented in Mahout for LDA, combines the
+advantages of both regular Variational Bayes and Gibbs Sampling. The
+algorithm relies on modeling the dependence of parameters on latent
+variables, which are in turn mutually independent. The algorithm uses two
+methodologies: one is to marginalize out the parameters when calculating
+the joint distribution, and the other is to model the posterior of theta
+and phi given the inputs z and x.
+
+A common solution in the CVB algorithm is to compute each expectation term
+using a simple Gaussian approximation, which is accurate and requires low
+computational overhead.  The specifics behind the approximation involve
+computing the sum of the means and variances of the individual Bernoulli
+variables.
+
+CVB with Gaussian approximation is implemented by tracking the mean and
+variance and subtracting the mean and variance of the corresponding
+Bernoulli variables.  The computational cost for the algorithm scales on
+the order of O(K) with each update to q(z(i,j)).  Also, for each
+document/word pair only one copy of the variational posterior is required
+over the latent variable.
+
+<a name="LatentDirichletAllocation-InvocationandUsage"></a>
+# Invocation and Usage
+
+Mahout's implementation of LDA operates on a collection of SparseVectors of
+word counts. These word counts should be non-negative integers, though
+things will probably work fine if you use non-negative reals. (Note
+that the probabilistic model doesn't make sense if you do!) To create these
+vectors, it's recommended that you follow the instructions in [Creating Vectors From Text](../basics/creating-vectors-from-text.html)
+, making sure to use TF and not TFIDF as the scorer.
+
+Invocation takes the form:
+
+
+    bin/mahout cvb \
+        -i <input path for document vectors> \
+        -dict <path to term-dictionary file(s), glob expression supported> \
+        -o <output path for topic-term distributions> \
+        -dt <output path for doc-topic distributions> \
+        -k <number of latent topics> \
+        -nt <number of unique features defined by input document vectors> \
+        -mt <path to store model state after each iteration> \
+        -maxIter <max number of iterations> \
+        -mipd <max number of iterations per doc for learning> \
+        -a <smoothing for doc topic distributions> \
+        -e <smoothing for term topic distributions> \
+        -seed <random seed> \
+        -tf <fraction of data to hold for testing> \
+        -block <number of iterations per perplexity check, ignored
+            unless test_set_fraction > 0>
+
+
+Topic smoothing should generally be about 50/K, where K is the number of
+topics; for example, with K = 20 topics the smoothing would be about
+50/20 = 2.5. The number of words in the vocabulary can be an upper bound,
+though it shouldn't be too high (for memory concerns).
+
+Choosing the number of topics is more art than science, and it's
+recommended that you try several values.
+
+After running LDA you can obtain an output of the computed topics using the
+LDAPrintTopics utility:
+
+
+    bin/mahout ldatopics \
+        -i <input vectors directory> \
+        -d <input dictionary file> \
+        -w <optional number of words to print> \
+        -o <optional output working directory. Default is to console> \
+        -h <print out help> \
+        -dt <optional dictionary type (text|sequencefile). Default is text>
+
+
+
+<a name="LatentDirichletAllocation-Example"></a>
+# Example
+
+An example is located in mahout/examples/bin/build-reuters.sh. The script
+automatically downloads the Reuters-21578 corpus, builds a Lucene index and
+converts the Lucene index to vectors. By uncommenting the last two lines
+you can then cause it to run LDA on the vectors and finally print the
+resultant topics to the console. 
+
+To adapt the example yourself, you should note that Lucene has specialized
+support for Reuters, and that building your own index will require some
+adaptation. The rest should hopefully not differ too much.
+
+<a name="LatentDirichletAllocation-ParameterEstimation"></a>
+# Parameter Estimation
+
+We use mean field variational inference to estimate the models. Variational
+inference can be thought of as a generalization of [EM](expectation-maximization.html)
+ for hierarchical Bayesian models. The E-Step takes the form of, for each
+document, inferring the posterior probability of each topic for each word
+in each document. We then take the sufficient statistics and emit them in
+the form of (log) pseudo-counts for each word in each topic. The M-Step is
+simply to sum these together and (log) normalize them so that we have a
+distribution over the entire vocabulary of the corpus for each topic. 
+
+In implementation, the E-Step is implemented in the Map, and the M-Step is
+executed in the reduce step, with the final normalization happening as a
+post-processing step.
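+
+A hypothetical single-machine sketch in plain Java of this E-step/M-step
+split; it ignores the per-document topic proportions, the Dirichlet
+priors, and the log-space arithmetic a real implementation would use:
+
+    public class VariationalEmSketch {
+      public static void main(String[] args) {
+        int numTopics = 2;
+        int vocabSize = 4;
+        // documents represented as arrays of word ids
+        int[][] docs = { {0, 1, 1, 2}, {2, 3, 3} };
+        // current model: p(word | topic), one distribution per topic
+        double[][] topicWord = { {0.4, 0.3, 0.2, 0.1},
+                                 {0.1, 0.2, 0.3, 0.4} };
+        double[][] pseudoCounts = new double[numTopics][vocabSize];
+
+        // E-step (the Map side): for each word in each document, infer the
+        // posterior over topics and emit it as pseudo-counts
+        for (int[] doc : docs) {
+          for (int w : doc) {
+            double[] posterior = new double[numTopics];
+            double norm = 0.0;
+            for (int k = 0; k < numTopics; k++) {
+              posterior[k] = topicWord[k][w];
+              norm += posterior[k];
+            }
+            for (int k = 0; k < numTopics; k++) {
+              pseudoCounts[k][w] += posterior[k] / norm;
+            }
+          }
+        }
+
+        // M-step (the Reduce side): sum the pseudo-counts and normalize each
+        // topic into a distribution over the whole vocabulary
+        for (int k = 0; k < numTopics; k++) {
+          double sum = 0.0;
+          for (int w = 0; w < vocabSize; w++) {
+            sum += pseudoCounts[k][w];
+          }
+          for (int w = 0; w < vocabSize; w++) {
+            topicWord[k][w] = pseudoCounts[k][w] / sum;
+          }
+        }
+        System.out.println(java.util.Arrays.deepToString(topicWord));
+      }
+    }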
+
+<a name="LatentDirichletAllocation-References"></a>
+# References
+
+[David M. Blei, Andrew Y. Ng, Michael I. Jordan, John Lafferty. 2003. Latent Dirichlet Allocation. JMLR.](http://machinelearning.wustl.edu/mlpapers/paper_files/BleiNJ03.pdf)
+
+[Thomas L. Griffiths and Mark Steyvers. 2004. Finding scientific topics. PNAS.  ](http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf)
+
+[David Hall, Dan Jurafsky, and Christopher D. Manning. 2008. Studying the History of Ideas Using Topic Models.](http://aclweb.org/anthology//D/D08/D08-1038.pdf)

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/lda-commandline.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/lda-commandline.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/lda-commandline.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/lda-commandline.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,78 @@
+Title: lda-commandline
+
+<a name="lda-commandline-RunningLatentDirichletAllocation(algorithm)fromtheCommandLine"></a>
+# Running Latent Dirichlet Allocation (algorithm) from the Command Line
+[Since Mahout v0.6](https://issues.apache.org/jira/browse/MAHOUT-897)
+ LDA has been implemented as Collapsed Variational Bayes (cvb).
+
+Mahout's LDA can be launched from the same command line invocation whether
+you are running on a single machine in stand-alone mode or on a larger
+Hadoop cluster. The difference is determined by the $HADOOP_HOME and
+$HADOOP_CONF_DIR environment variables. If both are set to an operating
+Hadoop cluster on the target machine then the invocation will run the LDA
+algorithm on that cluster. If either of the environment variables is
+missing then the stand-alone Hadoop configuration will be invoked instead.
+
+
+
+    ./bin/mahout cvb <OPTIONS>
+
+
+* In $MAHOUT_HOME/, build the jar containing the job (mvn install). The job
+will be generated in $MAHOUT_HOME/core/target/ and its name will contain
+the Mahout version number. For example, when using the Mahout 0.3 release,
+the job will be mahout-core-0.3.job.
+
+
+<a name="lda-commandline-Testingitononesinglemachinew/ocluster"></a>
+## Testing it on one single machine w/o cluster
+
+* Put the data: cp <PATH TO DATA> testdata
+* Run the Job: 
+
+    ./bin/mahout cvb -i testdata <OTHER OPTIONS>
+
+
+<a name="lda-commandline-Runningitonthecluster"></a>
+## Running it on the cluster
+
+* (As needed) Start up Hadoop: $HADOOP_HOME/bin/start-all.sh
+* Put the data: $HADOOP_HOME/bin/hadoop fs -put <PATH TO DATA> testdata
+* Run the Job: 
+
+    export HADOOP_HOME=<Hadoop Home Directory>
+    export HADOOP_CONF_DIR=$HADOOP_HOME/conf
+    ./bin/mahout cvb -i testdata <OTHER OPTIONS>
+
+* Get the data out of HDFS and have a look. Use bin/hadoop fs -lsr output
+to view all outputs.
+
+<a name="lda-commandline-CommandlineoptionsfromMahoutcvbversion0.8"></a>
+# Command line options from Mahout cvb version 0.8
+
+    mahout cvb -h
+      --input (-i) input                                    Path to job input directory.
+      --output (-o) output                                  The directory pathname for output.
+      --maxIter (-x) maxIter                                The maximum number of iterations.
+      --convergenceDelta (-cd) convergenceDelta             The convergence delta value
+      --overwrite (-ow)                                     If present, overwrite the output directory before running job
+      --num_topics (-k) num_topics                          Number of topics to learn
+      --num_terms (-nt) num_terms                           Vocabulary size
+      --doc_topic_smoothing (-a) doc_topic_smoothing        Smoothing for document/topic distribution
+      --term_topic_smoothing (-e) term_topic_smoothing      Smoothing for topic/term distribution
+      --dictionary (-dict) dictionary                       Path to term-dictionary file(s) (glob expression supported)
+      --doc_topic_output (-dt) doc_topic_output             Output path for the training doc/topic distribution
+      --topic_model_temp_dir (-mt) topic_model_temp_dir     Path to intermediate model path (useful for restarting)
+      --iteration_block_size (-block) iteration_block_size  Number of iterations per perplexity check
+      --random_seed (-seed) random_seed                     Random seed
+      --test_set_fraction (-tf) test_set_fraction           Fraction of data to hold out for testing
+      --num_train_threads (-ntt) num_train_threads          Number of threads per mapper to train with
+      --num_update_threads (-nut) num_update_threads        Number of threads per mapper to update the model with
+      --max_doc_topic_iters (-mipd) max_doc_topic_iters     Max number of iterations per doc for p(topic|doc) learning
+      --num_reduce_tasks num_reduce_tasks                   Number of reducers to use during model estimation
+      --backfill_perplexity                                 Enable backfilling of missing perplexity values
+      --help (-h)                                           Print out help
+      --tempDir tempDir                                     Intermediate output directory
+      --startPhase startPhase                               First phase to run
+      --endPhase endPhase                                   Last phase to run
+

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/llr---log-likelihood-ratio.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/llr---log-likelihood-ratio.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/llr---log-likelihood-ratio.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/llr---log-likelihood-ratio.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,41 @@
+Title: LLR - Log-likelihood Ratio
+
+# Likelihood ratio test
+
+_The likelihood ratio test is used to compare the fit of two models, one
+of which is nested within the other._
+
+In the context of machine learning and the Mahout project in particular,
+the term LLR is usually meant to refer to a test of significance for two
+binomial distributions, also known as the G squared statistic.	This is a
+special case of the multinomial test and is closely related to mutual
+information.  The value of this statistic is not normally used in this
+context as a true frequentist test of significance since there would be
+obvious and dreadful problems to do with multiple comparisons, but rather
+as a heuristic score to order pairs of items with the most interestingly
+connected items having higher scores.  In this usage, the LLR has proven
+very useful for discriminating pairs of features that have interesting
+degrees of cooccurrence and those that do not with usefully small false
+positive and false negative rates.  The LLR is typically far more suitable
+in the case of small counts than many other measures such as Pearson's
+correlation, Pearson's chi squared statistic or z statistics.  The LLR as
+stated does not, however, make any use of rating data which can limit its
+applicability in problems such as the Netflix competition. 
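+
+A minimal plain-Java sketch of the statistic, using the entropy
+formulation of G squared over a 2x2 table of counts (k11 = both events
+together, k12 and k21 = each event alone, k22 = neither); Mahout provides
+a comparable utility in org.apache.mahout.math.stats.LogLikelihood:
+
+    public class LlrSketch {
+      static double xLogX(long x) {
+        return x == 0 ? 0.0 : x * Math.log(x);
+      }
+
+      // unnormalized Shannon entropy of a set of counts
+      static double entropy(long... counts) {
+        long sum = 0;
+        double result = 0.0;
+        for (long c : counts) {
+          result += xLogX(c);
+          sum += c;
+        }
+        return xLogX(sum) - result;
+      }
+
+      // G squared statistic for the 2x2 contingency table
+      static double logLikelihoodRatio(long k11, long k12, long k21, long k22) {
+        double rowEntropy = entropy(k11 + k12, k21 + k22);
+        double columnEntropy = entropy(k11 + k21, k12 + k22);
+        double matrixEntropy = entropy(k11, k12, k21, k22);
+        return 2.0 * (rowEntropy + columnEntropy - matrixEntropy);
+      }
+
+      public static void main(String[] args) {
+        // items seen together 10 times, each alone 20 times, 10000 neither
+        System.out.println(logLikelihoodRatio(10, 20, 20, 10000));
+      }
+    }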
+
+The actual value of the LLR is not usually very helpful other than as a way
+of ordering pairs of items.  As such, it is often used to determine a
+sparse set of coefficients to be estimated by other means such as TF-IDF. 
+Since the actual estimation of these coefficients can be done in a way that
+is independent of the training data such as by general corpus statistics,
+and since the ordering imposed by the LLR is relatively robust to counting
+fluctuation, this technique can provide very strong results in very sparse
+problems where the potential number of features vastly out-numbers the
+number of training examples and where features are highly interdependent.
+
+ See Also: 
+
+* [Blog post "surprise and coincidence"](http://tdunning.blogspot.com/2008/03/surprise-and-coincidence.html)
+* [G-Test](http://en.wikipedia.org/wiki/G-test)
+* [Likelihood Ratio Test](http://en.wikipedia.org/wiki/Likelihood-ratio_test)
+
+      
\ No newline at end of file

Added: mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/spectral-clustering.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/spectral-clustering.mdtext?rev=1667878&view=auto
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/spectral-clustering.mdtext (added)
+++ mahout/site/mahout_cms/trunk/content/users/mapreduce/clustering/spectral-clustering.mdtext Thu Mar 19 21:21:28 2015
@@ -0,0 +1,79 @@
+Title: Spectral Clustering
+
+# Spectral Clustering Overview
+
+Spectral clustering, as its name implies, makes use of the spectrum (or eigenvalues) of the similarity matrix of the data. It examines the _connectedness_ of the data, whereas other clustering algorithms such as k-means use the _compactness_ to assign clusters. Consequently, in situations where k-means performs well, spectral clustering will also perform well. Additionally, there are situations in which k-means will underperform (e.g. concentric circles), but spectral clustering will be able to segment the underlying clusters. Spectral clustering is also very useful for image segmentation.
+
+At its simplest, spectral clustering relies on the following four steps:
+
+ 1. Computing a similarity (or _affinity_) matrix `\(\mathbf{A}\)` from the data. This involves determining a pairwise distance function `\(f\)` that takes a pair of data points and returns a scalar.
+
+ 2. Computing a graph Laplacian `\(\mathbf{L}\)` from the affinity matrix. There are several types of graph Laplacians; which one is used often depends on the situation (a minimal sketch of this step follows the list).
+
+ 3. Computing the eigenvectors and eigenvalues of `\(\mathbf{L}\)`. The degree of this decomposition is often modulated by `\(k\)`, or the number of clusters. Put another way, `\(k\)` eigenvectors and eigenvalues are computed.
+
+ 4. The `\(k\)` eigenvectors are used as "proxy" data for the original dataset, and fed into k-means clustering. The resulting cluster assignments are transparently passed back to the original data.
+
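+As an illustration of step 2, the following plain-Java sketch builds the simplest (unnormalized) graph Laplacian `\(\mathbf{L} = \mathbf{D} - \mathbf{A}\)`, where `\(\mathbf{D}\)` is the diagonal degree matrix, from a small affinity matrix. Note that, as described in the invocation section below, Mahout's job works with a normalized graph Laplacian, so this sketch is illustrative only:
+
+    public class LaplacianSketch {
+      public static void main(String[] args) {
+        // the 3-by-3 affinity matrix used in the example further below
+        double[][] affinity = { {0.0, 0.8, 0.5},
+                                {0.8, 0.0, 0.9},
+                                {0.5, 0.9, 0.0} };
+        int n = affinity.length;
+        double[][] laplacian = new double[n][n];
+        for (int i = 0; i < n; i++) {
+          double degree = 0.0;
+          for (int j = 0; j < n; j++) {
+            degree += affinity[i][j];       // degree of node i = sum of row i
+            laplacian[i][j] = -affinity[i][j];
+          }
+          laplacian[i][i] += degree;        // L = D - A
+        }
+        for (double[] row : laplacian) {
+          System.out.println(java.util.Arrays.toString(row));
+        }
+      }
+    }
+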
+For more theoretical background on spectral clustering, such as how affinity matrices are computed, the different types of graph Laplacians, and whether the top or bottom eigenvectors and eigenvalues are computed, please read [Ulrike von Luxburg's article in _Statistics and Computing_ from December 2007](http://link.springer.com/article/10.1007/s11222-007-9033-z). It provides an excellent description of the linear algebra operations behind spectral clustering, and imparts a thorough understanding of the types of situations in which it can be used.
+
+# Mahout Spectral Clustering
+
+As of Mahout 0.3, spectral clustering has been implemented to take advantage of the MapReduce framework. It uses [SSVD](http://mahout.apache.org/users/dim-reduction/ssvd.html) for dimensionality reduction of the input data set, and [k-means](http://mahout.apache.org/users/clustering/k-means-clustering.html) to perform the final clustering.
+
+**([MAHOUT-1538](https://issues.apache.org/jira/browse/MAHOUT-1538) will port the existing Hadoop MapReduce implementation to Mahout DSL, allowing for one of several distinct distributed back-ends to conduct the computation)**
+
+## Input
+
+The input format for the algorithm currently takes the form of a Hadoop-backed affinity matrix in the form of text files. Each line of the text file specifies a single element of the affinity matrix: the row index `\(i\)`, the column index `\(j\)`, and the value:
+
+`i, j, value`
+
+The affinity matrix is symmetric, and any unspecified `\(i, j\)` pairs are assumed to be 0 for sparsity. The row and column indices are 0-indexed. Thus, only the non-zero entries of either the upper or lower triangular need be specified.
+
+The matrix elements specified in the text files are collected into a Mahout `DistributedRowMatrix`.
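+
+A hypothetical plain-Java sketch of reading this format into an in-memory symmetric matrix, standing in for the distributed assembly; the file name and dimension here are assumptions matching the example below:
+
+    import java.io.IOException;
+    import java.nio.charset.StandardCharsets;
+    import java.nio.file.Files;
+    import java.nio.file.Paths;
+
+    public class AffinityParseSketch {
+      public static void main(String[] args) throws IOException {
+        int n = 3;  // matrix dimension, supplied via the -d flag
+        double[][] affinity = new double[n][n];
+        for (String line : Files.readAllLines(Paths.get("affinity.txt"),
+                                              StandardCharsets.UTF_8)) {
+          String[] parts = line.split(",");
+          int i = Integer.parseInt(parts[0].trim());
+          int j = Integer.parseInt(parts[1].trim());
+          double value = Double.parseDouble(parts[2].trim());
+          affinity[i][j] = value;
+          affinity[j][i] = value;  // symmetric; unspecified entries stay 0
+        }
+      }
+    }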
+
+**([MAHOUT-1539](https://issues.apache.org/jira/browse/MAHOUT-1539) will allow for the creation of the affinity matrix to occur as part of the core spectral clustering algorithm, as opposed to the current requirement that the user create this matrix themselves and provide it, rather than the original data, to the algorithm)**
+
+## Running spectral clustering
+
+**([MAHOUT-1540](https://issues.apache.org/jira/browse/MAHOUT-1540) will provide a running example of this algorithm and this section will be updated to show how to run the example and what the expected output should be; until then, this section provides a how-to for simply running the algorithm on arbitrary input)**
+
+Spectral clustering can be invoked with the following arguments.
+
+    bin/mahout spectralkmeans \
+        -i <affinity matrix directory> \
+        -o <output working directory> \
+        -d <number of data points> \
+        -k <number of clusters AND number of top eigenvectors to use> \
+        -x <maximum number of k-means iterations>
+
+The affinity matrix can be contained in a single text file (using the aforementioned one-line-per-entry format) or span many text files (per [MAHOUT-978](https://issues.apache.org/jira/browse/MAHOUT-978), do not prefix text files with a leading underscore '_' or period '.'). The `-d` flag is required for the algorithm to know the dimensions of the affinity matrix. `-k` is the number of top eigenvectors from the normalized graph Laplacian in the SSVD step, and also the number of clusters given to k-means after the SSVD step.
+
+## Example
+
+To provide a simple example, take the following affinity matrix, contained in a text file called `affinity.txt`:
+
+    0, 0, 0
+    0, 1, 0.8
+    0, 2, 0.5
+    1, 0, 0.8
+    1, 1, 0
+    1, 2, 0.9
+    2, 0, 0.5
+    2, 1, 0.9
+    2, 2, 0
+
+With this 3-by-3 matrix, `-d` would be `3`. Furthermore, since all affinity matrices are assumed to be symmetric, the entries specifying both `1, 2, 0.9` and `2, 1, 0.9` are redundant; only one of these is needed. Additionally, any entries that are 0, such as those along the diagonal, also need not be specified at all. They are provided here for completeness.
+
+In general, larger values indicate a stronger "connectedness", whereas smaller values indicate a weaker connectedness. This will vary somewhat depending on the distance function used, though a common one is the [RBF kernel](http://en.wikipedia.org/wiki/RBF_kernel) (used in the above example) which returns values in the range [0, 1], where 0 indicates completely disconnected (or completely dissimilar) and 1 is fully connected (or identical).
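+
+As an illustration, a plain-Java computation of an RBF affinity between two points; the bandwidth sigma is a free parameter, chosen arbitrarily here:
+
+    public class RbfSketch {
+      // RBF kernel: exp(-||a - b||^2 / (2 * sigma^2)), always in [0, 1]
+      static double rbf(double[] a, double[] b, double sigma) {
+        double squaredDistance = 0.0;
+        for (int i = 0; i < a.length; i++) {
+          double d = a[i] - b[i];
+          squaredDistance += d * d;
+        }
+        return Math.exp(-squaredDistance / (2.0 * sigma * sigma));
+      }
+
+      public static void main(String[] args) {
+        double[] p = {0.0, 0.0};
+        double[] q = {1.0, 1.0};
+        System.out.println(rbf(p, q, 1.0));  // ~0.368: moderately connected
+        System.out.println(rbf(p, p, 1.0));  // 1.0: identical points
+      }
+    }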
+
+The call signature with this matrix could be as follows:
+
+    bin/mahout spectralkmeans \
+        -i s3://mahout-example/input/ \
+        -o s3://mahout-example/output/ \
+        -d 3 \
+        -k 2 \
+        -x 10
+
+There are many other optional arguments, in particular for tweaking the SSVD process (block size, number of power iterations, etc.) and the k-means clustering step (distance measure, convergence delta, etc.).
\ No newline at end of file