Posted to commits@mahout.apache.org by bu...@apache.org on 2014/08/29 19:30:26 UTC

svn commit: r920728 - in /websites/staging/mahout/trunk/content: ./ users/recommender/intro-cooccurrence-spark.html

Author: buildbot
Date: Fri Aug 29 17:30:25 2014
New Revision: 920728

Log:
Staging update by buildbot for mahout

Added:
    websites/staging/mahout/trunk/content/users/recommender/intro-cooccurrence-spark.html
Modified:
    websites/staging/mahout/trunk/content/   (props changed)

Propchange: websites/staging/mahout/trunk/content/
------------------------------------------------------------------------------
--- cms:source-revision (original)
+++ cms:source-revision Fri Aug 29 17:30:25 2014
@@ -1 +1 @@
-1619235
+1621349

Added: websites/staging/mahout/trunk/content/users/recommender/intro-cooccurrence-spark.html
==============================================================================
--- websites/staging/mahout/trunk/content/users/recommender/intro-cooccurrence-spark.html (added)
+++ websites/staging/mahout/trunk/content/users/recommender/intro-cooccurrence-spark.html Fri Aug 29 17:30:25 2014
@@ -0,0 +1,524 @@
+    <h1 id="intro-to-cooccurrence-recommenders-with-spark">Intro to Cooccurrence Recommenders with Spark</h1>
+<p>Mahout provides several important building blocks for creating recommendations using Spark. <em>spark-itemsimilarity</em> can be used to create "other people also liked these things" type recommendations and, paired with a search engine, can personalize recommendations for individual users. <em>spark-rowsimilarity</em> can provide non-personalized, content-based recommendations, for example using textual content.</p>
+<p>Below are the command line jobs, but the drivers and associated code can also be customized and accessed from the Scala APIs.</p>
+<h2 id="1-spark-itemsimilarity">1. spark-itemsimilarity</h2>
+<p><em>spark-itemsimilarity</em> is the Spark counterpart of the Mahout mapreduce job called <em>itemsimilarity</em>. It takes in elements of interactions, which have a userID, an itemID, and optionally a value. It will produce one or more indicator matrices created by comparing every user's interactions with every other user's. The indicator matrix is an item x item matrix where the values are log-likelihood ratio (LLR) strengths. The legacy mapreduce version offered several possible similarity measures, but these are being deprecated in favor of LLR because in practice it performs the best.</p>
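+<p>As background (this formulation is not spelled out on this page): the LLR strength for a pair of items is the G-test score over the 2x2 contingency table of how often the two items do and do not cooccur in user histories. It can be written as \(G^2 = 2N\,I\), where \(N\) is the total number of observations (roughly, the number of users) and \(I\) is the mutual information, in nats, between the two "user interacted with this item" indicator variables, so pairs that cooccur far more often than chance would predict receive high strengths.</p>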
+<p>Mahout's mapreduce version of <em>itemsimilarity</em> takes a text file that is expected to have user and item IDs that conform to Mahout's ID requirements--they are non-negative integers that can be viewed as row and column numbers in a matrix.</p>
+<p><em>spark-itemsimilarity</em> also extends the notion of cooccurrence to cross-cooccurrence; in other words, the Spark version will account for multi-modal interactions and create cross-indicator matrices, allowing users to make use of much more data when creating recommendations or similar-item lists.</p>
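+<p>In matrix terms (a sketch of the idea, using notation not defined on this page): if \(A\) is the user-by-item matrix of the primary action and \(B\) is the user-by-item matrix of a secondary action, the indicator matrix is derived from \(A^{\mathsf{T}}A\) and the cross-indicator matrix from \(A^{\mathsf{T}}B\), with the raw counts then re-scored by LLR and downsampled.</p>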
+<p><code>spark-itemsimilarity Mahout 1.0-SNAPSHOT
+Usage: spark-itemsimilarity [options]
+
+Input, output options
+  -i &lt;value&gt; | --input &lt;value&gt;
+        Input path, may be a filename, directory name, or comma delimited list of
+        HDFS supported URIs (required)
+  -i2 &lt;value&gt; | --input2 &lt;value&gt;
+        Secondary input path for cross-similarity calculation, same restrictions
+        as "--input" (optional). Default: empty.
+  -o &lt;value&gt; | --output &lt;value&gt;
+        Path for output, any local or HDFS supported URI (required)
+
+Algorithm control options:
+  -mppu &lt;value&gt; | --maxPrefs &lt;value&gt;
+        Max number of preferences to consider per user (optional). Default: 500
+  -m &lt;value&gt; | --maxSimilaritiesPerItem &lt;value&gt;
+        Limit the number of similarities per item to this number (optional).
+        Default: 100
+
+Note: Only the Log Likelihood Ratio (LLR) is supported as a similarity measure.
+
+Input text file schema options:
+  -id &lt;value&gt; | --inDelim &lt;value&gt;
+        Input delimiter character (optional). Default: "[,\t]"
+  -f1 &lt;value&gt; | --filter1 &lt;value&gt;
+        String (or regex) whose presence indicates a datum for the primary item
+        set (optional). Default: no filter, all data is used
+  -f2 &lt;value&gt; | --filter2 &lt;value&gt;
+        String (or regex) whose presence indicates a datum for the secondary item
+        set (optional). If not present no secondary dataset is collected
+  -rc &lt;value&gt; | --rowIDPosition &lt;value&gt;
+        Column number (0 based Int) containing the row ID string (optional).
+        Default: 0
+  -ic &lt;value&gt; | --itemIDPosition &lt;value&gt;
+        Column number (0 based Int) containing the item ID string (optional).
+        Default: 1
+  -fc &lt;value&gt; | --filterPosition &lt;value&gt;
+        Column number (0 based Int) containing the filter string (optional).
+        Default: -1 for no filter
+
+Using all defaults the input is expected of the form: "userID&lt;tab&gt;itemID" or "userID&lt;tab&gt;itemID&lt;tab&gt;any-text..." and all rows will be used
+
+File discovery options:
+  -r | --recursive
+        Searched the -i path recursively for files that match --filenamePattern
+        (optional), default: false
+  -fp &lt;value&gt; | --filenamePattern &lt;value&gt;
+        Regex to match in determining input files (optional). Default: filename
+        in the --input option or "^part-.*" if --input is a directory
+
+Output text file schema options:
+  -rd &lt;value&gt; | --rowKeyDelim &lt;value&gt;
+        Separates the rowID key from the vector values list (optional).
+        Default: "\t"
+  -cd &lt;value&gt; | --columnIdStrengthDelim &lt;value&gt;
+        Separates column IDs from their values in the vector values list (optional).
+        Default: ":"
+  -td &lt;value&gt; | --elementDelim &lt;value&gt;
+        Separates vector element values in the values list (optional). Default: " "
+  -os | --omitStrength
+        Do not write the strength to the output files (optional), Default: false.
+        This option is used to output indexable data for creating a search engine
+        recommender.
+
+Default delimiters will produce output of the form: "itemID1&lt;tab&gt;itemID2:value2&lt;space&gt;itemID10:value10..."
+
+Spark config options:
+  -ma &lt;value&gt; | --master &lt;value&gt;
+        Spark Master URL (optional). Default: "local". Note that you can specify
+        the number of cores to get a performance improvement, for example "local[4]"
+  -sem &lt;value&gt; | --sparkExecutorMem &lt;value&gt;
+        Max Java heap available as "executor memory" on each node (optional).
+        Default: 4g
+
+General config options:
+  -rs &lt;value&gt; | --randomSeed &lt;value&gt;
+
+-h | --help
+        prints this usage text</code></p>
+<p>This looks daunting, but the defaults are fairly sane, so the job takes exactly the same input as the legacy code while remaining quite flexible. It allows the user to point to a single text file, a directory full of files, or a tree of directories to be traversed recursively. The files included can be specified with either a regex-style pattern or a filename. The schema for the file is defined by column numbers, which map to the important bits of data including IDs and values. The files can even contain filters, which allow unneeded rows to be discarded or used for cross-cooccurrence calculations.</p>
+<p>See ItemSimilarityDriver.scala in Mahout's spark module if you want to customize the code. </p>
+<h3 id="defaults-in-the-spark-itemsimilarity-cli">Defaults in the <em>spark-itemsimilarity</em> CLI</h3>
+<p>If all defaults are used, the input can be as simple as:</p>
+<p><code>userID1,itemID1
+userID2,itemID2
+...</code></p>
+<p>With the command line:</p>
+<p><code>bash$ mahout spark-itemsimilarity --input in-file --output out-dir</code></p>
+<p>This will use the "local" Spark context and will output the standard text version of a DRM:</p>
+<p><code>itemID1&lt;tab&gt;itemID2:value2&lt;space&gt;itemID10:value10...</code></p>
+<h3 id="more-complex-input">More Complex Input</h3>
+<p>For input of the form:</p>
+<p><code>u1,purchase,iphone
+u1,purchase,ipad
+u2,purchase,nexus
+u2,purchase,galaxy
+u3,purchase,surface
+u4,purchase,iphone
+u4,purchase,galaxy
+u1,view,iphone
+u1,view,ipad
+u1,view,nexus
+u1,view,galaxy
+u2,view,iphone
+u2,view,ipad
+u2,view,nexus
+u2,view,galaxy
+u3,view,surface
+u3,view,nexus
+u4,view,iphone
+u4,view,ipad
+u4,view,galaxy</code></p>
+<h3 id="command-line">Command Line</h3>
+<p>The following options can be used:</p>
+<p><code>bash$ mahout spark-itemsimilarity \
+    --input in-file \     # where to look for data
+    --output out-path \   # root dir for output
+    --master masterUrl \  # URL of the Spark master server
+    --filter1 purchase \  # word that flags input for the primary action
+    --filter2 view \      # word that flags input for the secondary action
+    --itemIDPosition 2 \  # column that has the item ID
+    --rowIDPosition 0 \   # column that has the user ID
+    --filterPosition 1    # column that has the filter word</code></p>
+<h3 id="output">Output</h3>
+<p>The output of the job will be the standard text version of two Mahout DRMs. This is a case where we are calculating cross-cooccurrence, so a primary indicator matrix and a cross-indicator matrix will be created.</p>
+<p><code>out-path
+  |-- indicator-matrix - TDF part files
+  \-- cross-indicator-matrix - TDF part files</code></p>
+<p>The indicator matrix will contain the lines:</p>
+<p><code>galaxy\tnexus:1.7260924347106847
+ipad\tiphone:1.7260924347106847
+nexus\tgalaxy:1.7260924347106847
+iphone\tipad:1.7260924347106847
+surface</code></p>
+<p>The cross-indicator matrix will contain:</p>
+<p><code>iphone\tnexus:1.7260924347106847 iphone:1.7260924347106847 ipad:1.7260924347106847 galaxy:1.7260924347106847
+ipad\tnexus:0.6795961471815897 iphone:0.6795961471815897 ipad:0.6795961471815897 galaxy:0.6795961471815897
+nexus\tnexus:0.6795961471815897 iphone:0.6795961471815897 ipad:0.6795961471815897 galaxy:0.6795961471815897
+galaxy\tnexus:1.7260924347106847 iphone:1.7260924347106847 ipad:1.7260924347106847 galaxy:1.7260924347106847
+surface\tsurface:4.498681156950466 nexus:0.6795961471815897</code></p>
+<h3 id="log-file-input">Log File Input</h3>
+<p>A common method of storing data is in log files. If they are written using some delimiter, they can be consumed directly by <em>spark-itemsimilarity</em>. For instance, input of the form:</p>
+<p><code>2014-06-23 14:46:53.115\tu1\tpurchase\trandom text\tiphone
+2014-06-23 14:46:53.115\tu1\tpurchase\trandom text\tipad
+2014-06-23 14:46:53.115\tu2\tpurchase\trandom text\tnexus
+2014-06-23 14:46:53.115\tu2\tpurchase\trandom text\tgalaxy
+2014-06-23 14:46:53.115\tu3\tpurchase\trandom text\tsurface
+2014-06-23 14:46:53.115\tu4\tpurchase\trandom text\tiphone
+2014-06-23 14:46:53.115\tu4\tpurchase\trandom text\tgalaxy
+2014-06-23 14:46:53.115\tu1\tview\trandom text\tiphone
+2014-06-23 14:46:53.115\tu1\tview\trandom text\tipad
+2014-06-23 14:46:53.115\tu1\tview\trandom text\tnexus
+2014-06-23 14:46:53.115\tu1\tview\trandom text\tgalaxy
+2014-06-23 14:46:53.115\tu2\tview\trandom text\tiphone
+2014-06-23 14:46:53.115\tu2\tview\trandom text\tipad
+2014-06-23 14:46:53.115\tu2\tview\trandom text\tnexus
+2014-06-23 14:46:53.115\tu2\tview\trandom text\tgalaxy
+2014-06-23 14:46:53.115\tu3\tview\trandom text\tsurface
+2014-06-23 14:46:53.115\tu3\tview\trandom text\tnexus
+2014-06-23 14:46:53.115\tu4\tview\trandom text\tiphone
+2014-06-23 14:46:53.115\tu4\tview\trandom text\tipad
+2014-06-23 14:46:53.115\tu4\tview\trandom text\tgalaxy</code></p>
+<p>can be parsed with the following CLI and run on the cluster, producing the same output as the example above:</p>
+<p><code>bash$ mahout spark-itemsimilarity \
+    --input in-file \
+    --output out-path \
+    --master spark://sparkmaster:4044 \
+    --filter1 purchase \
+    --filter2 view \
+    --inDelim "\t" \
+    --itemIDPosition 4 \
+    --rowIDPosition 1 \
+    --filterPosition 2</code></p>
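+<p>As a sketch of the same multi-modal idea using the documented --input2 option: if the purchase and view events were instead kept in two separate files (each holding one userID/itemID pair per line), the cross-cooccurrence calculation could be driven without filters. The file names below are hypothetical:</p>
+<p><code>bash$ mahout spark-itemsimilarity \
+    --input purchases.tsv \
+    --input2 views.tsv \
+    --output out-path \
+    --master spark://sparkmaster:4044 \
+    --inDelim "\t"</code></p>
+<p>Here --input supplies the primary action and --input2 the secondary action, with the default row and item ID positions (0 and 1) assumed for both files, so out-path will again contain an indicator-matrix and a cross-indicator-matrix.</p>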
+<h2 id="2-spark-rowsimilarity">2. spark-rowsimilarity</h2>
+<p><em>spark-rowsimilarity</em> is the companion to <em>spark-itemsimilarity</em>; the primary difference is that it takes a text file version of a DRM with optional application-specific IDs. The input is in text-delimited form using three delimiters. By default it reads (rowID&lt;tab&gt;columnID1:strength1&lt;space&gt;columnID2:strength2...). Since this job only supports LLR similarity, which does not use the input strengths, they may be omitted from the input. It writes (rowID&lt;tab&gt;rowID1:strength1&lt;space&gt;rowID2:strength2...). The output is sorted by strength, descending. The output can be interpreted as a row ID from the primary input followed by a list of the most similar rows. For a discussion of the output layout and formatting see <em>spark-itemsimilarity</em>.</p>
+<p>One significant output option is --omitStrength. This produces output of the form (rowID&lt;tab&gt;rowID1&lt;space&gt;rowID2&lt;space&gt;...): a tab-delimited file containing a rowID token followed by a space-delimited string of similar-row tokens. It can be directly indexed by search engines to create an item-based recommender.</p>
+<p>The command line interface is:</p>
+<p><code>spark-rowsimilarity Mahout 1.0-SNAPSHOT
+Usage: spark-rowsimilarity [options]
+
+Input, output options
+  -i &lt;value&gt; | --input &lt;value&gt;
+        Input path, may be a filename, directory name, or comma delimited list
+        of HDFS supported URIs (required)
+  -i2 &lt;value&gt; | --input2 &lt;value&gt;
+        Secondary input path for cross-similarity calculation, same restrictions
+        as "--input" (optional). Default: empty.
+  -o &lt;value&gt; | --output &lt;value&gt;
+        Path for output, any local or HDFS supported URI (required)
+
+Algorithm control options:
+  -mo &lt;value&gt; | --maxObservations &lt;value&gt;
+        Max number of observations to consider per row (optional). Default: 500
+  -m &lt;value&gt; | --maxSimilaritiesPerRow &lt;value&gt;
+        Limit the number of similarities per item to this number (optional).
+        Default: 100
+
+Note: Only the Log Likelihood Ratio (LLR) is supported as a similarity measure.
+
+Output text file schema options:
+  -rd &lt;value&gt; | --rowKeyDelim &lt;value&gt;
+        Separates the rowID key from the vector values list (optional).
+        Default: "\t"
+  -cd &lt;value&gt; | --columnIdStrengthDelim &lt;value&gt;
+        Separates column IDs from their values in the vector values list
+        (optional). Default: ":"
+  -td &lt;value&gt; | --elementDelim &lt;value&gt;
+        Separates vector element values in the values list (optional).
+        Default: " "
+  -os | --omitStrength
+        Do not write the strength to the output files (optional), Default:
+        false.
+        This option is used to output indexable data for creating a search
+        engine recommender.
+
+Default delimiters will produce output of the form: "itemID1&lt;tab&gt;itemID2:value2&lt;space&gt;itemID10:value10..."
+
+File discovery options:
+  -r | --recursive
+        Searched the -i path recursively for files that match
+        --filenamePattern (optional), Default: false
+  -fp &lt;value&gt; | --filenamePattern &lt;value&gt;
+        Regex to match in determining input files (optional). Default:
+        filename in the --input option or "^part-.*" if --input is a directory
+
+Spark config options:
+  -ma &lt;value&gt; | --master &lt;value&gt;
+        Spark Master URL (optional). Default: "local". Note that you can
+        specify the number of cores to get a performance improvement, for
+        example "local[4]"
+  -sem &lt;value&gt; | --sparkExecutorMem &lt;value&gt;
+        Max Java heap available as "executor memory" on each node (optional).
+        Default: 4g
+
+General config options:
+  -rs &lt;value&gt; | --randomSeed &lt;value&gt;
+
+-h | --help
+        prints this usage text</code></p>
+<p>See RowSimilarityDriver.scala in Mahout's spark module if you want to customize the code.</p>
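+<p>As a minimal sketch using only options documented above (the file and directory names are hypothetical), a run that produces search-engine-indexable output might look like:</p>
+<p><code>bash$ mahout spark-rowsimilarity \
+    --input docs-drm.tsv \
+    --output similar-rows \
+    --master local[4] \
+    --omitStrength</code></p>
+<p>With --omitStrength each output line contains only a row ID followed by the space-delimited IDs of its most similar rows, which is the indexable form discussed in the next section.</p>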
+<h1 id="3-creating-a-recommender">3. Creating a Recommender</h1>
+<p>One significant output option for the spark-itemsimilarity job is --omitStrength. This produces a tab-delimited file containing an itemID token followed by a space-delimited string of tokens, of the form:</p>
+<p><code>itemID&lt;tab&gt;itemIDs-from-the-indicator-matrix</code></p>
+<p>To create a cooccurrence-type collaborative filtering recommender using a search engine, simply index this output created with --omitStrength. Then, at runtime, query the indexed data with the current user's history of the primary action against the index field that contains the primary indicator tokens. The result will be an ordered list of itemIDs to use as recommendations.</p>
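+<p>A concrete sketch of this pattern, assuming an Elasticsearch instance on localhost; the index, type, and field names here are made up for illustration, and the indicator data is taken from the example output above:</p>
+<p><code>bash$ # index one item's indicator tokens from the --omitStrength output
+curl -XPUT 'http://localhost:9200/items/item/iphone' -d '{
+  "indicators": "ipad"
+}'
+
+# query with the current user's purchase history to get ranked recommendations
+curl -XPOST 'http://localhost:9200/items/item/_search' -d '{
+  "query": { "match": { "indicators": "iphone galaxy" } }
+}'</code></p>
+<p>The search engine's relevance ranking over the indicator field does the recommender's scoring, so items whose indicators overlap most with the user's history come back first.</p>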
+<p>It is possible to include the indicator strengths by attaching them to the tokens before indexing, but that is engine specific and beyond this description. Using the indicators without weights generally provides good results, since they have already been downsampled by strength, so the indicator matrix has some degree of quality guarantee.</p>
+<h2 id="multi-action-recommendations">Multi-action Recommendations</h2>
+<p>Optionally the query can contain the user's history of a secondary action (input with --input2) against the cross-indicator tokens as a second field.</p>
+<p>In this case the indicator-matrix and the cross-indicator-matrix should be combined and indexed as two fields. The data will be of the form:</p>
+<p><code>itemID, itemIDs-from-indicator-matrix, itemIDs-from-cross-indicator-matrix</code></p>
+<p>Now the query will have one string of the user's primary action history and a second of the user's secondary action history against two fields in the index.</p>
+<p>It is probably better to index the two (or more) fields as multi-valued fields (arrays) and query them as such, but the above works in much the same way if the indexed tokens are space delimited, as is the query string.</p>
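+<p>Continuing the Elasticsearch sketch above (the field names are again made up), a two-field query could score the user's purchase history against the indicator field and their view history against the cross-indicator field:</p>
+<p><code>bash$ curl -XPOST 'http://localhost:9200/items/item/_search' -d '{
+  "query": {
+    "bool": {
+      "should": [
+        { "match": { "indicators": "iphone galaxy" } },
+        { "match": { "cross_indicators": "nexus ipad galaxy" } }
+      ]
+    }
+  }
+}'</code></p>
+<p>Both clauses contribute to the relevance score, so items supported by either action rank higher, with the primary action typically weighted more heavily if the engine supports per-field boosts.</p>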
+<p><strong>Note:</strong> Using the underlying code, it is possible to use as many actions as you have data for, creating a multi-action recommender that makes the most of the available data. The CLI only supports two actions.</p>
+<h1 id="4-using-spark-rowsimilarity-with-text-data">4. Using <em>spark-rowsimilarity</em> with Text Data</h1>
+<p>Another use case for these jobs is finding similar textual content. For instance, given the content of a blog post, find which other posts are similar to it. In this case the columns are tokenized words and the rows are documents. Since LLR is being used, there is no need to attach TF-IDF weights to the tokens&mdash;they will not be used. The Apache <a href="http://lucene.apache.org">Lucene</a> project provides several methods of <a href="http://lucene.apache.org/core/4_9_0/core/org/apache/lucene/analysis/package-summary.html#package_description">analyzing and tokenizing</a> documents.</p>
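+<p>A minimal sketch of preparing such input without Lucene (a crude lowercase/whitespace tokenizer; the file and directory names are hypothetical), followed by the row-similarity run:</p>
+<p><code>bash$ for f in blog-posts/*.txt; do
+    id=$(basename "$f" .txt)
+    terms=$(tr '[:upper:]' '[:lower:]' &lt; "$f" | tr -cs '[:alnum:]' ' ')
+    printf '%s\t%s\n' "$id" "$terms"
+done &gt; doc-term-input.tsv
+
+bash$ mahout spark-rowsimilarity --input doc-term-input.tsv --output similar-posts</code></p>
+<p>Each line of doc-term-input.tsv is a document ID followed by its space-delimited tokens (strengths omitted, which LLR ignores anyway), and similar-posts will contain, for each document, the IDs of the documents most similar to it.</p>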