Posted to commits@mahout.apache.org by pa...@apache.org on 2014/10/01 18:52:50 UTC

svn commit: r1628770 - /mahout/site/mahout_cms/trunk/content/users/recommender/intro-cooccurrence-spark.mdtext

Author: pat
Date: Wed Oct  1 16:52:49 2014
New Revision: 1628770

URL: http://svn.apache.org/r1628770
Log:
Added more description for integrating a search engine

Modified:
    mahout/site/mahout_cms/trunk/content/users/recommender/intro-cooccurrence-spark.mdtext

Modified: mahout/site/mahout_cms/trunk/content/users/recommender/intro-cooccurrence-spark.mdtext
URL: http://svn.apache.org/viewvc/mahout/site/mahout_cms/trunk/content/users/recommender/intro-cooccurrence-spark.mdtext?rev=1628770&r1=1628769&r2=1628770&view=diff
==============================================================================
--- mahout/site/mahout_cms/trunk/content/users/recommender/intro-cooccurrence-spark.mdtext (original)
+++ mahout/site/mahout_cms/trunk/content/users/recommender/intro-cooccurrence-spark.mdtext Wed Oct  1 16:52:49 2014
@@ -108,7 +108,7 @@ This will use the "local" Spark context 
 
     itemID1<tab>itemID2:value2<space>itemID10:value10...
 
-###How to use Multiple User Actions
+###<a name="multiple-actions">How to use Multiple User Actions</a>
 
 Often we record various actions the user takes for later analytics. These can now be used to make recommendations. 
 The idea of a recommender is to recommend the action you want the user to make. For an ecom app this might be 
@@ -288,45 +288,82 @@ The command line interface is:
 
 See RowSimilarityDriver.scala in Mahout's spark module if you want to customize the code. 
 
-#3. Creating a Recommender
+#3. Using *spark-rowsimilarity* with Text Data
 
-One significant output option for the spark-itemsimilarity job is --omitStrength. This creates a tab-delimited file containing a itemID token followed by a space delimited string of tokens of the form:
+Another use case for *spark-rowsimilarity* is finding similar textual content. For instance, given the content of a blog post, which other posts are similar to it? In this case the columns are terms and the rows are documents. Since LLR is the only similarity method supported, this is not the optimal way to determine document similarity; LLR is used more as a quality-of-similarity filter than as a similarity measure. However *spark-rowsimilarity* will produce lists of similar docs for every doc. The Apache [Lucene](http://lucene.apache.org) project provides several methods of [analyzing and tokenizing](http://lucene.apache.org/core/4_9_0/core/org/apache/lucene/analysis/package-summary.html#package_description) documents.
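The LLR filter itself is easy to sketch. The following mirrors the log-likelihood ratio formulation used in Mahout (Dunning's G² test), applied to the 2x2 cooccurrence counts of two items or terms; treat it as an illustration, not the production code path:

```python
from math import log

def x_log_x(x):
    # Convention: 0 * log(0) == 0
    return x * log(x) if x > 0 else 0.0

def entropy(*counts):
    # "Unnormalized" entropy used by the G^2 statistic
    return x_log_x(sum(counts)) - sum(x_log_x(c) for c in counts)

def llr(k11, k12, k21, k22):
    """Log-likelihood ratio for a 2x2 contingency table:
    k11 = both events cooccur, k12/k21 = one event only, k22 = neither."""
    row_entropy = entropy(k11 + k12, k21 + k22)
    col_entropy = entropy(k11 + k21, k12 + k22)
    mat_entropy = entropy(k11, k12, k21, k22)
    if row_entropy + col_entropy < mat_entropy:
        return 0.0  # guard against rounding producing a tiny negative value
    return 2.0 * (row_entropy + col_entropy - mat_entropy)

llr(10, 10, 10, 10)   # independent events score ~0
llr(100, 1, 1, 100)   # strong cooccurrence scores high
```

A low score means the cooccurrence is explainable by chance, which is why low-scoring pairs can be filtered out before the strengths are discarded with --omitStrength.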
 
-    itemID<tab>itemsIDs-from-the-indicator-matrix
+#4. Creating a Unified Recommender
 
+Using the output of *spark-itemsimilarity* and *spark-rowsimilarity* you can build a unified cooccurrence and content-based recommender that can be used in either or both modes, depending on the indicators available and the user history available at runtime.
 
-To create a cooccurrence type collaborative filtering recommender using a search engine simply index this output created with --omitStrength. Then at runtime query the indexed data with the current user's history of the primary action on the index field that contains the primary indicator tokens. The result will be an ordered list of itemIDs as recommendations.
+##Requirements
 
-It is possible to include the indicator strengths by attaching them to the tokens before indexing but that is engine specific and beyond this description. Using without weights generally provides good results since the indicators have been downsampled by strength so the indicator matrix has some degree of quality guarantee. 
+1. Mahout SNAPSHOT-1.0 or later
+2. Hadoop
+3. Spark, the correct version for your version of Mahout and Hadoop
+4. A search engine like Solr or Elasticsearch
 
-##Multi-action Recommendations  
+##Example with 3 Indicators
 
-Optionally the query can contain the user's history of a secondary action (input with --input2) against the cross-indicator tokens as a second field.
+You will need to decide how to store user action data so it can be processed by the item and row similarity jobs; this is most easily done with text files as described above. The data processed by these jobs is the **training data**. You will also need some amount of user history in your recs query. It is typical to use the most recent user history, which need not match exactly what is in the training set (the training set may include more historical data). Keeping user history for query purposes could be done with a database, referencing recent history from a users table. In the example above the two collaborative filtering actions are "purchase" and "view", but let's also add tags (taken from catalog categories or other descriptive metadata). 
 
-In this case the indicator-matrix and the cross-indicator-matrix should be combined and indexed as two fields. The data will be of the form:
+We will need to create one indicator from the primary action (purchase), one cross-indicator from the secondary action (view), and one content-indicator from the tags. We'll have to run *spark-itemsimilarity* once and *spark-rowsimilarity* once.
 
+We have described how to create the indicator and cross-indicator for purchase and view (see the [How to use Multiple User 
+Actions](#multiple-actions) section), but tags require a slightly different process. We want to use the fact that 
+certain items have tags similar to the ones associated with a user's purchases. This is not a collaborative filtering 
+indicator but rather a "content" or "metadata" type indicator, since it uses no other users' behavior, only the tags on 
+the items the individual user has purchased. This means the method can make recommendations for items that have 
+no collaborative filtering data at all, as happens with new items in a catalog. New items may have tags assigned but no one 
+has purchased or viewed them yet. 
 
-    itemID, itemIDs-from-indicator-matrix, itemIDs-from-cross-indicator-matrix
+We could have treated tag views as a collaborative filtering cross-indicator by recording other users' tag-viewing history, and that would probably give better results, but here we are trying to illustrate recommending without CF data by using content-indicators. In the final query we will mix all 3 indicators.
 
+##Content Indicator
 
-Now the query will have one string of the user's primary action history and a second of the user's secondary action history against two fields in the index.
+To create a content-indicator we'll make use of the fact that the user has purchased items with certain tags. We want to find items with the most similar tags. Notice that other users' behavior is not considered, only other items' tags. This defines a content or metadata indicator. Such indicators are used when you want to find items that are similar to other items by their content or metadata, not by which users interacted with them.
 
-It is probably better to index the two (or more) fields as multi-valued fields (arrays) and query them as such but the above works in much the same way if the indexed tokens are space delimited as is the query string. 
+For this we need input of the form:
 
-**Note:** Using the underlying code it is possible to use as many actions as you have data for to create a multi-action recommender that makes the most of available data. The CLI only supports two actions.
-
-#4. Using *spark-rowsimilarity* with Text Data
-
-Another use case for these jobs is in finding similar textual content. For instance given the content of a blog post, which other posts are similar. In this case the columns are tokenized words and the rows are documents. Since LLR is being used there is no need to attach TF-IDF weights to the tokens&mdash;they will not be used. The Apache [Lucene](http://lucene.apache.org) project provides several methods of [analyzing and tokenizing](http://lucene.apache.org/core/4_9_0/core/org/apache/lucene/analysis/package-summary.html#package_description) documents.
+    itemID<tab>list-of-tags
+    ...
 
+The full collection will look like the tags column from a catalog DB. For our ecom example it might be:
 
+    3459860b<tab>men long-sleeve chambray clothing casual
+    9446577d<tab>women tops chambray clothing casual
+    ...
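Producing this file from a catalog DB dump takes only a few lines. A minimal sketch, assuming the tags are available in a Python dict (the dict and file name are hypothetical):

```python
# Hypothetical in-memory view of the catalog's tags column
catalog_tags = {
    "3459860b": ["men", "long-sleeve", "chambray", "clothing", "casual"],
    "9446577d": ["women", "tops", "chambray", "clothing", "casual"],
}

# Write one itemID<tab>space-delimited-tags line per item
with open("item-tags.tsv", "w") as f:
    for item_id, tags in sorted(catalog_tags.items()):
        f.write(item_id + "\t" + " ".join(tags) + "\n")
```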
 
+We'll use *spark-rowsimilarity* because we are looking for similar rows, which encode items in this case. As with the indicator and cross-indicator we use the --omitStrength option. The strengths created are probabilistic log-likelihood ratios and are used only to filter out unimportant similarities; once the filtering and downsampling are finished we no longer need the strengths. We will get an indicator matrix of the form:
 
+    itemID<tab>list-of-item IDs
+    ...
 
+This is a content indicator since it has found other items with similar content or metadata.
 
+    3459860b<tab>3459860b 3459860b 6749860c 5959860a 3434860a 3477860a
+    9446577d<tab>9446577d 9496577d 0943577d 8346577d 9442277d 9446577e
+    ...  
+    
+We now have three indicators, two of collaborative filtering type and one of content type. Notice that purchase, view, and tags can all be recorded for users and so can all be used in a recommendations query.
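Before indexing, the three indicator files can be merged into one search-engine document per item, one field per indicator. A sketch, assuming each file is the tab-delimited --omitStrength output shown above (file names and field names here are illustrative):

```python
def read_indicator(path):
    # Each line: itemID<tab>space-delimited similar-item IDs
    out = {}
    with open(path) as f:
        for line in f:
            if not line.strip():
                continue
            item_id, _, ids = line.rstrip("\n").partition("\t")
            out[item_id] = ids
    return out

def build_docs(purchase_path, view_path, tags_path):
    purchase = read_indicator(purchase_path)
    view = read_indicator(view_path)
    tags = read_indicator(tags_path)
    docs = []
    # One document per item, with one field per indicator; an item
    # missing from an indicator file simply gets an empty field.
    for item_id in sorted(set(purchase) | set(view) | set(tags)):
        docs.append({
            "id": item_id,
            "purchase": purchase.get(item_id, ""),
            "view": view.get(item_id, ""),
            "tags": tags.get(item_id, ""),
        })
    return docs
```

These documents can then be fed to your search engine's bulk-indexing API.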
 
+##Unified Recommender Query
 
+The actual form of the query for recommendations will vary depending on your search engine but the intent is the same. For a given user, map their history of each action or content type to the correct indicator field and perform an OR'd query across the fields. This allows matches from any indicator, whereas an AND query would require that an item have some similarity to all indicator fields.
 
+We have 3 indicators, which the search engine indexes into 3 fields; we'll call them "purchase", "view", and "tags". We take the user's history that corresponds to each indicator and create a query of the form:
 
+    Query:
+      field: purchase; q:user's-purchase-history
+      field: view; q:user's view-history
+      field: tags; q:user's-tags-associated-with-purchases
+      
+The query will result in an ordered list of items recommended for purchase, skewed towards items whose tags are similar to those on items the user has already purchased. 
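As a concrete example, with Elasticsearch the three-field query above maps onto a bool query whose should clauses are OR'd by default; a sketch (the helper function and history values are hypothetical):

```python
def recs_query(purchase_history, view_history, tag_history, size=10):
    # One "should" clause per indicator field: a match on any field
    # contributes to the score, so this is an OR across indicators.
    return {
        "size": size,
        "query": {
            "bool": {
                "should": [
                    {"match": {"purchase": purchase_history}},
                    {"match": {"view": view_history}},
                    {"match": {"tags": tag_history}},
                ]
            }
        },
    }

q = recs_query("3459860b 9446577d", "3434860a 3459860b", "chambray casual")
```

The resulting dict would be sent as the JSON body of a search request against the indicator index.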
 
+This is only an example and not necessarily the optimal way to create recs; it illustrates how business decisions can be translated into a recommendations query. The technique can also be used to skew recommendations towards intrinsic indicators. For instance you may want to put personalized popular-item recs in a special place in the UI: create a popularity indicator using whatever method you want, index it as a new indicator field, and include the corresponding value in a query on the popularity field. 
 
+##Notes
+1. Use as much user action history as you can gather. Choose a primary action that is closest to what you want to recommend and the others will be used to create cross-indicators. Using more data in this fashion will almost always produce better recommendations.
+2. Content indicators can be used where there is no recorded user behavior or when items change too quickly to accumulate much interaction history. They can be used alone or mixed with other indicators.
+3. Most search engines support "boost" factors so you can favor one or more indicators. In the example query, if you want tags to only have a small effect you could boost the CF indicators.
+4. In the examples we have used space delimited strings for lists of IDs in indicators and in queries. It may be better to use arrays of strings if your storage system and search engine support them. For instance Solr allows multi-valued fields, which correspond to arrays.
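On the last point: with Elasticsearch, for instance, no special mapping is needed because a JSON array of strings is automatically treated as a multi-valued field. A document per item might then look like this (IDs and tags taken from the earlier examples, the similar-item lists are illustrative):

```python
# One indexed document per item; each indicator field holds an array of
# IDs or tags rather than a single space-delimited string.
doc = {
    "id": "3459860b",
    "purchase": ["6749860c", "5959860a", "3434860a"],
    "view": ["3477860a"],
    "tags": ["men", "long-sleeve", "chambray", "clothing", "casual"],
}
```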