Posted to commits@mahout.apache.org by gi...@apache.org on 2017/11/30 00:23:42 UTC

[4/4] mahout git commit: Automatic Site Publish by Buildbot

Automatic Site Publish by Buildbot


Project: http://git-wip-us.apache.org/repos/asf/mahout/repo
Commit: http://git-wip-us.apache.org/repos/asf/mahout/commit/d9686c8b
Tree: http://git-wip-us.apache.org/repos/asf/mahout/tree/d9686c8b
Diff: http://git-wip-us.apache.org/repos/asf/mahout/diff/d9686c8b

Branch: refs/heads/asf-site
Commit: d9686c8ba68c9126f93a0dfab0fe16e877fb493b
Parents: f496120
Author: jenkins <bu...@apache.org>
Authored: Thu Nov 30 00:23:36 2017 +0000
Committer: jenkins <bu...@apache.org>
Committed: Thu Nov 30 00:23:36 2017 +0000

----------------------------------------------------------------------
 developers/buildingmahout.html                  |  82 +++----
 developers/github.html                          |  54 ++---
 developers/githubPRs.html                       |  16 +-
 developers/how-to-release.html                  |  18 +-
 developers/how-to-update-the-website.html       |   4 +-
 docs/latest/sitemap.xml                         |   2 +-
 general/downloads.html                          |  18 +-
 general/mahout-wiki.html                        |   4 +-
 sitemap.xml                                     |   6 +-
 users/algorithms/d-als.html                     |  14 +-
 users/algorithms/d-qr.html                      |  14 +-
 users/algorithms/d-spca.html                    |  54 ++---
 users/algorithms/d-ssvd.html                    |  60 ++---
 users/algorithms/intro-cooccurrence-spark.html  |  72 +++---
 users/algorithms/spark-naive-bayes.html         |  66 +++---
 users/basics/collocations.html                  |  28 +--
 users/basics/creating-vectors-from-text.html    |  32 +--
 users/basics/quickstart.html                    |   4 +-
 users/classification/bayesian-commandline.html  |   8 +-
 users/classification/bayesian.html              |  64 ++---
 users/classification/breiman-example.html       |  16 +-
 users/classification/class-discovery.html       |   8 +-
 users/classification/hidden-markov-models.html  |  20 +-
 users/classification/mlp.html                   |  44 ++--
 .../classification/partial-implementation.html  |  12 +-
 users/classification/twenty-newsgroups.html     |  52 ++---
 .../wikipedia-classifier-example.html           |  18 +-
 users/clustering/canopy-clustering.html         |   4 +-
 users/clustering/canopy-commandline.html        |   8 +-
 users/clustering/cluster-dumper.html            |  12 +-
 .../clustering-of-synthetic-control-data.html   |  12 +-
 users/clustering/clusteringyourdata.html        |   8 +-
 users/clustering/fuzzy-k-means-commandline.html |   8 +-
 users/clustering/fuzzy-k-means.html             |   4 +-
 users/clustering/k-means-clustering.html        |   8 +-
 users/clustering/k-means-commandline.html       |   8 +-
 .../clustering/latent-dirichlet-allocation.html |   8 +-
 users/clustering/lda-commandline.html           |   8 +-
 users/clustering/spectral-clustering.html       |  34 +--
 users/clustering/streaming-k-means.html         |  48 ++--
 users/clustering/viewing-results.html           |   8 +-
 .../clustering/visualizing-sample-clusters.html |   4 +-
 users/dim-reduction/ssvd.html                   |  52 ++---
 .../classify-a-doc-from-the-shell.html          |  88 +++----
 users/environment/h2o-internals.html            |   8 +-
 users/environment/how-to-build-an-app.html      |  78 +++----
 users/environment/in-core-reference.html        | 234 +++++++++----------
 users/environment/out-of-core-reference.html    | 192 +++++++--------
 .../playing-with-samsara-flink.html             |  48 ++--
 .../misc/parallel-frequent-pattern-mining.html  |   4 +-
 .../using-mahout-with-python-via-jpype.html     |  16 +-
 users/recommender/intro-als-hadoop.html         |   8 +-
 users/recommender/intro-cooccurrence-spark.html |  72 +++---
 users/recommender/intro-itembased-hadoop.html   |   4 +-
 users/recommender/matrix-factorization.html     |  52 ++---
 .../recommender/recommender-documentation.html  |  36 +--
 users/sparkbindings/faq.html                    |   6 +-
 users/sparkbindings/home.html                   |   6 +-
 users/sparkbindings/play-with-shell.html        |  68 +++---
 59 files changed, 972 insertions(+), 972 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/developers/buildingmahout.html
----------------------------------------------------------------------
diff --git a/developers/buildingmahout.html b/developers/buildingmahout.html
index f7df86c..e47030a 100644
--- a/developers/buildingmahout.html
+++ b/developers/buildingmahout.html
@@ -286,10 +286,10 @@
 <p>Check out the sources from the <a href="https://github.com/apache/mahout">Mahout GitHub repository</a>
 either via</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone git@github.com:apache/mahout.git or
+<pre><code>git clone git@github.com:apache/mahout.git or
  
 git clone https://github.com/apache/mahout.git
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="building-from-source">Building From Source</h2>
 
@@ -320,49 +320,49 @@ Choose package type: Pre-Built for Hadoop 2.4</p>
 <p>Install ViennaCL 1.7.0+.
 If running Ubuntu 16.04+:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt-get install libviennacl-dev
-</code></pre></div></div>
+<pre><code>sudo apt-get install libviennacl-dev
+</code></pre>
 
 <p>Otherwise, if your distribution’s package manager does not have a viennacl-dev package &gt;1.7.0, clone it directly into a directory that will be included when Mahout is compiled:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mkdir ~/tmp
+<pre><code>mkdir ~/tmp
 cd ~/tmp &amp;&amp; git clone https://github.com/viennacl/viennacl-dev.git
 cp -r viennacl-dev/viennacl/ /usr/local/
 cp -r viennacl-dev/CL/ /usr/local/
-</code></pre></div></div>
+</code></pre>
 
 <p>Ensure that the OpenCL 1.2+ drivers are installed (packaged with most consumer-grade NVIDIA drivers). Support on higher-end cards has not been verified.</p>
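
As a quick sanity check that the drivers actually expose an OpenCL device, and assuming the clinfo utility is available in your package manager, one can run:

    # Install the OpenCL inspection tool and list visible platforms/devices
    sudo apt-get install clinfo
    clinfo | grep -E 'Platform Name|Device Name'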
 
-<p>Clone the mahout repository into <code class="highlighter-rouge">~/apache</code>.</p>
+<p>Clone the mahout repository into <code>~/apache</code>.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/apache/mahout.git
-</code></pre></div></div>
+<pre><code>git clone https://github.com/apache/mahout.git
+</code></pre>
 
 <h6 id="configuration">Configuration</h6>
 
 <p>When building Mahout for a Spark backend, we need four environment variables set:</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>    export MAHOUT_HOME=/home/&lt;user&gt;/apache/mahout
+<pre><code>    export MAHOUT_HOME=/home/&lt;user&gt;/apache/mahout
     export HADOOP_HOME=/home/&lt;user&gt;/apache/hadoop-2.4.1
     export SPARK_HOME=/home/&lt;user&gt;/apache/spark-1.6.3-bin-hadoop2.4    
     export JAVA_HOME=/home/&lt;user&gt;/java/jdk-1.8.121
-</code></pre></div></div>
+</code></pre>
 
 <p>Mahout on Spark regularly uses one more environment variable: the IP address of the Spark cluster’s master node (usually the node one is logged into).</p>
 
 <p>To use 4 local cores (Spark master need not be running)</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export MASTER=local[4]
-</code></pre></div></div>
+<pre><code>export MASTER=local[4]
+</code></pre>
 <p>To use all available local cores (again, Spark master need not be running)</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export MASTER=local[*]
-</code></pre></div></div>
+<pre><code>export MASTER=local[*]
+</code></pre>
 <p>To point to a cluster with spark running:</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export MASTER=spark://master.ip.address:7077
-</code></pre></div></div>
+<pre><code>export MASTER=spark://master.ip.address:7077
+</code></pre>
 
 <p>We then add these to the path:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>   PATH=$PATH$:MAHOUT_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$JAVA_HOME/bin
-</code></pre></div></div>
+<pre><code>   PATH=$PATH:$MAHOUT_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$JAVA_HOME/bin
+</code></pre>
 
 <p>These should be added to your ~/.bashrc file.</p>
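
Putting the pieces above together, a consolidated ~/.bashrc fragment might look like the following sketch (all paths are illustrative placeholders for your own installation locations):

    # Mahout/Spark build environment -- adjust paths to your installation
    export MAHOUT_HOME=/home/<user>/apache/mahout
    export HADOOP_HOME=/home/<user>/apache/hadoop-2.4.1
    export SPARK_HOME=/home/<user>/apache/spark-1.6.3-bin-hadoop2.4
    export JAVA_HOME=/home/<user>/java/jdk-1.8.121
    export MASTER=local[*]   # or spark://master.ip.address:7077 for a cluster
    export PATH=$PATH:$MAHOUT_HOME/bin:$HADOOP_HOME/bin:$SPARK_HOME/bin:$JAVA_HOME/bin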
 
@@ -371,35 +371,35 @@ cp -r CL/ /usr/local/
 <p>From the $MAHOUT_HOME directory we may issue the commands to build each variant using mvn profiles.</p>
 
 <p>JVM only:</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mvn clean install -DskipTests
-</code></pre></div></div>
+<pre><code>mvn clean install -DskipTests
+</code></pre>
 
 <p>JVM with native OpenMP Level 2 and Level 3 matrix/vector multiplication:</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mvn clean install -Pviennacl-omp -Phadoop2 -DskipTests
-</code></pre></div></div>
+<pre><code>mvn clean install -Pviennacl-omp -Phadoop2 -DskipTests
+</code></pre>
 <p>JVM with native OpenMP and OpenCL for Level 2 and Level 3 matrix/vector multiplication (GPU errors fall back to OpenMP; currently only a single GPU per node is supported):</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mvn clean install -Pviennacl -Phadoop2 -DskipTests
-</code></pre></div></div>
+<pre><code>mvn clean install -Pviennacl -Phadoop2 -DskipTests
+</code></pre>
 
 <h3 id="changing-scala-version">Changing Scala Version</h3>
 
 <p>To change the Scala version it is possible to use profiles; however, the resulting artifacts seem to have trouble being resolved with SBT.</p>
 
-<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mvn clean install <span class="nt">-Pscala-2</span>.11
-</code></pre></div></div>
+<pre><code class="language-bash">mvn clean install -Pscala-2.11
+</code></pre>
 
 <p>Maven is able to resolve the resulting artifacts effectively; this will also work if the goal is simply to use the Mahout shell. However, if the goal is to build with SBT, the following tool should be used:</p>
 
-<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">cd</span> <span class="nv">$MAHOUT_HOME</span>/buildtools
+<pre><code class="language-bash">cd $MAHOUT_HOME/buildtools
 ./change-scala-version.sh 2.11
-</code></pre></div></div>
+</code></pre>
 
-<p>Now go back to <code class="highlighter-rouge">$MAHOUT_HOME</code> and execute</p>
+<p>Now go back to <code>$MAHOUT_HOME</code> and execute</p>
 
-<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mvn clean install <span class="nt">-Pscala-2</span>.11
-</code></pre></div></div>
+<pre><code class="language-bash">mvn clean install -Pscala-2.11
+</code></pre>
 
-<p><strong>NOTE:</strong> you still need to pass the <code class="highlighter-rouge">-Pscala-2.11</code> profile, as this determines and propagates the minor Scala version (e.g. 2.11.8)</p>
+<p><strong>NOTE:</strong> you still need to pass the <code>-Pscala-2.11</code> profile, as this determines and propagates the minor Scala version (e.g. 2.11.8)</p>
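
For clarity, the full SBT-friendly sequence for switching to Scala 2.11 is therefore:

    cd $MAHOUT_HOME/buildtools
    ./change-scala-version.sh 2.11    # rewrites the build for Scala 2.11
    cd $MAHOUT_HOME
    mvn clean install -Pscala-2.11    # the profile still sets the minor Scala version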
 
 <h3 id="the-distribution-profile">The Distribution Profile</h3>
 
@@ -426,21 +426,21 @@ cp -r CL/ /usr/local/
   <li>H2O Scala-2.11</li>
 </ul>
 
-<p>Note: * ViennaCLs are only created if the <code class="highlighter-rouge">viennacl</code> or <code class="highlighter-rouge">viennacl-omp</code> profiles are activated.</p>
+<p>Note: * ViennaCLs are only created if the <code>viennacl</code> or <code>viennacl-omp</code> profiles are activated.</p>
 
-<p>By default, this phase will execute the <code class="highlighter-rouge">package</code> lifecycle goal on all built “extra” variants.</p>
+<p>By default, this phase will execute the <code>package</code> lifecycle goal on all built “extra” variants.</p>
 
 <p>E.g. if you were to run</p>
 
-<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mvn clean install <span class="nt">-Pdistribution</span>
-</code></pre></div></div>
+<pre><code class="language-bash">mvn clean install -Pdistribution
+</code></pre>
 
-<p>You will <code class="highlighter-rouge">install</code> all of the “Default Targets” but only <code class="highlighter-rouge">package</code> the “Also created”.</p>
+<p>You will <code>install</code> all of the “Default Targets” but only <code>package</code> the “Also created”.</p>
 
-<p>If you wish to <code class="highlighter-rouge">install</code> all of the above, you can set the <code class="highlighter-rouge">lifecycle.target</code> switch as follows:</p>
+<p>If you wish to <code>install</code> all of the above, you can set the <code>lifecycle.target</code> switch as follows:</p>
 
-<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code>mvn clean install <span class="nt">-Pdistribution</span> <span class="nt">-Dlifecycle</span>.target<span class="o">=</span>install
-</code></pre></div></div>
+<pre><code class="language-bash">mvn clean install -Pdistribution -Dlifecycle.target=install
+</code></pre>
 
 
    </div>

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/developers/github.html
----------------------------------------------------------------------
diff --git a/developers/github.html b/developers/github.html
index 24fe0e2..0ea9f59 100644
--- a/developers/github.html
+++ b/developers/github.html
@@ -289,29 +289,29 @@ So if you perform “git push origin master” it will go to github.</p>
 
 <p>To attach to the apache git repo do the following:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git remote add apache https://git-wip-us.apache.org/repos/asf/mahout.git
-</code></pre></div></div>
+<pre><code>git remote add apache https://git-wip-us.apache.org/repos/asf/mahout.git
+</code></pre>
 
 <p>To check your remote setup</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git remote -v
-</code></pre></div></div>
+<pre><code>git remote -v
+</code></pre>
 
 <p>you should see something like this:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>origin    https://github.com/your-github-id/mahout.git (fetch)
+<pre><code>origin    https://github.com/your-github-id/mahout.git (fetch)
 origin    https://github.com/your-github-id/mahout.git (push)
 apache    https://git-wip-us.apache.org/repos/asf/mahout.git (fetch)
 apache    https://git-wip-us.apache.org/repos/asf/mahout.git (push)
-</code></pre></div></div>
+</code></pre>
 
 <p>Now if you want to experiment with a branch, everything by default points to your github account, because ‘origin’ is the default remote. You can work as normal using only github until you are ready to merge with the apache remote. Some conventions will integrate with Apache Jira ticket numbers.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git checkout -b mahout-xxxx #xxxx typically is a Jira ticket number
+<pre><code>git checkout -b mahout-xxxx #xxxx typically is a Jira ticket number
 #do some work on the branch
 git commit -a -m "doing some work"
 git push origin mahout-xxxx # notice pushing to **origin** not **apache**
-</code></pre></div></div>
+</code></pre>
 
 <p>Once you are ready to commit to the apache remote you can merge and push directly, or better yet create a PR.</p>
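
Before opening the PR, one common way to make sure the branch still applies cleanly on top of the upstream master is a rebase; a sketch, using the remote and branch names assumed above:

    git fetch apache                         # pick up the latest apache/master
    git rebase apache/master mahout-xxxx     # replay the branch on top of it
    git push -f origin mahout-xxxx           # update the branch backing the PR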
 
@@ -319,9 +319,9 @@ git push origin mahout-xxxx # notice pushing to **origin** not **apache**
 
 <p>Push your branch to Github:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git checkout mahout-xxxx
+<pre><code>git checkout mahout-xxxx
 git push origin mahout-xxxx
-</code></pre></div></div>
+</code></pre>
 
 <p>Go to your mahout-xxxx branch on Github. Since you forked it from Github’s apache/mahout, any PR you open
 will default to targeting apache/master.</p>
@@ -362,45 +362,45 @@ same time, it is recommended to use <strong>squash commits</strong>.</p>
 
 <p>Merging a pull request is equivalent to a “pull” of the contributor’s branch:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git checkout master      # switch to local master branch
+<pre><code>git checkout master      # switch to local master branch
 git pull apache master   # fast-forward to current remote HEAD
 git pull --squash https://github.com/cuser/mahout cbranch  # merge to master 
-</code></pre></div></div>
+</code></pre>
 
 <p>--squash ensures all PR history is squashed into a single commit, and allows the committer to use his/her own
-message. Read git help for merge or pull for more information about the <code class="highlighter-rouge">--squash</code> option. In this example we 
+message. Read git help for merge or pull for more information about the <code>--squash</code> option. In this example we 
 assume that the contributor’s Github handle is “cuser” and the PR branch name is “cbranch”. 
 Next, resolve conflicts, if any, or ask a contributor to rebase on top of master, if PR went out of sync.</p>
 
 <p>If you are ready to merge your own (committer’s) PR you probably only need to merge (not pull), since you have a local copy 
 that you’ve been working on. This is the branch that you used to create the PR.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git checkout master      # switch to local master branch
+<pre><code>git checkout master      # switch to local master branch
 git pull apache master   # fast-forward to current remote HEAD
 git merge --squash mahout-xxxx
-</code></pre></div></div>
+</code></pre>
 
 <p>Remember to run regular patch checks, build with tests enabled, and change CHANGELOG.</p>
 
 <p>If everything is fine, you can now commit the squashed request along the lines of:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git commit --author &lt;contributor_email&gt; -a -m "MAHOUT-XXXX description closes apache/mahout#ZZ"
-</code></pre></div></div>
+<pre><code>git commit --author &lt;contributor_email&gt; -a -m "MAHOUT-XXXX description closes apache/mahout#ZZ"
+</code></pre>
 
-<p>MAHOUT-XXXX is all caps, and <code class="highlighter-rouge">ZZ</code> is the pull request number on the apache/mahout repository. Including 
+<p>MAHOUT-XXXX is all caps, and <code>ZZ</code> is the pull request number on the apache/mahout repository. Including 
 “closes apache/mahout#ZZ” will close the PR automatically. More information is found here [<a href="https://help.github.com/articles/closing-issues-via-commit-messages">3</a>].</p>
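
Putting the steps above together, the committer-side squash flow up to this point looks like the following sketch (remote and branch names as assumed earlier; the final push follows below):

    git checkout master
    git pull apache master                                      # fast-forward to current remote HEAD
    git pull --squash https://github.com/cuser/mahout cbranch   # squash the PR into the working tree
    # resolve conflicts, run the tests, update CHANGELOG, then:
    git commit --author <contributor_email> -a -m "MAHOUT-XXXX description closes apache/mahout#ZZ"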
 
 <p>Next, push to git-wip-us.a.o:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>push apache master
-</code></pre></div></div>
+<pre><code>git push apache master
+</code></pre>
 
 <p>(this will require Apache handle credentials).</p>
 
 <p>The PR, once pushed, will get mirrored to github. To update your github version push there too:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>push origin master
-</code></pre></div></div>
+<pre><code>git push origin master
+</code></pre>
 
 <p><em>Note on squashing: Since squash discards remote branch history, repeated PRs from the same remote branch are 
 difficult to merge. The workflow implies that every new PR starts with a new rebased branch. This is more 
@@ -412,11 +412,11 @@ would warn to begin with. Anyway, watch for dupe PRs (based on same source branc
 <p>When we want to reject a PR (close without committing), we can just issue an empty commit on master’s HEAD 
 <em>without merging the PR</em>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git commit --allow-empty -m "closes apache/mahout#ZZ *Won't fix*"
+<pre><code>git commit --allow-empty -m "closes apache/mahout#ZZ *Won't fix*"
 git push apache master
-</code></pre></div></div>
+</code></pre>
 
-<p>This should close PR <code class="highlighter-rouge">ZZ</code> on the github mirror without merging or making any code modifications in the master repository.</p>
+<p>This should close PR <code>ZZ</code> on the github mirror without merging or making any code modifications in the master repository.</p>
 
 <h2 id="apachegithub-integration-features">Apache/github integration features</h2>
 
@@ -424,8 +424,8 @@ git push apache master
 Mahout issue handles must be in the form MAHOUT-YYYY (all capitals). Usually it makes sense to 
 file a jira issue first, and then create a PR with description</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>MAHOUT-YYYY: &lt;jira-issue-description&gt;
-</code></pre></div></div>
+<pre><code>MAHOUT-YYYY: &lt;jira-issue-description&gt;
+</code></pre>
 
 <p>In this case all subsequent comments will automatically be copied to jira without having to mention 
 the jira issue explicitly in each comment of the PR.</p>

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/developers/githubPRs.html
----------------------------------------------------------------------
diff --git a/developers/githubPRs.html b/developers/githubPRs.html
index 9627917..e083236 100644
--- a/developers/githubPRs.html
+++ b/developers/githubPRs.html
@@ -291,17 +291,17 @@ same time, it is recommended to use <strong>squash commits</strong>.</p>
 
 <p>Read [<a href="https://help.github.com/articles/merging-a-pull-request#merging-locally">2</a>] (merging locally). Merging a pull request is equivalent to merging the contributor’s branch:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git checkout master      # switch to local master branch
+<pre><code>git checkout master      # switch to local master branch
 git pull apache master   # fast-forward to current remote HEAD
 git pull --squash https://github.com/cuser/mahout cbranch  # merge to master 
-</code></pre></div></div>
+</code></pre>
 
 <p>In this example we assume that the contributor’s Github handle is “cuser” and the PR branch name there is “cbranch”. We also 
 assume that the <em>apache</em> remote is configured as</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apache  https://git-wip-us.apache.org/repos/asf/mahout.git (fetch)
+<pre><code>apache  https://git-wip-us.apache.org/repos/asf/mahout.git (fetch)
 apache  https://git-wip-us.apache.org/repos/asf/mahout.git (push)
-</code></pre></div></div>
+</code></pre>
 
 <p>Squash pull ensures all PR history is squashed into a single commit. Also, it is not yet committed, even if 
 fast-forward is possible, so you get a chance to change things before committing.</p>
@@ -312,8 +312,8 @@ fast forward is possible, so you get chance to change things before committing.<
 
 <p>Suppose everything is fine; you can now commit the squashed request:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git commit -a
-</code></pre></div></div>
+<pre><code>git commit -a
+</code></pre>
 
 <p>Edit the message to contain “MAHOUT-YYYY description <strong>closes #ZZ</strong>”, where ZZ is the pull request number. 
 Including “closes #ZZ” will close the PR automatically. More information [<a href="https://help.github.com/articles/closing-issues-via-commit-messages">3</a>].</p>
@@ -332,9 +332,9 @@ would warn to begin with. Anyway, watch for dupe PRs (based on same source branc
 <p>When we want to reject a PR (close without committing), just do the following commit on master’s HEAD 
 <em>without merging the PR</em>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git commit --allow-empty -m "closes #ZZ *Won't fix*"
+<pre><code>git commit --allow-empty -m "closes #ZZ *Won't fix*"
 git push apache master
-</code></pre></div></div>
+</code></pre>
 
 <p>This should close the PR without merging or making any code modifications in the master repository.</p>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/developers/how-to-release.html
----------------------------------------------------------------------
diff --git a/developers/how-to-release.html b/developers/how-to-release.html
index f25809b..2a6322c 100644
--- a/developers/how-to-release.html
+++ b/developers/how-to-release.html
@@ -312,13 +312,13 @@ warnings)</li>
 <p><a name="HowToRelease-Beforebuildingrelease"></a></p>
 <h2 id="before-building-release">Before building release</h2>
 <ol>
-  <li>Check that all tests pass after a clean compile: <code class="highlighter-rouge">mvn clean test</code></li>
+  <li>Check that all tests pass after a clean compile: <code>mvn clean test</code></li>
   <li>Check that there are no remaining unresolved Jira issues with the upcoming version number listed as the “Fix” version</li>
   <li>Publish any previously unpublished third-party dependencies: <a href="thirdparty-dependencies.html">Thirdparty Dependencies</a></li>
   <li>Build and preview resulting artifacts:
-    <div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code> <span class="nb">cd </span>buildtools
+    <pre><code class="language-bash"> cd buildtools
  ./build-all-release-jars.sh
-</code></pre></div>    </div>
+</code></pre>
   </li>
   <li>Make sure packages will come out looking right</li>
 </ol>
@@ -333,10 +333,10 @@ warnings)</li>
   <li>Ensure you have set up standard Apache committer settings in
  ~/.m2/settings.xml as per <a href="http://maven.apache.org/developers/committer-settings.html">this page</a>
 .</li>
-  <li>Add a profile to your <code class="highlighter-rouge">~/.m2/settings.xml</code> in the <code class="highlighter-rouge">&lt;profiles&gt;</code> section with:</li>
+  <li>Add a profile to your <code>~/.m2/settings.xml</code> in the <code>&lt;profiles&gt;</code> section with:</li>
 </ul>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;profiles&gt;
+<pre><code>&lt;profiles&gt;
   &lt;profile&gt;
     &lt;id&gt;mahout_release&lt;/id&gt;
     &lt;properties&gt;
@@ -348,10 +348,10 @@ warnings)</li>
     &lt;/properties&gt;
   &lt;/profile&gt;
 &lt;/profiles&gt;
-</code></pre></div></div>
+</code></pre>
 
 <ul>
-  <li>You may also need to add the following to the <code class="highlighter-rouge">&lt;servers&gt;</code> section in <code class="highlighter-rouge">~/.m2/settings.xml</code> in order to upload artifacts (as the <code class="highlighter-rouge">-Dusername=</code> <code class="highlighter-rouge">-Dpassword=</code> didn’t work for gsingers for 0.8, but this did; n.b. it didn’t work for akm for the 0.13 release):
+  <li>You may also need to add the following to the <code>&lt;servers&gt;</code> section in <code>~/.m2/settings.xml</code> in order to upload artifacts (as the <code>-Dusername=</code> <code>-Dpassword=</code> didn’t work for gsingers for 0.8, but this did; n.b. it didn’t work for akm for the 0.13 release):
 ```</li>
 </ul>
 <server>
@@ -359,7 +359,7 @@ warnings)</li>
   <username>USERNAME</username>
   <password>PASSWORD</password>
 </server>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>
+<pre><code>
 * *Clarify which env var is better or choose one* Set environment variable `MAVEN_OPTS` to `-Xmx1024m` to ensure the tests can run: `export JAVA_OPTIONS="-Xmx1g"`
 * If you are outside the US, then svn.apache.org may not resolve to the main US-based Subversion servers. (Compare the IP address you get for svn.apache.org with svn.us.apache.org to see if they are different.) This will cause problems during the release since it will create a revision and then immediately access, but, there is a replication lag of perhaps a minute to the non-US servers. To temporarily force using the US-based server, edit your equivalent of /etc/hosts and map the IP address of svn.us.apache.org to svn.apache.org.
 * Create the release candidate: `mvn -Pmahout-release,apache-release release:prepare release:perform`
@@ -378,7 +378,7 @@ release:perform target
 * Call a VOTE on dev@mahout.apache.org.  Votes require 3 days before passing.  See Apache [release policy|http://www.apache.org/foundation/voting.html#ReleaseVotes] for more info.
 * If there's a problem, you need to unwind the release and start all over.
 
-</code></pre></div></div>
+</code></pre>
 <p>mvn -Pmahout-release,apache-release versions:set -DnewVersion=PREVIOUS_SNAPSHOT
 mvn -Pmahout-release,apache-release versions:commit
 git commit 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/developers/how-to-update-the-website.html
----------------------------------------------------------------------
diff --git a/developers/how-to-update-the-website.html b/developers/how-to-update-the-website.html
index 42c38f0..d316f06 100644
--- a/developers/how-to-update-the-website.html
+++ b/developers/how-to-update-the-website.html
@@ -276,11 +276,11 @@
 
 <p>Website updates are handled by updating code in the trunk.</p>
 
-<p>You will find markdown pages in <code class="highlighter-rouge">mahout/website</code>.</p>
+<p>You will find markdown pages in <code>mahout/website</code>.</p>
 
 <p>Jenkins rebuilds and publishes the website whenever a change is detected in master.</p>
 
-<p><code class="highlighter-rouge">mahout/website/build_site.sh</code> contains the script that is used to do this.</p>
+<p><code>mahout/website/build_site.sh</code> contains the script that is used to do this.</p>
 
    </div>
   </div>     

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/docs/latest/sitemap.xml
----------------------------------------------------------------------
diff --git a/docs/latest/sitemap.xml b/docs/latest/sitemap.xml
index 5d9c772..35ce323 100644
--- a/docs/latest/sitemap.xml
+++ b/docs/latest/sitemap.xml
@@ -299,6 +299,6 @@
 </url>
 <url>
 <loc>/distributed/spark-bindings/ScalaSparkBindings.pdf</loc>
-<lastmod>2017-11-30T00:12:37+00:00</lastmod>
+<lastmod>2017-11-30T00:23:17+00:00</lastmod>
 </url>
 </urlset>

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/general/downloads.html
----------------------------------------------------------------------
diff --git a/general/downloads.html b/general/downloads.html
index 4a29fe9..0a59a9e 100644
--- a/general/downloads.html
+++ b/general/downloads.html
@@ -286,19 +286,19 @@ the Apache mirrors. The latest Mahout release is available for download at:</p>
 
 <p>Apache Mahout is mirrored to <a href="https://github.com/apache/mahout">Github</a>. To get all source:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>git clone https://github.com/apache/mahout.git mahout
-</code></pre></div></div>
+<pre><code>git clone https://github.com/apache/mahout.git mahout
+</code></pre>
 
 <h1 id="environment">Environment</h1>
 
 <p>Whether you are using Mahout’s Shell, running command-line jobs, or using it as a library to build your own apps, 
 you’ll need to set up several environment variables. 
-Edit your environment in <code class="highlighter-rouge">~/.bash_profile</code> for Mac or <code class="highlighter-rouge">~/.bashrc</code> for many linux distributions. Add the following:</p>
+Edit your environment in <code>~/.bash_profile</code> for Mac or <code>~/.bashrc</code> for many linux distributions. Add the following:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export MAHOUT_HOME=/path/to/mahout
+<pre><code>export MAHOUT_HOME=/path/to/mahout
 export MAHOUT_LOCAL=true # for running standalone on your dev machine, 
 # unset MAHOUT_LOCAL for running on a cluster 
-</code></pre></div></div>
+</code></pre>
 
 <p>If you are running on Spark you will also need $SPARK_HOME.</p>
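
For example (the path is an illustrative placeholder):

    export SPARK_HOME=/path/to/spark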
 
@@ -311,21 +311,21 @@ Then add the appropriate setting to your pom.xml or build.sbt following the temp
 
 <p>If you only need the math part of Mahout:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;dependency&gt;
+<pre><code>&lt;dependency&gt;
     &lt;groupId&gt;org.apache.mahout&lt;/groupId&gt;
     &lt;artifactId&gt;mahout-math&lt;/artifactId&gt;
     &lt;version&gt;${mahout.version}&lt;/version&gt;
 &lt;/dependency&gt;
-</code></pre></div></div>
+</code></pre>
 
 <p>In case you would like to use some of our integration tooling (e.g. for generating vectors from Lucene):</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;dependency&gt;
+<pre><code>&lt;dependency&gt;
     &lt;groupId&gt;org.apache.mahout&lt;/groupId&gt;
     &lt;artifactId&gt;mahout-hdfs&lt;/artifactId&gt;
     &lt;version&gt;${mahout.version}&lt;/version&gt;
 &lt;/dependency&gt;
-</code></pre></div></div>
+</code></pre>
 
 <p>In case you are using Ivy, Gradle, Buildr, Grape or SBT you might want to directly head over to the official <a href="http://mvnrepository.com/artifact/org.apache.mahout/mahout-core">Maven Repository search</a>.</p>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/general/mahout-wiki.html
----------------------------------------------------------------------
diff --git a/general/mahout-wiki.html b/general/mahout-wiki.html
index 7c9f32a..67a1bef 100644
--- a/general/mahout-wiki.html
+++ b/general/mahout-wiki.html
@@ -474,9 +474,9 @@ and picking a username and password.</li>
 
 <p>There are some conventions used on the Mahout wiki:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>* {noformat}+*TODO:*+{noformat} (+*TODO:*+ ) is used to denote sections that definitely need to be cleaned up.
+<pre><code>* {noformat}+*TODO:*+{noformat} (+*TODO:*+ ) is used to denote sections that definitely need to be cleaned up.
 * {noformat}+*Mahout_(version)*+{noformat} (+*Mahout_0.2*+) is used to draw attention to which version of Mahout a feature was (or will be) added to Mahout.
-</code></pre></div></div>
+</code></pre>
 
 
    </div>

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/sitemap.xml
----------------------------------------------------------------------
diff --git a/sitemap.xml b/sitemap.xml
index ea32436..f4fee43 100644
--- a/sitemap.xml
+++ b/sitemap.xml
@@ -347,14 +347,14 @@
 </url>
 <url>
 <loc>/release-notes/Apache-Mahout-0.10.0-Release-Notes.pdf</loc>
-<lastmod>2017-11-30T00:12:37+00:00</lastmod>
+<lastmod>2017-11-30T00:23:17+00:00</lastmod>
 </url>
 <url>
 <loc>/users/dim-reduction/ssvd.page/SSVD-CLI.pdf</loc>
-<lastmod>2017-11-30T00:12:37+00:00</lastmod>
+<lastmod>2017-11-30T00:23:17+00:00</lastmod>
 </url>
 <url>
 <loc>/users/sparkbindings/ScalaSparkBindings.pdf</loc>
-<lastmod>2017-11-30T00:12:37+00:00</lastmod>
+<lastmod>2017-11-30T00:23:17+00:00</lastmod>
 </url>
 </urlset>

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/algorithms/d-als.html
----------------------------------------------------------------------
diff --git a/users/algorithms/d-als.html b/users/algorithms/d-als.html
index 2677918..0a98e03 100644
--- a/users/algorithms/d-als.html
+++ b/users/algorithms/d-als.html
@@ -280,13 +280,13 @@
 
 <h2 id="algorithm">Algorithm</h2>
 
-<p>For the classic QR decomposition of the form <code class="highlighter-rouge">\(\mathbf{A}=\mathbf{QR},\mathbf{A}\in\mathbb{R}^{m\times n}\)</code> a distributed version is fairly easily achieved if <code class="highlighter-rouge">\(\mathbf{A}\)</code> is tall and thin such that <code class="highlighter-rouge">\(\mathbf{A}^{\top}\mathbf{A}\)</code> fits in memory, i.e. <em>m</em> is large but <em>n</em> &lt; ~5000. Under such circumstances, only <code class="highlighter-rouge">\(\mathbf{A}\)</code> and <code class="highlighter-rouge">\(\mathbf{Q}\)</code> are distributed matrices and <code class="highlighter-rouge">\(\mathbf{A^{\top}A}\)</code> and <code class="highlighter-rouge">\(\mathbf{R}\)</code> are in-core products. We just compute the in-core version of the Cholesky decomposition in the form of <code class="highlighter-rouge">\(\mathbf{LL}^{\top}= \mathbf{A}^{\top}\mathbf{A}\)</code>.  After that we take <code class="highlighter-rouge">\(\mathbf{R}= \mathbf{L}^{\top}\)</code> and <code class="highlighter-rouge">\(\mathbf{Q}=\mathbf{A}\left(\mathbf{L}^{\top}\right)^{-1}\)</code>.  The latter is easily achieved by multiplying each vertical block of <code class="highlighter-rouge">\(\mathbf{A}\)</code> by <code class="highlighter-rouge">\(\left(\mathbf{L}^{\top}\right)^{-1}\)</code>.  (There is no actual matrix inversion happening).</p>
+<p>For the classic QR decomposition of the form <code>\(\mathbf{A}=\mathbf{QR},\mathbf{A}\in\mathbb{R}^{m\times n}\)</code> a distributed version is fairly easily achieved if <code>\(\mathbf{A}\)</code> is tall and thin such that <code>\(\mathbf{A}^{\top}\mathbf{A}\)</code> fits in memory, i.e. <em>m</em> is large but <em>n</em> &lt; ~5000. Under such circumstances, only <code>\(\mathbf{A}\)</code> and <code>\(\mathbf{Q}\)</code> are distributed matrices and <code>\(\mathbf{A^{\top}A}\)</code> and <code>\(\mathbf{R}\)</code> are in-core products. We just compute the in-core version of the Cholesky decomposition in the form of <code>\(\mathbf{LL}^{\top}= \mathbf{A}^{\top}\mathbf{A}\)</code>.  After that we take <code>\(\mathbf{R}= \mathbf{L}^{\top}\)</code> and <code>\(\mathbf{Q}=\mathbf{A}\left(\mathbf{L}^{\top}\right)^{-1}\)</code>.  The latter is easily achieved by multiplying each vertical block of <code>\(\mathbf{A}\)</code> by <code>\(\left(\mathbf{L}^{\top}\right)^{-1}\)</code>.  (There is no actual matrix inversion happening).</p>
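
A one-line check of why this yields an orthonormal \(\mathbf{Q}\), sketched using only the definitions above:

    \[
    \mathbf{Q}^{\top}\mathbf{Q}
      = \left(\mathbf{L}^{\top}\right)^{-\top}\mathbf{A}^{\top}\mathbf{A}\left(\mathbf{L}^{\top}\right)^{-1}
      = \mathbf{L}^{-1}\left(\mathbf{L}\mathbf{L}^{\top}\right)\left(\mathbf{L}^{\top}\right)^{-1}
      = \mathbf{I}
    \]

so \(\mathbf{A}=\mathbf{QR}\) with the upper-triangular \(\mathbf{R}=\mathbf{L}^{\top}\) is indeed a thin QR decomposition, and the multiplication by \(\left(\mathbf{L}^{\top}\right)^{-1}\) amounts to a blockwise triangular solve rather than an explicit inverse.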
 
 <h2 id="implementation">Implementation</h2>
 
-<p>Mahout <code class="highlighter-rouge">dqrThin(...)</code> is implemented in the mahout <code class="highlighter-rouge">math-scala</code> algebraic optimizer which translates Mahout’s R-like linear algebra operators into a physical plan for both Spark and H2O distributed engines.</p>
+<p>Mahout <code>dqrThin(...)</code> is implemented in the mahout <code>math-scala</code> algebraic optimizer which translates Mahout’s R-like linear algebra operators into a physical plan for both Spark and H2O distributed engines.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def dqrThin[K: ClassTag](A: DrmLike[K], checkRankDeficiency: Boolean = true): (DrmLike[K], Matrix) = {        
+<pre><code>def dqrThin[K: ClassTag](A: DrmLike[K], checkRankDeficiency: Boolean = true): (DrmLike[K], Matrix) = {        
     if (drmA.ncol &gt; 5000)
         log.warn("A is too fat. A'A must fit in memory and easily broadcasted.")
     implicit val ctx = drmA.context
@@ -302,18 +302,18 @@
     }
     Q -&gt; inCoreR
 }
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="usage">Usage</h2>
 
-<p>The scala <code class="highlighter-rouge">dqrThin(...)</code> method can easily be called in any Spark or H2O application built with the <code class="highlighter-rouge">math-scala</code> library and the corresponding <code class="highlighter-rouge">Spark</code> or <code class="highlighter-rouge">H2O</code> engine module as follows:</p>
+<p>The scala <code>dqrThin(...)</code> method can easily be called in any Spark or H2O application built with the <code>math-scala</code> library and the corresponding <code>Spark</code> or <code>H2O</code> engine module as follows:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import org.apache.mahout.math._
+<pre><code>import org.apache.mahout.math._
 import decompositions._
 import drm._
 
 val(drmQ, inCoreR) = dqrThin(drma)
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="references">References</h2>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/algorithms/d-qr.html
----------------------------------------------------------------------
diff --git a/users/algorithms/d-qr.html b/users/algorithms/d-qr.html
index add7d10..f1aec35 100644
--- a/users/algorithms/d-qr.html
+++ b/users/algorithms/d-qr.html
@@ -280,13 +280,13 @@
 
 <h2 id="algorithm">Algorithm</h2>
 
-<p>For the classic QR decomposition of the form <code class="highlighter-rouge">\(\mathbf{A}=\mathbf{QR},\mathbf{A}\in\mathbb{R}^{m\times n}\)</code> a distributed version is fairly easily achieved if <code class="highlighter-rouge">\(\mathbf{A}\)</code> is tall and thin such that <code class="highlighter-rouge">\(\mathbf{A}^{\top}\mathbf{A}\)</code> fits in memory, i.e. <em>m</em> is large but <em>n</em> &lt; ~5000. Under such circumstances, only <code class="highlighter-rouge">\(\mathbf{A}\)</code> and <code class="highlighter-rouge">\(\mathbf{Q}\)</code> are distributed matrices and <code class="highlighter-rouge">\(\mathbf{A^{\top}A}\)</code> and <code class="highlighter-rouge">\(\mathbf{R}\)</code> are in-core products. We just compute the in-core version of the Cholesky decomposition in the form of <code class="highlighter-rouge">\(\mathbf{LL}^{\top}= \mathbf{A}^{\top}\mathbf{A}\)</code>.  After that we take <code class="highlighter-rouge">\(\mathbf{R}= \mathbf{L}^{\top}\)</code> and <code class="highlighter-rouge">\(\mathbf{Q}=\mathbf{A}\left(\mathbf{L}^{\top}\right)^{-1}\)</code>.  The latter is easily achieved by multiplying each vertical block of <code class="highlighter-rouge">\(\mathbf{A}\)</code> by <code class="highlighter-rouge">\(\left(\mathbf{L}^{\top}\right)^{-1}\)</code>.  (There is no actual matrix inversion happening).</p>
+<p>For the classic QR decomposition of the form <code>\(\mathbf{A}=\mathbf{QR},\mathbf{A}\in\mathbb{R}^{m\times n}\)</code> a distributed version is fairly easily achieved if <code>\(\mathbf{A}\)</code> is tall and thin such that <code>\(\mathbf{A}^{\top}\mathbf{A}\)</code> fits in memory, i.e. <em>m</em> is large but <em>n</em> &lt; ~5000. Under such circumstances, only <code>\(\mathbf{A}\)</code> and <code>\(\mathbf{Q}\)</code> are distributed matrices and <code>\(\mathbf{A^{\top}A}\)</code> and <code>\(\mathbf{R}\)</code> are in-core products. We just compute the in-core version of the Cholesky decomposition in the form of <code>\(\mathbf{LL}^{\top}= \mathbf{A}^{\top}\mathbf{A}\)</code>.  After that we take <code>\(\mathbf{R}= \mathbf{L}^{\top}\)</code> and <code>\(\mathbf{Q}=\mathbf{A}\left(\mathbf{L}^{\top}\right)^{-1}\)</code>.  The latter is easily achieved by multiplying each vertical block of <code>\(\mathbf{A}\)</code> by <code>\(\left(\mathbf{L}^{\top}\right)^{-1}\)</code>.  (There is no actual matrix inversion happening).</p>
 
 <h2 id="implementation">Implementation</h2>
 
-<p>Mahout <code class="highlighter-rouge">dqrThin(...)</code> is implemented in the mahout <code class="highlighter-rouge">math-scala</code> algebraic optimizer which translates Mahout’s R-like linear algebra operators into a physical plan for both Spark and H2O distributed engines.</p>
+<p>Mahout <code>dqrThin(...)</code> is implemented in the mahout <code>math-scala</code> algebraic optimizer which translates Mahout’s R-like linear algebra operators into a physical plan for both Spark and H2O distributed engines.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def dqrThin[K: ClassTag](A: DrmLike[K], checkRankDeficiency: Boolean = true): (DrmLike[K], Matrix) = {        
+<pre><code>def dqrThin[K: ClassTag](A: DrmLike[K], checkRankDeficiency: Boolean = true): (DrmLike[K], Matrix) = {        
     if (drmA.ncol &gt; 5000)
         log.warn("A is too fat. A'A must fit in memory and easily broadcasted.")
     implicit val ctx = drmA.context
@@ -302,18 +302,18 @@
     }
     Q -&gt; inCoreR
 }
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="usage">Usage</h2>
 
-<p>The scala <code class="highlighter-rouge">dqrThin(...)</code> method can easily be called in any Spark or H2O application built with the <code class="highlighter-rouge">math-scala</code> library and the corresponding <code class="highlighter-rouge">Spark</code> or <code class="highlighter-rouge">H2O</code> engine module as follows:</p>
+<p>The scala <code>dqrThin(...)</code> method can easily be called in any Spark or H2O application built with the <code>math-scala</code> library and the corresponding <code>Spark</code> or <code>H2O</code> engine module as follows:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import org.apache.mahout.math._
+<pre><code>import org.apache.mahout.math._
 import decompositions._
 import drm._
 
 val(drmQ, inCoreR) = dqrThin(drma)
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="references">References</h2>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/algorithms/d-spca.html
----------------------------------------------------------------------
diff --git a/users/algorithms/d-spca.html b/users/algorithms/d-spca.html
index d0feb7c..2c58f6b 100644
--- a/users/algorithms/d-spca.html
+++ b/users/algorithms/d-spca.html
@@ -276,43 +276,43 @@
 
 <h2 id="intro">Intro</h2>
 
-<p>Mahout has a distributed implementation of Stochastic PCA [Lyubimov and Palumbo, <a href="https://www.amazon.com/Apache-Mahout-MapReduce-Dmitriy-Lyubimov/dp/1523775785">&quot;Apache Mahout: Beyond MapReduce; Distributed Algorithm Design&quot;</a>]. This algorithm computes the exact equivalent of Mahout’s dssvd(<code class="highlighter-rouge">\(\mathbf{A-1\mu^\top}\)</code>) by modifying the <code class="highlighter-rouge">dssvd</code> algorithm so as to avoid forming <code class="highlighter-rouge">\(\mathbf{A-1\mu^\top}\)</code>, which would densify a sparse input. Thus, it is suitable for work with both dense and sparse inputs.</p>
+<p>Mahout has a distributed implementation of Stochastic PCA [Lyubimov and Palumbo, <a href="https://www.amazon.com/Apache-Mahout-MapReduce-Dmitriy-Lyubimov/dp/1523775785">&quot;Apache Mahout: Beyond MapReduce; Distributed Algorithm Design&quot;</a>]. This algorithm computes the exact equivalent of Mahout’s dssvd(<code>\(\mathbf{A-1\mu^\top}\)</code>) by modifying the <code>dssvd</code> algorithm so as to avoid forming <code>\(\mathbf{A-1\mu^\top}\)</code>, which would densify a sparse input. Thus, it is suitable for work with both dense and sparse inputs.</p>
 
 <h2 id="algorithm">Algorithm</h2>
 
-<p>Given an <em>m</em> <code class="highlighter-rouge">\(\times\)</code> <em>n</em> matrix <code class="highlighter-rouge">\(\mathbf{A}\)</code>, a target rank <em>k</em>, and an oversampling parameter <em>p</em>, this procedure computes a <em>k</em>-rank PCA by finding the unknowns in <code class="highlighter-rouge">\(\mathbf{A−1\mu^\top \approx U\Sigma V^\top}\)</code>:</p>
+<p>Given an <em>m</em> <code>\(\times\)</code> <em>n</em> matrix <code>\(\mathbf{A}\)</code>, a target rank <em>k</em>, and an oversampling parameter <em>p</em>, this procedure computes a <em>k</em>-rank PCA by finding the unknowns in <code>\(\mathbf{A−1\mu^\top \approx U\Sigma V^\top}\)</code>:</p>
 
 <ol>
-  <li>Create seed for random <em>n</em> <code class="highlighter-rouge">\(\times\)</code> <em>(k+p)</em> matrix <code class="highlighter-rouge">\(\Omega\)</code>.</li>
-  <li><code class="highlighter-rouge">\(\mathbf{s_\Omega \leftarrow \Omega^\top \mu}\)</code>.</li>
-  <li><code class="highlighter-rouge">\(\mathbf{Y_0 \leftarrow A\Omega − 1 {s_\Omega}^\top, Y \in \mathbb{R}^{m\times(k+p)}}\)</code>.</li>
-  <li>Column-orthonormalize <code class="highlighter-rouge">\(\mathbf{Y_0} \rightarrow \mathbf{Q}\)</code> by computing thin decomposition <code class="highlighter-rouge">\(\mathbf{Y_0} = \mathbf{QR}\)</code>. Also, <code class="highlighter-rouge">\(\mathbf{Q}\in\mathbb{R}^{m\times(k+p)}, \mathbf{R}\in\mathbb{R}^{(k+p)\times(k+p)}\)</code>.</li>
-  <li><code class="highlighter-rouge">\(\mathbf{s_Q \leftarrow Q^\top 1}\)</code>.</li>
-  <li><code class="highlighter-rouge">\(\mathbf{B_0 \leftarrow Q^\top A: B \in \mathbb{R}^{(k+p)\times n}}\)</code>.</li>
-  <li><code class="highlighter-rouge">\(\mathbf{s_B \leftarrow {B_0}^\top \mu}\)</code>.</li>
+  <li>Create seed for random <em>n</em> <code>\(\times\)</code> <em>(k+p)</em> matrix <code>\(\Omega\)</code>.</li>
+  <li><code>\(\mathbf{s_\Omega \leftarrow \Omega^\top \mu}\)</code>.</li>
+  <li><code>\(\mathbf{Y_0 \leftarrow A\Omega − 1 {s_\Omega}^\top, Y \in \mathbb{R}^{m\times(k+p)}}\)</code>.</li>
+  <li>Column-orthonormalize <code>\(\mathbf{Y_0} \rightarrow \mathbf{Q}\)</code> by computing thin decomposition <code>\(\mathbf{Y_0} = \mathbf{QR}\)</code>. Also, <code>\(\mathbf{Q}\in\mathbb{R}^{m\times(k+p)}, \mathbf{R}\in\mathbb{R}^{(k+p)\times(k+p)}\)</code>.</li>
+  <li><code>\(\mathbf{s_Q \leftarrow Q^\top 1}\)</code>.</li>
+  <li><code>\(\mathbf{B_0 \leftarrow Q^\top A: B \in \mathbb{R}^{(k+p)\times n}}\)</code>.</li>
+  <li><code>\(\mathbf{s_B \leftarrow {B_0}^\top \mu}\)</code>.</li>
   <li>For <em>i</em> in 1..<em>q</em> repeat (power iterations):
     <ul>
-      <li>For <em>j</em> in 1..<em>n</em> apply <code class="highlighter-rouge">\(\mathbf{(B_{i−1})_{∗j} \leftarrow (B_{i−1})_{∗j}−\mu_j s_Q}\)</code>.</li>
-      <li><code class="highlighter-rouge">\(\mathbf{Y_i \leftarrow A{B_{i−1}}^\top−1(s_B−\mu^\top \mu s_Q)^\top}\)</code>.</li>
-      <li>Column-orthonormalize <code class="highlighter-rouge">\(\mathbf{Y_i} \rightarrow \mathbf{Q}\)</code> by computing thin decomposition <code class="highlighter-rouge">\(\mathbf{Y_i = QR}\)</code>.</li>
-      <li><code class="highlighter-rouge">\(\mathbf{s_Q \leftarrow Q^\top 1}\)</code>.</li>
-      <li><code class="highlighter-rouge">\(\mathbf{B_i \leftarrow Q^\top A}\)</code>.</li>
-      <li><code class="highlighter-rouge">\(\mathbf{s_B \leftarrow {B_i}^\top \mu}\)</code>.</li>
+      <li>For <em>j</em> in 1..<em>n</em> apply <code>\(\mathbf{(B_{i−1})_{∗j} \leftarrow (B_{i−1})_{∗j}−\mu_j s_Q}\)</code>.</li>
+      <li><code>\(\mathbf{Y_i \leftarrow A{B_{i−1}}^\top−1(s_B−\mu^\top \mu s_Q)^\top}\)</code>.</li>
+      <li>Column-orthonormalize <code>\(\mathbf{Y_i} \rightarrow \mathbf{Q}\)</code> by computing thin decomposition <code>\(\mathbf{Y_i = QR}\)</code>.</li>
+      <li><code>\(\mathbf{s_Q \leftarrow Q^\top 1}\)</code>.</li>
+      <li><code>\(\mathbf{B_i \leftarrow Q^\top A}\)</code>.</li>
+      <li><code>\(\mathbf{s_B \leftarrow {B_i}^\top \mu}\)</code>.</li>
     </ul>
   </li>
-  <li>Let <code class="highlighter-rouge">\(\mathbf{C \triangleq s_Q {s_B}^\top}\)</code>. <code class="highlighter-rouge">\(\mathbf{M \leftarrow B_q {B_q}^\top − C − C^\top + \mu^\top \mu s_Q {s_Q}^\top}\)</code>.</li>
-  <li>Compute an eigensolution of the small symmetric <code class="highlighter-rouge">\(\mathbf{M = \hat{U} \Lambda \hat{U}^\top: M \in \mathbb{R}^{(k+p)\times(k+p)}}\)</code>.</li>
-  <li>The singular values <code class="highlighter-rouge">\(\Sigma = \Lambda^{\circ 0.5}\)</code>, or, in other words, <code class="highlighter-rouge">\(\mathbf{\sigma_i= \sqrt{\lambda_i}}\)</code>.</li>
-  <li>If needed, compute <code class="highlighter-rouge">\(\mathbf{U = Q\hat{U}}\)</code>.</li>
-  <li>If needed, compute <code class="highlighter-rouge">\(\mathbf{V = B^\top \hat{U} \Sigma^{−1}}\)</code>.</li>
-  <li>If needed, items converted to the PCA space can be computed as <code class="highlighter-rouge">\(\mathbf{U\Sigma}\)</code>.</li>
+  <li>Let <code>\(\mathbf{C \triangleq s_Q {s_B}^\top}\)</code>. <code>\(\mathbf{M \leftarrow B_q {B_q}^\top − C − C^\top + \mu^\top \mu s_Q {s_Q}^\top}\)</code>.</li>
+  <li>Compute an eigensolution of the small symmetric <code>\(\mathbf{M = \hat{U} \Lambda \hat{U}^\top: M \in \mathbb{R}^{(k+p)\times(k+p)}}\)</code>.</li>
+  <li>The singular values <code>\(\Sigma = \Lambda^{\circ 0.5}\)</code>, or, in other words, <code>\(\mathbf{\sigma_i= \sqrt{\lambda_i}}\)</code>.</li>
+  <li>If needed, compute <code>\(\mathbf{U = Q\hat{U}}\)</code>.</li>
+  <li>If needed, compute <code>\(\mathbf{V = B^\top \hat{U} \Sigma^{−1}}\)</code>.</li>
+  <li>If needed, items converted to the PCA space can be computed as <code>\(\mathbf{U\Sigma}\)</code>.</li>
 </ol>
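
The reason the algorithm never materializes the dense matrix \(\mathbf{A-1\mu^\top}\) is the identity behind steps 2 and 3 above, sketched from the definitions already given:

    \[
    \left(\mathbf{A}-\mathbf{1}\boldsymbol{\mu}^{\top}\right)\boldsymbol{\Omega}
      = \mathbf{A}\boldsymbol{\Omega}-\mathbf{1}\left(\boldsymbol{\Omega}^{\top}\boldsymbol{\mu}\right)^{\top}
      = \mathbf{A}\boldsymbol{\Omega}-\mathbf{1}\,\mathbf{s}_{\Omega}^{\top}
    \]

that is, a product against the mean-centered matrix costs only the (possibly sparse) product \(\mathbf{A}\boldsymbol{\Omega}\) plus a rank-one correction built from the column-mean vector \(\boldsymbol{\mu}\); the running corrections \(\mathbf{s_Q}\) and \(\mathbf{s_B}\) play the same role in the later steps.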
 
 <h2 id="implementation">Implementation</h2>
 
-<p>Mahout <code class="highlighter-rouge">dspca(...)</code> is implemented in the mahout <code class="highlighter-rouge">math-scala</code> algebraic optimizer which translates Mahout’s R-like linear algebra operators into a physical plan for both Spark and H2O distributed engines.</p>
+<p>Mahout <code>dspca(...)</code> is implemented in the mahout <code>math-scala</code> algebraic optimizer which translates Mahout’s R-like linear algebra operators into a physical plan for both Spark and H2O distributed engines.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def dspca[K](drmA: DrmLike[K], k: Int, p: Int = 15, q: Int = 0): 
+<pre><code>def dspca[K](drmA: DrmLike[K], k: Int, p: Int = 15, q: Int = 0): 
 (DrmLike[K], DrmLike[Int], Vector) = {
 
     // Some mapBlock() calls need it
@@ -429,18 +429,18 @@
 
     (drmU(::, 0 until k), drmV(::, 0 until k), s(0 until k))
 }
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="usage">Usage</h2>
 
-<p>The scala <code class="highlighter-rouge">dspca(...)</code> method can easily be called in any Spark, Flink, or H2O application built with the <code class="highlighter-rouge">math-scala</code> library and the corresponding <code class="highlighter-rouge">Spark</code>, <code class="highlighter-rouge">Flink</code>, or <code class="highlighter-rouge">H2O</code> engine module as follows:</p>
+<p>The scala <code>dspca(...)</code> method can easily be called in any Spark, Flink, or H2O application built with the <code>math-scala</code> library and the corresponding <code>Spark</code>, <code>Flink</code>, or <code>H2O</code> engine module as follows:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import org.apache.mahout.math._
+<pre><code>import org.apache.mahout.math._
 import decompositions._
 import drm._
 
 val (drmU, drmV, s) = dspca(drmA, k=200, q=1)
-</code></pre></div></div>
+</code></pre>
 
 <p>Note the parameter <em>q</em> is optional and its default value is zero.</p>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/algorithms/d-ssvd.html
----------------------------------------------------------------------
diff --git a/users/algorithms/d-ssvd.html b/users/algorithms/d-ssvd.html
index e3775d9..5165c60 100644
--- a/users/algorithms/d-ssvd.html
+++ b/users/algorithms/d-ssvd.html
@@ -280,58 +280,58 @@
 
 <h2 id="modified-ssvd-algorithm">Modified SSVD Algorithm</h2>
 
-<p>Given an <code class="highlighter-rouge">\(m\times n\)</code>
-matrix <code class="highlighter-rouge">\(\mathbf{A}\)</code>, a target rank <code class="highlighter-rouge">\(k\in\mathbb{N}_{1}\)</code>
-, an oversampling parameter <code class="highlighter-rouge">\(p\in\mathbb{N}_{1}\)</code>, 
-and the number of additional power iterations <code class="highlighter-rouge">\(q\in\mathbb{N}_{0}\)</code>, 
-this procedure computes an <code class="highlighter-rouge">\(m\times\left(k+p\right)\)</code>
-SVD <code class="highlighter-rouge">\(\mathbf{A\approx U}\boldsymbol{\Sigma}\mathbf{V}^{\top}\)</code>:</p>
+<p>Given an <code>\(m\times n\)</code>
+matrix <code>\(\mathbf{A}\)</code>, a target rank <code>\(k\in\mathbb{N}_{1}\)</code>
+, an oversampling parameter <code>\(p\in\mathbb{N}_{1}\)</code>, 
+and the number of additional power iterations <code>\(q\in\mathbb{N}_{0}\)</code>, 
+this procedure computes an <code>\(m\times\left(k+p\right)\)</code>
+SVD <code>\(\mathbf{A\approx U}\boldsymbol{\Sigma}\mathbf{V}^{\top}\)</code>:</p>
 
 <ol>
   <li>
-    <p>Create seed for random <code class="highlighter-rouge">\(n\times\left(k+p\right)\)</code>
-  matrix <code class="highlighter-rouge">\(\boldsymbol{\Omega}\)</code>. The seed defines matrix <code class="highlighter-rouge">\(\mathbf{\Omega}\)</code>
+    <p>Create seed for random <code>\(n\times\left(k+p\right)\)</code>
+  matrix <code>\(\boldsymbol{\Omega}\)</code>. The seed defines matrix <code>\(\mathbf{\Omega}\)</code>
   using Gaussian unit vectors per one of the suggestions in [Halko, Martinsson, Tropp].</p>
   </li>
   <li>
-    <p><code class="highlighter-rouge">\(\mathbf{Y=A\boldsymbol{\Omega}},\,\mathbf{Y}\in\mathbb{R}^{m\times\left(k+p\right)}\)</code></p>
+    <p><code>\(\mathbf{Y=A\boldsymbol{\Omega}},\,\mathbf{Y}\in\mathbb{R}^{m\times\left(k+p\right)}\)</code></p>
   </li>
   <li>
-    <p>Column-orthonormalize <code class="highlighter-rouge">\(\mathbf{Y}\rightarrow\mathbf{Q}\)</code>
-  by computing thin decomposition <code class="highlighter-rouge">\(\mathbf{Y}=\mathbf{Q}\mathbf{R}\)</code>.
-  Also, <code class="highlighter-rouge">\(\mathbf{Q}\in\mathbb{R}^{m\times\left(k+p\right)},\,\mathbf{R}\in\mathbb{R}^{\left(k+p\right)\times\left(k+p\right)}\)</code>; denoted as <code class="highlighter-rouge">\(\mathbf{Q}=\mbox{qr}\left(\mathbf{Y}\right).\mathbf{Q}\)</code></p>
+    <p>Column-orthonormalize <code>\(\mathbf{Y}\rightarrow\mathbf{Q}\)</code>
+  by computing thin decomposition <code>\(\mathbf{Y}=\mathbf{Q}\mathbf{R}\)</code>.
+  Also, <code>\(\mathbf{Q}\in\mathbb{R}^{m\times\left(k+p\right)},\,\mathbf{R}\in\mathbb{R}^{\left(k+p\right)\times\left(k+p\right)}\)</code>; denoted as <code>\(\mathbf{Q}=\mbox{qr}\left(\mathbf{Y}\right).\mathbf{Q}\)</code></p>
   </li>
   <li>
-    <p><code class="highlighter-rouge">\(\mathbf{B}_{0}=\mathbf{Q}^{\top}\mathbf{A}:\,\,\mathbf{B}\in\mathbb{R}^{\left(k+p\right)\times n}\)</code>.</p>
+    <p><code>\(\mathbf{B}_{0}=\mathbf{Q}^{\top}\mathbf{A}:\,\,\mathbf{B}\in\mathbb{R}^{\left(k+p\right)\times n}\)</code>.</p>
   </li>
   <li>
-    <p>If <code class="highlighter-rouge">\(q&gt;0\)</code>
-  repeat: for <code class="highlighter-rouge">\(i=1..q\)</code>: 
-  <code class="highlighter-rouge">\(\mathbf{B}_{i}^{\top}=\mathbf{A}^{\top}\mbox{qr}\left(\mathbf{A}\mathbf{B}_{i-1}^{\top}\right).\mathbf{Q}\)</code>
+    <p>If <code>\(q&gt;0\)</code>
+  repeat: for <code>\(i=1..q\)</code>: 
+  <code>\(\mathbf{B}_{i}^{\top}=\mathbf{A}^{\top}\mbox{qr}\left(\mathbf{A}\mathbf{B}_{i-1}^{\top}\right).\mathbf{Q}\)</code>
   (power iterations step).</p>
   </li>
   <li>
-    <p>Compute Eigensolution of a small Hermitian <code class="highlighter-rouge">\(\mathbf{B}_{q}\mathbf{B}_{q}^{\top}=\mathbf{\hat{U}}\boldsymbol{\Lambda}\mathbf{\hat{U}}^{\top}\)</code>,
-  <code class="highlighter-rouge">\(\mathbf{B}_{q}\mathbf{B}_{q}^{\top}\in\mathbb{R}^{\left(k+p\right)\times\left(k+p\right)}\)</code>.</p>
+    <p>Compute Eigensolution of a small Hermitian <code>\(\mathbf{B}_{q}\mathbf{B}_{q}^{\top}=\mathbf{\hat{U}}\boldsymbol{\Lambda}\mathbf{\hat{U}}^{\top}\)</code>,
+  <code>\(\mathbf{B}_{q}\mathbf{B}_{q}^{\top}\in\mathbb{R}^{\left(k+p\right)\times\left(k+p\right)}\)</code>.</p>
   </li>
   <li>
-    <p>Singular values <code class="highlighter-rouge">\(\mathbf{\boldsymbol{\Sigma}}=\boldsymbol{\Lambda}^{0.5}\)</code>,
-  or, in other words, <code class="highlighter-rouge">\(s_{i}=\sqrt{\sigma_{i}}\)</code>.</p>
+    <p>Singular values <code>\(\boldsymbol{\Sigma}=\boldsymbol{\Lambda}^{0.5}\)</code>,
+  or, in other words, <code>\(\sigma_{i}=\sqrt{\lambda_{i}}\)</code>.</p>
   </li>
   <li>
-    <p>If needed, compute <code class="highlighter-rouge">\(\mathbf{U}=\mathbf{Q}\hat{\mathbf{U}}\)</code>.</p>
+    <p>If needed, compute <code>\(\mathbf{U}=\mathbf{Q}\hat{\mathbf{U}}\)</code>.</p>
   </li>
   <li>
-    <p>If needed, compute <code class="highlighter-rouge">\(\mathbf{V}=\mathbf{B}_{q}^{\top}\hat{\mathbf{U}}\boldsymbol{\Sigma}^{-1}\)</code>.
-Another way is <code class="highlighter-rouge">\(\mathbf{V}=\mathbf{A}^{\top}\mathbf{U}\boldsymbol{\Sigma}^{-1}\)</code>.</p>
+    <p>If needed, compute <code>\(\mathbf{V}=\mathbf{B}_{q}^{\top}\hat{\mathbf{U}}\boldsymbol{\Sigma}^{-1}\)</code>.
+Another way is <code>\(\mathbf{V}=\mathbf{A}^{\top}\mathbf{U}\boldsymbol{\Sigma}^{-1}\)</code>.</p>
   </li>
 </ol>
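+
+<p>The steps above translate almost line-for-line into Mahout’s in-core DSL. Below is a minimal, non-distributed sketch, assuming the in-core <code>qr(...)</code> and <code>eigen(...)</code> helpers from <code>scalabindings</code> and using a seeded uniform random view where the algorithm calls for Gaussian entries; it is meant only to illustrate the steps, not to replace <code>dssvd(...)</code>:</p>
+
+<pre><code>import org.apache.mahout.math._
+import scalabindings._
+import RLikeOps._
+
+def ssvdSketch(a: Matrix, k: Int, p: Int = 15, q: Int = 0):
+    (Matrix, Matrix, Vector) = {
+
+  val r = k + p
+
+  // Step 1: seeded random Omega, n x (k+p).
+  val omega = Matrices.symmetricUniformView(a.ncol, r, 1234)
+
+  val y = a %*% omega                      // Step 2: Y = A * Omega
+  var qm = qr(y)._1                        // Step 3: thin QR, keep Q
+  var bt = a.t %*% qm                      // Step 4: B' = A' * Q
+
+  for (i &lt;- 0 until q) {                   // Step 5: power iterations
+    qm = qr(a %*% bt)._1
+    bt = a.t %*% qm
+  }
+
+  val (uHat, d) = eigen(bt.t %*% bt)       // Step 6: eigen of B * B'
+  val s = d.sqrt                           // Step 7: singular values
+  val u = qm %*% uHat                      // Step 8: U = Q * UHat
+  val v = bt %*% (uHat %*% diagv(1 / s))   // Step 9: V = B' * UHat * inv(Sigma)
+
+  (u(::, 0 until k), v(::, 0 until k), s(0 until k))
+}
+</code></pre>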
 
 <h2 id="implementation">Implementation</h2>
 
-<p>Mahout <code class="highlighter-rouge">dssvd(...)</code> is implemented in the mahout <code class="highlighter-rouge">math-scala</code> algebraic optimizer which translates Mahout’s R-like linear algebra operators into a physical plan for both Spark and H2O distributed engines.</p>
+<p>Mahout <code>dssvd(...)</code> is implemented in the Mahout <code>math-scala</code> algebraic optimizer, which translates Mahout’s R-like linear algebra operators into a physical plan for both the Spark and H2O distributed engines.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def dssvd[K: ClassTag](drmA: DrmLike[K], k: Int, p: Int = 15, q: Int = 0):
+<pre><code>def dssvd[K: ClassTag](drmA: DrmLike[K], k: Int, p: Int = 15, q: Int = 0):
     (DrmLike[K], DrmLike[Int], Vector) = {
 
     val drmAcp = drmA.checkpoint()
@@ -387,21 +387,21 @@ Another way is <code class="highlighter-rouge">\(\mathbf{V}=\mathbf{A}^{\top}\ma
 
     (drmU(::, 0 until k), drmV(::, 0 until k), s(0 until k))
 }
-</code></pre></div></div>
+</code></pre>
 
-<p>Note: As a side effect of checkpointing, U and V values are returned as logical operators (i.e. they are neither checkpointed nor computed).  Therefore there is no physical work actually done to compute <code class="highlighter-rouge">\(\mathbf{U}\)</code> or <code class="highlighter-rouge">\(\mathbf{V}\)</code> until they are used in a subsequent expression.</p>
+<p>Note: As a side effect of checkpointing, U and V are returned as logical operators (i.e., they are neither checkpointed nor computed). Therefore, no physical work is actually done to compute <code>\(\mathbf{U}\)</code> or <code>\(\mathbf{V}\)</code> until they are used in a subsequent expression.</p>
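+
+<p>A minimal sketch of this lazy behavior, assuming a DRM <code>drmA</code> is already in scope (and reusing the imports from the usage snippet below):</p>
+
+<pre><code>val (drmU, drmV, s) = dssvd(drmA, k = 40)
+
+// drmU is still only a logical plan at this point; collecting it
+// in-core is what triggers the physical computation of U.
+val inCoreU = drmU.collect
+</code></pre>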
 
 <h2 id="usage">Usage</h2>
 
-<p>The scala <code class="highlighter-rouge">dssvd(...)</code> method can easily be called in any Spark or H2O application built with the <code class="highlighter-rouge">math-scala</code> library and the corresponding <code class="highlighter-rouge">Spark</code> or <code class="highlighter-rouge">H2O</code> engine module as follows:</p>
+<p>The Scala <code>dssvd(...)</code> method can easily be called in any Spark or H2O application built with the <code>math-scala</code> library and the corresponding <code>Spark</code> or <code>H2O</code> engine module, as follows:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import org.apache.mahout.math._
+<pre><code>import org.apache.mahout.math._
 import decompositions._
 import drm._
 
 
 val (drmU, drmV, s) = dssvd(drmA, k = 40, q = 1)
-</code></pre></div></div>
+</code></pre>
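+
+<p>As a sanity check, the rank-k approximation can be assembled from the returned factors with the same operators; a sketch, reusing the factors above (this product, too, is evaluated lazily):</p>
+
+<pre><code>import org.apache.mahout.math.scalabindings._
+import org.apache.mahout.math.drm.RLikeDrmOps._
+
+// Rank-k approximation: A is approximately U * diag(s) * V'
+val drmAk = drmU %*% diagv(s) %*% drmV.t
+</code></pre>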
 
 <h2 id="references">References</h2>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/algorithms/intro-cooccurrence-spark.html
----------------------------------------------------------------------
diff --git a/users/algorithms/intro-cooccurrence-spark.html b/users/algorithms/intro-cooccurrence-spark.html
index 517157a..a84b90b 100644
--- a/users/algorithms/intro-cooccurrence-spark.html
+++ b/users/algorithms/intro-cooccurrence-spark.html
@@ -312,7 +312,7 @@ For instance they might say an item-view is 0.2 of an item purchase. In practice
 cross-cooccurrence is a more principled way to handle this case. In effect it scrubs secondary actions with the action you want
 to recommend.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>spark-itemsimilarity Mahout 1.0
+<pre><code>spark-itemsimilarity Mahout 1.0
 Usage: spark-itemsimilarity [options]
 
@@ -376,7 +376,7 @@ Spark config options:
         
   -h | --help
         prints this usage text
-</code></pre></div></div>
+</code></pre>
 
 <p>This looks daunting, but it defaults to fairly sane values, takes exactly the same input as the legacy code, and is quite flexible. It allows the user to point to a single text file, a directory full of files, or a tree of directories to be traversed recursively. The files included can be specified with either a regex-style pattern or a filename. The schema for the file is defined by column numbers, which map to the important bits of data, including IDs and values. The files can even contain filters, which allow unneeded rows to be discarded or used for cross-cooccurrence calculations.</p>
 
@@ -386,20 +386,20 @@ Spark config options:
 
 <p>If all defaults are used the input can be as simple as:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>userID1,itemID1
+<pre><code>userID1,itemID1
 userID2,itemID2
 ...
-</code></pre></div></div>
+</code></pre>
 
 <p>With the command line:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash$ mahout spark-itemsimilarity --input in-file --output out-dir
-</code></pre></div></div>
+<pre><code>bash$ mahout spark-itemsimilarity --input in-file --output out-dir
+</code></pre>
 
 <p>This will use the “local” Spark context and will output the standard text version of a DRM:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>itemID1&lt;tab&gt;itemID2:value2&lt;space&gt;itemID10:value10...
-</code></pre></div></div>
+<pre><code>itemID1&lt;tab&gt;itemID2:value2&lt;space&gt;itemID10:value10...
+</code></pre>
 
 <h3 id="multiple-actions"><a name="multiple-actions">How To Use Multiple User Actions</a></h3>
 
@@ -415,7 +415,7 @@ to calculate the cross-cooccurrence indicator matrix.</p>
 <p><em>spark-itemsimilarity</em> can read separate actions from separate files or from a mixed action log by filtering certain lines. For a mixed 
 action log of the form:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>u1,purchase,iphone
+<pre><code>u1,purchase,iphone
 u1,purchase,ipad
 u2,purchase,nexus
 u2,purchase,galaxy
@@ -435,13 +435,13 @@ u3,view,nexus
 u4,view,iphone
 u4,view,ipad
 u4,view,galaxy
-</code></pre></div></div>
+</code></pre>
 
 <h3 id="command-line">Command Line</h3>
 
 <p>Use the following options:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash$ mahout spark-itemsimilarity \
+<pre><code>bash$ mahout spark-itemsimilarity \
     --input in-file \     # where to look for data
     --output out-path \   # root dir for output
     --master masterUrl \  # URL of the Spark master server
@@ -450,35 +450,35 @@ u4,view,galaxy
     --itemIDPosition 2 \  # column that has the item ID
     --rowIDPosition 0 \   # column that has the user ID
     --filterPosition 1    # column that has the filter word
-</code></pre></div></div>
+</code></pre>
 
 <h3 id="output">Output</h3>
 
 <p>The output of the job will be the standard text version of two Mahout DRMs. This is a case where we are calculating 
 cross-cooccurrence, so both a primary indicator matrix and a cross-cooccurrence indicator matrix will be created:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>out-path
+<pre><code>out-path
   |-- similarity-matrix - TDF part files
   \-- cross-similarity-matrix - TDF part-files
-</code></pre></div></div>
+</code></pre>
 
 <p>The similarity-matrix will contain the lines:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>galaxy\tnexus:1.7260924347106847
+<pre><code>galaxy\tnexus:1.7260924347106847
 ipad\tiphone:1.7260924347106847
 nexus\tgalaxy:1.7260924347106847
 iphone\tipad:1.7260924347106847
 surface
-</code></pre></div></div>
+</code></pre>
 
 <p>The cross-similarity-matrix will contain:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>iphone\tnexus:1.7260924347106847 iphone:1.7260924347106847 ipad:1.7260924347106847 galaxy:1.7260924347106847
+<pre><code>iphone\tnexus:1.7260924347106847 iphone:1.7260924347106847 ipad:1.7260924347106847 galaxy:1.7260924347106847
 ipad\tnexus:0.6795961471815897 iphone:0.6795961471815897 ipad:0.6795961471815897 galaxy:0.6795961471815897
 nexus\tnexus:0.6795961471815897 iphone:0.6795961471815897 ipad:0.6795961471815897 galaxy:0.6795961471815897
 galaxy\tnexus:1.7260924347106847 iphone:1.7260924347106847 ipad:1.7260924347106847 galaxy:1.7260924347106847
 surface\tsurface:4.498681156950466 nexus:0.6795961471815897
-</code></pre></div></div>
+</code></pre>
 
 <p><strong>Note:</strong> You can run this multiple times to use more than two actions, or you can use the underlying 
 SimilarityAnalysis.cooccurrence API, which will more efficiently calculate any number of cross-cooccurrence indicators, as sketched below.</p>
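+
+<p>A minimal sketch of the API route, with hypothetical DRM names (see SimilarityAnalysis.scala in Mahout’s spark module for the exact signature and defaults):</p>
+
+<pre><code>import org.apache.mahout.math.cf.SimilarityAnalysis
+
+// drmPurchase and drmView: user-by-item interaction DRMs sharing user
+// (row) indices. The first matrix returned is the cooccurrence
+// indicator for purchases; each additional matrix is a
+// cross-cooccurrence indicator for one secondary action.
+val indicators = SimilarityAnalysis.cooccurrences(
+  drmARaw = drmPurchase,
+  drmBs = Array(drmView))
+</code></pre>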
@@ -487,7 +487,7 @@ SimilarityAnalysis.cooccurrence API, which will more efficiently calculate any n
 
 <p>A common method of storing data is in log files. If they are written using some delimiter, they can be consumed directly by spark-itemsimilarity. For instance, input of the form:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>2014-06-23 14:46:53.115\tu1\tpurchase\trandom text\tiphone
+<pre><code>2014-06-23 14:46:53.115\tu1\tpurchase\trandom text\tiphone
 2014-06-23 14:46:53.115\tu1\tpurchase\trandom text\tipad
 2014-06-23 14:46:53.115\tu2\tpurchase\trandom text\tnexus
 2014-06-23 14:46:53.115\tu2\tpurchase\trandom text\tgalaxy
@@ -507,11 +507,11 @@ SimilarityAnalysis.cooccurrence API, which will more efficiently calculate any n
 2014-06-23 14:46:53.115\tu4\tview\trandom text\tiphone
 2014-06-23 14:46:53.115\tu4\tview\trandom text\tipad
 2014-06-23 14:46:53.115\tu4\tview\trandom text\tgalaxy    
-</code></pre></div></div>
+</code></pre>
 
 <p>This can be parsed with the following CLI and run on the cluster, producing the same output as the above example.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>bash$ mahout spark-itemsimilarity \
+<pre><code>bash$ mahout spark-itemsimilarity \
     --input in-file \
     --output out-path \
     --master spark://sparkmaster:4044 \
@@ -521,7 +521,7 @@ SimilarityAnalysis.cooccurrence API, which will more efficiently calculate any n
     --itemIDPosition 4 \
     --rowIDPosition 1 \
     --filterPosition 2
-</code></pre></div></div>
+</code></pre>
 
 <h2 id="spark-rowsimilarity">2. spark-rowsimilarity</h2>
 
@@ -536,7 +536,7 @@ by a list of the most similar rows.</p>
 
 <p>The command line interface is:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>spark-rowsimilarity Mahout 1.0
+<pre><code>spark-rowsimilarity Mahout 1.0
 Usage: spark-rowsimilarity [options]
 
 Input, output options
@@ -582,7 +582,7 @@ Spark config options:
         
   -h | --help
         prints this usage text
-</code></pre></div></div>
+</code></pre>
 
 <p>See RowSimilarityDriver.scala in Mahout’s spark module if you want to customize the code.</p>
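+
+<p>The same computation is also available programmatically; a minimal sketch, with <code>drmRows</code> as a placeholder for the input DRM and downsampling left at its defaults:</p>
+
+<pre><code>import org.apache.mahout.math.cf.SimilarityAnalysis
+
+// Returns an LLR-weighted row-by-row similarity DRM.
+val drmSimilarities = SimilarityAnalysis.rowSimilarity(drmRows)
+</code></pre>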
 
@@ -664,32 +664,32 @@ content or metadata, not by which users interacted with them.</p>
 
 <p>For this we need input of the form:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>itemID&lt;tab&gt;list-of-tags
+<pre><code>itemID&lt;tab&gt;list-of-tags
 ...
-</code></pre></div></div>
+</code></pre>
 
 <p>The full collection will look like the tags column from a catalog DB. For our ecom example it might be:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>3459860b&lt;tab&gt;men long-sleeve chambray clothing casual
+<pre><code>3459860b&lt;tab&gt;men long-sleeve chambray clothing casual
 9446577d&lt;tab&gt;women tops chambray clothing casual
 ...
-</code></pre></div></div>
+</code></pre>
 
 <p>We’ll use <em>spark-rowsimilarity</em> because we are looking for similar rows, which encode items in this case. As with the 
 collaborative filtering indicators, we use the --omitStrength option. The strengths created are 
 probabilistic log-likelihood ratios and so are used to filter unimportant similarities. Once the filtering or downsampling 
 is finished, we no longer need the strengths. We will get an indicator matrix of the form:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>itemID&lt;tab&gt;list-of-item IDs
+<pre><code>itemID&lt;tab&gt;list-of-item IDs
 ...
-</code></pre></div></div>
+</code></pre>
 
 <p>This is a content indicator since it has found other items with similar content or metadata.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>3459860b&lt;tab&gt;3459860b 3459860b 6749860c 5959860a 3434860a 3477860a
+<pre><code>3459860b&lt;tab&gt;3459860b 3459860b 6749860c 5959860a 3434860a 3477860a
 9446577d&lt;tab&gt;9446577d 9496577d 0943577d 8346577d 9442277d 9446577e
 ...  
-</code></pre></div></div>
+</code></pre>
 
 <p>We now have three indicators: two of the collaborative filtering type and one of the content type.</p>
 
@@ -700,11 +700,11 @@ is finished we no longer need the strengths. We will get an indicator matrix of
 <p>We have three indicators; these are indexed by the search engine into three fields, which we’ll call “purchase”, “view”, and “tags”. 
 We take the user’s history that corresponds to each indicator and create a query of the form:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Query:
+<pre><code>Query:
   field: purchase; q:user's-purchase-history
   field: view; q:user's view-history
   field: tags; q:user's-tags-associated-with-purchases
-</code></pre></div></div>
+</code></pre>
 
 <p>The query will result in an ordered list of items recommended for purchase but skewed towards items with similar tags to 
 the ones the user has already purchased.</p>
@@ -716,11 +716,11 @@ by tagging items with some category of popularity (hot, warm, cold for instance)
 index that as a new indicator field and include the corresponding value in a query 
 on the popularity field. If we use the ecom example but use the query to get “hot” recommendations, it might look like this:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Query:
+<pre><code>Query:
   field: purchase; q:user's-purchase-history
   field: view; q:user's view-history
   field: popularity; q:"hot"
-</code></pre></div></div>
+</code></pre>
 
 <p>This will return recommendations favoring ones that have the intrinsic indicator “hot”.</p>
 

http://git-wip-us.apache.org/repos/asf/mahout/blob/d9686c8b/users/algorithms/spark-naive-bayes.html
----------------------------------------------------------------------
diff --git a/users/algorithms/spark-naive-bayes.html b/users/algorithms/spark-naive-bayes.html
index 46c9e06..ea8d2d3 100644
--- a/users/algorithms/spark-naive-bayes.html
+++ b/users/algorithms/spark-naive-bayes.html
@@ -281,45 +281,45 @@
 <p>Where Bayes has long been a standard in text classification, CBayes is an extension of Bayes that performs particularly well on datasets with skewed classes and has been shown to be competitive with algorithms of higher complexity such as Support Vector Machines.</p>
 
 <h2 id="implementations">Implementations</h2>
-<p>The mahout <code class="highlighter-rouge">math-scala</code> library has an implemetation of both Bayes and CBayes which is further optimized in the <code class="highlighter-rouge">spark</code> module. Currently the Spark optimized version provides CLI drivers for training and testing. Mahout Spark-Naive-Bayes models can also be trained, tested and saved to the filesystem from the Mahout Spark Shell.</p>
+<p>The Mahout <code>math-scala</code> library has an implementation of both Bayes and CBayes, which is further optimized in the <code>spark</code> module. Currently the Spark-optimized version provides CLI drivers for training and testing. Mahout Spark-Naive-Bayes models can also be trained, tested and saved to the filesystem from the Mahout Spark Shell.</p>
 
 <h2 id="preprocessing-and-algorithm">Preprocessing and Algorithm</h2>
 
 <p>As described in <a href="http://people.csail.mit.edu/jrennie/papers/icml03-nb.pdf">[1]</a> Mahout Naive Bayes is broken down into the following steps (assignments are over all possible index values):</p>
 
 <ul>
-  <li>Let <code class="highlighter-rouge">\(\vec{d}=(\vec{d_1},...,\vec{d_n})\)</code> be a set of documents; <code class="highlighter-rouge">\(d_{ij}\)</code> is the count of word <code class="highlighter-rouge">\(i\)</code> in document <code class="highlighter-rouge">\(j\)</code>.</li>
-  <li>Let <code class="highlighter-rouge">\(\vec{y}=(y_1,...,y_n)\)</code> be their labels.</li>
-  <li>Let <code class="highlighter-rouge">\(\alpha_i\)</code> be a smoothing parameter for all words in the vocabulary; let <code class="highlighter-rouge">\(\alpha=\sum_i{\alpha_i}\)</code>.</li>
-  <li><strong>Preprocessing</strong>(via seq2Sparse) TF-IDF transformation and L2 length normalization of <code class="highlighter-rouge">\(\vec{d}\)</code>
+  <li>Let <code>\(\vec{d}=(\vec{d_1},...,\vec{d_n})\)</code> be a set of documents; <code>\(d_{ij}\)</code> is the count of word <code>\(i\)</code> in document <code>\(j\)</code>.</li>
+  <li>Let <code>\(\vec{y}=(y_1,...,y_n)\)</code> be their labels.</li>
+  <li>Let <code>\(\alpha_i\)</code> be a smoothing parameter for all words in the vocabulary; let <code>\(\alpha=\sum_i{\alpha_i}\)</code>.</li>
+  <li><strong>Preprocessing</strong> (via seq2sparse): TF-IDF transformation and L2 length normalization of <code>\(\vec{d}\)</code>
     <ol>
-      <li><code class="highlighter-rouge">\(d_{ij} = \sqrt{d_{ij}}\)</code></li>
-      <li><code class="highlighter-rouge">\(d_{ij} = d_{ij}\left(\log{\frac{\sum_k1}{\sum_k\delta_{ik}+1}}+1\right)\)</code></li>
-      <li><code class="highlighter-rouge">\(d_{ij} =\frac{d_{ij}}{\sqrt{\sum_k{d_{kj}^2}}}\)</code></li>
+      <li><code>\(d_{ij} = \sqrt{d_{ij}}\)</code></li>
+      <li><code>\(d_{ij} = d_{ij}\left(\log{\frac{\sum_k1}{\sum_k\delta_{ik}+1}}+1\right)\)</code></li>
+      <li><code>\(d_{ij} =\frac{d_{ij}}{\sqrt{\sum_k{d_{kj}^2}}}\)</code></li>
     </ol>
   </li>
-  <li><strong>Training: Bayes</strong><code class="highlighter-rouge">\((\vec{d},\vec{y})\)</code> calculate term weights <code class="highlighter-rouge">\(w_{ci}\)</code> as:
+  <li><strong>Training: Bayes</strong> <code>\((\vec{d},\vec{y})\)</code>: calculate term weights <code>\(w_{ci}\)</code> as:
     <ol>
-      <li><code class="highlighter-rouge">\(\hat\theta_{ci}=\frac{d_{ic}+\alpha_i}{\sum_k{d_{kc}}+\alpha}\)</code></li>
-      <li><code class="highlighter-rouge">\(w_{ci}=\log{\hat\theta_{ci}}\)</code></li>
+      <li><code>\(\hat\theta_{ci}=\frac{d_{ic}+\alpha_i}{\sum_k{d_{kc}}+\alpha}\)</code></li>
+      <li><code>\(w_{ci}=\log{\hat\theta_{ci}}\)</code></li>
     </ol>
   </li>
-  <li><strong>Training: CBayes</strong><code class="highlighter-rouge">\((\vec{d},\vec{y})\)</code> calculate term weights <code class="highlighter-rouge">\(w_{ci}\)</code> as:
+  <li><strong>Training: CBayes</strong> <code>\((\vec{d},\vec{y})\)</code>: calculate term weights <code>\(w_{ci}\)</code> as:
     <ol>
-      <li><code class="highlighter-rouge">\(\hat\theta_{ci} = \frac{\sum_{j:y_j\neq c}d_{ij}+\alpha_i}{\sum_{j:y_j\neq c}{\sum_k{d_{kj}}}+\alpha}\)</code></li>
-      <li><code class="highlighter-rouge">\(w_{ci}=-\log{\hat\theta_{ci}}\)</code></li>
-      <li><code class="highlighter-rouge">\(w_{ci}=\frac{w_{ci}}{\sum_i \lvert w_{ci}\rvert}\)</code></li>
+      <li><code>\(\hat\theta_{ci} = \frac{\sum_{j:y_j\neq c}d_{ij}+\alpha_i}{\sum_{j:y_j\neq c}{\sum_k{d_{kj}}}+\alpha}\)</code></li>
+      <li><code>\(w_{ci}=-\log{\hat\theta_{ci}}\)</code></li>
+      <li><code>\(w_{ci}=\frac{w_{ci}}{\sum_i \lvert w_{ci}\rvert}\)</code></li>
     </ol>
   </li>
   <li><strong>Label Assignment/Testing:</strong>
     <ol>
-      <li>Let <code class="highlighter-rouge">\(\vec{t}= (t_1,...,t_n)\)</code> be a test document; let <code class="highlighter-rouge">\(t_i\)</code> be the count of the word <code class="highlighter-rouge">\(t\)</code>.</li>
-      <li>Label the document according to <code class="highlighter-rouge">\(l(t)=\arg\max_c \sum\limits_{i} t_i w_{ci}\)</code></li>
+      <li>Let <code>\(\vec{t}=(t_1,...,t_n)\)</code> be a test document; let <code>\(t_i\)</code> be the count of word <code>\(i\)</code> in the test document.</li>
+      <li>Label the document according to <code>\(l(t)=\arg\max_c \sum\limits_{i} t_i w_{ci}\)</code></li>
     </ol>
   </li>
 </ul>
 
-<p>As we can see, the main difference between Bayes and CBayes is the weight calculation step.  Where Bayes weighs terms more heavily based on the likelihood that they belong to class <code class="highlighter-rouge">\(c\)</code>, CBayes seeks to maximize term weights on the likelihood that they do not belong to any other class.</p>
+<p>As we can see, the main difference between Bayes and CBayes is the weight calculation step. Where Bayes weighs terms more heavily based on the likelihood that they belong to class <code>\(c\)</code>, CBayes weighs terms based on how unlikely they are to occur in the complement of <code>\(c\)</code>, i.e., in all other classes.</p>
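+
+<p>To make the contrast concrete, here is a minimal, self-contained sketch of both weighting schemes over plain per-class term counts; it only illustrates the formulas above and is not Mahout’s actual trainer:</p>
+
+<pre><code>// d(c)(i): aggregated count of term i over all documents of class c.
+def weights(d: Array[Array[Double]], alphaI: Double,
+            complementary: Boolean): Array[Array[Double]] = {
+  val numClasses = d.length
+  val numTerms = d(0).length
+  val alpha = alphaI * numTerms
+  val w = Array.ofDim[Double](numClasses, numTerms)
+  for (c &lt;- 0 until numClasses; i &lt;- 0 until numTerms) {
+    if (!complementary) {
+      // Bayes: w_ci = log((d_ci + alpha_i) / (sum_k d_ck + alpha))
+      w(c)(i) = math.log((d(c)(i) + alphaI) / (d(c).sum + alpha))
+    } else {
+      // CBayes: estimate from the complement of class c, then negate.
+      val numer = (0 until numClasses).filter(_ != c).map(cc =&gt; d(cc)(i)).sum + alphaI
+      val denom = (0 until numClasses).filter(_ != c).map(cc =&gt; d(cc).sum).sum + alpha
+      w(c)(i) = -math.log(numer / denom)
+    }
+  }
+  // CBayes additionally normalizes each class by sum_i |w_ci|.
+  if (complementary) for (c &lt;- 0 until numClasses) {
+    val z = w(c).map(x =&gt; math.abs(x)).sum
+    for (i &lt;- 0 until numTerms) w(c)(i) /= z
+  }
+  w
+}
+</code></pre>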
 
 <h2 id="running-from-the-command-line">Running from the command line</h2>
 
@@ -330,34 +330,34 @@
     <p><strong>Preprocessing:</strong>
 For a set of SequenceFile-formatted documents in PATH_TO_SEQUENCE_FILES, the <a href="https://mahout.apache.org/users/basics/creating-vectors-from-text.html">mahout seq2sparse</a> command performs the TF-IDF transformations (-wt tfidf option) and L2 length normalization (-n 2 option) as follows:</p>
 
-    <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  $ mahout seq2sparse 
+    <pre><code>  $ mahout seq2sparse 
     -i ${PATH_TO_SEQUENCE_FILES} 
     -o ${PATH_TO_TFIDF_VECTORS} 
     -nv 
     -n 2
     -wt tfidf
-</code></pre></div>    </div>
+</code></pre>
   </li>
   <li>
     <p><strong>Training:</strong>
-The model is then trained using <code class="highlighter-rouge">mahout spark-trainnb</code>.  The default is to train a Bayes model. The -c option is given to train a CBayes model:</p>
+The model is then trained using <code>mahout spark-trainnb</code>.  The default is to train a Bayes model. The -c option is given to train a CBayes model:</p>
 
-    <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  $ mahout spark-trainnb
+    <pre><code>  $ mahout spark-trainnb
     -i ${PATH_TO_TFIDF_VECTORS} 
     -o ${PATH_TO_MODEL}
     -ow 
     -c
-</code></pre></div>    </div>
+</code></pre>
   </li>
   <li>
     <p><strong>Label Assignment/Testing:</strong>
-Classification and testing on a holdout set can then be performed via <code class="highlighter-rouge">mahout spark-testnb</code>. Again, the -c option indicates that the model is CBayes:</p>
+Classification and testing on a holdout set can then be performed via <code>mahout spark-testnb</code>. Again, the -c option indicates that the model is CBayes:</p>
 
-    <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  $ mahout spark-testnb 
+    <pre><code>  $ mahout spark-testnb 
     -i ${PATH_TO_TFIDF_TEST_VECTORS}
     -m ${PATH_TO_MODEL} 
     -c 
-</code></pre></div>    </div>
+</code></pre>
   </li>
 </ul>
 
@@ -367,9 +367,9 @@ Classification and testing on a holdout set can then be performed via <code clas
   <li>
     <p><strong>Preprocessing:</strong> <em>note: still reliant on MapReduce seq2sparse</em></p>
 
-    <p>Only relevant parameters used for Bayes/CBayes as detailed above are shown. Several other transformations can be performed by <code class="highlighter-rouge">mahout seq2sparse</code> and used as input to Bayes/CBayes.  For a full list of <code class="highlighter-rouge">mahout seq2Sparse</code> options see the <a href="https://mahout.apache.org/users/basics/creating-vectors-from-text.html">Creating vectors from text</a> page.</p>
+    <p>Only the parameters relevant to Bayes/CBayes, as detailed above, are shown. Several other transformations can be performed by <code>mahout seq2sparse</code> and used as input to Bayes/CBayes.  For a full list of <code>mahout seq2sparse</code> options see the <a href="https://mahout.apache.org/users/basics/creating-vectors-from-text.html">Creating vectors from text</a> page.</p>
 
-    <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  $ mahout seq2sparse                         
+    <pre><code>  $ mahout seq2sparse                         
     --output (-o) output             The directory pathname for output.        
     --input (-i) input               Path to job input directory.              
     --weight (-wt) weight            The kind of weight to use. Currently TF   
@@ -384,12 +384,12 @@ Classification and testing on a holdout set can then be performed via <code clas
                                          else false                                
     --namedVector (-nv)              (Optional) Whether output vectors should  
                                          be NamedVectors. If set true else false   
-</code></pre></div>    </div>
+</code></pre>
   </li>
   <li>
     <p><strong>Training:</strong></p>
 
-    <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  $ mahout spark-trainnb
+    <pre><code>  $ mahout spark-trainnb
     --input (-i) input               Path to job input directory.                 
     --output (-o) output             The directory pathname for output.           
     --trainComplementary (-c)        Train complementary? Default is false.
@@ -398,12 +398,12 @@ Classification and testing on a holdout set can then be performed via <code clas
                                          cores to get a performance improvement, 
                                          for example "local[4]"
     --help (-h)                      Print out help                               
-</code></pre></div>    </div>
+</code></pre>
   </li>
   <li>
     <p><strong>Testing:</strong></p>
 
-    <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>  $ mahout spark-testnb   
+    <pre><code>  $ mahout spark-testnb   
     --input (-i) input               Path to job input directory.                  
     --model (-m) model               The path to the model built during training.   
     --testComplementary (-c)         Test complementary? Default is false.                          
@@ -412,7 +412,7 @@ Classification and testing on a holdout set can then be performed via <code clas
                                          cores to get a performance improvement, 
                                          for example "local[4]"                        
     --help (-h)                      Print out help                                
-</code></pre></div>    </div>
+</code></pre>
   </li>
 </ul>