Posted to commits@spark.apache.org by gu...@apache.org on 2021/02/22 00:23:04 UTC

[spark-website] branch asf-site updated: Upgrade Jekyll to 4.2.0

This is an automated email from the ASF dual-hosted git repository.

gurwls223 pushed a commit to branch asf-site
in repository https://gitbox.apache.org/repos/asf/spark-website.git


The following commit(s) were added to refs/heads/asf-site by this push:
     new 60818c4  Upgrade Jekyll to 4.2.0
60818c4 is described below

commit 60818c40c9e6714a82b4335ef1150b08b92cf98e
Author: attilapiros <pi...@gmail.com>
AuthorDate: Mon Feb 22 09:22:49 2021 +0900

    Upgrade Jekyll to 4.2.0
    
    Most of the changes result from the new CSS class `language-plaintext`, which is
    now emitted for code spans and blocks that have no explicit language. A few empty
    lines were removed, and some lines now wrap at a different position.
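    
    As a rough sketch, the regenerated pages can be reproduced locally with the
    standard Bundler/Jekyll workflow (this assumes the repository's Jekyll
    configuration writes the generated HTML into `site/`, consistent with the
    file list below):
    
        # install the gem versions pinned in the updated Gemfile/Gemfile.lock
        bundle install
    
        # rebuild the static site with Jekyll 4.2.0; regenerated pages land in site/
        bundle exec jekyll build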
    
    Author: attilapiros <pi...@gmail.com>
    
    Closes #308 from attilapiros/jekyll_version_upgrade.
---
 Gemfile                                |  21 ++-
 Gemfile.lock                           |  54 +++++---
 site/committers.html                   |  26 ++--
 site/community.html                    |  14 +-
 site/contributing.html                 |  64 ++++-----
 site/developer-tools.html              | 202 ++++++++++++++--------------
 site/downloads.html                    |   4 +-
 site/examples.html                     | 238 ++++++++++++++++-----------------
 site/news/index.html                   |  45 -------
 site/powered-by.html                   |   2 +-
 site/release-process.html              |  72 +++++-----
 site/releases/spark-release-0-8-0.html |  24 ++--
 site/releases/spark-release-0-8-1.html |  18 +--
 site/releases/spark-release-0-9-0.html |  18 +--
 site/releases/spark-release-1-0-0.html |   2 +-
 site/releases/spark-release-1-0-1.html |   4 +-
 site/releases/spark-release-1-1-0.html |   8 +-
 site/releases/spark-release-1-2-0.html |  18 +--
 site/releases/spark-release-1-2-2.html |   2 +-
 site/releases/spark-release-1-3-0.html |   4 +-
 site/releases/spark-release-1-5-0.html |   2 +-
 site/releases/spark-release-1-6-0.html |  14 +-
 site/releases/spark-release-2-1-0.html |   2 +-
 site/releases/spark-release-2-2-0.html |   4 +-
 site/releases/spark-release-2-2-1.html |   2 +-
 site/releases/spark-release-2-3-0.html |  80 +++++------
 site/releases/spark-release-2-3-1.html |   2 +-
 site/releases/spark-release-2-4-0.html |   2 +-
 site/releases/spark-release-2-4-1.html |   2 +-
 site/releases/spark-release-2-4-2.html |   2 +-
 site/releases/spark-release-3-0-0.html |  16 +--
 site/releases/spark-release-3-0-2.html |   2 +-
 site/screencasts/index.html            |   4 -
 site/security.html                     |  32 ++---
 site/third-party-projects.html         |   2 +-
 site/versioning-policy.html            |   2 +-
 36 files changed, 497 insertions(+), 513 deletions(-)

diff --git a/Gemfile b/Gemfile
index e233a05..2719567 100644
--- a/Gemfile
+++ b/Gemfile
@@ -1,4 +1,21 @@
+#
+# Licensed to the Apache Software Foundation (ASF) under one or more
+# contributor license agreements.  See the NOTICE file distributed with
+# this work for additional information regarding copyright ownership.
+# The ASF licenses this file to You under the Apache License, Version 2.0
+# (the "License"); you may not use this file except in compliance with
+# the License.  You may obtain a copy of the License at
+#
+#    http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+#
+
 source "https://rubygems.org"
 
-gem "jekyll", "3.6.3"
-gem "rouge", "2.2.1"
+gem "jekyll", "4.2.0"
+gem "rouge", "3.26.0"
diff --git a/Gemfile.lock b/Gemfile.lock
index 07ab375..6a818aa 100644
--- a/Gemfile.lock
+++ b/Gemfile.lock
@@ -4,49 +4,65 @@ GEM
     addressable (2.7.0)
       public_suffix (>= 2.0.2, < 5.0)
     colorator (1.1.0)
+    concurrent-ruby (1.1.8)
+    em-websocket (0.5.2)
+      eventmachine (>= 0.12.9)
+      http_parser.rb (~> 0.6.0)
+    eventmachine (1.2.7)
     ffi (1.14.2)
     forwardable-extended (2.6.0)
-    jekyll (3.6.3)
+    http_parser.rb (0.6.0)
+    i18n (1.8.9)
+      concurrent-ruby (~> 1.0)
+    jekyll (4.2.0)
       addressable (~> 2.4)
       colorator (~> 1.0)
-      jekyll-sass-converter (~> 1.0)
-      jekyll-watch (~> 1.1)
-      kramdown (~> 1.14)
+      em-websocket (~> 0.5)
+      i18n (~> 1.0)
+      jekyll-sass-converter (~> 2.0)
+      jekyll-watch (~> 2.0)
+      kramdown (~> 2.3)
+      kramdown-parser-gfm (~> 1.0)
       liquid (~> 4.0)
-      mercenary (~> 0.3.3)
+      mercenary (~> 0.4.0)
       pathutil (~> 0.9)
-      rouge (>= 1.7, < 3)
+      rouge (~> 3.0)
       safe_yaml (~> 1.0)
-    jekyll-sass-converter (1.5.2)
-      sass (~> 3.4)
-    jekyll-watch (1.5.1)
+      terminal-table (~> 2.0)
+    jekyll-sass-converter (2.1.0)
+      sassc (> 2.0.1, < 3.0)
+    jekyll-watch (2.2.1)
       listen (~> 3.0)
-    kramdown (1.17.0)
+    kramdown (2.3.0)
+      rexml
+    kramdown-parser-gfm (1.1.0)
+      kramdown (~> 2.0)
     liquid (4.0.3)
     listen (3.4.1)
       rb-fsevent (~> 0.10, >= 0.10.3)
       rb-inotify (~> 0.9, >= 0.9.10)
-    mercenary (0.3.6)
+    mercenary (0.4.0)
     pathutil (0.16.2)
       forwardable-extended (~> 2.6)
     public_suffix (4.0.6)
     rb-fsevent (0.10.4)
     rb-inotify (0.10.1)
       ffi (~> 1.0)
-    rouge (2.2.1)
+    rexml (3.2.4)
+    rouge (3.26.0)
     safe_yaml (1.0.5)
-    sass (3.7.4)
-      sass-listen (~> 4.0.0)
-    sass-listen (4.0.0)
-      rb-fsevent (~> 0.9, >= 0.9.4)
-      rb-inotify (~> 0.9, >= 0.9.7)
+    sassc (2.4.0)
+      ffi (~> 1.9)
+    terminal-table (2.0.0)
+      unicode-display_width (~> 1.1, >= 1.1.1)
+    unicode-display_width (1.7.0)
 
 PLATFORMS
   ruby
 
 DEPENDENCIES
-  jekyll (= 3.6.3)
-  rouge (= 2.2.1)
+  jekyll (= 4.2.0)
+  rouge (= 3.26.0)
 
 BUNDLED WITH
    1.17.2
diff --git a/site/committers.html b/site/committers.html
index 58320b4..a72207b 100644
--- a/site/committers.html
+++ b/site/committers.html
@@ -565,7 +565,7 @@ who have shown they understand and can help with these activities.</p>
 <a href="/contributing.html">Contributing to Spark</a>. 
 In particular, if you are working on an area of the codebase you are unfamiliar with, look at the 
 Git history for that code to see who reviewed patches before. You can do this using 
-<code class="highlighter-rouge">git log --format=full &lt;filename&gt;</code>, by examining the &#8220;Commit&#8221; field to see who committed each patch.</p>
+<code class="language-plaintext highlighter-rouge">git log --format=full &lt;filename&gt;</code>, by examining the &#8220;Commit&#8221; field to see who committed each patch.</p>
 
 <h3>When to commit/merge a pull request</h3>
 
@@ -590,16 +590,16 @@ it. So please don&#8217;t add any test commits or anything like that, only real
 
 <h4>Setting up Remotes</h4>
 
-<p>To use the <code class="highlighter-rouge">merge_spark_pr.py</code> script described below, you 
-will need to add a git remote called <code class="highlighter-rouge">apache</code> at <code class="highlighter-rouge">https://github.com/apache/spark</code>, 
-as well as one called <code class="highlighter-rouge">apache-github</code> at <code class="highlighter-rouge">git://github.com/apache/spark</code>.</p>
+<p>To use the <code class="language-plaintext highlighter-rouge">merge_spark_pr.py</code> script described below, you 
+will need to add a git remote called <code class="language-plaintext highlighter-rouge">apache</code> at <code class="language-plaintext highlighter-rouge">https://github.com/apache/spark</code>, 
+as well as one called <code class="language-plaintext highlighter-rouge">apache-github</code> at <code class="language-plaintext highlighter-rouge">git://github.com/apache/spark</code>.</p>
 
-<p>You will likely also have a remote <code class="highlighter-rouge">origin</code> pointing to your fork of Spark, and
-<code class="highlighter-rouge">upstream</code> pointing to the <code class="highlighter-rouge">apache/spark</code> GitHub repo.</p>
+<p>You will likely also have a remote <code class="language-plaintext highlighter-rouge">origin</code> pointing to your fork of Spark, and
+<code class="language-plaintext highlighter-rouge">upstream</code> pointing to the <code class="language-plaintext highlighter-rouge">apache/spark</code> GitHub repo.</p>
 
-<p>If correct, your <code class="highlighter-rouge">git remote -v</code> should look like:</p>
+<p>If correct, your <code class="language-plaintext highlighter-rouge">git remote -v</code> should look like:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apache	https://github.com/apache/spark.git (fetch)
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>apache	https://github.com/apache/spark.git (fetch)
 apache	https://github.com/apache/spark.git (push)
 apache-github	git://github.com/apache/spark (fetch)
 apache-github	git://github.com/apache/spark (push)
@@ -609,7 +609,7 @@ upstream	https://github.com/apache/spark.git (fetch)
 upstream	https://github.com/apache/spark.git (push)
 </code></pre></div></div>
 
-<p>For the <code class="highlighter-rouge">apache</code> repo, you will need to set up command-line authentication to GitHub. This may
+<p>For the <code class="language-plaintext highlighter-rouge">apache</code> repo, you will need to set up command-line authentication to GitHub. This may
 include setting up an SSH key and/or personal access token. See:</p>
 
 <ul>
@@ -617,7 +617,7 @@ include setting up an SSH key and/or personal access token. See:</p>
   <li>https://help.github.com/articles/creating-a-personal-access-token-for-the-command-line/</li>
 </ul>
 
-<p>Ask <code class="highlighter-rouge">dev@spark.apache.org</code> if you have trouble with these steps, or want help doing your first merge.</p>
+<p>Ask <code class="language-plaintext highlighter-rouge">dev@spark.apache.org</code> if you have trouble with these steps, or want help doing your first merge.</p>
 
 <h4>Merge Script</h4>
 
@@ -629,9 +629,9 @@ which squashes the pull request&#8217;s changes into one commit.</p>
 
 <p>If you want to amend a commit before merging – which should be used for trivial touch-ups – 
 then simply let the script wait at the point where it asks you if you want to push to Apache. 
-Then, in a separate window, modify the code and push a commit. Run <code class="highlighter-rouge">git rebase -i HEAD~2</code> and 
+Then, in a separate window, modify the code and push a commit. Run <code class="language-plaintext highlighter-rouge">git rebase -i HEAD~2</code> and 
 &#8220;squash&#8221; your new commit. Edit the commit message just after to remove your commit message. 
-You can verify the result is one change with <code class="highlighter-rouge">git log</code>. Then resume the script in the other window.</p>
+You can verify the result is one change with <code class="language-plaintext highlighter-rouge">git log</code>. Then resume the script in the other window.</p>
 
 <p>Also, please remember to set Assignee on JIRAs where applicable when they are resolved. The script 
 can do this automatically in most cases. However where the contributor is not yet a part of the
@@ -643,7 +643,7 @@ https://issues.apache.org/jira/plugins/servlet/project-config/SPARK/roles .</p>
 
 <h3>Policy on Backporting Bug Fixes</h3>
 
-<p>From <a href="https://www.mail-archive.com/dev@spark.apache.org/msg10284.html"><code class="highlighter-rouge">pwendell</code></a>:</p>
+<p>From <a href="https://www.mail-archive.com/dev@spark.apache.org/msg10284.html"><code class="language-plaintext highlighter-rouge">pwendell</code></a>:</p>
 
 <p>The trade off when backporting is you get to deliver the fix to people running older versions 
 (great!), but you risk introducing new or even worse bugs in maintenance releases (bad!). 
diff --git a/site/community.html b/site/community.html
index 992eba0..c7b3d03 100644
--- a/site/community.html
+++ b/site/community.html
@@ -208,7 +208,7 @@
 <h4>StackOverflow</h4>
 
 <p>For usage questions and help (e.g. how to use this Spark API), it is recommended you use the 
-StackOverflow tag <a href="https://stackoverflow.com/questions/tagged/apache-spark"><code class="highlighter-rouge">apache-spark</code></a> 
+StackOverflow tag <a href="https://stackoverflow.com/questions/tagged/apache-spark"><code class="language-plaintext highlighter-rouge">apache-spark</code></a> 
 as it is an active forum for Spark users&#8217; questions and answers.</p>
 
 <p>Some quick tips when using StackOverflow:</p>
@@ -217,17 +217,17 @@ as it is an active forum for Spark users&#8217; questions and answers.</p>
   <li>Prior to asking submitting questions, please:
     <ul>
       <li>Search StackOverflow&#8217;s 
-<a href="https://stackoverflow.com/questions/tagged/apache-spark"><code class="highlighter-rouge">apache-spark</code></a> tag to see if 
+<a href="https://stackoverflow.com/questions/tagged/apache-spark"><code class="language-plaintext highlighter-rouge">apache-spark</code></a> tag to see if 
 your question has already been answered</li>
       <li>Search the nabble archive for
 <a href="http://apache-spark-user-list.1001560.n3.nabble.com/">user@spark.apache.org</a></li>
     </ul>
   </li>
   <li>Please follow the StackOverflow <a href="https://stackoverflow.com/help/how-to-ask">code of conduct</a></li>
-  <li>Always use the <code class="highlighter-rouge">apache-spark</code> tag when asking questions</li>
+  <li>Always use the <code class="language-plaintext highlighter-rouge">apache-spark</code> tag when asking questions</li>
   <li>Please also use a secondary tag to specify components so subject matter experts can more easily find them.
- Examples include: <code class="highlighter-rouge">pyspark</code>, <code class="highlighter-rouge">spark-dataframe</code>, <code class="highlighter-rouge">spark-streaming</code>, <code class="highlighter-rouge">spark-r</code>, <code class="highlighter-rouge">spark-mllib</code>, 
-<code class="highlighter-rouge">spark-ml</code>, <code class="highlighter-rouge">spark-graphx</code>, <code class="highlighter-rouge">spark-graphframes</code>, <code class="highlighter-rouge">spark-tensorframes</code>, etc.</li>
+ Examples include: <code class="language-plaintext highlighter-rouge">pyspark</code>, <code class="language-plaintext highlighter-rouge">spark-dataframe</code>, <code class="language-plaintext highlighter-rouge">spark-streaming</code>, <code class="language-plaintext highlighter-rouge">spark-r</code>, <code class="language-plaintext highlighter-rouge">spark-mllib</code>, 
+<code class="language-plaintext highlighter-rouge">spark-ml</code>, <code class="language-plaintext highlighter-rouge">spark-graphx</code>, <code class="language-plaintext highlighter-rouge">spark-graphframes</code>, <code class="language-plaintext highlighter-rouge">spark-tensorframes</code>, etc.</li>
   <li>Please do not cross-post between StackOverflow and the mailing lists</li>
   <li>No jobs, sales, or solicitation is permitted on StackOverflow</li>
 </ul>
@@ -258,14 +258,14 @@ project, and scenarios, it is recommended you use the user@spark.apache.org mail
 <ul>
   <li>Prior to asking submitting questions, please:
     <ul>
-      <li>Search StackOverflow at <a href="https://stackoverflow.com/questions/tagged/apache-spark"><code class="highlighter-rouge">apache-spark</code></a> 
+      <li>Search StackOverflow at <a href="https://stackoverflow.com/questions/tagged/apache-spark"><code class="language-plaintext highlighter-rouge">apache-spark</code></a> 
 to see if your question has already been answered</li>
       <li>Search the nabble archive for
 <a href="http://apache-spark-user-list.1001560.n3.nabble.com/">user@spark.apache.org</a></li>
     </ul>
   </li>
   <li>Tagging the subject line of your email will help you get a faster response, e.g. 
-<code class="highlighter-rouge">[Spark SQL]: Does Spark SQL support LEFT SEMI JOIN?</code></li>
+<code class="language-plaintext highlighter-rouge">[Spark SQL]: Does Spark SQL support LEFT SEMI JOIN?</code></li>
   <li>Tags may help identify a topic by:
     <ul>
       <li>Component: Spark Core, Spark SQL, ML, MLlib, GraphFrames, GraphX, TensorFrames, etc</li>
diff --git a/site/contributing.html b/site/contributing.html
index 9005074..0c99606 100644
--- a/site/contributing.html
+++ b/site/contributing.html
@@ -214,7 +214,7 @@ rather than just open pull requests.</p>
 
 <h2>Contributing by Helping Other Users</h2>
 
-<p>A great way to contribute to Spark is to help answer user questions on the <code class="highlighter-rouge">user@spark.apache.org</code> 
+<p>A great way to contribute to Spark is to help answer user questions on the <code class="language-plaintext highlighter-rouge">user@spark.apache.org</code> 
 mailing list or on StackOverflow. There are always many new Spark users; taking a few minutes to 
 help answer a question is a very valuable community service.</p>
 
@@ -229,7 +229,7 @@ like StackOverflow.</p>
 <h2>Contributing by Testing Releases</h2>
 
 <p>Spark&#8217;s release process is community-oriented, and members of the community can vote on new 
-releases on the <code class="highlighter-rouge">dev@spark.apache.org</code> mailing list. Spark users are invited to subscribe to 
+releases on the <code class="language-plaintext highlighter-rouge">dev@spark.apache.org</code> mailing list. Spark users are invited to subscribe to 
 this list to receive announcements, and test their workloads on newer release and provide 
 feedback on any performance or correctness issues found in the newer release.</p>
 
@@ -249,8 +249,8 @@ convenient way to view and filter open PRs.</p>
 <p>To propose a change to <em>release</em> documentation (that is, docs that appear under 
 <a href="https://spark.apache.org/docs/">https://spark.apache.org/docs/</a>), 
 edit the Markdown source files in Spark&#8217;s 
-<a href="https://github.com/apache/spark/tree/master/docs"><code class="highlighter-rouge">docs/</code></a> directory, 
-whose <code class="highlighter-rouge">README</code> file shows how to build the documentation locally to test your changes.
+<a href="https://github.com/apache/spark/tree/master/docs"><code class="language-plaintext highlighter-rouge">docs/</code></a> directory, 
+whose <code class="language-plaintext highlighter-rouge">README</code> file shows how to build the documentation locally to test your changes.
 The process to propose a doc change is otherwise the same as the process for proposing code 
 changes below.</p>
 
@@ -290,8 +290,8 @@ request to fix the bug should narrow down the problem to the root cause.</p>
 must provide a benchmark to prove the problem is indeed fixed.</p>
 
 <p>Note that, data correctness/data loss bugs are very serious. Make sure the corresponding bug 
-report JIRA ticket is labeled as <code class="highlighter-rouge">correctness</code> or <code class="highlighter-rouge">data-loss</code>. If the bug report doesn&#8217;t get 
-enough attention, please send an email to <code class="highlighter-rouge">dev@spark.apache.org</code>, to draw more attentions.</p>
+report JIRA ticket is labeled as <code class="language-plaintext highlighter-rouge">correctness</code> or <code class="language-plaintext highlighter-rouge">data-loss</code>. If the bug report doesn&#8217;t get 
+enough attention, please send an email to <code class="language-plaintext highlighter-rouge">dev@spark.apache.org</code>, to draw more attentions.</p>
 
 <p>It is possible to propose new features as well. These are generally not helpful unless 
 accompanied by detail, such as a design document and/or code change. Large new contributions 
@@ -350,7 +350,7 @@ Review can take hours or days of committer time. Everyone benefits if contributo
 changes that are useful, clear, easy to evaluate, and already pass basic checks.</p>
 
 <p>Sometimes, a contributor will already have a particular new change or bug in mind. If seeking 
-ideas, consult the list of starter tasks in JIRA, or ask the <code class="highlighter-rouge">user@spark.apache.org</code> mailing list.</p>
+ideas, consult the list of starter tasks in JIRA, or ask the <code class="language-plaintext highlighter-rouge">user@spark.apache.org</code> mailing list.</p>
 
 <p>Before proceeding, contributors should evaluate if the proposed change is likely to be relevant, 
 new and actionable:</p>
@@ -359,8 +359,8 @@ new and actionable:</p>
   <li>Is it clear that code must change? Proposing a JIRA and pull request is appropriate only when a 
 clear problem or change has been identified. If simply having trouble using Spark, use the mailing 
 lists first, rather than consider filing a JIRA or proposing a change. When in doubt, email 
-<code class="highlighter-rouge">user@spark.apache.org</code> first about the possible change</li>
-  <li>Search the <code class="highlighter-rouge">user@spark.apache.org</code> and <code class="highlighter-rouge">dev@spark.apache.org</code> mailing list 
+<code class="language-plaintext highlighter-rouge">user@spark.apache.org</code> first about the possible change</li>
+  <li>Search the <code class="language-plaintext highlighter-rouge">user@spark.apache.org</code> and <code class="language-plaintext highlighter-rouge">dev@spark.apache.org</code> mailing list 
 <a href="/community.html#mailing-lists">archives</a> for 
 related discussions. Use <a href="http://search-hadoop.com/?q=&amp;fc_project=Spark">search-hadoop.com</a> 
 or similar search tools. 
@@ -368,7 +368,7 @@ Often, the problem has been discussed before, with a resolution that doesn&#8217
 change, or recording what kinds of changes will not be accepted as a resolution.</li>
   <li>Search JIRA for existing issues: 
 <a href="https://issues.apache.org/jira/browse/SPARK">https://issues.apache.org/jira/browse/SPARK</a></li>
-  <li>Type <code class="highlighter-rouge">spark [search terms]</code> at the top right search box. If a logically similar issue already 
+  <li>Type <code class="language-plaintext highlighter-rouge">spark [search terms]</code> at the top right search box. If a logically similar issue already 
 exists, then contribute to the discussion on the existing JIRA and pull request first, instead of 
 creating a new one.</li>
   <li>Is the scope of the change matched to the contributor&#8217;s level of experience? Anyone is qualified 
@@ -392,7 +392,7 @@ that maintainability, consistency, and code quality come first. New algorithms s
   <li>Be well documented</li>
   <li>Have APIs consistent with other algorithms in MLLib that accomplish the same thing</li>
   <li>Come with a reasonable expectation of developer support.</li>
-  <li>Have <code class="highlighter-rouge">@Since</code> annotation on public classes, methods, and variables.</li>
+  <li>Have <code class="language-plaintext highlighter-rouge">@Since</code> annotation on public classes, methods, and variables.</li>
 </ul>
 
 <h3>Code Review Criteria</h3>
@@ -447,7 +447,7 @@ have the legal authority to do so.</strong></p>
 
 <p>If you are interested in working with the newest under-development code or contributing to Apache Spark development, you can check out the master branch from Git:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Master development branch
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Master development branch
 git clone git://github.com/apache/spark.git
 </code></pre></div></div>
 
@@ -472,7 +472,7 @@ decisions are discussed in JIRA.</p>
   </li>
   <li>If the change is new, then it usually needs a new JIRA. However, trivial changes, where the
 what should change is virtually the same as the how it should change do not require a JIRA. 
-Example: <code class="highlighter-rouge">Fix typos in Foo scaladoc</code></li>
+Example: <code class="language-plaintext highlighter-rouge">Fix typos in Foo scaladoc</code></li>
   <li>If required, create a new JIRA:
     <ol>
       <li>Provide a descriptive Title. &#8220;Update web UI&#8221; or &#8220;Problem in scheduler&#8221; is not sufficient.
@@ -503,11 +503,11 @@ Example: <code class="highlighter-rouge">Fix typos in Foo scaladoc</code></li>
  problem or need the change</li>
           <li><strong>Label</strong>. Not widely used, except for the following:
             <ul>
-              <li><code class="highlighter-rouge">correctness</code>: a correctness issue</li>
-              <li><code class="highlighter-rouge">data-loss</code>: a data loss issue</li>
-              <li><code class="highlighter-rouge">release-notes</code>: the change&#8217;s effects need mention in release notes. The JIRA or pull request
+              <li><code class="language-plaintext highlighter-rouge">correctness</code>: a correctness issue</li>
+              <li><code class="language-plaintext highlighter-rouge">data-loss</code>: a data loss issue</li>
+              <li><code class="language-plaintext highlighter-rouge">release-notes</code>: the change&#8217;s effects need mention in release notes. The JIRA or pull request
   should include detail suitable for inclusion in release notes &#8211; see &#8220;Docs Text&#8221; below.</li>
-              <li><code class="highlighter-rouge">starter</code>: small, simple change suitable for new contributors</li>
+              <li><code class="language-plaintext highlighter-rouge">starter</code>: small, simple change suitable for new contributors</li>
             </ul>
           </li>
           <li><strong>Docs Text</strong>: For issues that require an entry in the release notes, this should contain the
@@ -528,7 +528,7 @@ Example: <code class="highlighter-rouge">Fix typos in Foo scaladoc</code></li>
     </ol>
   </li>
   <li>If the change is a large change, consider inviting discussion on the issue at 
-<code class="highlighter-rouge">dev@spark.apache.org</code> first before proceeding to implement the change.</li>
+<code class="language-plaintext highlighter-rouge">dev@spark.apache.org</code> first before proceeding to implement the change.</li>
 </ol>
 
 <h3>Pull Request</h3>
@@ -546,25 +546,25 @@ and add them as needed.
    a couple of tests to an existing test class. See the examples below:
         <ul>
           <li>Scala
-            <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>test("SPARK-12345: a short description of the test") {
+            <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>test("SPARK-12345: a short description of the test") {
   ...
 </code></pre></div>            </div>
           </li>
           <li>Java
-            <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Test
+            <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>@Test
 public void testCase() {
   // SPARK-12345: a short description of the test
   ...
 </code></pre></div>            </div>
           </li>
           <li>Python
-            <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def test_case(self):
+            <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>def test_case(self):
     # SPARK-12345: a short description of the test
     ...
 </code></pre></div>            </div>
           </li>
           <li>R
-            <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>test_that("SPARK-12345: a short description of the test", {
+            <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>test_that("SPARK-12345: a short description of the test", {
   ...
 </code></pre></div>            </div>
           </li>
@@ -572,20 +572,20 @@ public void testCase() {
       </li>
     </ol>
   </li>
-  <li>Run all tests with <code class="highlighter-rouge">./dev/run-tests</code> to verify that the code still compiles, passes tests, and 
+  <li>Run all tests with <code class="language-plaintext highlighter-rouge">./dev/run-tests</code> to verify that the code still compiles, passes tests, and 
 passes style checks. If style checks fail, review the Code Style Guide below.</li>
   <li><a href="https://help.github.com/articles/using-pull-requests/">Open a pull request</a> against 
-the <code class="highlighter-rouge">master</code> branch of <code class="highlighter-rouge">apache/spark</code>. (Only in special cases would the PR be opened against other branches.)
+the <code class="language-plaintext highlighter-rouge">master</code> branch of <code class="language-plaintext highlighter-rouge">apache/spark</code>. (Only in special cases would the PR be opened against other branches.)
     <ol>
-      <li>The PR title should be of the form <code class="highlighter-rouge">[SPARK-xxxx][COMPONENT] Title</code>, where <code class="highlighter-rouge">SPARK-xxxx</code> is 
-  the relevant JIRA number, <code class="highlighter-rouge">COMPONENT </code>is one of the PR categories shown at 
+      <li>The PR title should be of the form <code class="language-plaintext highlighter-rouge">[SPARK-xxxx][COMPONENT] Title</code>, where <code class="language-plaintext highlighter-rouge">SPARK-xxxx</code> is 
+  the relevant JIRA number, <code class="language-plaintext highlighter-rouge">COMPONENT </code>is one of the PR categories shown at 
   <a href="https://spark-prs.appspot.com/">spark-prs.appspot.com</a> and 
   Title may be the JIRA&#8217;s title or a more specific title describing the PR itself.</li>
       <li>If the pull request is still a work in progress, and so is not ready to be merged, 
-  but needs to be pushed to GitHub to facilitate review, then add <code class="highlighter-rouge">[WIP]</code> after the component.</li>
+  but needs to be pushed to GitHub to facilitate review, then add <code class="language-plaintext highlighter-rouge">[WIP]</code> after the component.</li>
       <li>Consider identifying committers or other contributors who have worked on the code being 
   changed. Find the file(s) in GitHub and click &#8220;Blame&#8221; to see a line-by-line annotation of 
-  who changed the code last. You can add <code class="highlighter-rouge">@username</code> in the PR description to ping them 
+  who changed the code last. You can add <code class="language-plaintext highlighter-rouge">@username</code> in the PR description to ping them 
   immediately.</li>
       <li>Please state that the contribution is your original work and that you license the work 
   to the project under the project&#8217;s open source license.</li>
@@ -638,8 +638,8 @@ comment LGTM you will be expected to help with bugs or follow-up issues on the p
 judicious use of LGTMs is a great way to gain credibility as a reviewer with the broader community.</li>
   <li>Sometimes, other changes will be merged which conflict with your pull request&#8217;s changes. The 
 PR can&#8217;t be merged until the conflict is resolved. This can be resolved by, for example, adding a remote
-to keep up with upstream changes by <code class="highlighter-rouge">git remote add upstream https://github.com/apache/spark.git</code>,
-running <code class="highlighter-rouge">git fetch upstream</code> followed by <code class="highlighter-rouge">git rebase upstream/master</code> and resolving the conflicts by hand,
+to keep up with upstream changes by <code class="language-plaintext highlighter-rouge">git remote add upstream https://github.com/apache/spark.git</code>,
+running <code class="language-plaintext highlighter-rouge">git fetch upstream</code> followed by <code class="language-plaintext highlighter-rouge">git rebase upstream/master</code> and resolving the conflicts by hand,
 then pushing the result to your branch.</li>
   <li>Try to be responsive to the discussion rather than let days pass between replies</li>
 </ul>
@@ -651,7 +651,7 @@ then pushing the result to your branch.</li>
 along with the associated JIRA if any
     <ul>
       <li>Note that in the rare case you are asked to open a pull request against a branch besides 
-<code class="highlighter-rouge">master</code>, that you will actually have to close the pull request manually</li>
+<code class="language-plaintext highlighter-rouge">master</code>, that you will actually have to close the pull request manually</li>
       <li>The JIRA will be Assigned to the primary contributor to the change as a way of giving credit. 
 If the JIRA isn&#8217;t closed and/or Assigned promptly, comment on the JIRA.</li>
     </ul>
@@ -701,7 +701,7 @@ Scala guidelines below. The latter is preferred.</li>
 
 <p>If you&#8217;re not sure about the right style for something, try to follow the style of the existing 
 codebase. Look at whether there are other examples in the code that use your feature. Feel free 
-to ask on the <code class="highlighter-rouge">dev@spark.apache.org</code> list as well and/or ask committers.</p>
+to ask on the <code class="language-plaintext highlighter-rouge">dev@spark.apache.org</code> list as well and/or ask committers.</p>
 
   </div>
 </div>
diff --git a/site/developer-tools.html b/site/developer-tools.html
index 02a556e..8b38f34 100644
--- a/site/developer-tools.html
+++ b/site/developer-tools.html
@@ -211,7 +211,7 @@ be cumbersome when doing iterative development. When developing locally, it is p
 an assembly jar including all of Spark&#8217;s dependencies and then re-package only Spark itself 
 when making changes.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ build/sbt clean package
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ build/sbt clean package
 $ ./bin/spark-shell
 $ export SPARK_PREPEND_CLASSES=true
 $ ./bin/spark-shell # Now it's using compiled classes
@@ -231,20 +231,20 @@ $ build/sbt ~compile
 <p><a href="https://github.com/typesafehub/zinc">Zinc</a> is a long-running server version of SBT&#8217;s incremental
 compiler. When run locally as a background process, it speeds up builds of Scala-based projects
 like Spark. Developers who regularly recompile Spark with Maven will be the most interested in
-Zinc. The project site gives instructions for building and running <code class="highlighter-rouge">zinc</code>; OS X users can
-install it using <code class="highlighter-rouge">brew install zinc</code>.</p>
+Zinc. The project site gives instructions for building and running <code class="language-plaintext highlighter-rouge">zinc</code>; OS X users can
+install it using <code class="language-plaintext highlighter-rouge">brew install zinc</code>.</p>
 
-<p>If using the <code class="highlighter-rouge">build/mvn</code> package <code class="highlighter-rouge">zinc</code> will automatically be downloaded and leveraged for all
-builds. This process will auto-start after the first time <code class="highlighter-rouge">build/mvn</code> is called and bind to port
-3030 unless the <code class="highlighter-rouge">ZINC_PORT</code> environment variable is set. The <code class="highlighter-rouge">zinc</code> process can subsequently be
-shut down at any time by running <code class="highlighter-rouge">build/zinc-&lt;version&gt;/bin/zinc -shutdown</code> and will automatically
-restart whenever <code class="highlighter-rouge">build/mvn</code> is called.</p>
+<p>If using the <code class="language-plaintext highlighter-rouge">build/mvn</code> package <code class="language-plaintext highlighter-rouge">zinc</code> will automatically be downloaded and leveraged for all
+builds. This process will auto-start after the first time <code class="language-plaintext highlighter-rouge">build/mvn</code> is called and bind to port
+3030 unless the <code class="language-plaintext highlighter-rouge">ZINC_PORT</code> environment variable is set. The <code class="language-plaintext highlighter-rouge">zinc</code> process can subsequently be
+shut down at any time by running <code class="language-plaintext highlighter-rouge">build/zinc-&lt;version&gt;/bin/zinc -shutdown</code> and will automatically
+restart whenever <code class="language-plaintext highlighter-rouge">build/mvn</code> is called.</p>
 
 <h3>Building submodules individually</h3>
 
 <p>For instance, you can build the Spark Core module using:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ # sbt
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ # sbt
 $ build/sbt
 &gt; project core
 &gt; package
@@ -263,87 +263,87 @@ $ build/mvn package -DskipTests -pl core
 
 <h4>Testing with SBT</h4>
 
-<p>The fastest way to run individual tests is to use the <code class="highlighter-rouge">sbt</code> console. It&#8217;s fastest to keep a <code class="highlighter-rouge">sbt</code> console open, and use it to re-run tests as necessary.  For example, to run all of the tests in a particular project, e.g., <code class="highlighter-rouge">core</code>:</p>
+<p>The fastest way to run individual tests is to use the <code class="language-plaintext highlighter-rouge">sbt</code> console. It&#8217;s fastest to keep a <code class="language-plaintext highlighter-rouge">sbt</code> console open, and use it to re-run tests as necessary.  For example, to run all of the tests in a particular project, e.g., <code class="language-plaintext highlighter-rouge">core</code>:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ build/sbt
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ build/sbt
 &gt; project core
 &gt; test
 </code></pre></div></div>
 
-<p>You can run a single test suite using the <code class="highlighter-rouge">testOnly</code> command.  For example, to run the DAGSchedulerSuite:</p>
+<p>You can run a single test suite using the <code class="language-plaintext highlighter-rouge">testOnly</code> command.  For example, to run the DAGSchedulerSuite:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; testOnly org.apache.spark.scheduler.DAGSchedulerSuite
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; testOnly org.apache.spark.scheduler.DAGSchedulerSuite
 </code></pre></div></div>
 
-<p>The <code class="highlighter-rouge">testOnly</code> command accepts wildcards; e.g., you can also run the <code class="highlighter-rouge">DAGSchedulerSuite</code> with:</p>
+<p>The <code class="language-plaintext highlighter-rouge">testOnly</code> command accepts wildcards; e.g., you can also run the <code class="language-plaintext highlighter-rouge">DAGSchedulerSuite</code> with:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; testOnly *DAGSchedulerSuite
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; testOnly *DAGSchedulerSuite
 </code></pre></div></div>
 
 <p>Or you could run all of the tests in the scheduler package:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; testOnly org.apache.spark.scheduler.*
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; testOnly org.apache.spark.scheduler.*
 </code></pre></div></div>
 
-<p>If you&#8217;d like to run just a single test in the <code class="highlighter-rouge">DAGSchedulerSuite</code>, e.g., a test that includes &#8220;SPARK-12345&#8221; in the name, you run the following command in the sbt console:</p>
+<p>If you&#8217;d like to run just a single test in the <code class="language-plaintext highlighter-rouge">DAGSchedulerSuite</code>, e.g., a test that includes &#8220;SPARK-12345&#8221; in the name, you run the following command in the sbt console:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; testOnly *DAGSchedulerSuite -- -z "SPARK-12345"
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&gt; testOnly *DAGSchedulerSuite -- -z "SPARK-12345"
 </code></pre></div></div>
 
-<p>If you&#8217;d prefer, you can run all of these commands on the command line (but this will be slower than running tests using an open console).  To do this, you need to surround <code class="highlighter-rouge">testOnly</code> and the following arguments in quotes:</p>
+<p>If you&#8217;d prefer, you can run all of these commands on the command line (but this will be slower than running tests using an open console).  To do this, you need to surround <code class="language-plaintext highlighter-rouge">testOnly</code> and the following arguments in quotes:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ build/sbt "core/testOnly *DAGSchedulerSuite -- -z SPARK-12345"
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ build/sbt "core/testOnly *DAGSchedulerSuite -- -z SPARK-12345"
 </code></pre></div></div>
 
 <p>For more about how to run individual tests with sbt, see the <a href="https://www.scala-sbt.org/0.13/docs/Testing.html">sbt documentation</a>.</p>
 
 <h4>Testing with Maven</h4>
 
-<p>With Maven, you can use the <code class="highlighter-rouge">-DwildcardSuites</code> flag to run individual Scala tests:</p>
+<p>With Maven, you can use the <code class="language-plaintext highlighter-rouge">-DwildcardSuites</code> flag to run individual Scala tests:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>build/mvn -Dtest=none -DwildcardSuites=org.apache.spark.scheduler.DAGSchedulerSuite test
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>build/mvn -Dtest=none -DwildcardSuites=org.apache.spark.scheduler.DAGSchedulerSuite test
 </code></pre></div></div>
 
-<p>You need <code class="highlighter-rouge">-Dtest=none</code> to avoid running the Java tests.  For more information about the ScalaTest Maven Plugin, refer to the <a href="http://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin">ScalaTest documentation</a>.</p>
+<p>You need <code class="language-plaintext highlighter-rouge">-Dtest=none</code> to avoid running the Java tests.  For more information about the ScalaTest Maven Plugin, refer to the <a href="http://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin">ScalaTest documentation</a>.</p>
 
-<p>To run individual Java tests, you can use the <code class="highlighter-rouge">-Dtest</code> flag:</p>
+<p>To run individual Java tests, you can use the <code class="language-plaintext highlighter-rouge">-Dtest</code> flag:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>build/mvn test -DwildcardSuites=none -Dtest=org.apache.spark.streaming.JavaAPISuite test
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>build/mvn test -DwildcardSuites=none -Dtest=org.apache.spark.streaming.JavaAPISuite test
 </code></pre></div></div>
 
 <h4>Testing PySpark</h4>
 
-<p>To run individual PySpark tests, you can use <code class="highlighter-rouge">run-tests</code> script under <code class="highlighter-rouge">python</code> directory. Test cases are located at <code class="highlighter-rouge">tests</code> package under each PySpark packages.
+<p>To run individual PySpark tests, you can use <code class="language-plaintext highlighter-rouge">run-tests</code> script under <code class="language-plaintext highlighter-rouge">python</code> directory. Test cases are located at <code class="language-plaintext highlighter-rouge">tests</code> package under each PySpark packages.
 Note that, if you add some changes into Scala or Python side in Apache Spark, you need to manually build Apache Spark again before running PySpark tests in order to apply the changes.
 Running PySpark testing script does not automatically build it.</p>
 
-<p>Also, note that there is an ongoing issue to use PySpark on macOS High Serria+. <code class="highlighter-rouge">OBJC_DISABLE_INITIALIZE_FORK_SAFETY</code>
-should be set to <code class="highlighter-rouge">YES</code> in order to run some of tests.
+<p>Also, note that there is an ongoing issue to use PySpark on macOS High Serria+. <code class="language-plaintext highlighter-rouge">OBJC_DISABLE_INITIALIZE_FORK_SAFETY</code>
+should be set to <code class="language-plaintext highlighter-rouge">YES</code> in order to run some of tests.
 See <a href="https://issues.apache.org/jira/browse/SPARK-25473">PySpark issue</a> and <a href="https://bugs.python.org/issue33725">Python issue</a> for more details.</p>
 
 <p>To run test cases in a specific module:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests --testnames pyspark.sql.tests.test_arrow
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests --testnames pyspark.sql.tests.test_arrow
 </code></pre></div></div>
 
 <p>To run test cases in a specific class:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests --testnames 'pyspark.sql.tests.test_arrow ArrowTests'
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests --testnames 'pyspark.sql.tests.test_arrow ArrowTests'
 </code></pre></div></div>
 
 <p>To run single test case in a specific class:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests --testnames 'pyspark.sql.tests.test_arrow ArrowTests.test_null_conversion'
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests --testnames 'pyspark.sql.tests.test_arrow ArrowTests.test_null_conversion'
 </code></pre></div></div>
 
 <p>You can also run doctests in a specific module:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests --testnames pyspark.sql.dataframe
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests --testnames pyspark.sql.dataframe
 </code></pre></div></div>
 
-<p>Lastly, there is another script called <code class="highlighter-rouge">run-tests-with-coverage</code> in the same location, which generates coverage report for PySpark tests. It accepts same arguments with <code class="highlighter-rouge">run-tests</code>.</p>
+<p>Lastly, there is another script called <code class="language-plaintext highlighter-rouge">run-tests-with-coverage</code> in the same location, which generates coverage report for PySpark tests. It accepts same arguments with <code class="language-plaintext highlighter-rouge">run-tests</code>.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests-with-coverage --testnames pyspark.sql.tests.test_arrow --python-executables=python
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ python/run-tests-with-coverage --testnames pyspark.sql.tests.test_arrow --python-executables=python
 ...
 Name                              Stmts   Miss Branch BrPart  Cover
 -------------------------------------------------------------------
@@ -353,9 +353,9 @@ pyspark/_globals.py                  16      3      4      2    75%
 Generating HTML files for PySpark coverage under /.../spark/python/test_coverage/htmlcov
 </code></pre></div></div>
 
-<p>You can check the coverage report visually by HTMLs under <code class="highlighter-rouge">/.../spark/python/test_coverage/htmlcov</code>.</p>
+<p>You can check the coverage report visually by HTMLs under <code class="language-plaintext highlighter-rouge">/.../spark/python/test_coverage/htmlcov</code>.</p>
 
-<p>Please check other available options via <code class="highlighter-rouge">python/run-tests[-with-coverage] --help</code>.</p>
+<p>Please check other available options via <code class="language-plaintext highlighter-rouge">python/run-tests[-with-coverage] --help</code>.</p>
 
 <h4>Testing K8S</h4>
 
@@ -363,15 +363,15 @@ Generating HTML files for PySpark coverage under /.../spark/python/test_coverage
 
 <ul>
   <li>minikube version v0.34.1 (or greater, but backwards-compatibility between versions is spotty)</li>
-  <li>You must use a VM driver!  Running minikube with the <code class="highlighter-rouge">--vm-driver=none</code> option requires that the user launching minikube/k8s have root access.  Our Jenkins workers use the <a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver">kvm2</a> drivers.  More details <a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md">here</a>.</li>
-  <li>kubernetes version v1.15.12 (can be set by executing <code class="highlighter-rouge">minikube config set kubernetes-version v1.15.12</code>)</li>
+  <li>You must use a VM driver!  Running minikube with the <code class="language-plaintext highlighter-rouge">--vm-driver=none</code> option requires that the user launching minikube/k8s have root access.  Our Jenkins workers use the <a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md#kvm2-driver">kvm2</a> drivers.  More details <a href="https://github.com/kubernetes/minikube/blob/master/docs/drivers.md">here</a>.</li>
+  <li>kubernetes version v1.15.12 (can be set by executing <code class="language-plaintext highlighter-rouge">minikube config set kubernetes-version v1.15.12</code>)</li>
 </ul>
 
 <p>Once you have minikube properly set up, and have successfully completed the <a href="https://kubernetes.io/docs/setup/minikube/#quickstart">quick start</a>, you can test your changes locally.  All subsequent commands should be run from your root spark/ repo directory:</p>
 
 <p>1) Build a tarball to test against:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export DATE=`date "+%Y%m%d"`
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>export DATE=`date "+%Y%m%d"`
 export REVISION=`git rev-parse --short HEAD`
 export ZINC_PORT=$(python -S -c "import random; print(random.randrange(3030,4030))")
 export HADOOP_PROFILE=hadoop-3.2
@@ -383,7 +383,7 @@ export TARBALL_TO_TEST=($(pwd)/spark-*${DATE}-${REVISION}.tgz)
 
 <p>2) Use that tarball and run the K8S integration tests:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PVC_TMP_DIR=$(mktemp -d)
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>PVC_TMP_DIR=$(mktemp -d)
 export PVC_TESTS_HOST_PATH=$PVC_TMP_DIR
 export PVC_TESTS_VM_PATH=$PVC_TMP_DIR
 
@@ -401,9 +401,9 @@ kill -9 $MOUNT_PID
 minikube stop
 </code></pre></div></div>
 
-<p>After the run is completed, the integration test logs are saved here: <code class="highlighter-rouge">./resource-managers/kubernetes/integration-tests/target/integration-tests.log</code>.</p>
+<p>After the run is completed, the integration test logs are saved here: <code class="language-plaintext highlighter-rouge">./resource-managers/kubernetes/integration-tests/target/integration-tests.log</code>.</p>
 
-<p>In case of a failure the POD logs (driver and executors) can be found at the end of the failed test (within <code class="highlighter-rouge">integration-tests.log</code>) in the <code class="highlighter-rouge">EXTRA LOGS FOR THE FAILED TEST</code> section.</p>
+<p>In case of a failure the POD logs (driver and executors) can be found at the end of the failed test (within <code class="language-plaintext highlighter-rouge">integration-tests.log</code>) in the <code class="language-plaintext highlighter-rouge">EXTRA LOGS FOR THE FAILED TEST</code> section.</p>
 
 <p>Kubernetes, and more importantly, minikube have rapid release cycles, and point releases have been found to be buggy and/or break older and existing functionality.  If you are having trouble getting tests to pass on Jenkins, but locally things work, don&#8217;t hesitate to file a Jira issue.</p>
 
@@ -432,23 +432,23 @@ To run tests on &#8220;your_branch&#8221; and check test results:</p>
 
 <p>If the following error occurs when running ScalaTest</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>An internal error occurred during: "Launching XYZSuite.scala".
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>An internal error occurred during: "Launching XYZSuite.scala".
 java.lang.NullPointerException
 </code></pre></div></div>
 <p>It is due to an incorrect Scala library in the classpath. To fix it:</p>
 
 <ul>
   <li>Right click on project</li>
-  <li>Select <code class="highlighter-rouge">Build Path | Configure Build Path</code></li>
-  <li><code class="highlighter-rouge">Add Library | Scala Library</code></li>
-  <li>Remove <code class="highlighter-rouge">scala-library-2.10.4.jar - lib_managed\jars</code></li>
+  <li>Select <code class="language-plaintext highlighter-rouge">Build Path | Configure Build Path</code></li>
+  <li><code class="language-plaintext highlighter-rouge">Add Library | Scala Library</code></li>
+  <li>Remove <code class="language-plaintext highlighter-rouge">scala-library-2.10.4.jar - lib_managed\jars</code></li>
 </ul>
 
 <p>In the event of &#8220;Could not find resource path for Web UI: org/apache/spark/ui/static&#8221;, 
 it&#8217;s due to a classpath issue (some classes were probably not compiled). To fix this, it 
 sufficient to run a test from the command line:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>build/sbt "testOnly org.apache.spark.rdd.SortingSuite"
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>build/sbt "testOnly org.apache.spark.rdd.SortingSuite"
 </code></pre></div></div>
 
 <h3>Running Different Test Permutations on Jenkins</h3>
@@ -457,12 +457,12 @@ sufficient to run a test from the command line:</p>
 your pull request to change testing behavior. This includes:</p>
 
 <ul>
-  <li><code class="highlighter-rouge">[test-maven]</code> - signals to test the pull request using maven</li>
-  <li><code class="highlighter-rouge">[test-hadoop2.7]</code> - signals to test using Spark&#8217;s Hadoop 2.7 profile</li>
-  <li><code class="highlighter-rouge">[test-hadoop3.2]</code> - signals to test using Spark&#8217;s Hadoop 3.2 profile</li>
-  <li><code class="highlighter-rouge">[test-hadoop3.2][test-java11]</code> - signals to test using Spark&#8217;s Hadoop 3.2 profile with JDK 11</li>
-  <li><code class="highlighter-rouge">[test-hive1.2]</code> - signals to test using Spark&#8217;s Hive 1.2 profile</li>
-  <li><code class="highlighter-rouge">[test-hive2.3]</code> - signals to test using Spark&#8217;s Hive 2.3 profile</li>
+  <li><code class="language-plaintext highlighter-rouge">[test-maven]</code> - signals to test the pull request using maven</li>
+  <li><code class="language-plaintext highlighter-rouge">[test-hadoop2.7]</code> - signals to test using Spark&#8217;s Hadoop 2.7 profile</li>
+  <li><code class="language-plaintext highlighter-rouge">[test-hadoop3.2]</code> - signals to test using Spark&#8217;s Hadoop 3.2 profile</li>
+  <li><code class="language-plaintext highlighter-rouge">[test-hadoop3.2][test-java11]</code> - signals to test using Spark&#8217;s Hadoop 3.2 profile with JDK 11</li>
+  <li><code class="language-plaintext highlighter-rouge">[test-hive1.2]</code> - signals to test using Spark&#8217;s Hive 1.2 profile</li>
+  <li><code class="language-plaintext highlighter-rouge">[test-hive2.3]</code> - signals to test using Spark&#8217;s Hive 2.3 profile</li>
 </ul>
 
 <h3>Binary compatibility</h3>
@@ -476,19 +476,19 @@ not introduce binary incompatibilities before opening a pull request.</p>
 
 <p>You can do so by running the following command:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dev/mima
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ dev/mima
 </code></pre></div></div>
 
 <p>A binary incompatibility reported by MiMa might look like the following:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[error] method this(org.apache.spark.sql.Dataset)Unit in class org.apache.spark.SomeClass does not have a correspondent in current version
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[error] method this(org.apache.spark.sql.Dataset)Unit in class org.apache.spark.SomeClass does not have a correspondent in current version
 [error] filter with: ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SomeClass.this")
 </code></pre></div></div>
 
 <p>If you open a pull request containing binary incompatibilities anyway, Jenkins
 will remind you by failing the test build with the following message:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Test build #xx has finished for PR yy at commit ffffff.
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Test build #xx has finished for PR yy at commit ffffff.
 
   This patch fails MiMa tests.
   This patch merges cleanly.
@@ -506,8 +506,8 @@ JIRA number of the issue you&#8217;re working on as well as its title.</p>
 
 <p>For the problem described above, we might add the following:</p>
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// [SPARK-zz][CORE] Fix an issue
-</span><span class="nc">ProblemFilters</span><span class="o">.</span><span class="n">exclude</span><span class="o">[</span><span class="kt">DirectMissingMethodProblem</span><span class="o">](</span><span class="s">"org.apache.spark.SomeClass.this"</span><span class="o">)</span></code></pre></figure>
+<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// [SPARK-zz][CORE] Fix an issue</span>
+<span class="nv">ProblemFilters</span><span class="o">.</span><span class="py">exclude</span><span class="o">[</span><span class="kt">DirectMissingMethodProblem</span><span class="o">](</span><span class="s">"org.apache.spark.SomeClass.this"</span><span class="o">)</span></code></pre></figure>
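
<p>As a rough sketch of how such a filter is typically grouped with the existing excludes (the object and value names here are illustrative only; follow the structure already present in project/MimaExcludes.scala):</p>

    import com.typesafe.tools.mima.core._

    object IllustrativeExcludes {
      // Group the new filter with the existing excludes for the current
      // development version, preceded by the JIRA number and title.
      lazy val excludes = Seq(
        // [SPARK-zz][CORE] Fix an issue
        ProblemFilters.exclude[DirectMissingMethodProblem]("org.apache.spark.SomeClass.this")
      )
    }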
 
 <p>Otherwise, you will have to resolve those incompatibilities before opening or
 updating your pull request. Usually, the problems reported by MiMa are
@@ -520,15 +520,15 @@ you will have to add back in order to maintain binary compatibility.</p>
 This is useful when reviewing code or testing patches locally. If you haven&#8217;t yet cloned the 
 Spark Git repository, use the following command:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone https://github.com/apache/spark.git
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git clone https://github.com/apache/spark.git
 $ cd spark
 </code></pre></div></div>
 
 <p>To enable this feature you&#8217;ll need to configure the git remote repository to fetch pull request 
-data. Do this by modifying the <code class="highlighter-rouge">.git/config</code> file inside of your Spark directory. The remote may 
+data. Do this by modifying the <code class="language-plaintext highlighter-rouge">.git/config</code> file inside of your Spark directory. The remote may 
 not be named &#8220;origin&#8221; if you&#8217;ve named it something else:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[remote "origin"]
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>[remote "origin"]
   url = git@github.com:apache/spark.git
   fetch = +refs/heads/*:refs/remotes/origin/*
   fetch = +refs/pull/*/head:refs/remotes/origin/pr/*   # Add this line
@@ -536,7 +536,7 @@ not be named &#8220;origin&#8221; if you&#8217;ve named it something else:</p>
 
 <p>Once you&#8217;ve done this, you can fetch remote pull requests:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Fetch remote pull requests
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Fetch remote pull requests
 $ git fetch origin
 # Checkout a remote pull request
 $ git checkout origin/pr/112
@@ -546,7 +546,7 @@ $ git checkout origin/pr/112 -b new-branch
 
 <h3>Generating Dependency Graphs</h3>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ # sbt
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ # sbt
 $ build/sbt dependency-tree
  
 $ # Maven
@@ -564,7 +564,7 @@ your code.  It can be configured to match the import ordering from the style gui
 
 <p>To format Scala code, run the following command prior to submitting a PR:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./dev/scalafmt
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ ./dev/scalafmt
 </code></pre></div></div>
 
 <p>By default, this script will format files that differ from git master. For more information, see the <a href="https://scalameta.org/scalafmt/">scalafmt documentation</a>, but use the existing script, not a locally installed version of scalafmt.</p>
@@ -575,22 +575,22 @@ your code.  It can be configured to match the import ordering from the style gui
 
 <p>While many of the Spark developers use SBT or Maven on the command line, the most common IDE we 
 use is IntelliJ IDEA. You can get the community edition for free (Apache committers can get 
-free IntelliJ Ultimate Edition licenses) and install the JetBrains Scala plugin from <code class="highlighter-rouge">Preferences &gt; Plugins</code>.</p>
+free IntelliJ Ultimate Edition licenses) and install the JetBrains Scala plugin from <code class="language-plaintext highlighter-rouge">Preferences &gt; Plugins</code>.</p>
 
 <p>To create a Spark project for IntelliJ:</p>
 
 <ul>
   <li>Download IntelliJ and install the 
 <a href="https://confluence.jetbrains.com/display/SCA/Scala+Plugin+for+IntelliJ+IDEA">Scala plug-in for IntelliJ</a>.</li>
-  <li>Go to <code class="highlighter-rouge">File -&gt; Import Project</code>, locate the spark source directory, and select &#8220;Maven Project&#8221;.</li>
+  <li>Go to <code class="language-plaintext highlighter-rouge">File -&gt; Import Project</code>, locate the spark source directory, and select &#8220;Maven Project&#8221;.</li>
   <li>In the Import wizard, it&#8217;s fine to leave settings at their default. However it is usually useful 
 to enable &#8220;Import Maven projects automatically&#8221;, since changes to the project structure will 
 automatically update the IntelliJ project.</li>
   <li>As documented in <a href="https://spark.apache.org/docs/latest/building-spark.html">Building Spark</a>, 
 some build configurations require specific profiles to be 
-enabled. The same profiles that are enabled with <code class="highlighter-rouge">-P[profile name]</code> above may be enabled on the 
+enabled. The same profiles that are enabled with <code class="language-plaintext highlighter-rouge">-P[profile name]</code> above may be enabled on the 
 Profiles screen in the Import wizard. For example, if developing for Hadoop 2.7 with YARN support, 
-enable profiles <code class="highlighter-rouge">yarn</code> and <code class="highlighter-rouge">hadoop-2.7</code>. These selections can be changed later by accessing the 
+enable profiles <code class="language-plaintext highlighter-rouge">yarn</code> and <code class="language-plaintext highlighter-rouge">hadoop-2.7</code>. These selections can be changed later by accessing the 
 &#8220;Maven Projects&#8221; tool window from the View menu, and expanding the Profiles section.</li>
 </ul>
 
@@ -603,10 +603,10 @@ Projects&#8221; button in the &#8220;Maven Projects&#8221; tool window to manual
   <li>The version of Maven bundled with IntelliJ may not be new enough for Spark. If that happens,
 the action &#8220;Generate Sources and Update Folders For All Projects&#8221; could fail silently. 
 Please remember to reset the Maven home directory 
-(<code class="highlighter-rouge">Preference -&gt; Build, Execution, Deployment -&gt; Maven -&gt; Maven home directory</code>) of your project to 
-point to a newer installation of Maven. You may also build Spark with the script <code class="highlighter-rouge">build/mvn</code> first.
+(<code class="language-plaintext highlighter-rouge">Preference -&gt; Build, Execution, Deployment -&gt; Maven -&gt; Maven home directory</code>) of your project to 
+point to a newer installation of Maven. You may also build Spark with the script <code class="language-plaintext highlighter-rouge">build/mvn</code> first.
 If the script cannot locate a new enough Maven installation, it will download and install a recent 
-version of Maven to folder <code class="highlighter-rouge">build/apache-maven-&lt;version&gt;/</code>.</li>
+version of Maven to folder <code class="language-plaintext highlighter-rouge">build/apache-maven-&lt;version&gt;/</code>.</li>
   <li>Some of the modules have pluggable source directories based on Maven profiles (i.e. to support 
 both Scala 2.11 and 2.10 or to allow cross building against different versions of Hive). In some 
 cases IntelliJ does not correctly detect use of the maven-build-plugin to add source directories. 
@@ -626,7 +626,7 @@ compiler options&#8221; field.  It will work then although the option will come
 reimports.  If you try to build any of the projects using quasiquotes (e.g., sql) then you will 
 need to make that jar a compiler plugin (just below &#8220;Additional compiler options&#8221;). 
 Otherwise you will see errors like:
-    <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/Users/irashid/github/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
+    <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>/Users/irashid/github/spark/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/codegen/CodeGenerator.scala
 Error:(147, 9) value q is not a member of StringContext
  Note: implicit class Evaluate2 is not applicable here because it comes after the application point and it lacks an explicit result type
       q"""
@@ -660,16 +660,16 @@ process and wait for SBT console to connect:</p>
 <p>The following is an example of how to trigger remote debugging using SBT unit tests.</p>
 
 <p>Start the SBT console:</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./build/sbt
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>./build/sbt
 </code></pre></div></div>
 <p>Switch to the project that contains the target test, e.g.:</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sbt &gt; project core
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sbt &gt; project core
 </code></pre></div></div>
 <p>Copy and paste the <i>Command line arguments for remote JVM</i>:</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sbt &gt; set javaOptions in Test += "-agentlib:jdwp=transport=dt_socket,server=n,suspend=n,address=localhost:5005"
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sbt &gt; set javaOptions in Test += "-agentlib:jdwp=transport=dt_socket,server=n,suspend=n,address=localhost:5005"
 </code></pre></div></div>
 <p>Set breakpoints with IntelliJ and run the test with SBT, e.g.:</p>
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sbt &gt; testOnly *SparkContextSuite -- -t "Only one SparkContext may be active at a time"
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sbt &gt; testOnly *SparkContextSuite -- -t "Only one SparkContext may be active at a time"
 </code></pre></div></div>
 
 <p>It should be successfully connected to IntelliJ when you see &#8220;Connected to the target VM, 
@@ -692,29 +692,29 @@ type &#8220;session clear&#8221; in SBT console while you&#8217;re in a project.
 <p>The easiest way is to download the Scala IDE bundle from the Scala IDE download page. It comes 
 pre-installed with ScalaTest. Alternatively, use the Scala IDE update site or Eclipse Marketplace.</p>
 
-<p>SBT can create Eclipse <code class="highlighter-rouge">.project</code> and <code class="highlighter-rouge">.classpath</code> files. To create these files for each Spark sub 
+<p>SBT can create Eclipse <code class="language-plaintext highlighter-rouge">.project</code> and <code class="language-plaintext highlighter-rouge">.classpath</code> files. To create these files for each Spark sub 
 project, use this command:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sbt/sbt eclipse
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sbt/sbt eclipse
 </code></pre></div></div>
 
-<p>To import a specific project, e.g. spark-core, select <code class="highlighter-rouge">File | Import | Existing Projects</code> into 
+<p>To import a specific project, e.g. spark-core, select <code class="language-plaintext highlighter-rouge">File | Import | Existing Projects</code> into 
 Workspace. Do not select &#8220;Copy projects into workspace&#8221;.</p>
 
 <p>If you want to develop on Scala 2.10 you need to configure a Scala installation for the 
 exact Scala version that’s used to compile Spark. 
  Since Scala IDE bundles the latest versions (2.10.5 and 2.11.8 at this point), you need to add one 
-in <code class="highlighter-rouge">Eclipse Preferences -&gt; Scala -&gt; Installations</code> by pointing to the <code class="highlighter-rouge">lib/</code> directory of your 
+in <code class="language-plaintext highlighter-rouge">Eclipse Preferences -&gt; Scala -&gt; Installations</code> by pointing to the <code class="language-plaintext highlighter-rouge">lib/</code> directory of your 
 Scala 2.10.5 distribution. Once this is done, select all Spark projects and right-click, 
-choose <code class="highlighter-rouge">Scala -&gt; Set Scala Installation</code> and point to the 2.10.5 installation. 
+choose <code class="language-plaintext highlighter-rouge">Scala -&gt; Set Scala Installation</code> and point to the 2.10.5 installation. 
 This should clear all errors about invalid cross-compiled libraries. A clean build should succeed now.</p>
 
-<p>ScalaTest can execute unit tests by right clicking a source file and selecting <code class="highlighter-rouge">Run As | Scala Test</code>.</p>
+<p>ScalaTest can execute unit tests by right clicking a source file and selecting <code class="language-plaintext highlighter-rouge">Run As | Scala Test</code>.</p>
 
-<p>If Java memory errors occur, it might be necessary to increase the settings in <code class="highlighter-rouge">eclipse.ini</code> 
+<p>If Java memory errors occur, it might be necessary to increase the settings in <code class="language-plaintext highlighter-rouge">eclipse.ini</code> 
 in the Eclipse install directory. Increase the following setting as needed:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>--launcher.XXMaxPermSize
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>--launcher.XXMaxPermSize
 256M
 </code></pre></div></div>
 
@@ -727,7 +727,7 @@ repository to your build. Note that SNAPSHOT artifacts are ephemeral and may cha
 be removed. To use these you must add the ASF snapshot repository at 
 <a href="https://repository.apache.org/snapshots/">https://repository.apache.org/snapshots/</a>.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>groupId: org.apache.spark
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>groupId: org.apache.spark
 artifactId: spark-core_2.12
 version: 3.0.0-SNAPSHOT
 </code></pre></div></div>
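
<p>For an sbt build, one way to make that repository visible is to add it as a resolver; a minimal build.sbt sketch (the Maven equivalent is a repository entry in your POM):</p>

    // Resolve SNAPSHOT artifacts from the ASF snapshot repository.
    resolvers += "Apache Snapshots" at "https://repository.apache.org/snapshots/"

    // Matches the coordinates above; %% appends the Scala suffix (_2.12).
    libraryDependencies += "org.apache.spark" %% "spark-core" % "3.0.0-SNAPSHOT"
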
@@ -744,31 +744,31 @@ version: 3.0.0-SNAPSHOT
 <a href="https://www.yourkit.com/download/index.jsp">YourKit downloads page</a>. 
 This file is pretty big (~100 MB) and the YourKit downloads site is somewhat slow, so you may 
 consider mirroring this file or including it on a custom AMI.</li>
-  <li>Unzip this file somewhere (in <code class="highlighter-rouge">/root</code> in our case): <code class="highlighter-rouge">unzip YourKit-JavaProfiler-2017.02-b66.zip</code></li>
-  <li>Copy the expanded YourKit files to each node using copy-dir: <code class="highlighter-rouge">~/spark-ec2/copy-dir /root/YourKit-JavaProfiler-2017.02</code></li>
-  <li>Configure the Spark JVMs to use the YourKit profiling agent by editing <code class="highlighter-rouge">~/spark/conf/spark-env.sh</code> 
+  <li>Unzip this file somewhere (in <code class="language-plaintext highlighter-rouge">/root</code> in our case): <code class="language-plaintext highlighter-rouge">unzip YourKit-JavaProfiler-2017.02-b66.zip</code></li>
+  <li>Copy the expanded YourKit files to each node using copy-dir: <code class="language-plaintext highlighter-rouge">~/spark-ec2/copy-dir /root/YourKit-JavaProfiler-2017.02</code></li>
+  <li>Configure the Spark JVMs to use the YourKit profiling agent by editing <code class="language-plaintext highlighter-rouge">~/spark/conf/spark-env.sh</code> 
 and adding the lines
-    <div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SPARK_DAEMON_JAVA_OPTS+=" -agentpath:/root/YourKit-JavaProfiler-2017.02/bin/linux-x86-64/libyjpagent.so=sampling"
+    <div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>SPARK_DAEMON_JAVA_OPTS+=" -agentpath:/root/YourKit-JavaProfiler-2017.02/bin/linux-x86-64/libyjpagent.so=sampling"
 export SPARK_DAEMON_JAVA_OPTS
 SPARK_EXECUTOR_OPTS+=" -agentpath:/root/YourKit-JavaProfiler-2017.02/bin/linux-x86-64/libyjpagent.so=sampling"
 export SPARK_EXECUTOR_OPTS
 </code></pre></div>    </div>
   </li>
-  <li>Copy the updated configuration to each node: <code class="highlighter-rouge">~/spark-ec2/copy-dir ~/spark/conf/spark-env.sh</code></li>
-  <li>Restart your Spark cluster: <code class="highlighter-rouge">~/spark/bin/stop-all.sh</code> and <code class="highlighter-rouge">~/spark/bin/start-all.sh</code></li>
-  <li>By default, the YourKit profiler agents use ports <code class="highlighter-rouge">10001-10010</code>. To connect the YourKit desktop
+  <li>Copy the updated configuration to each node: <code class="language-plaintext highlighter-rouge">~/spark-ec2/copy-dir ~/spark/conf/spark-env.sh</code></li>
+  <li>Restart your Spark cluster: <code class="language-plaintext highlighter-rouge">~/spark/bin/stop-all.sh</code> and <code class="language-plaintext highlighter-rouge">~/spark/bin/start-all.sh</code></li>
+  <li>By default, the YourKit profiler agents use ports <code class="language-plaintext highlighter-rouge">10001-10010</code>. To connect the YourKit desktop
 application to the remote profiler agents, you&#8217;ll have to open these ports in the cluster&#8217;s EC2 
 security groups. To do this, sign into the AWS Management Console. Go to the EC2 section and 
-select <code class="highlighter-rouge">Security Groups</code> from the <code class="highlighter-rouge">Network &amp; Security</code> section on the left side of the page. 
-Find the security groups corresponding to your cluster; if you launched a cluster named <code class="highlighter-rouge">test_cluster</code>, 
-then you will want to modify the settings for the <code class="highlighter-rouge">test_cluster-slaves</code> and <code class="highlighter-rouge">test_cluster-master</code> 
-security groups. For each group, select it from the list, click the <code class="highlighter-rouge">Inbound</code> tab, and create a 
-new <code class="highlighter-rouge">Custom TCP Rule</code> opening the port range <code class="highlighter-rouge">10001-10010</code>. Finally, click <code class="highlighter-rouge">Apply Rule Changes</code>. 
+select <code class="language-plaintext highlighter-rouge">Security Groups</code> from the <code class="language-plaintext highlighter-rouge">Network &amp; Security</code> section on the left side of the page. 
+Find the security groups corresponding to your cluster; if you launched a cluster named <code class="language-plaintext highlighter-rouge">test_cluster</code>, 
+then you will want to modify the settings for the <code class="language-plaintext highlighter-rouge">test_cluster-slaves</code> and <code class="language-plaintext highlighter-rouge">test_cluster-master</code> 
+security groups. For each group, select it from the list, click the <code class="language-plaintext highlighter-rouge">Inbound</code> tab, and create a 
+new <code class="language-plaintext highlighter-rouge">Custom TCP Rule</code> opening the port range <code class="language-plaintext highlighter-rouge">10001-10010</code>. Finally, click <code class="language-plaintext highlighter-rouge">Apply Rule Changes</code>. 
 Make sure to do this for both security groups.
-Note: by default, <code class="highlighter-rouge">spark-ec2</code> re-uses security groups: if you stop this cluster and launch another 
+Note: by default, <code class="language-plaintext highlighter-rouge">spark-ec2</code> re-uses security groups: if you stop this cluster and launch another 
 cluster with the same name, your security group settings will be re-used.</li>
   <li>Launch the YourKit profiler on your desktop.</li>
-  <li>Select &#8220;Connect to remote application&#8230;&#8221; from the welcome screen and enter the address of your Spark master or worker machine, e.g. <code class="highlighter-rouge">ec2--.compute-1.amazonaws.com</code></li>
+  <li>Select &#8220;Connect to remote application&#8230;&#8221; from the welcome screen and enter the address of your Spark master or worker machine, e.g. <code class="language-plaintext highlighter-rouge">ec2--.compute-1.amazonaws.com</code></li>
   <li>YourKit should now be connected to the remote profiling agent. It may take a few moments for profiling information to appear.</li>
 </ul>
 
@@ -777,8 +777,8 @@ cluster with the same name, your security group settings will be re-used.</li>
 
 <h4>In Spark unit tests</h4>
 
-<p>When running Spark tests through SBT, add <code class="highlighter-rouge">javaOptions in Test += "-agentpath:/path/to/yjp"</code>
-to <code class="highlighter-rouge">SparkBuild.scala</code> to launch the tests with the YourKit profiler agent enabled.<br />
+<p>When running Spark tests through SBT, add <code class="language-plaintext highlighter-rouge">javaOptions in Test += "-agentpath:/path/to/yjp"</code>
+to <code class="language-plaintext highlighter-rouge">SparkBuild.scala</code> to launch the tests with the YourKit profiler agent enabled.<br />
 The platform-specific paths to the profiler agents are listed in the 
 <a href="http://www.yourkit.com/docs/80/help/agent.jsp">YourKit documentation</a>.</p>
 
diff --git a/site/downloads.html b/site/downloads.html
index 48a46b7..0a0b280 100644
--- a/site/downloads.html
+++ b/site/downloads.html
@@ -239,13 +239,13 @@ The latest preview release is Spark 3.0.0-preview2, published on Dec 23, 2019.</
 <h3 id="link-with-spark">Link with Spark</h3>
 <p>Spark artifacts are <a href="https://search.maven.org/search?q=g:org.apache.spark">hosted in Maven Central</a>. You can add a Maven dependency with the following coordinates:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>groupId: org.apache.spark
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>groupId: org.apache.spark
 artifactId: spark-core_2.12
 version: 3.0.2
 </code></pre></div></div>
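
<p>In an sbt build the same coordinates can be declared as follows (a sketch; the %% operator appends the Scala binary version, giving spark-core_2.12):</p>

    libraryDependencies += "org.apache.spark" %% "spark-core" % "3.0.2"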
 
 <h3 id="installing-with-pypi">Installing with PyPi</h3>
-<p><a href="https://pypi.org/project/pyspark/">PySpark</a> is now available in pypi. To install just run <code class="highlighter-rouge">pip install pyspark</code>.</p>
+<p><a href="https://pypi.org/project/pyspark/">PySpark</a> is now available in pypi. To install just run <code class="language-plaintext highlighter-rouge">pip install pyspark</code>.</p>
 
 <h3 id="release-notes-for-stable-releases">Release Notes for Stable Releases</h3>
 
diff --git a/site/examples.html b/site/examples.html
index 9756b57..a55e9a1 100644
--- a/site/examples.html
+++ b/site/examples.html
@@ -230,11 +230,11 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">text_file</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">"hdfs://..."</span><span class="p">)</span>
-<span class="n">counts</span> <span class="o">=</span> <span class="n">text_file</span><span class="o">.</span><span class="n">flatMap</span><span class="p">(</span><span class="k">lambda</span> <span class="n">line</span><span class="p">:</span> <span class="n">line</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">" "</span><span class="p">))</span> \
-             <span class="o">.</span><span class="nb">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">word</span><span class="p">:</span> <span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="mi">1</span><span class="p">))</span> \
-             <span class="o">.</span><span class="n">reduceByKey</span><span class="p">(</span><span class="k">lambda</span> <span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">:</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span><span class="p">)</span>
-<span class="n">counts</span><span class="o">.</span><span class="n">saveAsTextFile</span><span class="p">(</span><span class="s">"hdfs://..."</span><span class="p">)</span></code></pre></figure>
+<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">text_file</span> <span class="o">=</span> <span class="n">sc</span><span class="p">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">"hdfs://..."</span><span class="p">)</span>
+<span class="n">counts</span> <span class="o">=</span> <span class="n">text_file</span><span class="p">.</span><span class="n">flatMap</span><span class="p">(</span><span class="k">lambda</span> <span class="n">line</span><span class="p">:</span> <span class="n">line</span><span class="p">.</span><span class="n">split</span><span class="p">(</span><span class="s">" "</span><span class="p">))</span> \
+             <span class="p">.</span><span class="nb">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">word</span><span class="p">:</span> <span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="mi">1</span><span class="p">))</span> \
+             <span class="p">.</span><span class="n">reduceByKey</span><span class="p">(</span><span class="k">lambda</span> <span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">:</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span><span class="p">)</span>
+<span class="n">counts</span><span class="p">.</span><span class="n">saveAsTextFile</span><span class="p">(</span><span class="s">"hdfs://..."</span><span class="p">)</span></code></pre></figure>
 
 </div>
 </div>
@@ -242,11 +242,11 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">textFile</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">counts</span> <span class="k">=</span> <span class="n">textFile</span><span class="o">.</span><span class="n">flatMap</span><span class="o">(</span><span class="n">line</span> <span class="k">=&gt;</span> <span class="n">line</span><span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="s">" "</span><span class="o">))</span>
-                 <span class="o">.</span><span class="n">map</span><span class="o">(</span><span class="n">word</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="n">word</span><span class="o">,</span> <span class="mi">1</span><span class="o">))</span>
-                 <span class="o">.</span><span class="n">reduceByKey</span><span class="o">(</span><span class="k">_</span> <span class="o">+</span> <span class="k">_</span><span class="o">)</span>
-<span class="n">counts</span><span class="o">.</span><span class="n">saveAsTextFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">)</span></code></pre></figure>
+<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="nv">textFile</span> <span class="k">=</span> <span class="nv">sc</span><span class="o">.</span><span class="py">textFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">)</span>
+<span class="k">val</span> <span class="nv">counts</span> <span class="k">=</span> <span class="nv">textFile</span><span class="o">.</span><span class="py">flatMap</span><span class="o">(</span><span class="n">line</span> <span class="k">=&gt;</span> <span class="nv">line</span><span class="o">.</span><span class="py">split</span><span class="o">(</span><span class="s">" "</span><span class="o">))</span>
+                 <span class="o">.</span><span class="py">map</span><span class="o">(</span><span class="n">word</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="n">word</span><span class="o">,</span> <span class="mi">1</span><span class="o">))</span>
+                 <span class="o">.</span><span class="py">reduceByKey</span><span class="o">(</span><span class="k">_</span> <span class="o">+</span> <span class="k">_</span><span class="o">)</span>
+<span class="nv">counts</span><span class="o">.</span><span class="py">saveAsTextFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">)</span></code></pre></figure>
 
 </div>
 </div>
@@ -254,10 +254,10 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-java">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">);</span>
-<span class="n">JavaPairRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">counts</span> <span class="o">=</span> <span class="n">textFile</span>
-    <span class="o">.</span><span class="na">flatMap</span><span class="o">(</span><span class="n">s</span> <span class="o">-&gt;</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">s</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">" "</span><span class="o">)).</span><span class="na">iterator</span><span class="o">())</span>
-    <span class="o">.</span><span class="na">mapToPair</span><span class="o">(</span><span class="n">word</span> <span class="o">-&gt;</span> <span class="k">new</span> <span class="n">Tuple2</span><span class="o">&lt;&gt;(</span><span class="n">word</span><span class="o">,</span> <span class="mi">1</span><span class="o">))</span>
+<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="nc">JavaRDD</span><span class="o">&lt;</span><span class="nc">String</span><span class="o">&gt;</span> <span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">);</span>
+<span class="nc">JavaPairRDD</span><span class="o">&lt;</span><span class="nc">String</span><span class="o">,</span> <span class="nc">Integer</span><span class="o">&gt;</span> <span class="n">counts</span> <span class="o">=</span> <span class="n">textFile</span>
+    <span class="o">.</span><span class="na">flatMap</span><span class="o">(</span><span class="n">s</span> <span class="o">-&gt;</span> <span class="nc">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">s</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">" "</span><span class="o">)).</span><span class="na">iterator</span><span class="o">())</span>
+    <span class="o">.</span><span class="na">mapToPair</span><span class="o">(</span><span class="n">word</span> <span class="o">-&gt;</span> <span class="k">new</span> <span class="nc">Tuple2</span><span class="o">&lt;&gt;(</span><span class="n">word</span><span class="o">,</span> <span class="mi">1</span><span class="o">))</span>
     <span class="o">.</span><span class="na">reduceByKey</span><span class="o">((</span><span class="n">a</span><span class="o">,</span> <span class="n">b</span><span class="o">)</span> <span class="o">-&gt;</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span><span class="o">);</span>
 <span class="n">counts</span><span class="o">.</span><span class="na">saveAsTextFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">);</span></code></pre></figure>
 
@@ -279,12 +279,12 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="code code-tab">
 
 <figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">inside</span><span class="p">(</span><span class="n">p</span><span class="p">):</span>
-    <span class="n">x</span><span class="p">,</span> <span class="n">y</span> <span class="o">=</span> <span class="n">random</span><span class="o">.</span><span class="n">random</span><span class="p">(),</span> <span class="n">random</span><span class="o">.</span><span class="n">random</span><span class="p">()</span>
+    <span class="n">x</span><span class="p">,</span> <span class="n">y</span> <span class="o">=</span> <span class="n">random</span><span class="p">.</span><span class="n">random</span><span class="p">(),</span> <span class="n">random</span><span class="p">.</span><span class="n">random</span><span class="p">()</span>
     <span class="k">return</span> <span class="n">x</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">y</span><span class="o">*</span><span class="n">y</span> <span class="o">&lt;</span> <span class="mi">1</span>
 
-<span class="n">count</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">NUM_SAMPLES</span><span class="p">))</span> \
-             <span class="o">.</span><span class="nb">filter</span><span class="p">(</span><span class="n">inside</span><span class="p">)</span><span class="o">.</span><span class="n">count</span><span class="p">()</span>
-<span class="k">print</span><span class="p">(</span><span class="s">"Pi is roughly </span><span class="si">%</span><span class="s">f"</span> <span class="o">%</span> <span class="p">(</span><span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="n">NUM_SAMPLES</span><span class="p">))</span></code></pre></figure>
+<span class="n">count</span> <span class="o">=</span> <span class="n">sc</span><span class="p">.</span><span class="n">parallelize</span><span class="p">(</span><span class="nb">range</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">NUM_SAMPLES</span><span class="p">))</span> \
+             <span class="p">.</span><span class="nb">filter</span><span class="p">(</span><span class="n">inside</span><span class="p">).</span><span class="n">count</span><span class="p">()</span>
+<span class="k">print</span><span class="p">(</span><span class="s">"Pi is roughly %f"</span> <span class="o">%</span> <span class="p">(</span><span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="n">NUM_SAMPLES</span><span class="p">))</span></code></pre></figure>
 
 </div>
 </div>
@@ -292,12 +292,12 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">count</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="o">(</span><span class="mi">1</span> <span class="n">to</span> <span class="nc">NUM_SAMPLES</span><span class="o">).</span><span class="n">filter</span> <span class="o">{</span> <span class="k">_</span> <span class="k">=&gt;</span>
-  <span class="k">val</span> <span class="n">x</span> <span class="k">=</span> <span class="n">math</span><span class="o">.</span><span class="n">random</span>
-  <span class="k">val</span> <span class="n">y</span> <span class="k">=</span> <span class="n">math</span><span class="o">.</span><span class="n">random</span>
+<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="nv">count</span> <span class="k">=</span> <span class="nv">sc</span><span class="o">.</span><span class="py">parallelize</span><span class="o">(</span><span class="mi">1</span> <span class="n">to</span> <span class="nc">NUM_SAMPLES</span><span class="o">).</span><span class="py">filter</span> <span class="o">{</span> <span class="k">_</span> <span class="k">=&gt;</span>
+  <span class="k">val</span> <span class="nv">x</span> <span class="k">=</span> <span class="nv">math</span><span class="o">.</span><span class="py">random</span>
+  <span class="k">val</span> <span class="nv">y</span> <span class="k">=</span> <span class="nv">math</span><span class="o">.</span><span class="py">random</span>
   <span class="n">x</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">y</span><span class="o">*</span><span class="n">y</span> <span class="o">&lt;</span> <span class="mi">1</span>
-<span class="o">}.</span><span class="n">count</span><span class="o">()</span>
-<span class="n">println</span><span class="o">(</span><span class="n">s</span><span class="s">"Pi is roughly ${4.0 * count / NUM_SAMPLES}"</span><span class="o">)</span></code></pre></figure>
+<span class="o">}.</span><span class="py">count</span><span class="o">()</span>
+<span class="nf">println</span><span class="o">(</span><span class="n">s</span><span class="s">"Pi is roughly ${4.0 * count / NUM_SAMPLES}"</span><span class="o">)</span></code></pre></figure>
 
 </div>
 </div>
@@ -305,17 +305,17 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-java">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">List</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;</span> <span class="n">l</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;&gt;(</span><span class="n">NUM_SAMPLES</span><span class="o">);</span>
-<span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">NUM_SAMPLES</span><span class="o">;</span> <span class="n">i</span><span class="o">++)</span> <span class="o">{</span>
+<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="nc">List</span><span class="o">&lt;</span><span class="nc">Integer</span><span class="o">&gt;</span> <span class="n">l</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">ArrayList</span><span class="o">&lt;&gt;(</span><span class="no">NUM_SAMPLES</span><span class="o">);</span>
+<span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="no">NUM_SAMPLES</span><span class="o">;</span> <span class="n">i</span><span class="o">++)</span> <span class="o">{</span>
   <span class="n">l</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">i</span><span class="o">);</span>
 <span class="o">}</span>
 
 <span class="kt">long</span> <span class="n">count</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">parallelize</span><span class="o">(</span><span class="n">l</span><span class="o">).</span><span class="na">filter</span><span class="o">(</span><span class="n">i</span> <span class="o">-&gt;</span> <span class="o">{</span>
-  <span class="kt">double</span> <span class="n">x</span> <span class="o">=</span> <span class="n">Math</span><span class="o">.</span><span class="na">random</span><span class="o">();</span>
-  <span class="kt">double</span> <span class="n">y</span> <span class="o">=</span> <span class="n">Math</span><span class="o">.</span><span class="na">random</span><span class="o">();</span>
+  <span class="kt">double</span> <span class="n">x</span> <span class="o">=</span> <span class="nc">Math</span><span class="o">.</span><span class="na">random</span><span class="o">();</span>
+  <span class="kt">double</span> <span class="n">y</span> <span class="o">=</span> <span class="nc">Math</span><span class="o">.</span><span class="na">random</span><span class="o">();</span>
   <span class="k">return</span> <span class="n">x</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">y</span><span class="o">*</span><span class="n">y</span> <span class="o">&lt;</span> <span class="mi">1</span><span class="o">;</span>
 <span class="o">}).</span><span class="na">count</span><span class="o">();</span>
-<span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Pi is roughly "</span> <span class="o">+</span> <span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="n">NUM_SAMPLES</span><span class="o">);</span></code></pre></figure>
+<span class="nc">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">"Pi is roughly "</span> <span class="o">+</span> <span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="no">NUM_SAMPLES</span><span class="o">);</span></code></pre></figure>
 
 </div>
 </div>
@@ -343,17 +343,17 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">"hdfs://..."</span><span class="p">)</span>
+<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="p">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">"hdfs://..."</span><span class="p">)</span>
 
-<span class="c"># Creates a DataFrame having a single column named "line"</span>
-<span class="n">df</span> <span class="o">=</span> <span class="n">textFile</span><span class="o">.</span><span class="nb">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">r</span><span class="p">:</span> <span class="n">Row</span><span class="p">(</span><span class="n">r</span><span class="p">))</span><span class="o">.</span><span class="n">toDF</span><span class="p">([</span><span class="s">"line"</span><span class="p">])</span>
-<span class="n">errors</span> <span class="o">=</span> <span class="n">df</span><span class="o">.</span><span class="nb">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">"line"</span><span class="p">)</span><span class="o">.</span><span class="n">like</span><span class="p">(</span><span class="s">"</span><span class="si">%</span><span class="s">ERROR</span><span class="si">%</span><span class="s">"</span><span class="p">))</span>
-<span class="c"># Counts all the errors</span>
-<span class="n">errors</span><span class="o">.</span><span class="n">count</span><span class="p">()</span>
-<span class="c"># Counts errors mentioning MySQL</span>
-<span class="n">errors</span><span class="o">.</span><span class="nb">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">"line"</span><span class="p">)</span><span class="o">.</span><span class="n">like</span><span class="p">(</span><span class="s">"</span><span class="si">%</span><span class="s">MySQL</span><span class="si">%</span><span class="s">"</span><span class="p">))</span><span class="o">.</span><span class="n">count</span><spa [...]
-<span class="c"># Fetches the MySQL errors as an array of strings</span>
-<span class="n">errors</span><span class="o">.</span><span class="nb">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">"line"</span><span class="p">)</span><span class="o">.</span><span class="n">like</span><span class="p">(</span><span class="s">"</span><span class="si">%</span><span class="s">MySQL</span><span class="si">%</span><span class="s">"</span><span class="p">))</span><span class="o">.</span><span class="n">collect</span><s [...]
+<span class="c1"># Creates a DataFrame having a single column named "line"
+</span><span class="n">df</span> <span class="o">=</span> <span class="n">textFile</span><span class="p">.</span><span class="nb">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">r</span><span class="p">:</span> <span class="n">Row</span><span class="p">(</span><span class="n">r</span><span class="p">)).</span><span class="n">toDF</span><span class="p">([</span><span class="s">"line"</span><span class="p">])</span>
+<span class="n">errors</span> <span class="o">=</span> <span class="n">df</span><span class="p">.</span><span class="nb">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">"line"</span><span class="p">).</span><span class="n">like</span><span class="p">(</span><span class="s">"%ERROR%"</span><span class="p">))</span>
+<span class="c1"># Counts all the errors
+</span><span class="n">errors</span><span class="p">.</span><span class="n">count</span><span class="p">()</span>
+<span class="c1"># Counts errors mentioning MySQL
+</span><span class="n">errors</span><span class="p">.</span><span class="nb">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">"line"</span><span class="p">).</span><span class="n">like</span><span class="p">(</span><span class="s">"%MySQL%"</span><span class="p">)).</span><span class="n">count</span><span class="p">()</span>
+<span class="c1"># Fetches the MySQL errors as an array of strings
+</span><span class="n">errors</span><span class="p">.</span><span class="nb">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">"line"</span><span class="p">).</span><span class="n">like</span><span class="p">(</span><span class="s">"%MySQL%"</span><span class="p">)).</span><span class="n">collect</span><span class="p">()</span></code></pre></figure>
 
 </div>
 </div>
@@ -361,17 +361,17 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">textFile</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">)</span>
+<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="nv">textFile</span> <span class="k">=</span> <span class="nv">sc</span><span class="o">.</span><span class="py">textFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">)</span>
 
-<span class="c1">// Creates a DataFrame having a single column named "line"
-</span><span class="k">val</span> <span class="n">df</span> <span class="k">=</span> <span class="n">textFile</span><span class="o">.</span><span class="n">toDF</span><span class="o">(</span><span class="s">"line"</span><span class="o">)</span>
-<span class="k">val</span> <span class="n">errors</span> <span class="k">=</span> <span class="n">df</span><span class="o">.</span><span class="n">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">"line"</span><span class="o">).</span><span class="n">like</span><span class="o">(</span><span class="s">"%ERROR%"</span><span class="o">))</span>
-<span class="c1">// Counts all the errors
-</span><span class="n">errors</span><span class="o">.</span><span class="n">count</span><span class="o">()</span>
-<span class="c1">// Counts errors mentioning MySQL
-</span><span class="n">errors</span><span class="o">.</span><span class="n">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">"line"</span><span class="o">).</span><span class="n">like</span><span class="o">(</span><span class="s">"%MySQL%"</span><span class="o">)).</span><span class="n">count</span><span class="o">()</span>
-<span class="c1">// Fetches the MySQL errors as an array of strings
-</span><span class="n">errors</span><span class="o">.</span><span class="n">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">"line"</span><span class="o">).</span><span class="n">like</span><span class="o">(</span><span class="s">"%MySQL%"</span><span class="o">)).</span><span class="n">collect</span><span class="o">()</span></code></pre></figure>
+<span class="c1">// Creates a DataFrame having a single column named "line"</span>
+<span class="k">val</span> <span class="nv">df</span> <span class="k">=</span> <span class="nv">textFile</span><span class="o">.</span><span class="py">toDF</span><span class="o">(</span><span class="s">"line"</span><span class="o">)</span>
+<span class="k">val</span> <span class="nv">errors</span> <span class="k">=</span> <span class="nv">df</span><span class="o">.</span><span class="py">filter</span><span class="o">(</span><span class="nf">col</span><span class="o">(</span><span class="s">"line"</span><span class="o">).</span><span class="py">like</span><span class="o">(</span><span class="s">"%ERROR%"</span><span class="o">))</span>
+<span class="c1">// Counts all the errors</span>
+<span class="nv">errors</span><span class="o">.</span><span class="py">count</span><span class="o">()</span>
+<span class="c1">// Counts errors mentioning MySQL</span>
+<span class="nv">errors</span><span class="o">.</span><span class="py">filter</span><span class="o">(</span><span class="nf">col</span><span class="o">(</span><span class="s">"line"</span><span class="o">).</span><span class="py">like</span><span class="o">(</span><span class="s">"%MySQL%"</span><span class="o">)).</span><span class="py">count</span><span class="o">()</span>
+<span class="c1">// Fetches the MySQL errors as an array of strings</span>
+<span class="nv">errors</span><span class="o">.</span><span class="py">filter</span><span class="o">(</span><span class="nf">col</span><span class="o">(</span><span class="s">"line"</span><span class="o">).</span><span class="py">like</span><span class="o">(</span><span class="s">"%MySQL%"</span><span class="o">)).</span><span class="py">collect</span><span class="o">()</span></code></pre></figure>
 
 </div>
 </div>
@@ -380,14 +380,14 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <div class="code code-tab">
 
 <figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Creates a DataFrame having a single column named "line"</span>
-<span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">);</span>
-<span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">Row</span><span class="o">&gt;</span> <span class="n">rowRDD</span> <span class="o">=</span> <span class="n">textFile</span><span class="o">.</span><span class="na">map</span><span class="o">(</span><span class="nl">RowFactory:</span><span class="o">:</span><span class="n">create</span><span class="o">);</span>
-<span class="n">List</span><span class="o">&lt;</span><span class="n">StructField</span><span class="o">&gt;</span> <span class="n">fields</span> <span class="o">=</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span>
-  <span class="n">DataTypes</span><span class="o">.</span><span class="na">createStructField</span><span class="o">(</span><span class="s">"line"</span><span class="o">,</span> <span class="n">DataTypes</span><span class="o">.</span><span class="na">StringType</span><span class="o">,</span> <span class="kc">true</span><span class="o">));</span>
-<span class="n">StructType</span> <span class="n">schema</span> <span class="o">=</span> <span class="n">DataTypes</span><span class="o">.</span><span class="na">createStructType</span><span class="o">(</span><span class="n">fields</span><span class="o">);</span>
-<span class="n">DataFrame</span> <span class="n">df</span> <span class="o">=</span> <span class="n">sqlContext</span><span class="o">.</span><span class="na">createDataFrame</span><span class="o">(</span><span class="n">rowRDD</span><span class="o">,</span> <span class="n">schema</span><span class="o">);</span>
-
-<span class="n">DataFrame</span> <span class="n">errors</span> <span class="o">=</span> <span class="n">df</span><span class="o">.</span><span class="na">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">"line"</span><span class="o">).</span><span class="na">like</span><span class="o">(</span><span class="s">"%ERROR%"</span><span class="o">));</span>
+<span class="nc">JavaRDD</span><span class="o">&lt;</span><span class="nc">String</span><span class="o">&gt;</span> <span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="s">"hdfs://..."</span><span class="o">);</span>
+<span class="nc">JavaRDD</span><span class="o">&lt;</span><span class="nc">Row</span><span class="o">&gt;</span> <span class="n">rowRDD</span> <span class="o">=</span> <span class="n">textFile</span><span class="o">.</span><span class="na">map</span><span class="o">(</span><span class="nl">RowFactory:</span><span class="o">:</span><span class="n">create</span><span class="o">);</span>
+<span class="nc">List</span><span class="o">&lt;</span><span class="nc">StructField</span><span class="o">&gt;</span> <span class="n">fields</span> <span class="o">=</span> <span class="nc">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span>
+  <span class="nc">DataTypes</span><span class="o">.</span><span class="na">createStructField</span><span class="o">(</span><span class="s">"line"</span><span class="o">,</span> <span class="nc">DataTypes</span><span class="o">.</span><span class="na">StringType</span><span class="o">,</span> <span class="kc">true</span><span class="o">));</span>
+<span class="nc">StructType</span> <span class="n">schema</span> <span class="o">=</span> <span class="nc">DataTypes</span><span class="o">.</span><span class="na">createStructType</span><span class="o">(</span><span class="n">fields</span><span class="o">);</span>
+<span class="nc">DataFrame</span> <span class="n">df</span> <span class="o">=</span> <span class="n">sqlContext</span><span class="o">.</span><span class="na">createDataFrame</span><span class="o">(</span><span class="n">rowRDD</span><span class="o">,</span> <span class="n">schema</span><span class="o">);</span>
+
+<span class="nc">DataFrame</span> <span class="n">errors</span> <span class="o">=</span> <span class="n">df</span><span class="o">.</span><span class="na">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">"line"</span><span class="o">).</span><span class="na">like</span><span class="o">(</span><span class="s">"%ERROR%"</span><span class="o">));</span>
 <span class="c1">// Counts all the errors</span>
 <span class="n">errors</span><span class="o">.</span><span class="na">count</span><span class="o">();</span>
 <span class="c1">// Counts errors mentioning MySQL</span>
@@ -417,26 +417,26 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c"># Creates a DataFrame based on a table named "people"</span>
-<span class="c"># stored in a MySQL database.</span>
-<span class="n">url</span> <span class="o">=</span> \
+<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c1"># Creates a DataFrame based on a table named "people"
+# stored in a MySQL database.
+</span><span class="n">url</span> <span class="o">=</span> \
   <span class="s">"jdbc:mysql://yourIP:yourPort/test?user=yourUsername;password=yourPassword"</span>
 <span class="n">df</span> <span class="o">=</span> <span class="n">sqlContext</span> \
-  <span class="o">.</span><span class="n">read</span> \
-  <span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="s">"jdbc"</span><span class="p">)</span> \
-  <span class="o">.</span><span class="n">option</span><span class="p">(</span><span class="s">"url"</span><span class="p">,</span> <span class="n">url</span><span class="p">)</span> \
-  <span class="o">.</span><span class="n">option</span><span class="p">(</span><span class="s">"dbtable"</span><span class="p">,</span> <span class="s">"people"</span><span class="p">)</span> \
-  <span class="o">.</span><span class="n">load</span><span class="p">()</span>
+  <span class="p">.</span><span class="n">read</span> \
+  <span class="p">.</span><span class="nb">format</span><span class="p">(</span><span class="s">"jdbc"</span><span class="p">)</span> \
+  <span class="p">.</span><span class="n">option</span><span class="p">(</span><span class="s">"url"</span><span class="p">,</span> <span class="n">url</span><span class="p">)</span> \
+  <span class="p">.</span><span class="n">option</span><span class="p">(</span><span class="s">"dbtable"</span><span class="p">,</span> <span class="s">"people"</span><span class="p">)</span> \
+  <span class="p">.</span><span class="n">load</span><span class="p">()</span>
 
-<span class="c"># Looks the schema of this DataFrame.</span>
-<span class="n">df</span><span class="o">.</span><span class="n">printSchema</span><span class="p">()</span>
+<span class="c1"># Looks the schema of this DataFrame.
+</span><span class="n">df</span><span class="p">.</span><span class="n">printSchema</span><span class="p">()</span>
 
-<span class="c"># Counts people by age</span>
-<span class="n">countsByAge</span> <span class="o">=</span> <span class="n">df</span><span class="o">.</span><span class="n">groupBy</span><span class="p">(</span><span class="s">"age"</span><span class="p">)</span><span class="o">.</span><span class="n">count</span><span class="p">()</span>
-<span class="n">countsByAge</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
+<span class="c1"># Counts people by age
+</span><span class="n">countsByAge</span> <span class="o">=</span> <span class="n">df</span><span class="p">.</span><span class="n">groupBy</span><span class="p">(</span><span class="s">"age"</span><span class="p">).</span><span class="n">count</span><span class="p">()</span>
+<span class="n">countsByAge</span><span class="p">.</span><span class="n">show</span><span class="p">()</span>
 
-<span class="c"># Saves countsByAge to S3 in the JSON format.</span>
-<span class="n">countsByAge</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="s">"json"</span><span class="p">)</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s">"s3a://..."</span><span class="p">)</span></code></pre></figure>
+<span class="c1"># Saves countsByAge to S3 in the JSON format.
+</span><span class="n">countsByAge</span><span class="p">.</span><span class="n">write</span><span class="p">.</span><span class="nb">format</span><span class="p">(</span><span class="s">"json"</span><span class="p">).</span><span class="n">save</span><span class="p">(</span><span class="s">"s3a://..."</span><span class="p">)</span></code></pre></figure>
 
 </div>
 </div>
@@ -444,26 +444,26 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// Creates a DataFrame based on a table named "people"
-// stored in a MySQL database.
-</span><span class="k">val</span> <span class="n">url</span> <span class="k">=</span>
+<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// Creates a DataFrame based on a table named "people"</span>
+<span class="c1">// stored in a MySQL database.</span>
+<span class="k">val</span> <span class="nv">url</span> <span class="k">=</span>
   <span class="s">"jdbc:mysql://yourIP:yourPort/test?user=yourUsername;password=yourPassword"</span>
-<span class="k">val</span> <span class="n">df</span> <span class="k">=</span> <span class="n">sqlContext</span>
-  <span class="o">.</span><span class="n">read</span>
-  <span class="o">.</span><span class="n">format</span><span class="o">(</span><span class="s">"jdbc"</span><span class="o">)</span>
-  <span class="o">.</span><span class="n">option</span><span class="o">(</span><span class="s">"url"</span><span class="o">,</span> <span class="n">url</span><span class="o">)</span>
-  <span class="o">.</span><span class="n">option</span><span class="o">(</span><span class="s">"dbtable"</span><span class="o">,</span> <span class="s">"people"</span><span class="o">)</span>
-  <span class="o">.</span><span class="n">load</span><span class="o">()</span>
+<span class="k">val</span> <span class="nv">df</span> <span class="k">=</span> <span class="n">sqlContext</span>
+  <span class="o">.</span><span class="py">read</span>
+  <span class="o">.</span><span class="py">format</span><span class="o">(</span><span class="s">"jdbc"</span><span class="o">)</span>
+  <span class="o">.</span><span class="py">option</span><span class="o">(</span><span class="s">"url"</span><span class="o">,</span> <span class="n">url</span><span class="o">)</span>
+  <span class="o">.</span><span class="py">option</span><span class="o">(</span><span class="s">"dbtable"</span><span class="o">,</span> <span class="s">"people"</span><span class="o">)</span>
+  <span class="o">.</span><span class="py">load</span><span class="o">()</span>
 
-<span class="c1">// Looks the schema of this DataFrame.
-</span><span class="n">df</span><span class="o">.</span><span class="n">printSchema</span><span class="o">()</span>
+<span class="c1">// Looks the schema of this DataFrame.</span>
+<span class="nv">df</span><span class="o">.</span><span class="py">printSchema</span><span class="o">()</span>
 
-<span class="c1">// Counts people by age
-</span><span class="k">val</span> <span class="n">countsByAge</span> <span class="k">=</span> <span class="n">df</span><span class="o">.</span><span class="n">groupBy</span><span class="o">(</span><span class="s">"age"</span><span class="o">).</span><span class="n">count</span><span class="o">()</span>
-<span class="n">countsByAge</span><span class="o">.</span><span class="n">show</span><span class="o">()</span>
+<span class="c1">// Counts people by age</span>
+<span class="k">val</span> <span class="nv">countsByAge</span> <span class="k">=</span> <span class="nv">df</span><span class="o">.</span><span class="py">groupBy</span><span class="o">(</span><span class="s">"age"</span><span class="o">).</span><span class="py">count</span><span class="o">()</span>
+<span class="nv">countsByAge</span><span class="o">.</span><span class="py">show</span><span class="o">()</span>
 
-<span class="c1">// Saves countsByAge to S3 in the JSON format.
-</span><span class="n">countsByAge</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="n">format</span><span class="o">(</span><span class="s">"json"</span><span class="o">).</span><span class="n">save</span><span class="o">(</span><span class="s">"s3a://..."</span><span class="o">)</span></code></pre></figure>
+<span class="c1">// Saves countsByAge to S3 in the JSON format.</span>
+<span class="nv">countsByAge</span><span class="o">.</span><span class="py">write</span><span class="o">.</span><span class="py">format</span><span class="o">(</span><span class="s">"json"</span><span class="o">).</span><span class="py">save</span><span class="o">(</span><span class="s">"s3a://..."</span><span class="o">)</span></code></pre></figure>
 
 </div>
 </div>
@@ -473,9 +473,9 @@ A simple MySQL table "people" is used in the example and this table has two colu
 
 <figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Creates a DataFrame based on a table named "people"</span>
 <span class="c1">// stored in a MySQL database.</span>
-<span class="n">String</span> <span class="n">url</span> <span class="o">=</span>
+<span class="nc">String</span> <span class="n">url</span> <span class="o">=</span>
   <span class="s">"jdbc:mysql://yourIP:yourPort/test?user=yourUsername;password=yourPassword"</span><span class="o">;</span>
-<span class="n">DataFrame</span> <span class="n">df</span> <span class="o">=</span> <span class="n">sqlContext</span>
+<span class="nc">DataFrame</span> <span class="n">df</span> <span class="o">=</span> <span class="n">sqlContext</span>
   <span class="o">.</span><span class="na">read</span><span class="o">()</span>
   <span class="o">.</span><span class="na">format</span><span class="o">(</span><span class="s">"jdbc"</span><span class="o">)</span>
   <span class="o">.</span><span class="na">option</span><span class="o">(</span><span class="s">"url"</span><span class="o">,</span> <span class="n">url</span><span class="o">)</span>
@@ -486,7 +486,7 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <span class="n">df</span><span class="o">.</span><span class="na">printSchema</span><span class="o">();</span>
 
 <span class="c1">// Counts people by age</span>
-<span class="n">DataFrame</span> <span class="n">countsByAge</span> <span class="o">=</span> <span class="n">df</span><span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="s">"age"</span><span class="o">).</span><span class="na">count</span><span class="o">();</span>
+<span class="nc">DataFrame</span> <span class="n">countsByAge</span> <span class="o">=</span> <span class="n">df</span><span class="o">.</span><span class="na">groupBy</span><span class="o">(</span><span class="s">"age"</span><span class="o">).</span><span class="na">count</span><span class="o">();</span>
 <span class="n">countsByAge</span><span class="o">.</span><span class="na">show</span><span class="o">();</span>
 
 <span class="c1">// Saves countsByAge to S3 in the JSON format.</span>
@@ -521,19 +521,19 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c"># Every record of this DataFrame contains the label and</span>
-<span class="c"># features represented by a vector.</span>
-<span class="n">df</span> <span class="o">=</span> <span class="n">sqlContext</span><span class="o">.</span><span class="n">createDataFrame</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="p">[</span><span class="s">"label"</span><span class="p">,</span> <span class="s">"features"</span><span class="p">])</span>
+<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c1"># Every record of this DataFrame contains the label and
+# features represented by a vector.
+</span><span class="n">df</span> <span class="o">=</span> <span class="n">sqlContext</span><span class="p">.</span><span class="n">createDataFrame</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="p">[</span><span class="s">"label"</span><span class="p">,</span> <span class="s">"features"</span><span class="p">])</span>
 
-<span class="c"># Set parameters for the algorithm.</span>
-<span class="c"># Here, we limit the number of iterations to 10.</span>
-<span class="n">lr</span> <span class="o">=</span> <span class="n">LogisticRegression</span><span class="p">(</span><span class="n">maxIter</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span>
+<span class="c1"># Set parameters for the algorithm.
+# Here, we limit the number of iterations to 10.
+</span><span class="n">lr</span> <span class="o">=</span> <span class="n">LogisticRegression</span><span class="p">(</span><span class="n">maxIter</span><span class="o">=</span><span class="mi">10</span><span class="p">)</span>
 
-<span class="c"># Fit the model to the data.</span>
-<span class="n">model</span> <span class="o">=</span> <span class="n">lr</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">df</span><span class="p">)</span>
+<span class="c1"># Fit the model to the data.
+</span><span class="n">model</span> <span class="o">=</span> <span class="n">lr</span><span class="p">.</span><span class="n">fit</span><span class="p">(</span><span class="n">df</span><span class="p">)</span>
 
-<span class="c"># Given a dataset, predict each point's label, and show the results.</span>
-<span class="n">model</span><span class="o">.</span><span class="n">transform</span><span class="p">(</span><span class="n">df</span><span class="p">)</span><span class="o">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
+<span class="c1"># Given a dataset, predict each point's label, and show the results.
+</span><span class="n">model</span><span class="p">.</span><span class="n">transform</span><span class="p">(</span><span class="n">df</span><span class="p">).</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
 
 </div>
 </div>
@@ -541,22 +541,22 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// Every record of this DataFrame contains the label and
-// features represented by a vector.
-</span><span class="k">val</span> <span class="n">df</span> <span class="k">=</span> <span class="n">sqlContext</span><span class="o">.</span><span class="n">createDataFrame</span><span class="o">(</span><span class="n">data</span><span class="o">).</span><span class="n">toDF</span><span class="o">(</span><span class="s">"label"</span><span class="o">,</span> <span class="s">"features"</span><span class="o">)</span>
+<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// Every record of this DataFrame contains the label and</span>
+<span class="c1">// features represented by a vector.</span>
+<span class="k">val</span> <span class="nv">df</span> <span class="k">=</span> <span class="nv">sqlContext</span><span class="o">.</span><span class="py">createDataFrame</span><span class="o">(</span><span class="n">data</span><span class="o">).</span><span class="py">toDF</span><span class="o">(</span><span class="s">"label"</span><span class="o">,</span> <span class="s">"features"</span><span class="o">)</span>
 
-<span class="c1">// Set parameters for the algorithm.
-// Here, we limit the number of iterations to 10.
-</span><span class="k">val</span> <span class="n">lr</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">LogisticRegression</span><span class="o">().</span><span class="n">setMaxIter</span><span class="o">(</span><span class="mi">10</span><span class="o">)</span>
+<span class="c1">// Set parameters for the algorithm.</span>
+<span class="c1">// Here, we limit the number of iterations to 10.</span>
+<span class="k">val</span> <span class="nv">lr</span> <span class="k">=</span> <span class="k">new</span> <span class="nc">LogisticRegression</span><span class="o">().</span><span class="py">setMaxIter</span><span class="o">(</span><span class="mi">10</span><span class="o">)</span>
 
-<span class="c1">// Fit the model to the data.
-</span><span class="k">val</span> <span class="n">model</span> <span class="k">=</span> <span class="n">lr</span><span class="o">.</span><span class="n">fit</span><span class="o">(</span><span class="n">df</span><span class="o">)</span>
+<span class="c1">// Fit the model to the data.</span>
+<span class="k">val</span> <span class="nv">model</span> <span class="k">=</span> <span class="nv">lr</span><span class="o">.</span><span class="py">fit</span><span class="o">(</span><span class="n">df</span><span class="o">)</span>
 
-<span class="c1">// Inspect the model: get the feature weights.
-</span><span class="k">val</span> <span class="n">weights</span> <span class="k">=</span> <span class="n">model</span><span class="o">.</span><span class="n">weights</span>
+<span class="c1">// Inspect the model: get the feature weights.</span>
+<span class="k">val</span> <span class="nv">weights</span> <span class="k">=</span> <span class="nv">model</span><span class="o">.</span><span class="py">weights</span>
 
-<span class="c1">// Given a dataset, predict each point's label, and show the results.
-</span><span class="n">model</span><span class="o">.</span><span class="n">transform</span><span class="o">(</span><span class="n">df</span><span class="o">).</span><span class="n">show</span><span class="o">()</span></code></pre></figure>
+<span class="c1">// Given a dataset, predict each point's label, and show the results.</span>
+<span class="nv">model</span><span class="o">.</span><span class="py">transform</span><span class="o">(</span><span class="n">df</span><span class="o">).</span><span class="py">show</span><span class="o">()</span></code></pre></figure>
 
 </div>
 </div>
@@ -566,21 +566,21 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 
 <figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Every record of this DataFrame contains the label and</span>
 <span class="c1">// features represented by a vector.</span>
-<span class="n">StructType</span> <span class="n">schema</span> <span class="o">=</span> <span class="k">new</span> <span class="n">StructType</span><span class="o">(</span><span class="k">new</span> <span class="n">StructField</span><span class="o">[]{</span>
-  <span class="k">new</span> <span class="nf">StructField</span><span class="o">(</span><span class="s">"label"</span><span class="o">,</span> <span class="n">DataTypes</span><span class="o">.</span><span class="na">DoubleType</span><span class="o">,</span> <span class="kc">false</span><span class="o">,</span> <span class="n">Metadata</span><span class="o">.</span><span class="na">empty</span><span class="o">()),</span>
-  <span class="k">new</span> <span class="nf">StructField</span><span class="o">(</span><span class="s">"features"</span><span class="o">,</span> <span class="k">new</span> <span class="n">VectorUDT</span><span class="o">(),</span> <span class="kc">false</span><span class="o">,</span> <span class="n">Metadata</span><span class="o">.</span><span class="na">empty</span><span class="o">()),</span>
+<span class="nc">StructType</span> <span class="n">schema</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">StructType</span><span class="o">(</span><span class="k">new</span> <span class="nc">StructField</span><span class="o">[]{</span>
+  <span class="k">new</span> <span class="nf">StructField</span><span class="o">(</span><span class="s">"label"</span><span class="o">,</span> <span class="nc">DataTypes</span><span class="o">.</span><span class="na">DoubleType</span><span class="o">,</span> <span class="kc">false</span><span class="o">,</span> <span class="nc">Metadata</span><span class="o">.</span><span class="na">empty</span><span class="o">()),</span>
+  <span class="k">new</span> <span class="nf">StructField</span><span class="o">(</span><span class="s">"features"</span><span class="o">,</span> <span class="k">new</span> <span class="nc">VectorUDT</span><span class="o">(),</span> <span class="kc">false</span><span class="o">,</span> <span class="nc">Metadata</span><span class="o">.</span><span class="na">empty</span><span class="o">()),</span>
 <span class="o">});</span>
-<span class="n">DataFrame</span> <span class="n">df</span> <span class="o">=</span> <span class="n">jsql</span><span class="o">.</span><span class="na">createDataFrame</span><span class="o">(</span><span class="n">data</span><span class="o">,</span> <span class="n">schema</span><span class="o">);</span>
+<span class="nc">DataFrame</span> <span class="n">df</span> <span class="o">=</span> <span class="n">jsql</span><span class="o">.</span><span class="na">createDataFrame</span><span class="o">(</span><span class="n">data</span><span class="o">,</span> <span class="n">schema</span><span class="o">);</span>
 
 <span class="c1">// Set parameters for the algorithm.</span>
 <span class="c1">// Here, we limit the number of iterations to 10.</span>
-<span class="n">LogisticRegression</span> <span class="n">lr</span> <span class="o">=</span> <span class="k">new</span> <span class="n">LogisticRegression</span><span class="o">().</span><span class="na">setMaxIter</span><span class="o">(</span><span class="mi">10</span><span class="o">);</span>
+<span class="nc">LogisticRegression</span> <span class="n">lr</span> <span class="o">=</span> <span class="k">new</span> <span class="nc">LogisticRegression</span><span class="o">().</span><span class="na">setMaxIter</span><span class="o">(</span><span class="mi">10</span><span class="o">);</span>
 
 <span class="c1">// Fit the model to the data.</span>
-<span class="n">LogisticRegressionModel</span> <span class="n">model</span> <span class="o">=</span> <span class="n">lr</span><span class="o">.</span><span class="na">fit</span><span class="o">(</span><span class="n">df</span><span class="o">);</span>
+<span class="nc">LogisticRegressionModel</span> <span class="n">model</span> <span class="o">=</span> <span class="n">lr</span><span class="o">.</span><span class="na">fit</span><span class="o">(</span><span class="n">df</span><span class="o">);</span>
 
 <span class="c1">// Inspect the model: get the feature weights.</span>
-<span class="n">Vector</span> <span class="n">weights</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="na">weights</span><span class="o">();</span>
+<span class="nc">Vector</span> <span class="n">weights</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="na">weights</span><span class="o">();</span>
 
 <span class="c1">// Given a dataset, predict each point's label, and show the results.</span>
 <span class="n">model</span><span class="o">.</span><span class="na">transform</span><span class="o">(</span><span class="n">df</span><span class="o">).</span><span class="na">show</span><span class="o">();</span></code></pre></figure>
diff --git a/site/news/index.html b/site/news/index.html
index dc3601a..6b5cecb 100644
--- a/site/news/index.html
+++ b/site/news/index.html
@@ -219,7 +219,6 @@
     <div class="entry-content"><p>The next official Spark release is Spark 3.1.1 instead of Spark 3.1.0.
 There was a technical issue during Spark 3.1.0 RC1 preparation,
 see <a href="https://www.mail-archive.com/dev@spark.apache.org/msg27133.html">[VOTE] Release Spark 3.1.0 (RC1)</a> in the Spark dev mailing list.</p>
-
 </div>
   </article>
 
@@ -283,7 +282,6 @@ see <a href="https://www.mail-archive.com/dev@spark.apache.org/msg27133.html">[V
       <div class="entry-date">December 23, 2019</div>
     </header>
     <div class="entry-content"><p>To enable wide-scale community testing of the upcoming Spark 3.0 release, the Apache Spark community has posted a <a href="https://archive.apache.org/dist/spark/spark-3.0.0-preview2/">Spark 3.0.0 preview2 release</a>. This preview is <b>not a stable release in terms of either API or functionality</b>, but it is meant to give the community early access to try the code that will become Spark 3.0. If you would like to test the release, please download it, a [...]
-
 </div>
   </article>
 
@@ -293,7 +291,6 @@ see <a href="https://www.mail-archive.com/dev@spark.apache.org/msg27133.html">[V
       <div class="entry-date">November 6, 2019</div>
     </header>
     <div class="entry-content"><p>To enable wide-scale community testing of the upcoming Spark 3.0 release, the Apache Spark community has posted a <a href="https://archive.apache.org/dist/spark/spark-3.0.0-preview/">preview release of Spark 3.0</a>. This preview is <b>not a stable release in terms of either API or functionality</b>, but it is meant to give the community early access to try the code that will become Spark 3.0. If you would like to test the release, please download it, an [...]
-
 </div>
   </article>
 
@@ -327,7 +324,6 @@ However, maintaining Python 2/3 compatibility is an increasing burden and it ess
 the use of Python 3 features in Spark.
 Given the end of life (EOL) of Python 2 is coming, we plan to eventually drop Python 2 support as
 well. The current plan is as follows:</p>
-
 </div>
   </article>
 
@@ -481,7 +477,6 @@ well. The current plan is as follows:</p>
       <div class="entry-date">August 28, 2017</div>
     </header>
     <div class="entry-content"><p>The agenda for <a href="https://spark-summit.org/eu-2017/">Spark Summit EU 2017</a> is now available! The summit kicks off on October 24 with a full day of Apache Spark training followed by over 80+ talks featuring speakers from Shell, Netflix, Intel, IBM, Facebook, Toon and many more. Check out the <a href="https://spark-summit.org/eu-2017/schedule/">full schedule</a> and <a href="https://prevalentdesignevents.com/sparksummit/eu17/">register</a> to attend!</p>
-
 </div>
   </article>
 
@@ -536,7 +531,6 @@ well. The current plan is as follows:</p>
       <div class="entry-date">November 15, 2016</div>
     </header>
     <div class="entry-content"><p>We are proud to announce that Apache Spark won the <a href="http://sortbenchmark.org/">2016 CloudSort Benchmark</a> (both Daytona and Indy category). A joint team from Nanjing University, Alibaba Group, and Databricks Inc. entered the competition using NADSort, a distributed sorting program built on top of Spark, and set a new world record as the most cost-efficient way to sort 100TB of data.</p>
-
 </div>
   </article>
 
@@ -546,7 +540,6 @@ well. The current plan is as follows:</p>
       <div class="entry-date">November 14, 2016</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">Apache Spark 2.0.2</a>! This maintenance release includes fixes across several areas of Spark, as well as Kafka 0.10 and runtime metrics support for Structured Streaming.</p>
-
 </div>
   </article>
 
@@ -556,7 +549,6 @@ well. The current plan is as follows:</p>
       <div class="entry-date">November 7, 2016</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">Spark 1.6.3</a>! This maintenance release includes fixes across several areas of Spark.</p>
-
 </div>
   </article>
 
@@ -584,7 +576,6 @@ well. The current plan is as follows:</p>
       <div class="entry-date">June 25, 2016</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">Spark 1.6.2</a>! This maintenance release includes fixes across several areas of Spark.</p>
-
 </div>
   </article>
 
@@ -603,7 +594,6 @@ well. The current plan is as follows:</p>
       <div class="entry-date">May 26, 2016</div>
     </header>
     <div class="entry-content"><p>To enable wide-scale community testing of the upcoming Spark 2.0 release, the Apache Spark team has posted a <a href="https://archive.apache.org/dist/spark/spark-2.0.0-preview/">preview release of Spark 2.0</a>. This preview is <b>not a stable release in terms of either API or functionality</b>, but it is meant to give the community early access to try the code that will become Spark 2.0. If you would like to test the release, simply download it, and sen [...]
-
 </div>
   </article>
 
@@ -622,7 +612,6 @@ well. The current plan is as follows:</p>
       <div class="entry-date">March 9, 2016</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">Spark 1.6.1</a>! This maintenance release includes fixes across several areas of Spark, including significant updates to the experimental Dataset API.</p>
-
 </div>
   </article>
 
@@ -653,7 +642,6 @@ well. The current plan is as follows:</p>
 <a href="/releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">Spark 1.6.0</a>! 
 Spark 1.6.0 is the seventh release on the API-compatible 1.X line. 
 With this release the Spark community continues to grow, with contributions from 248 developers!</p>
-
 </div>
   </article>
 
@@ -672,7 +660,6 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">November 9, 2015</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">Spark 1.5.2</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.</p>
-
 </div>
   </article>
 
@@ -691,7 +678,6 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">October 2, 2015</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">Spark 1.5.1</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.</p>
-
 </div>
   </article>
 
@@ -701,7 +687,6 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">September 9, 2015</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-5-0.html" title="Spark Release 1.5.0">Spark 1.5.0</a>! Spark 1.5.0 is the sixth release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 230 developers and more than 1,400 commits!</p>
-
 </div>
   </article>
 
@@ -720,7 +705,6 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">July 15, 2015</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-4-1.html" title="Spark Release 1.4.1">Spark 1.4.1</a>! This is a maintenance release that includes contributions from 85 developers. Spark 1.4.1 includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, Spark SQL, and MLlib.</p>
-
 </div>
   </article>
 
@@ -739,7 +723,6 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">June 11, 2015</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-4-0.html" title="Spark Release 1.4.0">Spark 1.4.0</a>! Spark 1.4.0 is the fifth release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 210 developers and more than 1,000 commits!</p>
-
 </div>
   </article>
 
@@ -760,7 +743,6 @@ With this release the Spark community continues to grow, with contributions from
     <div class="entry-content"><p>There is one month left until <a href="https://spark-summit.org/2015/">Spark Summit 2015</a>, which
 will be held in San Francisco on June 15th to 17th.
 The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presentations</a> from over 50 organizations using Spark, focused on use cases and ongoing development.</p>
-
 </div>
   </article>
 
@@ -770,7 +752,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">April 20, 2015</div>
     </header>
     <div class="entry-content"><p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top.</p>
-
 </div>
   </article>
 
@@ -780,7 +761,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">April 17, 2015</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers.</p>
-
 </div>
   </article>
 
@@ -790,7 +770,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">March 13, 2015</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-3-0.html" title="Spark Release 1.3.0">Spark 1.3.0</a>! Spark 1.3.0 is the third release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 174 developers and more than 1,000 commits!</p>
-
 </div>
   </article>
 
@@ -800,7 +779,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">February 9, 2015</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-1.html" title="Spark Release 1.2.1">Spark 1.2.1</a>! This is a maintenance release that includes contributions from 69 developers. Spark 1.2.1 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, SQL, GraphX, and MLlib.</p>
-
 </div>
   </article>
 
@@ -810,7 +788,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">January 21, 2015</div>
     </header>
     <div class="entry-content"><p>The <a href="http://spark-summit.org/east/2015/agenda">agenda for Spark Summit East</a> is now posted, with 38 talks from organizations including Goldman Sachs, Baidu, Salesforce, Novartis, Cisco and others. This inaugural Spark conference on the US East Coast will run March 18th-19th 2015 in New York City. More details are available on the <a href="http://spark-summit.org/east/2015/agenda">Spark Summit East website</a>, where you can also <a href="http: [...]
-
 </div>
   </article>
 
@@ -820,7 +797,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">December 18, 2014</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-0.html" title="Spark Release 1.2.0">Spark 1.2.0</a>! Spark 1.2.0 is the third release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 172 developers and more than 1,000 commits!</p>
-
 </div>
   </article>
 
@@ -830,7 +806,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">November 26, 2014</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-1-1.html" title="Spark Release 1.1.1">Spark 1.1.1</a>! This is a maintenance release that includes contributions from 55 developers. Spark 1.1.1 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, SQL, GraphX, and MLlib.</p>
-
 </div>
   </article>
 
@@ -840,7 +815,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">November 26, 2014</div>
     </header>
     <div class="entry-content"><p>Registration is now open for <a href="http://spark-summit.org/east">Spark Summit East 2015</a>, to be held on March 18th and 19th in New York City. The conference will be a great chance to meet people from throughout the Spark community as well as attend training workshops on Spark. If you haven&#8217;t been to previous Spark Summits, you can find content from previous events on the <a href="http://spark-summit.org">Spark Summit website</a>.</p>
-
 </div>
   </article>
 
@@ -850,7 +824,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">November 5, 2014</div>
     </header>
     <div class="entry-content"><p>We are proud to announce that Spark won the <a href="http://sortbenchmark.org/">2014 Gray Sort Benchmark</a> (Daytona 100TB category). A team from <a href="http://databricks.com/">Databricks</a> including Spark committers, Reynold Xin, Xiangrui Meng, and Matei Zaharia, <a href="http://databricks.com/blog/2014/11/05/spark-officially-sets-a-new-record-in-large-scale-sorting.html">entered the benchmark using Spark</a>. Spark won a tie with the Themis team f [...]
-
 </div>
   </article>
 
@@ -860,7 +833,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">October 18, 2014</div>
     </header>
     <div class="entry-content"><p>After successful events in the past two years, the <a href="http://spark-summit.org">Spark Summit</a> conference has expanded for 2015, offering both an event in New York on March 18-19 and one in San Francisco on June 15-17. The conference is a great chance to meet people from throughout the Spark community and see the latest news, tips and use cases.</p>
-
 </div>
   </article>
 
@@ -870,7 +842,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">September 11, 2014</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 171 developers!</p>
-
 </div>
   </article>
 
@@ -880,7 +851,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <div class="entry-date">August 5, 2014</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-0-2.html" title="Spark Release 1.0.2">Spark 1.0.2</a>! This release includes contributions from 30 developers. Spark 1.0.2 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, and MLlib.</p>
-
 </div>
   </article>
 
@@ -892,7 +862,6 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">
 Spark 0.9.2</a>! Apache Spark 0.9.2 is a maintenance release with bug fixes. We recommend all 0.9.x users to upgrade to this stable release. 
 Contributions to this release came from 28 developers.</p>
-
 </div>
   </article>
 
@@ -902,7 +871,6 @@ Contributions to this release came from 28 developers.</p>
       <div class="entry-date">July 18, 2014</div>
     </header>
     <div class="entry-content"><p>The videos and slides for Spark Summit 2014 are now all <a href="http://spark-summit.org/2014/agenda">available online</a>. Watch them to see the latest news from the Spark community as well as use cases and applications built on top. In addition, <a href="http://spark-summit.org/2014/training">training materials</a> from the Summit, including hands-on exercises, are all available freely as well.</p>
-
 </div>
   </article>
 
@@ -912,7 +880,6 @@ Contributions to this release came from 28 developers.</p>
       <div class="entry-date">July 11, 2014</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-0-1.html" title="Spark Release 1.0.1">Spark 1.0.1</a>! This release includes contributions from 70 developers. Spark 1.0.0 includes fixes across several areas of Spark, including the core API, PySpark, and MLlib. It also includes new features in Spark&#8217;s (alpha) SQL library, including support for JSON data and performance and stability fixes.</p>
-
 </div>
   </article>
 
@@ -925,7 +892,6 @@ Contributions to this release came from 28 developers.</p>
 will be held in San Francisco on June 30th to July 2nd.
 The Summit will contain <a href="http://spark-summit.org/2014/agenda">presentations</a> from over 50
 organizations using Spark, focused on use cases and ongoing development.</p>
-
 </div>
   </article>
 
@@ -936,7 +902,6 @@ organizations using Spark, focused on use cases and ongoing development.</p>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-0-0.html" title="Spark Release 1.0.0">Spark 1.0.0</a>! Spark 1.0.0 is the first in the 1.X line of releases, providing API stability for Spark&#8217;s core interfaces. It is Spark&#8217;s largest release ever, with contributions from 117 developers. 
 This release expands Spark&#8217;s standard libraries, introducing a new SQL package (Spark SQL) that lets users integrate SQL queries into existing Spark workflows. MLlib, Spark&#8217;s machine learning library, is expanded with sparse vector support and several new algorithms. The GraphX and Streaming libraries also introduce new features and optimizations. Spark&#8217;s core engine adds support for secured YARN clusters, a unified tool for submitting Spark applications, and several pe [...]
-
 </div>
   </article>
 
@@ -950,7 +915,6 @@ is now <a href="http://spark-summit.org/2014/agenda">available online</a>. With
 talks from more than 50 organizations, it will be the biggest Spark event yet, bringing
 the developer and user communities together. Join us in person or tune in online to learn
 about the latest happenings in Spark.</p>
-
 </div>
   </article>
 
@@ -963,7 +927,6 @@ about the latest happenings in Spark.</p>
 Spark 0.9.1</a>! Apache Spark 0.9.1 is a maintenance release with bug fixes, performance improvements, better stability with YARN and 
 improved parity of the Scala and Python API. We recommend all 0.9.0 users to upgrade to this stable release. 
 Contributions to this release came from 37 developers.</p>
-
 </div>
   </article>
 
@@ -976,7 +939,6 @@ Contributions to this release came from 37 developers.</p>
 and talk submissions are now open for <a href="http://spark-summit.org/2014">Spark Summit 2014</a>.
 This will be a 3-day event in San Francisco organized by multiple companies in the Spark community.
 The event will run <strong>June 30th to July 2nd</strong> in San Francisco, CA.</p>
-
 </div>
   </article>
 
@@ -986,7 +948,6 @@ The event will run <strong>June 30th to July 2nd</strong> in San Francisco, CA.<
       <div class="entry-date">February 27, 2014</div>
     </header>
     <div class="entry-content"><p>The Apache Software Foundation <a href="https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces50">announced</a> today that Spark has graduated from the Apache Incubator to become a top-level Apache project, signifying that the project&#8217;s community and products have been well-governed under the ASF&#8217;s meritocratic process and principles. This is a major step for the community and we are very proud to share this news w [...]
-
 </div>
   </article>
 
@@ -1000,7 +961,6 @@ Spark 0.9.0</a>! Spark 0.9.0 is a major release and Spark&#8217;s largest releas
 This release expands Spark&#8217;s standard libraries, introducing a new graph computation package (GraphX) and adding several new features to the machine learning and stream-processing packages. It also makes major improvements to the core engine,
 including external aggregations, a simplified H/A mode for long lived applications, and 
 hardened YARN support.</p>
-
 </div>
   </article>
 
@@ -1020,7 +980,6 @@ hardened YARN support.</p>
     </header>
     <div class="entry-content"><p>The <b><a href="http://www.spark-summit.org">Spark Summit 2013</a></b>, held in early December 2013 in downtown San Francisco, was a success!
 Over 450 Spark developers and enthusiasts from 13 countries and more than 180 companies came to learn from project leaders and production users of Spark, Shark, Spark Streaming and related projects about use cases, recent developments, and the Spark community roadmap.</p>
-
 </div>
   </article>
 
@@ -1030,7 +989,6 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">October 8, 2013</div>
     </header>
     <div class="entry-content"><p>We are excited to announce the <b><a href="http://www.spark-summit.org">first Spark Summit</a> on Dec 2, 2013 in Downtown San Francisco</b>. Come hear from key production users of Spark, Shark, Spark Streaming and related projects. Also find out where the development is going, and learn how to use the Spark stack in a variety of applications. The summit is being organized and sponsored by leading organizations in the Spark community.</p>
-
 </div>
   </article>
 
@@ -1049,7 +1007,6 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">September 5, 2013</div>
     </header>
     <div class="entry-content"><p>As we continue developing Spark, we would love to get feedback from users and hear what you&#8217;d like us to work on next. We&#8217;ve decided that a good way to do that is a survey &#8211; we hope to run this at regular intervals. If you have a few minutes to participate, <a href="https://docs.google.com/forms/d/1eMXp4GjcIXglxJe5vYYBzXKVm-6AiYt1KThJwhCjJiY/viewform">fill in the survey here</a>. Your time is greatly appreciated.</p>
-
 </div>
   </article>
 
@@ -1059,7 +1016,6 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">August 27, 2013</div>
     </header>
     <div class="entry-content"><p>We have released the next screencast, <a href="/screencasts/4-a-standalone-job-in-spark.html">A Standalone Job in Scala</a> that takes you beyond the Spark shell, helping you write your first standalone Spark job.</p>
-
 </div>
   </article>
 
@@ -1134,7 +1090,6 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">April 16, 2013</div>
     </header>
     <div class="entry-content"><p>We have released the first two screencasts in a series of short hands-on video training courses we will be publishing to help new users get up and running with Spark in minutes.</p>
-
 </div>
   </article>
 
diff --git a/site/powered-by.html b/site/powered-by.html
index e5f5b6b..66ee111 100644
--- a/site/powered-by.html
+++ b/site/powered-by.html
@@ -221,7 +221,7 @@ always allowed, as in &#8220;BigCoProduct is a widget for Apache Spark&#8221;.</
 
 <h2>Companies and Organizations</h2>
 
-<p>To add yourself to the list, please email <code class="highlighter-rouge">dev@spark.apache.org</code> with your organization name, URL, 
+<p>To add yourself to the list, please email <code class="language-plaintext highlighter-rouge">dev@spark.apache.org</code> with your organization name, URL, 
 a list of which Spark components you are using, and a short description of your use case.</p>
 
 <ul>
diff --git a/site/release-process.html b/site/release-process.html
index 65ad965..89b9cb8 100644
--- a/site/release-process.html
+++ b/site/release-process.html
@@ -250,12 +250,12 @@
 for details.</p>
 
 <p>If you want to do the release on another machine, you can transfer your secret key to that machine
-via the <code class="highlighter-rouge">gpg --export-secret-keys</code> and <code class="highlighter-rouge">gpg --import</code> commands.</p>
+via the <code class="language-plaintext highlighter-rouge">gpg --export-secret-keys</code> and <code class="language-plaintext highlighter-rouge">gpg --import</code> commands.</p>
 
 <p>The last step is to update the KEYS file with your code signing key
 <a href="https://www.apache.org/dev/openpgp.html#export-public-key">https://www.apache.org/dev/openpgp.html#export-public-key</a></p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Move dev/ to release/ when the voting is completed. See Finalize the Release below
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Move dev/ to release/ when the voting is completed. See Finalize the Release below
 svn co --depth=files "https://dist.apache.org/repos/dist/dev/spark" svn-spark
 # edit svn-spark/KEYS file
 svn ci --username $ASF_USERNAME --password "$ASF_PASSWORD" -m"Update KEYS"
@@ -283,17 +283,17 @@ to the test dashboard at https://amplab.cs.berkeley.edu/jenkins/view/Spark%20QA%
 <h3>Cutting a Release Candidate</h3>
 
 <p>If this is not the first RC, then make sure that the JIRA issues that have been solved since the
-last RC are marked as <code class="highlighter-rouge">Resolved</code> and has a <code class="highlighter-rouge">Target Versions</code> set to this release version.</p>
+last RC are marked as <code class="language-plaintext highlighter-rouge">Resolved</code> and have <code class="language-plaintext highlighter-rouge">Target Versions</code> set to this release version.</p>
 
 <p>To track any issues with pending PRs targeting this release, create a filter in JIRA with a query like this
-<code class="highlighter-rouge">project = SPARK AND "Target Version/s" = "12340470" AND status in (Open, Reopened, "In Progress")</code></p>
+<code class="language-plaintext highlighter-rouge">project = SPARK AND "Target Version/s" = "12340470" AND status in (Open, Reopened, "In Progress")</code></p>
 
 <p>For the target version string value to use, find the numeric value that corresponds to the release by looking into
 an existing issue with that target version and clicking on the version (e.g. find an issue targeting 2.2.1
 and click on the version link of its Target Versions field)</p>
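
If you prefer to run this check from a script rather than a saved JIRA filter, the same query can be issued against the ASF JIRA REST search endpoint. This is a minimal sketch, not part of the release tooling; the endpoint, field list, and result handling are assumptions, and the numeric target version is the placeholder from the query above.

import requests

jql = ('project = SPARK AND "Target Version/s" = "12340470" '
       'AND status in (Open, Reopened, "In Progress")')
resp = requests.get(
    "https://issues.apache.org/jira/rest/api/2/search",
    params={"jql": jql, "fields": "key,summary", "maxResults": 100},
)
resp.raise_for_status()
# Print the issues that still target the release and are not yet resolved.
for issue in resp.json()["issues"]:
    print(issue["key"], issue["fields"]["summary"])
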
 
-<p>Verify from <code class="highlighter-rouge">git log</code> whether they are actually making it in the new RC or not. Check for JIRA issues
-with <code class="highlighter-rouge">release-notes</code> label, and make sure they are documented in relevant migration guide for breaking
+<p>Verify from <code class="language-plaintext highlighter-rouge">git log</code> whether they actually made it into the new RC or not. Check for JIRA issues
+with the <code class="language-plaintext highlighter-rouge">release-notes</code> label, and make sure they are documented in the relevant migration guide for breaking
 changes or in the release news on the website later.</p>
 
 <p>Also check that all builds and tests are green on the RISELab Jenkins: https://amplab.cs.berkeley.edu/jenkins/; in particular, look for Spark Packaging, QA Compile, and QA Test.
@@ -307,9 +307,9 @@ Note that not all permutations are run on PR therefore it is important to check
   <li>Publish a snapshot to the Apache staging Maven repo.</li>
 </ol>
 
-<p>The process of cutting a release candidate has been automated via the <code class="highlighter-rouge">dev/create-release/do-release-docker.sh</code> script.
-Run this script, type information it requires, and wait until it finishes. You can also do a single step via the <code class="highlighter-rouge">-s</code> option.
-Please run <code class="highlighter-rouge">do-release-docker.sh -h</code> and see more details.</p>
+<p>The process of cutting a release candidate has been automated via the <code class="language-plaintext highlighter-rouge">dev/create-release/do-release-docker.sh</code> script.
+Run this script, enter the information it requires, and wait until it finishes. You can also run a single step via the <code class="language-plaintext highlighter-rouge">-s</code> option.
+Run <code class="language-plaintext highlighter-rouge">do-release-docker.sh -h</code> to see more details.</p>
 
 <h3>Call a Vote on the Release Candidate</h3>
 
@@ -325,7 +325,7 @@ Look at past voting threads to see how this proceeds. The email should follow
 </ul>
 
 <p>Once the vote is done, you should also send out a summary email with the totals, with a subject
-that looks something like <code class="highlighter-rouge">[VOTE][RESULT] ...</code>.</p>
+that looks something like <code class="language-plaintext highlighter-rouge">[VOTE][RESULT] ...</code>.</p>
 
 <h3>Finalize the Release</h3>
 
@@ -336,7 +336,7 @@ move the artifacts into the release folder, they cannot be removed.</strong></p>
 
 <p>After the vote passes, to upload the binaries to the Apache mirrors, you move the binaries from the dev directory (this should be where they were voted on) to the release directory. This &#8220;moving&#8221; is the only way you can add anything to the actual release directory. (Note: only the PMC can move artifacts to the release directory.)</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Move the sub-directory in "dev" to the
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Move the sub-directory in "dev" to the
 # corresponding directory in "release"
 $ export SVN_EDITOR=vim
 $ svn mv https://dist.apache.org/repos/dist/dev/spark/v1.1.1-rc2-bin https://dist.apache.org/repos/dist/release/spark/spark-1.1.1
@@ -351,23 +351,23 @@ curl "https://dist.apache.org/repos/dist/dev/spark/KEYS" &gt; svn-spark/KEYS
 It may take a while for them to be visible. This will be mirrored throughout the Apache network.
 Check the release checker result of the release at <a href="https://checker.apache.org/projs/spark.html">https://checker.apache.org/projs/spark.html</a>.</p>
 
-<p>For Maven Central Repository, you can Release from the <a href="https://repository.apache.org/">Apache Nexus Repository Manager</a>. This is already populated by the <code class="highlighter-rouge">release-build.sh publish-release</code> step. Log in, open Staging Repositories, find the one voted on (eg. orgapachespark-1257 for https://repository.apache.org/content/repositories/orgapachespark-1257/), select and click Release and confirm. If successful, it should show up under https:// [...]
+<p>For Maven Central Repository, you can Release from the <a href="https://repository.apache.org/">Apache Nexus Repository Manager</a>. This is already populated by the <code class="language-plaintext highlighter-rouge">release-build.sh publish-release</code> step. Log in, open Staging Repositories, find the one voted on (eg. orgapachespark-1257 for https://repository.apache.org/content/repositories/orgapachespark-1257/), select and click Release and confirm. If successful, it should sho [...]
 and the same under https://repository.apache.org/content/groups/maven-staging-group/org/apache/spark/spark-core_2.11/2.2.1/ (look for the correct release version). After some time this will be sync&#8217;d to <a href="https://search.maven.org/">Maven Central</a> automatically.</p>
 
 <h4>Upload to PyPI</h4>
 
-<p>You&#8217;ll need the credentials for the <code class="highlighter-rouge">spark-upload</code> account, which can be found in
+<p>You&#8217;ll need the credentials for the <code class="language-plaintext highlighter-rouge">spark-upload</code> account, which can be found in
 <a href="https://lists.apache.org/thread.html/2789e448cd8a95361a3164b48f3f8b73a6d9d82aeb228bae2bc4dc7f@%3Cprivate.spark.apache.org%3E">this message</a>
 (only visible to PMC members).</p>
 
 <p>The artifacts can be uploaded using <a href="https://pypi.org/project/twine/">twine</a>. Just run:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>twine upload --repository-url https://upload.pypi.org/legacy/ pyspark-{version}.tar.gz pyspark-{version}.tar.gz.asc
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>twine upload --repository-url https://upload.pypi.org/legacy/ pyspark-{version}.tar.gz pyspark-{version}.tar.gz.asc
 </code></pre></div></div>
 
 <p>Adjust the command for the files that match the new release. If for some reason the twine upload
 is incorrect (e.g. http failure or other issue), you can rename the artifact to
-<code class="highlighter-rouge">pyspark-version.post0.tar.gz</code>, delete the old artifact from PyPI and re-upload.</p>
+<code class="language-plaintext highlighter-rouge">pyspark-version.post0.tar.gz</code>, delete the old artifact from PyPI and re-upload.</p>
 
 <h4>Publish to CRAN</h4>
 
@@ -379,7 +379,7 @@ Since it requires further manual steps, please also contact the <a href="mailto:
 <p>After the vote passes and you have moved the approved RC to the release repository, you should delete
 the RC directories from the staging repository. For example:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>svn rm https://dist.apache.org/repos/dist/dev/spark/v2.3.1-rc1-bin/ \
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>svn rm https://dist.apache.org/repos/dist/dev/spark/v2.3.1-rc1-bin/ \
   https://dist.apache.org/repos/dist/dev/spark/v2.3.1-rc1-docs/ \
   -m"Removing RC artifacts."
 </code></pre></div></div>
@@ -392,13 +392,13 @@ the RC directories from the staging repository. For example:</p>
 <p>Spark always keeps the latest maintenance release of each branch in the mirror network.
 To delete older versions, simply use svn rm:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ svn rm https://dist.apache.org/repos/dist/release/spark/spark-1.1.0
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ svn rm https://dist.apache.org/repos/dist/release/spark/spark-1.1.0
 </code></pre></div></div>
 
-<p>You will also need to update <code class="highlighter-rouge">js/download.js</code> to indicate the release is not mirrored
+<p>You will also need to update <code class="language-plaintext highlighter-rouge">js/download.js</code> to indicate the release is not mirrored
 anymore, so that the correct links are generated on the site.</p>
 
-<p>Also take a moment to check <code class="highlighter-rouge">HiveExternalCatalogVersionsSuite.scala</code> starting with branch-2.2
+<p>Also take a moment to check <code class="language-plaintext highlighter-rouge">HiveExternalCatalogVersionsSuite.scala</code> starting with branch-2.2
 and see if it needs to be adjusted, since that test relies on mirrored downloads of previous
 releases.</p>
 
@@ -406,7 +406,7 @@ releases.</p>
 
 <p>Check out the tagged commit for the release candidate that passed and apply the correct version tag.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git tag v1.1.1 v1.1.1-rc2 # the RC that passed
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git tag v1.1.1 v1.1.1-rc2 # the RC that passed
 $ git push apache v1.1.1
 </code></pre></div></div>
 
@@ -418,7 +418,7 @@ $ git push apache v1.1.1
 <p>It&#8217;s recommended not to remove the generated docs of the latest RC, so that they can be copied to
 spark-website directly; otherwise you will need to re-build the docs.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Build the latest docs
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Build the latest docs
 $ git checkout v1.1.1
 $ cd docs
 $ PRODUCTION=1 bundle exec jekyll build
@@ -435,16 +435,16 @@ $ ln -s 1.1.1 latest
 </code></pre></div></div>
 
 <p>Next, update the rest of the Spark website. See how the previous releases are documented
-(all the HTML file changes are generated by <code class="highlighter-rouge">jekyll</code>). In particular:</p>
+(all the HTML file changes are generated by <code class="language-plaintext highlighter-rouge">jekyll</code>). In particular:</p>
 
 <ul>
-  <li>update <code class="highlighter-rouge">_layouts/global.html</code> if the new release is the latest one</li>
-  <li>update <code class="highlighter-rouge">documentation.md</code> to add link to the docs for the new release</li>
-  <li>add the new release to <code class="highlighter-rouge">js/downloads.js</code></li>
-  <li>check <code class="highlighter-rouge">security.md</code> for anything to update</li>
+  <li>update <code class="language-plaintext highlighter-rouge">_layouts/global.html</code> if the new release is the latest one</li>
+  <li>update <code class="language-plaintext highlighter-rouge">documentation.md</code> to add link to the docs for the new release</li>
+  <li>add the new release to <code class="language-plaintext highlighter-rouge">js/downloads.js</code></li>
+  <li>check <code class="language-plaintext highlighter-rouge">security.md</code> for anything to update</li>
 </ul>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git add 1.1.1
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ git add 1.1.1
 $ git commit -m "Add docs for Spark 1.1.1"
 </code></pre></div></div>
 
@@ -452,17 +452,17 @@ $ git commit -m "Add docs for Spark 1.1.1"
 <a href="https://issues.apache.org/jira/projects/SPARK?selectedItem=com.atlassian.jira.jira-projects-plugin:release-page">release page in JIRA</a>,
 pick the release version from the list, then click on &#8220;Release Notes&#8221;. Copy this URL and then make a short URL on
 <a href="https://s.apache.org/">s.apache.org</a>, sign in to your Apache account, and pick the ID as something like
-<code class="highlighter-rouge">spark-2.1.2</code>. Create a new release post under <code class="highlighter-rouge">releases/_posts</code> to include this short URL. The date of the post should
+<code class="language-plaintext highlighter-rouge">spark-2.1.2</code>. Create a new release post under <code class="language-plaintext highlighter-rouge">releases/_posts</code> to include this short URL. The date of the post should
 be the date you create it.</p>
 
-<p>Then run <code class="highlighter-rouge">bundle exec jekyll build</code> to update the <code class="highlighter-rouge">site</code> directory.</p>
+<p>Then run <code class="language-plaintext highlighter-rouge">bundle exec jekyll build</code> to update the <code class="language-plaintext highlighter-rouge">site</code> directory.</p>
 
-<p>After merging the change into the <code class="highlighter-rouge">asf-site</code> branch, you may need to create a follow-up empty
+<p>After merging the change into the <code class="language-plaintext highlighter-rouge">asf-site</code> branch, you may need to create a follow-up empty
 commit to force synchronization between ASF&#8217;s git and the web site, and also the GitHub mirror.
 For some reason synchronization seems to not be reliable for this repository.</p>
 
 <p>On a related note, make sure the version is marked as released on JIRA. Go find the release page as above, e.g.,
-<code class="highlighter-rouge">https://issues.apache.org/jira/projects/SPARK/versions/12340295</code>, and click the &#8220;Release&#8221; button on the right and enter the release date.</p>
+<code class="language-plaintext highlighter-rouge">https://issues.apache.org/jira/projects/SPARK/versions/12340295</code>, and click the &#8220;Release&#8221; button on the right and enter the release date.</p>
 
 <p>(Generally, this is done only for major and minor releases, not for patch releases.) The contributors list can be automatically generated through
 <a href="https://github.com/apache/spark/blob/branch-1.1/dev/create-release/generate-contributors.py">this script</a>.
@@ -474,7 +474,7 @@ warnings about author names not being properly translated. To fix this, run
 <a href="https://github.com/apache/spark/blob/branch-1.1/dev/create-release/translate-contributors.py">this other script</a>,
 which fetches potential replacements from GitHub and JIRA. For instance:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd release-spark/dev/create-release
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>$ cd release-spark/dev/create-release
 # Set RELEASE_TAG and PREVIOUS_RELEASE_TAG
 $ export RELEASE_TAG=v1.1.1
 $ export PREVIOUS_RELEASE_TAG=v1.1.0
@@ -493,7 +493,7 @@ use the the following commands to identify large patches. Extra care must be tak
 commits from previous releases are not counted since git cannot easily associate commits that
 were back ported into different branches.</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Determine PR numbers closed only in the new release
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code># Determine PR numbers closed only in the new release
 $ git log v1.1.1 | grep "Closes #" | cut -d " " -f 5,6 | grep Closes | sort &gt; closed_1.1.1
 $ git log v1.1.0 | grep "Closes #" | cut -d " " -f 5,6 | grep Closes | sort &gt; closed_1.1.0
 $ diff --new-line-format="" --unchanged-line-format="" closed_1.1.1 closed_1.1.0 &gt; diff.txt
@@ -510,10 +510,10 @@ $ git log v1.1.1 --grep "$expr" --shortstat --oneline | grep -B 1 -e "[3-9][0-9]
 
 <h4>Update `HiveExternalCatalogVersionsSuite`</h4>
 
-<p>When a new release occurs, <code class="highlighter-rouge">PROCESS_TABLES.testingVersions</code> in <code class="highlighter-rouge">HiveExternalCatalogVersionsSuite</code>
+<p>When a new release occurs, <code class="language-plaintext highlighter-rouge">PROCESS_TABLES.testingVersions</code> in <code class="language-plaintext highlighter-rouge">HiveExternalCatalogVersionsSuite</code>
 must be updated shortly thereafter. This list should contain the latest release in all active
 maintenance branches, and no more.
-For example, as of this writing, it has value <code class="highlighter-rouge">val testingVersions = Seq("2.1.3", "2.2.2", "2.3.2")</code>.
+For example, as of this writing, it has value <code class="language-plaintext highlighter-rouge">val testingVersions = Seq("2.1.3", "2.2.2", "2.3.2")</code>.
 &#8220;2.4.0&#8221; will be added to the list when it&#8217;s released. &#8220;2.1.3&#8221; will be removed (and removed from the Spark dist mirrors)
 when the branch is no longer maintained. &#8220;2.3.2&#8221; will become &#8220;2.3.3&#8221; when &#8220;2.3.3&#8221; is released.</p>
 
@@ -521,7 +521,7 @@ when the branch is no longer maintained. &#8220;2.3.2&#8221; will become &#8220;
 
 <p>Once everything is working (website docs, website changes), create an announcement on the website
 and then send an e-mail to the mailing list. To create an announcement, create a post under
-<code class="highlighter-rouge">news/_posts</code> and then run <code class="highlighter-rouge">bundle exec jekyll build</code>.</p>
+<code class="language-plaintext highlighter-rouge">news/_posts</code> and then run <code class="language-plaintext highlighter-rouge">bundle exec jekyll build</code>.</p>
 
 <p>Enjoy an adult beverage of your choice, and congratulations on making a Spark release.</p>
 
diff --git a/site/releases/spark-release-0-8-0.html b/site/releases/spark-release-0-8-0.html
index b5da14f..d864805 100644
--- a/site/releases/spark-release-0-8-0.html
+++ b/site/releases/spark-release-0-8-0.html
@@ -227,7 +227,7 @@
 <p>Spark’s internal job scheduler has been refactored and extended to include more sophisticated scheduling policies. In particular, a <a href="http://spark.incubator.apache.org/docs/0.8.0/job-scheduling.html#scheduling-within-an-application">fair scheduler</a> implementation now allows multiple users to share an instance of Spark, which helps users running shorter jobs to achieve good performance, even when longer-running jobs are running in parallel. Support for topology-aware scheduli [...]
 
 <h3 id="easier-deployment-and-linking">Easier Deployment and Linking</h3>
-<p>User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of <code class="highlighter-rouge">spark-core</code> specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a>.</p>
+<p>User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of <code class="language-plaintext highlighter-rouge">spark-core</code> specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a>.</p>
 
 <h3 id="expanded-ec2-capabilities">Expanded EC2 Capabilities</h3>
 <p>Spark’s EC2 scripts now support launching in any availability zone. Support has also been added for EC2 instance types which use the newer “HVM” architecture. This includes the cluster compute (cc1/cc2) family of instance types. We’ve also added support for running newer versions of HDFS alongside Spark. Finally, we’ve added the ability to launch clusters with maintenance releases of Spark in addition to launching the newest release.</p>
@@ -237,12 +237,12 @@
 
 <h3 id="other-improvements">Other Improvements</h3>
 <ul>
-  <li>RDDs can now manually be dropped from memory with <code class="highlighter-rouge">unpersist</code>.</li>
-  <li>The RDD class includes the following new operations: <code class="highlighter-rouge">takeOrdered</code>, <code class="highlighter-rouge">zipPartitions</code>, <code class="highlighter-rouge">top</code>.</li>
-  <li>A <code class="highlighter-rouge">JobLogger</code> class has been added to produce archivable logs of a Spark workload.</li>
-  <li>The <code class="highlighter-rouge">RDD.coalesce</code> function now takes into account locality.</li>
-  <li>The <code class="highlighter-rouge">RDD.pipe</code> function has been extended to support passing environment variables to child processes.</li>
-  <li>Hadoop <code class="highlighter-rouge">save</code> functions now support an optional compression codec.</li>
+  <li>RDDs can now manually be dropped from memory with <code class="language-plaintext highlighter-rouge">unpersist</code>.</li>
+  <li>The RDD class includes the following new operations: <code class="language-plaintext highlighter-rouge">takeOrdered</code>, <code class="language-plaintext highlighter-rouge">zipPartitions</code>, <code class="language-plaintext highlighter-rouge">top</code>.</li>
+  <li>A <code class="language-plaintext highlighter-rouge">JobLogger</code> class has been added to produce archivable logs of a Spark workload.</li>
+  <li>The <code class="language-plaintext highlighter-rouge">RDD.coalesce</code> function now takes into account locality.</li>
+  <li>The <code class="language-plaintext highlighter-rouge">RDD.pipe</code> function has been extended to support passing environment variables to child processes.</li>
+  <li>Hadoop <code class="language-plaintext highlighter-rouge">save</code> functions now support an optional compression codec.</li>
   <li>You can now create a binary distribution of Spark which depends only on a Java runtime for easier deployment on a cluster.</li>
   <li>The examples build has been isolated from the core build, substantially reducing the potential for dependency conflicts.</li>
   <li>The Spark Streaming Twitter API has been updated to use OAuth authentication instead of the deprecated username/password authentication in Spark 0.7.0.</li>
@@ -253,10 +253,10 @@
 
 <h3 id="compatibility">Compatibility</h3>
 <ul>
-  <li><strong>This release changes Spark’s package name to &#8216;org.apache.spark&#8217;</strong>, so those upgrading from Spark 0.7 will need to adjust their imports accordingly. In addition, we’ve moved the <code class="highlighter-rouge">RDD</code> class to the org.apache.spark.rdd package (it was previously in the top-level package). The Spark artifacts published through Maven have also changed to the new package name.</li>
-  <li>In the Java API, use of Scala’s <code class="highlighter-rouge">Option</code> class has been replaced with <code class="highlighter-rouge">Optional</code> from the Guava library.</li>
-  <li>Linking against Spark for arbitrary Hadoop versions is now possible by specifying a dependency on <code class="highlighter-rouge">hadoop-client</code>, instead of rebuilding <code class="highlighter-rouge">spark-core</code> against your version of Hadoop. See the documentation <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a> for details.</li>
-  <li>If you are building Spark, you’ll now need to run <code class="highlighter-rouge">sbt/sbt assembly</code> instead of <code class="highlighter-rouge">package</code>.</li>
+  <li><strong>This release changes Spark’s package name to &#8216;org.apache.spark&#8217;</strong>, so those upgrading from Spark 0.7 will need to adjust their imports accordingly. In addition, we’ve moved the <code class="language-plaintext highlighter-rouge">RDD</code> class to the org.apache.spark.rdd package (it was previously in the top-level package). The Spark artifacts published through Maven have also changed to the new package name.</li>
+  <li>In the Java API, use of Scala’s <code class="language-plaintext highlighter-rouge">Option</code> class has been replaced with <code class="language-plaintext highlighter-rouge">Optional</code> from the Guava library.</li>
+  <li>Linking against Spark for arbitrary Hadoop versions is now possible by specifying a dependency on <code class="language-plaintext highlighter-rouge">hadoop-client</code>, instead of rebuilding <code class="language-plaintext highlighter-rouge">spark-core</code> against your version of Hadoop. See the documentation <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a> for details.</li>
+  <li>If you are building Spark, you’ll now need to run <code class="language-plaintext highlighter-rouge">sbt/sbt assembly</code> instead of <code class="language-plaintext highlighter-rouge">package</code>.</li>
 </ul>
 
 <h3 id="credits">Credits</h3>
@@ -296,7 +296,7 @@
   <li>Dmitriy Lyubimov &#8211; bug fix</li>
   <li>Chris Mattmann &#8211; Apache mentor</li>
   <li>David McCauley &#8211; JSON API improvement</li>
-  <li>Sean McNamara &#8211; added <code class="highlighter-rouge">takeOrdered</code> function, bug fixes, and a build fix</li>
+  <li>Sean McNamara &#8211; added <code class="language-plaintext highlighter-rouge">takeOrdered</code> function, bug fixes, and a build fix</li>
   <li>Mridul Muralidharan &#8211; YARN integration (lead) and scheduler improvements</li>
   <li>Marc Mercer &#8211; improvements to UI json output</li>
   <li>Christopher Nguyen &#8211; bug fixes</li>
diff --git a/site/releases/spark-release-0-8-1.html b/site/releases/spark-release-0-8-1.html
index f482848..a0d3ce4 100644
--- a/site/releases/spark-release-0-8-1.html
+++ b/site/releases/spark-release-0-8-1.html
@@ -217,7 +217,7 @@
 <ul>
   <li>Optimized hashtables for shuffle data - reduces memory and CPU consumption</li>
   <li>Efficient encoding for JobConfs - improves latency for stages reading large numbers of blocks from HDFS, S3, and HBase</li>
-  <li>Shuffle file consolidation (off by default) - reduces the number of files created in large shuffles for better filesystem performance. This change works best on filesystems newer than ext3 (we recommend ext4 or XFS), and it will be the default in Spark 0.9, but we’ve left it off by default for compatibility. We recommend users turn this on unless they are using ext3 by setting <code class="highlighter-rouge">spark.shuffle.consolidateFiles</code> to &#8220;true&#8221;.</li>
+  <li>Shuffle file consolidation (off by default) - reduces the number of files created in large shuffles for better filesystem performance. This change works best on filesystems newer than ext3 (we recommend ext4 or XFS), and it will be the default in Spark 0.9, but we’ve left it off by default for compatibility. We recommend users turn this on unless they are using ext3 by setting <code class="language-plaintext highlighter-rouge">spark.shuffle.consolidateFiles</code> to &#8220;true&#8 [...]
   <li>Torrent broadcast (off by default) - a faster broadcast implementation for large objects.</li>
   <li>Support for fetching large result sets - allows tasks to return large results without tuning Akka buffer sizes.</li>
 </ul>
@@ -231,15 +231,15 @@
 <ul>
   <li>It is now possible to set Spark config properties directly from Python</li>
   <li>Python now supports sort operations</li>
-  <li>Accumulators now have an explicitly named <code class="highlighter-rouge">add</code> method</li>
+  <li>Accumulators now have an explicitly named <code class="language-plaintext highlighter-rouge">add</code> method</li>
 </ul>
 
 <h3 id="new-operators-and-usability-improvements">New Operators and Usability Improvements</h3>
 <ul>
-  <li><code class="highlighter-rouge">local://</code> URI’s - allows users to specify files already present on slaves as dependencies</li>
+  <li><code class="language-plaintext highlighter-rouge">local://</code> URI’s - allows users to specify files already present on slaves as dependencies</li>
   <li>A new “result fetching” state has been added to the UI</li>
-  <li>New Spark Streaming operators: <code class="highlighter-rouge">transformWith</code>, <code class="highlighter-rouge">leftInnerJoin</code>, <code class="highlighter-rouge">rightOuterJoin</code></li>
-  <li>New Spark operators: <code class="highlighter-rouge">repartition</code></li>
+  <li>New Spark Streaming operators: <code class="language-plaintext highlighter-rouge">transformWith</code>, <code class="language-plaintext highlighter-rouge">leftInnerJoin</code>, <code class="language-plaintext highlighter-rouge">rightOuterJoin</code></li>
+  <li>New Spark operators: <code class="language-plaintext highlighter-rouge">repartition</code></li>
   <li>You can now run Spark applications as a different user in standalone and Mesos modes</li>
 </ul>
 
@@ -256,8 +256,8 @@
 <ul>
   <li>Michael Armbrust &#8211; build fix</li>
   <li>Pierre Borckmans &#8211; typo fix in documentation</li>
-  <li>Evan Chan &#8211; <code class="highlighter-rouge">local://</code> scheme for dependency jars</li>
-  <li>Ewen Cheslack-Postava &#8211; <code class="highlighter-rouge">add</code> method for python accumulators, support for setting config properties in python</li>
+  <li>Evan Chan &#8211; <code class="language-plaintext highlighter-rouge">local://</code> scheme for dependency jars</li>
+  <li>Ewen Cheslack-Postava &#8211; <code class="language-plaintext highlighter-rouge">add</code> method for python accumulators, support for setting config properties in python</li>
   <li>Mosharaf Chowdhury &#8211; optimized broadcast implementation</li>
   <li>Frank Dai &#8211; documentation fix</li>
   <li>Aaron Davidson &#8211; shuffle file consolidation, H/A mode for standalone scheduler, cleaned up representation of block IDs, several improvements and bug fixes</li>
@@ -270,7 +270,7 @@
   <li>Stephen Haberman &#8211; bug fix</li>
   <li>Haidar Hadi &#8211; documentation fix</li>
   <li>Nathan Howell &#8211; bug fix relating to YARN</li>
-  <li>Holden Karau &#8211; Java version of <code class="highlighter-rouge">mapPartitionsWithIndex</code></li>
+  <li>Holden Karau &#8211; Java version of <code class="language-plaintext highlighter-rouge">mapPartitionsWithIndex</code></li>
   <li>Du Li &#8211; bug fix in make-distribution.sh</li>
   <li>Raymond Liu &#8211; work on YARN 2.2 build</li>
   <li>Xi Liu &#8211; bug fix and code clean-up</li>
@@ -289,7 +289,7 @@
   <li>Mingfei Shi &#8211; documentation for JobLogger</li>
   <li>Andre Schumacher &#8211; sortByKey in PySpark and associated changes</li>
   <li>Karthik Tunga &#8211; bug fix in launch script</li>
-  <li>Patrick Wendell &#8211; <code class="highlighter-rouge">repartition</code> operator, shuffle write metrics, various fixes and release management</li>
+  <li>Patrick Wendell &#8211; <code class="language-plaintext highlighter-rouge">repartition</code> operator, shuffle write metrics, various fixes and release management</li>
   <li>Neal Wiggins &#8211; import clean-up, documentation fixes</li>
   <li>Andrew Xia &#8211; bug fix in UI</li>
   <li>Reynold Xin &#8211; task killing, support for setting job properties in Spark shell, logging improvements, Kryo improvements, several bug fixes</li>
diff --git a/site/releases/spark-release-0-9-0.html b/site/releases/spark-release-0-9-0.html
index 0b1a7ed..cc9a72b 100644
--- a/site/releases/spark-release-0-9-0.html
+++ b/site/releases/spark-release-0-9-0.html
@@ -232,10 +232,10 @@
   <li>A new <a href="/docs/0.9.0/api/streaming/index.html#org.apache.spark.streaming.scheduler.StreamingListener">StreamingListener</a> interface has been added for monitoring statistics about the streaming computation.</li>
   <li>A few aspects of the API have been improved:
     <ul>
-      <li><code class="highlighter-rouge">DStream</code> and <code class="highlighter-rouge">PairDStream</code> classes have been moved from <code class="highlighter-rouge">org.apache.spark.streaming</code> to <code class="highlighter-rouge">org.apache.spark.streaming.dstream</code> to keep it consistent with <code class="highlighter-rouge">org.apache.spark.rdd.RDD</code>.</li>
-      <li><code class="highlighter-rouge">DStream.foreach</code> has been renamed to <code class="highlighter-rouge">foreachRDD</code> to make it explicit that it works for every RDD, not every element</li>
-      <li><code class="highlighter-rouge">StreamingContext.awaitTermination()</code> allows you wait for context shutdown and catch any exception that occurs in the streaming computation.
- *<code class="highlighter-rouge">StreamingContext.stop()</code> now allows stopping of StreamingContext without stopping the underlying SparkContext.</li>
+      <li><code class="language-plaintext highlighter-rouge">DStream</code> and <code class="language-plaintext highlighter-rouge">PairDStream</code> classes have been moved from <code class="language-plaintext highlighter-rouge">org.apache.spark.streaming</code> to <code class="language-plaintext highlighter-rouge">org.apache.spark.streaming.dstream</code> to keep it consistent with <code class="language-plaintext highlighter-rouge">org.apache.spark.rdd.RDD</code>.</li>
+      <li><code class="language-plaintext highlighter-rouge">DStream.foreach</code> has been renamed to <code class="language-plaintext highlighter-rouge">foreachRDD</code> to make it explicit that it works for every RDD, not every element</li>
+      <li><code class="language-plaintext highlighter-rouge">StreamingContext.awaitTermination()</code> allows you wait for context shutdown and catch any exception that occurs in the streaming computation.
+ *<code class="language-plaintext highlighter-rouge">StreamingContext.stop()</code> now allows stopping of StreamingContext without stopping the underlying SparkContext.</li>
     </ul>
   </li>
 </ul>
@@ -286,8 +286,8 @@
   <li>Spark’s standalone mode now supports submitting a driver program to run on the cluster instead of on the external machine submitting it. You can access this functionality through the <a href="/docs/0.9.0/spark-standalone.html#launching-applications-inside-the-cluster">org.apache.spark.deploy.Client</a> class.</li>
   <li>Large reduce operations now automatically spill data to disk if it does not fit in memory.</li>
   <li>Users of standalone mode can now limit how many cores an application will use by default if the application writer didn’t configure its size. Previously, such applications took all available cores on the cluster.</li>
-  <li><code class="highlighter-rouge">spark-shell</code> now supports the <code class="highlighter-rouge">-i</code> option to run a script on startup.</li>
-  <li>New <code class="highlighter-rouge">histogram</code> and <code class="highlighter-rouge">countDistinctApprox</code> operators have been added for working with numerical data.</li>
+  <li><code class="language-plaintext highlighter-rouge">spark-shell</code> now supports the <code class="language-plaintext highlighter-rouge">-i</code> option to run a script on startup.</li>
+  <li>New <code class="language-plaintext highlighter-rouge">histogram</code> and <code class="language-plaintext highlighter-rouge">countDistinctApprox</code> operators have been added for working with numerical data.</li>
   <li>YARN mode now supports distributing extra files with the application, and several bugs have been fixed.</li>
 </ul>
 
@@ -297,8 +297,8 @@
 
 <ul>
   <li>Scala programs now need to use Scala 2.10 instead of 2.9.</li>
-  <li>Scripts such as <code class="highlighter-rouge">spark-shell</code> and <code class="highlighter-rouge">pyspark</code> have been moved into the <code class="highlighter-rouge">bin</code> folder, while administrative scripts to start and stop standalone clusters have been moved into <code class="highlighter-rouge">sbin</code>.</li>
-  <li>Spark Streaming’s API has been changed to move external input sources into separate modules, <code class="highlighter-rouge">DStream</code> and <code class="highlighter-rouge">PairDStream</code> has been moved to package <code class="highlighter-rouge">org.apache.spark.streaming.dstream</code> and <code class="highlighter-rouge">DStream.foreach</code> has been renamed to <code class="highlighter-rouge">foreachRDD</code>. We expect the current API to be stable now that Spark Streami [...]
+  <li>Scripts such as <code class="language-plaintext highlighter-rouge">spark-shell</code> and <code class="language-plaintext highlighter-rouge">pyspark</code> have been moved into the <code class="language-plaintext highlighter-rouge">bin</code> folder, while administrative scripts to start and stop standalone clusters have been moved into <code class="language-plaintext highlighter-rouge">sbin</code>.</li>
+  <li>Spark Streaming’s API has been changed to move external input sources into separate modules, <code class="language-plaintext highlighter-rouge">DStream</code> and <code class="language-plaintext highlighter-rouge">PairDStream</code> has been moved to package <code class="language-plaintext highlighter-rouge">org.apache.spark.streaming.dstream</code> and <code class="language-plaintext highlighter-rouge">DStream.foreach</code> has been renamed to <code class="language-plaintext high [...]
   <li>While the old method of configuring Spark through Java system properties still works, we recommend that users update to the new SparkConf, which is easier to inspect and use.</li>
 </ul>
 
@@ -361,7 +361,7 @@
   <li>Kay Ousterhout &#8211; several bug fixes and improvements to Spark scheduler</li>
   <li>Sean Owen &#8211; style fixes</li>
   <li>Nick Pentreath &#8211; ALS implicit feedback algorithm</li>
-  <li>Pillis &#8211; <code class="highlighter-rouge">Vector.random()</code> method</li>
+  <li>Pillis &#8211; <code class="language-plaintext highlighter-rouge">Vector.random()</code> method</li>
   <li>Imran Rashid &#8211; bug fix</li>
   <li>Ahir Reddy &#8211; support for SIMR</li>
   <li>Luca Rosellini &#8211; script loading for Scala shell</li>
diff --git a/site/releases/spark-release-1-0-0.html b/site/releases/spark-release-1-0-0.html
index 6d4b9f4..947403e 100644
--- a/site/releases/spark-release-1-0-0.html
+++ b/site/releases/spark-release-1-0-0.html
@@ -243,7 +243,7 @@
   <li>Spark has upgraded to Avro 1.7.6, adding support for Avro specific types.</li>
   <li>Internal instrumentation has been added to allow applications to monitor and instrument Spark jobs.</li>
   <li>Support for off-heap storage in Tachyon has been added via a special build target.</li>
-  <li>Datasets persisted with <code class="highlighter-rouge">DISK_ONLY</code> now write directly to disk, significantly improving memory usage for large datasets.</li>
+  <li>Datasets persisted with <code class="language-plaintext highlighter-rouge">DISK_ONLY</code> now write directly to disk, significantly improving memory usage for large datasets.</li>
   <li>Intermediate state created during a Spark job is now garbage collected when the corresponding RDDs become unreferenced, improving performance.</li>
   <li>Spark now includes a <a href="/docs/latest/api/java/index.html">Javadoc version</a> of all its API docs and a <a href="/docs/latest/api/scala/index.html">unified Scaladoc</a> for all modules.</li>
   <li>A new SparkContext.wholeTextFiles method lets you operate on small text files as individual records.</li>
diff --git a/site/releases/spark-release-1-0-1.html b/site/releases/spark-release-1-0-1.html
index d17aa54..b2ae2a6 100644
--- a/site/releases/spark-release-1-0-1.html
+++ b/site/releases/spark-release-1-0-1.html
@@ -249,13 +249,13 @@
 <ul>
   <li>Support for querying JSON datasets (<a href="https://issues.apache.org/jira/browse/SPARK-2060">SPARK-2060</a>).</li>
   <li>Improved reading and writing Parquet data, including support for nested records and arrays (<a href="https://issues.apache.org/jira/browse/SPARK-1293">SPARK-1293</a>, <a href="https://issues.apache.org/jira/browse/SPARK-2195">SPARK-2195</a>, <a href="https://issues.apache.org/jira/browse/SPARK-1913">SPARK-1913</a>, and <a href="https://issues.apache.org/jira/browse/SPARK-1487">SPARK-1487</a>).</li>
-  <li>Improved support for SQL commands (<code class="highlighter-rouge">CACHE TABLE</code>, <code class="highlighter-rouge">DESCRIBE</code>, SHOW TABLES) (<a href="https://issues.apache.org/jira/browse/SPARK-1968">SPARK-1968</a>, <a href="https://issues.apache.org/jira/browse/SPARK-2128">SPARK-2128</a>, and <a href="https://issues.apache.org/jira/browse/SPARK-1704">SPARK-1704</a>).</li>
+  <li>Improved support for SQL commands (<code class="language-plaintext highlighter-rouge">CACHE TABLE</code>, <code class="language-plaintext highlighter-rouge">DESCRIBE</code>, SHOW TABLES) (<a href="https://issues.apache.org/jira/browse/SPARK-1968">SPARK-1968</a>, <a href="https://issues.apache.org/jira/browse/SPARK-2128">SPARK-2128</a>, and <a href="https://issues.apache.org/jira/browse/SPARK-1704">SPARK-1704</a>).</li>
   <li>Support for SQL specific configuration (initially used for setting number of partitions) (<a href="https://issues.apache.org/jira/browse/SPARK-1508">SPARK-1508</a>).</li>
   <li>Idempotence for DDL operations (<a href="https://issues.apache.org/jira/browse/SPARK-2191">SPARK-2191</a>).</li>
 </ul>
 
 <h3 id="known-issues">Known Issues</h3>
-<p>This release contains one known issue: multi-statement lines the REPL with internal references (<code class="highlighter-rouge">&gt; val x = 10; val y = x + 10</code>) produce exceptions (<a href="https://issues.apache.org/jira/browse/SPARK-2452">SPARK-2452</a>). This will be fixed shortly on the 1.0 branch; the fix will be included in the 1.0.2 release.</p>
+<p>This release contains one known issue: multi-statement lines in the REPL with internal references (<code class="language-plaintext highlighter-rouge">&gt; val x = 10; val y = x + 10</code>) produce exceptions (<a href="https://issues.apache.org/jira/browse/SPARK-2452">SPARK-2452</a>). This will be fixed shortly on the 1.0 branch; the fix will be included in the 1.0.2 release.</p>
 
 <h3 id="contributors">Contributors</h3>
 <p>The following developers contributed to this release:</p>
diff --git a/site/releases/spark-release-1-1-0.html b/site/releases/spark-release-1-1-0.html
index 6709536..4c36718 100644
--- a/site/releases/spark-release-1-1-0.html
+++ b/site/releases/spark-release-1-1-0.html
@@ -231,10 +231,10 @@
 <p>Spark 1.1.0 is backwards compatible with Spark 1.0.X. Some configuration option defaults have changed which might be relevant to existing users:</p>
 
 <ul>
-  <li>The default value of <code class="highlighter-rouge">spark.io.compression.codec</code> is now <code class="highlighter-rouge">snappy</code> for improved memory usage. Old behavior can be restored by switching to <code class="highlighter-rouge">lzf</code>.</li>
-  <li>The default value of <code class="highlighter-rouge">spark.broadcast.factory</code> is now <code class="highlighter-rouge">org.apache.spark.broadcast.TorrentBroadcastFactory</code> for improved efficiency of broadcasts. Old behavior can be restored by switching to <code class="highlighter-rouge">org.apache.spark.broadcast.HttpBroadcastFactory</code>.</li>
-  <li>PySpark now performs external spilling during aggregations. Old behavior can be restored by setting <code class="highlighter-rouge">spark.shuffle.spill</code> to <code class="highlighter-rouge">false</code>.</li>
-  <li>PySpark uses a new heuristic for determining the parallelism of shuffle operations. Old behavior can be restored by setting <code class="highlighter-rouge">spark.default.parallelism</code> to the number of cores in the cluster.</li>
+  <li>The default value of <code class="language-plaintext highlighter-rouge">spark.io.compression.codec</code> is now <code class="language-plaintext highlighter-rouge">snappy</code> for improved memory usage. Old behavior can be restored by switching to <code class="language-plaintext highlighter-rouge">lzf</code>.</li>
+  <li>The default value of <code class="language-plaintext highlighter-rouge">spark.broadcast.factory</code> is now <code class="language-plaintext highlighter-rouge">org.apache.spark.broadcast.TorrentBroadcastFactory</code> for improved efficiency of broadcasts. Old behavior can be restored by switching to <code class="language-plaintext highlighter-rouge">org.apache.spark.broadcast.HttpBroadcastFactory</code>.</li>
+  <li>PySpark now performs external spilling during aggregations. Old behavior can be restored by setting <code class="language-plaintext highlighter-rouge">spark.shuffle.spill</code> to <code class="language-plaintext highlighter-rouge">false</code>.</li>
+  <li>PySpark uses a new heuristic for determining the parallelism of shuffle operations. Old behavior can be restored by setting <code class="language-plaintext highlighter-rouge">spark.default.parallelism</code> to the number of cores in the cluster.</li>
 </ul>
 
 <h3 id="full-set-of-resolved-issues">Full Set of Resolved Issues</h3>
diff --git a/site/releases/spark-release-1-2-0.html b/site/releases/spark-release-1-2-0.html
index 3948847..41322c3 100644
--- a/site/releases/spark-release-1-2-0.html
+++ b/site/releases/spark-release-1-2-0.html
@@ -239,17 +239,17 @@
 <p>Spark 1.2 is binary compatible with Spark 1.0 and 1.1, so no code changes are necessary. This excludes APIs marked explicitly as unstable. Spark changes default configuration in a handful of cases for improved performance. Users who want to preserve identical configurations to Spark 1.1 can roll back these changes.</p>
 
 <ol>
-  <li><code class="highlighter-rouge">spark.shuffle.blockTransferService</code> has been changed from <code class="highlighter-rouge">nio</code> to <code class="highlighter-rouge">netty</code></li>
-  <li><code class="highlighter-rouge">spark.shuffle.manager</code> has been changed from <code class="highlighter-rouge">hash</code> to <code class="highlighter-rouge">sort</code></li>
-  <li>In PySpark, the default batch size has been changed to 0, which means the batch size is chosen based on the size of object.  Pre-1.2 behavior can be restored using <code class="highlighter-rouge">SparkContext([... args... ], batchSize=1024)</code>.</li>
+  <li><code class="language-plaintext highlighter-rouge">spark.shuffle.blockTransferService</code> has been changed from <code class="language-plaintext highlighter-rouge">nio</code> to <code class="language-plaintext highlighter-rouge">netty</code></li>
+  <li><code class="language-plaintext highlighter-rouge">spark.shuffle.manager</code> has been changed from <code class="language-plaintext highlighter-rouge">hash</code> to <code class="language-plaintext highlighter-rouge">sort</code></li>
+  <li>In PySpark, the default batch size has been changed to 0, which means the batch size is chosen based on the size of object.  Pre-1.2 behavior can be restored using <code class="language-plaintext highlighter-rouge">SparkContext([... args... ], batchSize=1024)</code>.</li>
   <li>Spark SQL has changed the following defaults:
     <ul>
-      <li><code class="highlighter-rouge">spark.sql.parquet.cacheMetadata</code>: <code class="highlighter-rouge">false</code> -&gt; <code class="highlighter-rouge">true</code></li>
-      <li><code class="highlighter-rouge">spark.sql.parquet.compression.codec</code>: <code class="highlighter-rouge">snappy</code> -&gt; <code class="highlighter-rouge">gzip</code></li>
-      <li><code class="highlighter-rouge">spark.sql.hive.convertMetastoreParquet</code>: <code class="highlighter-rouge">false</code> -&gt; <code class="highlighter-rouge">true</code></li>
-      <li><code class="highlighter-rouge">spark.sql.inMemoryColumnarStorage.compressed</code>: <code class="highlighter-rouge">false</code> -&gt; <code class="highlighter-rouge">true</code></li>
-      <li><code class="highlighter-rouge">spark.sql.inMemoryColumnarStorage.batchSize</code>: <code class="highlighter-rouge">1000</code> -&gt; <code class="highlighter-rouge">10000</code></li>
-      <li><code class="highlighter-rouge">spark.sql.autoBroadcastJoinThreshold</code>: <code class="highlighter-rouge">10000</code> -&gt; <code class="highlighter-rouge">10485760</code> (10 MB)</li>
+      <li><code class="language-plaintext highlighter-rouge">spark.sql.parquet.cacheMetadata</code>: <code class="language-plaintext highlighter-rouge">false</code> -&gt; <code class="language-plaintext highlighter-rouge">true</code></li>
+      <li><code class="language-plaintext highlighter-rouge">spark.sql.parquet.compression.codec</code>: <code class="language-plaintext highlighter-rouge">snappy</code> -&gt; <code class="language-plaintext highlighter-rouge">gzip</code></li>
+      <li><code class="language-plaintext highlighter-rouge">spark.sql.hive.convertMetastoreParquet</code>: <code class="language-plaintext highlighter-rouge">false</code> -&gt; <code class="language-plaintext highlighter-rouge">true</code></li>
+      <li><code class="language-plaintext highlighter-rouge">spark.sql.inMemoryColumnarStorage.compressed</code>: <code class="language-plaintext highlighter-rouge">false</code> -&gt; <code class="language-plaintext highlighter-rouge">true</code></li>
+      <li><code class="language-plaintext highlighter-rouge">spark.sql.inMemoryColumnarStorage.batchSize</code>: <code class="language-plaintext highlighter-rouge">1000</code> -&gt; <code class="language-plaintext highlighter-rouge">10000</code></li>
+      <li><code class="language-plaintext highlighter-rouge">spark.sql.autoBroadcastJoinThreshold</code>: <code class="language-plaintext highlighter-rouge">10000</code> -&gt; <code class="language-plaintext highlighter-rouge">10485760</code> (10 MB)</li>
     </ul>
   </li>
 </ol>
diff --git a/site/releases/spark-release-1-2-2.html b/site/releases/spark-release-1-2-2.html
index 0358d06..2a9c914 100644
--- a/site/releases/spark-release-1-2-2.html
+++ b/site/releases/spark-release-1-2-2.html
@@ -220,7 +220,7 @@
 
 <h4 id="pyspark">PySpark</h4>
 <ul>
-  <li>Jobs hang during <code class="highlighter-rouge">collect</code> operation (<a href="http://issues.apache.org/jira/browse/SPARK-6667">SPARK-6667</a>)</li>
+  <li>Jobs hang during <code class="language-plaintext highlighter-rouge">collect</code> operation (<a href="http://issues.apache.org/jira/browse/SPARK-6667">SPARK-6667</a>)</li>
   <li>Zip fails with serializer error (<a href="http://issues.apache.org/jira/browse/SPARK-5973">SPARK-5973</a>)</li>
   <li>Memory leak using Spark SQL with PySpark (<a href="http://issues.apache.org/jira/browse/SPARK-6055">SPARK-6055</a>)</li>
   <li>Hanging when using large broadcast variables (<a href="http://issues.apache.org/jira/browse/SPARK-5363">SPARK-5363</a>)</li>
diff --git a/site/releases/spark-release-1-3-0.html b/site/releases/spark-release-1-3-0.html
index 90adb95..d8a925c 100644
--- a/site/releases/spark-release-1-3-0.html
+++ b/site/releases/spark-release-1-3-0.html
@@ -228,13 +228,13 @@
 <h2 id="upgrading-to-spark-13">Upgrading to Spark 1.3</h2>
 <p>Spark 1.3 is binary compatible with Spark 1.X releases, so no code changes are necessary. This excludes API’s marked explicitly as unstable.</p>
 
-<p>As part of stabilizing the Spark SQL API, the <code class="highlighter-rouge">SchemaRDD</code> class has been renamed to <code class="highlighter-rouge">DataFrame</code>. Spark SQL&#8217;s <a href="http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#migration-guide">migration guide</a> describes the upgrade process in detail. Spark SQL also now requires that column identifiers which use reserved words (such as &#8220;string&#8221; or &#8220;table&#8221;) be escaped using bac [...]
+<p>As part of stabilizing the Spark SQL API, the <code class="language-plaintext highlighter-rouge">SchemaRDD</code> class has been renamed to <code class="language-plaintext highlighter-rouge">DataFrame</code>. Spark SQL&#8217;s <a href="http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#migration-guide">migration guide</a> describes the upgrade process in detail. Spark SQL also now requires that column identifiers which use reserved words (such as &#8220;string&#8221; or &#8 [...]
 
 <h3 id="known-issues">Known Issues</h3>
 <p>This release has few known issues which will be addressed in Spark 1.3.1:</p>
 
 <ul>
-  <li><a href="https://issues.apache.org/jira/browse/SPARK-6194">SPARK-6194</a>: A memory leak in PySPark&#8217;s <code class="highlighter-rouge">collect()</code>.</li>
+  <li><a href="https://issues.apache.org/jira/browse/SPARK-6194">SPARK-6194</a>: A memory leak in PySPark&#8217;s <code class="language-plaintext highlighter-rouge">collect()</code>.</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6222">SPARK-6222</a>: An issue with failure recovery in Spark Streaming.</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>: Spark SQL can&#8217;t read parquet data generated with Spark 1.1.</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6247">SPARK-6247</a>: Errors analyzing certain join types in Spark SQL.</li>
diff --git a/site/releases/spark-release-1-5-0.html b/site/releases/spark-release-1-5-0.html
index f3a557e..6558187 100644
--- a/site/releases/spark-release-1-5-0.html
+++ b/site/releases/spark-release-1-5-0.html
@@ -407,7 +407,7 @@
 <ul>
   <li>Optimized execution using manually managed memory (Tungsten) is now enabled by default, along with code generation for expression evaluation. These features can both be disabled by setting spark.sql.tungsten.enabled to false.</li>
   <li>Parquet schema merging is no longer enabled by default. It can be re-enabled by setting spark.sql.parquet.mergeSchema to true.</li>
-  <li>Resolution of strings to columns in Python now supports using dots (.) to qualify the column or access nested values. For example df[&#8216;table.column.nestedField&#8217;]. However, this means that if your column name contains any dots you must now escape them using backticks (e.g., <code class="highlighter-rouge">table.`column.with.dots`.nested</code>).</li>
+  <li>Resolution of strings to columns in Python now supports using dots (.) to qualify the column or access nested values. For example df[&#8216;table.column.nestedField&#8217;]. However, this means that if your column name contains any dots you must now escape them using backticks (e.g., <code class="language-plaintext highlighter-rouge">table.`column.with.dots`.nested</code>).</li>
   <li>In-memory columnar storage partition pruning is on by default. It can be disabled by setting spark.sql.inMemoryColumnarStorage.partitionPruning to false.</li>
   <li>Unlimited precision decimal columns are no longer supported, instead Spark SQL enforces a maximum precision of 38. When inferring schema from BigDecimal objects, a precision of (38, 18) is now used. When no precision is specified in DDL then the default remains Decimal(10, 0).</li>
   <li>Timestamps are now processed at a precision of 1us, rather than 100ns.</li>
diff --git a/site/releases/spark-release-1-6-0.html b/site/releases/spark-release-1-6-0.html
index 2f873ef..21c6324 100644
--- a/site/releases/spark-release-1-6-0.html
+++ b/site/releases/spark-release-1-6-0.html
@@ -240,7 +240,7 @@
       <li><a href="https://issues.apache.org/jira/browse/SPARK-9241">SPARK-9241&#160;</a> <strong>Improved query planner for queries having distinct aggregations</strong> - Query plans of distinct aggregations are more robust when distinct columns have high cardinality.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-9858">SPARK-9858&#160;</a> <strong>Adaptive query execution</strong> - Initial support for automatically selecting the number of reducers for joins and aggregations.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-10978">SPARK-10978</a> <strong>Avoiding double filters in Data Source API</strong> - When implementing a data source with filter pushdown, developers can now tell Spark SQL to avoid double evaluating a pushed-down filter.</li>
-      <li><a href="https://issues.apache.org/jira/browse/SPARK-11111">SPARK-11111</a> <strong>Fast null-safe joins</strong> - Joins using null-safe equality (<code class="highlighter-rouge">&lt;=&gt;</code>) will now execute using SortMergeJoin instead of computing a cartisian product.</li>
+      <li><a href="https://issues.apache.org/jira/browse/SPARK-11111">SPARK-11111</a> <strong>Fast null-safe joins</strong> - Joins using null-safe equality (<code class="language-plaintext highlighter-rouge">&lt;=&gt;</code>) will now execute using SortMergeJoin instead of computing a cartisian product.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-10917">SPARK-10917</a>, <a href="https://issues.apache.org/jira/browse/SPARK-11149">SPARK-11149</a> <strong>In-memory Columnar Cache Performance</strong> - Significant (up to 14x) speed up when caching data that contains complex types in DataFrames or SQL.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-11389">SPARK-11389</a> <strong>SQL Execution Using Off-Heap Memory</strong> - Support for configuring query execution to occur using off-heap memory to avoid GC overhead</li>
     </ul>
@@ -252,7 +252,7 @@
 <ul>
   <li><strong>API Updates</strong>
     <ul>
-      <li><a href="https://issues.apache.org/jira/browse/SPARK-2629">SPARK-2629&#160;</a> <strong>New improved state management</strong> - <code class="highlighter-rouge">mapWithState</code> - a DStream transformation for stateful stream processing, supercedes <code class="highlighter-rouge">updateStateByKey</code> in functionality and performance.</li>
+      <li><a href="https://issues.apache.org/jira/browse/SPARK-2629">SPARK-2629&#160;</a> <strong>New improved state management</strong> - <code class="language-plaintext highlighter-rouge">mapWithState</code> - a DStream transformation for stateful stream processing, supercedes <code class="language-plaintext highlighter-rouge">updateStateByKey</code> in functionality and performance.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-11198">SPARK-11198</a> <strong>Kinesis record deaggregation</strong> - Kinesis streams have been upgraded to use KCL 1.4.0 and supports transparent deaggregation of KPL-aggregated records.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-10891">SPARK-10891</a> <strong>Kinesis message handler function</strong> - Allows arbitrary function to be applied to a Kinesis record in the Kinesis receiver before to customize what data is to be stored in memory.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-6328">SPARK-6328&#160;</a> <strong>Python Streaming Listener API</strong> - Get streaming statistics (scheduling delays, batch processing times, etc.) in streaming.</li>
@@ -320,17 +320,17 @@
 <ul>
   <li><strong>MLlib</strong>
     <ul>
-      <li><code class="highlighter-rouge">spark.mllib.tree.GradientBoostedTrees</code> <code class="highlighter-rouge">validationTol</code> has changed semantics in 1.6. Previously, it was a threshold for absolute change in error. Now, it resembles the behavior of <code class="highlighter-rouge">GradientDescent</code> <code class="highlighter-rouge">convergenceTol</code>: For large errors, it uses relative error (relative to the previous error); for small errors (&lt; 0.01), it uses abso [...]
-      <li><code class="highlighter-rouge">spark.ml.feature.RegexTokenizer</code>: Previously, it did not convert strings to lowercase before tokenizing. Now, it converts to lowercase by default, with an option not to. This matches the behavior of the simpler Tokenizer transformer.</li>
+      <li><code class="language-plaintext highlighter-rouge">spark.mllib.tree.GradientBoostedTrees</code> <code class="language-plaintext highlighter-rouge">validationTol</code> has changed semantics in 1.6. Previously, it was a threshold for absolute change in error. Now, it resembles the behavior of <code class="language-plaintext highlighter-rouge">GradientDescent</code> <code class="language-plaintext highlighter-rouge">convergenceTol</code>: For large errors, it uses relative error  [...]
+      <li><code class="language-plaintext highlighter-rouge">spark.ml.feature.RegexTokenizer</code>: Previously, it did not convert strings to lowercase before tokenizing. Now, it converts to lowercase by default, with an option not to. This matches the behavior of the simpler Tokenizer transformer.</li>
     </ul>
   </li>
   <li><strong>SQL</strong>
     <ul>
       <li>The flag (spark.sql.tungsten.enabled) that turns off Tungsten mode and code generation has been removed. Tungsten mode and code generation are always enabled (<a href="https://issues.apache.org/jira/browse/SPARK-11644">SPARK-11644</a>).</li>
-      <li>Spark SQL&#8217;s partition discovery has been changed to only discover partition directories that are children of the given path. (i.e. if <code class="highlighter-rouge">path="/my/data/x=1"</code> then <code class="highlighter-rouge">x=1</code> will no longer be considered a partition but only children of <code class="highlighter-rouge">x=1</code>.) This behavior can be overridden by manually specifying the <code class="highlighter-rouge">basePath</code> that partitioning dis [...]
+      <li>Spark SQL&#8217;s partition discovery has been changed to only discover partition directories that are children of the given path. (i.e. if <code class="language-plaintext highlighter-rouge">path="/my/data/x=1"</code> then <code class="language-plaintext highlighter-rouge">x=1</code> will no longer be considered a partition but only children of <code class="language-plaintext highlighter-rouge">x=1</code>.) This behavior can be overridden by manually specifying the <code class= [...]
       <li>For a UDF, if it has primitive type input argument (a non-nullable input argument), when the value of this argument is null, this UDF will return null (<a href="https://issues.apache.org/jira/browse/SPARK-11725">SPARK-11725</a>).</li>
       <li>When casting a value of an integral type to timestamp (e.g. casting a long value to timestamp), the value is treated as being in seconds instead of milliseconds (<a href="https://issues.apache.org/jira/browse/SPARK-11724">SPARK-11724</a>).</li>
-      <li>With the improved query planner for queries having distinct aggregations (<a href="https://issues.apache.org/jira/browse/SPARK-9241">SPARK-9241</a>), the plan of a query having a single distinct aggregation has been changed to a more robust version. To switch back to the plan generated by Spark 1.5&#8217;s planner, please set <code class="highlighter-rouge">spark.sql.specializeSingleDistinctAggPlanning</code> to <code class="highlighter-rouge">true</code> (<a href="https://issu [...]
+      <li>With the improved query planner for queries having distinct aggregations (<a href="https://issues.apache.org/jira/browse/SPARK-9241">SPARK-9241</a>), the plan of a query having a single distinct aggregation has been changed to a more robust version. To switch back to the plan generated by Spark 1.5&#8217;s planner, please set <code class="language-plaintext highlighter-rouge">spark.sql.specializeSingleDistinctAggPlanning</code> to <code class="language-plaintext highlighter-rou [...]
       <li>getBoolean, getByte, getShort, getInt, getLong, getFloat and getDouble of a Row will throw a NullPointerException if the value at the given ordinal is a null (<a href="https://issues.apache.org/jira/browse/SPARK-11553">SPARK-11553</a>).</li>
       <li>variance is the alias of var_samp instead of var_pop (<a href="https://issues.apache.org/jira/browse/SPARK-11490">SPARK-11490</a>).</li>
       <li>The semantic of casting a String type value to a Boolean type value has been changed (<a href="https://issues.apache.org/jira/browse/SPARK-10442">SPARK-10442</a>). Casting any one of &#8220;t&#8221;, &#8220;true&#8221;, &#8220;y&#8221;, &#8220;yes&#8221;, and &#8220;1&#8221; will return true. Casting any of &#8220;f&#8221;, &#8220;false&#8221;, &#8220;n&#8221;, &#8220;no&#8221;, and &#8220;0&#8221; will return false. For other String literals, casting them to a Boolean type val [...]
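
As a hedged illustration of the partition-discovery change above, here is a short Scala sketch reusing the example layout from that item. The path is hypothetical and the snippet is written against the current SparkSession reader API (in 1.6 itself the entry point would be sqlContext.read):

    // Reading only the x=1 subtree: pass basePath so x is still treated as a partition column.
    val df = spark.read
      .option("basePath", "/my/data")
      .parquet("/my/data/x=1")
    // Without basePath, only children of /my/data/x=1 would be considered during partition discovery.
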
@@ -341,7 +341,7 @@
 
 <h2 id="known-issues">Known issues</h2>
 <ul>
-  <li><a href="https://issues.apache.org/jira/browse/SPARK-12546">SPARK-12546</a> Save DataFrame/table as Parquet with dynamic partitions may cause OOM; this can be worked around by decreasing the memory used by both Spark and Parquet using <code class="highlighter-rouge">spark.memory.fraction</code> (for example, 0.4) and <code class="highlighter-rouge">parquet.memory.pool.ratio</code> (for example, 0.3, in Hadoop configuration, e.g. setting it in <code class="highlighter-rouge">core-si [...]
+  <li><a href="https://issues.apache.org/jira/browse/SPARK-12546">SPARK-12546</a> Save DataFrame/table as Parquet with dynamic partitions may cause OOM; this can be worked around by decreasing the memory used by both Spark and Parquet using <code class="language-plaintext highlighter-rouge">spark.memory.fraction</code> (for example, 0.4) and <code class="language-plaintext highlighter-rouge">parquet.memory.pool.ratio</code> (for example, 0.3, in Hadoop configuration, e.g. setting it in < [...]
 </ul>
 
 <h2 id="credits">Credits</h2>
diff --git a/site/releases/spark-release-2-1-0.html b/site/releases/spark-release-2-1-0.html
index ce992c1..20fef49 100644
--- a/site/releases/spark-release-2-1-0.html
+++ b/site/releases/spark-release-2-1-0.html
@@ -293,7 +293,7 @@
 <ul>
   <li>New ML algorithms in SparkR including LDA, Gaussian Mixture Models, ALS, Random Forest, Gradient Boosted Trees, and more</li>
   <li>Support for multinomial logistic regression providing similar functionality as the glmnet R package</li>
-  <li>Enable installing third party packages on workers using <code class="highlighter-rouge">spark.addFile</code> (<a href="https://issues.apache.org/jira/browse/SPARK-17577">SPARK-17577</a>).</li>
+  <li>Enable installing third party packages on workers using <code class="language-plaintext highlighter-rouge">spark.addFile</code> (<a href="https://issues.apache.org/jira/browse/SPARK-17577">SPARK-17577</a>).</li>
   <li>Standalone installable package built with the Apache Spark release. We will be submitting this to CRAN soon.</li>
 </ul>
 
diff --git a/site/releases/spark-release-2-2-0.html b/site/releases/spark-release-2-2-0.html
index 86f3a9c..d9f6789 100644
--- a/site/releases/spark-release-2-2-0.html
+++ b/site/releases/spark-release-2-2-0.html
@@ -205,7 +205,7 @@
 
 <p>Apache Spark 2.2.0 is the third release on the 2.x line. This release removes the experimental tag from Structured Streaming. In addition, this release focuses more on usability, stability, and polish, resolving over 1100 tickets.</p>
 
-<p>Additionally, we are excited to announce that <a href="https://pypi.python.org/pypi/pyspark">PySpark</a> is now available in pypi. To install just run <code class="highlighter-rouge">pip install pyspark</code>.</p>
+<p>Additionally, we are excited to announce that <a href="https://pypi.python.org/pypi/pyspark">PySpark</a> is now available on PyPI. To install it, just run <code class="language-plaintext highlighter-rouge">pip install pyspark</code>.</p>
 
 <p>To download Apache Spark 2.2.0, visit the <a href="/downloads.html">downloads</a> page. You can consult JIRA for the <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315420&amp;version=12338275">detailed changes</a>. We have curated a list of high level changes here, grouped by major modules.</p>
 
@@ -386,7 +386,7 @@
   </li>
   <li><strong>MLlib</strong>
     <ul>
-      <li>SPARK-18613: spark.ml LDA classes should not expose spark.mllib in APIs.  In spark.ml.LDAModel, deprecated <code class="highlighter-rouge">oldLocalModel</code> and <code class="highlighter-rouge">getModel</code>.</li>
+      <li>SPARK-18613: spark.ml LDA classes should not expose spark.mllib in APIs.  In spark.ml.LDAModel, deprecated <code class="language-plaintext highlighter-rouge">oldLocalModel</code> and <code class="language-plaintext highlighter-rouge">getModel</code>.</li>
     </ul>
   </li>
   <li><strong>SparkR</strong>
diff --git a/site/releases/spark-release-2-2-1.html b/site/releases/spark-release-2-2-1.html
index c9eaac4..c309358 100644
--- a/site/releases/spark-release-2-2-1.html
+++ b/site/releases/spark-release-2-2-1.html
@@ -212,7 +212,7 @@
 <ul>
   <li><strong>Core and SQL</strong>
     <ul>
-      <li>SPARK-22472: added null check for top-level primitive types. Before this release, for datasets having top-level primitive types, and it has null values, it might return some unexpected results. For example, let&#8217;s say we have a parquet file with schema <code class="highlighter-rouge">&lt;a: Int&gt;</code>, and we read it into Scala Int. If column a has null values, when transformation is applied some unexpected value can be returned.</li>
+      <li>SPARK-22472: added a null check for top-level primitive types. Before this release, a dataset with a top-level primitive type that contains null values could return unexpected results. For example, let&#8217;s say we have a Parquet file with schema <code class="language-plaintext highlighter-rouge">&lt;a: Int&gt;</code> and we read it into a Scala Int. If column a has null values, an unexpected value can be returned when a transformation is applied.</li>
     </ul>
   </li>
 </ul>
diff --git a/site/releases/spark-release-2-3-0.html b/site/releases/spark-release-2-3-0.html
index 145d35e..eb7a73c 100644
--- a/site/releases/spark-release-2-3-0.html
+++ b/site/releases/spark-release-2-3-0.html
@@ -225,7 +225,7 @@
   <li><strong>Major features</strong>
     <ul>
       <li><strong>Spark on Kubernetes</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-18278">SPARK-18278</a>] A new kubernetes scheduler backend that supports native submission of spark jobs to a cluster managed by kubernetes. Note that this support is currently experimental and behavioral changes around configurations, container images and entrypoints should be expected.</li>
-      <li><strong>Vectorized ORC Reader</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-16060">SPARK-16060</a>] Adds support for new ORC reader that substantially improves the ORC scan throughput through vectorization (2-5x). To enable the reader, users can set <code class="highlighter-rouge">spark.sql.orc.impl</code> to <code class="highlighter-rouge">native</code>.</li>
+      <li><strong>Vectorized ORC Reader</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-16060">SPARK-16060</a>] Adds support for new ORC reader that substantially improves the ORC scan throughput through vectorization (2-5x). To enable the reader, users can set <code class="language-plaintext highlighter-rouge">spark.sql.orc.impl</code> to <code class="language-plaintext highlighter-rouge">native</code>.</li>
       <li><strong>Spark History Server V2</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-18085">SPARK-18085</a>] A new spark history server (SHS) backend that provides better scalability for large scale applications with a more efficient event storage mechanism.</li>
       <li><strong>Data source API V2</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-15689">SPARK-15689</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22386">SPARK-22386</a>] An experimental API for plugging in new data sources in Spark. The new API attempts to address several limitations of the V1 API and aims to facilitate development of high performant, easy-to-maintain, and extensible external data sources. Note that this API is still undergoing active deve [...]
       <li><strong>PySpark Performance Enhancements</strong>: [<a href="https://issues.apache.org/jira/browse/SPARK-22216">SPARK-22216</a>][<a href="https://issues.apache.org/jira/browse/SPARK-21187">SPARK-21187</a>] Significant improvements in python performance and interoperability by fast data serialization and vectorized execution.</li>
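
To make the vectorized ORC reader item above concrete, a hedged Scala sketch; an active SparkSession named spark is assumed, and the dataset path and column name are hypothetical:

    // Switch to the new native ORC reader described in SPARK-16060.
    spark.conf.set("spark.sql.orc.impl", "native")

    val orcDf = spark.read.orc("/data/events_orc")   // hypothetical ORC dataset
    orcDf.groupBy("event_type").count().show()       // hypothetical column; the scan is vectorized
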
@@ -237,7 +237,7 @@
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20331">SPARK-20331</a>] Better support for predicate pushdown for Hive partition pruning</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19112">SPARK-19112</a>] Support for ZStandard compression codec</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21113">SPARK-21113</a>] Support for read ahead input stream to amortize disk I/O cost in the spill reader</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22510">SPARK-22510</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22692">SPARK-22692</a>][<a href="https://issues.apache.org/jira/browse/SPARK-21871">SPARK-21871</a>] Further stabilize the codegen framework to avoid hitting the <code class="highlighter-rouge">64KB</code> JVM bytecode limit on the Java method and Java compiler constant pool limit</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22510">SPARK-22510</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22692">SPARK-22692</a>][<a href="https://issues.apache.org/jira/browse/SPARK-21871">SPARK-21871</a>] Further stabilize the codegen framework to avoid hitting the <code class="language-plaintext highlighter-rouge">64KB</code> JVM bytecode limit on the Java method and Java compiler constant pool limit</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23207">SPARK-23207</a>] Fixed a long standing bug in Spark where consecutive shuffle+repartition on a DataFrame could lead to incorrect answers in certain surgical cases</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22062">SPARK-22062</a>][<a href="https://issues.apache.org/jira/browse/SPARK-17788">SPARK-17788</a>][<a href="https://issues.apache.org/jira/browse/SPARK-21907">SPARK-21907</a>] Fix various causes of OOMs</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22489">SPARK-22489</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22916">SPARK-22916</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22895">SPARK-22895</a>][<a href="https://issues.apache.org/jira/browse/SPARK-20758">SPARK-20758</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22266">SPARK-22266</a>][<a href="https://issues.apache.org/jira/browse/SPARK-19122">SPARK-19122</a>][<a href="https://is [...]
@@ -246,13 +246,13 @@
   <li><strong>Other notable changes</strong>
     <ul>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20236">SPARK-20236</a>] Support Hive style dynamic partition overwrite semantics.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-4131">SPARK-4131</a>] Support <code class="highlighter-rouge">INSERT OVERWRITE DIRECTORY</code> to directly write data into the filesystem from a query</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-4131">SPARK-4131</a>] Support <code class="language-plaintext highlighter-rouge">INSERT OVERWRITE DIRECTORY</code> to directly write data into the filesystem from a query</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19285">SPARK-19285</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22945">SPARK-22945</a>][<a href="https://issues.apache.org/jira/browse/SPARK-21499">SPARK-21499</a>][<a href="https://issues.apache.org/jira/browse/SPARK-20586">SPARK-20586</a>][<a href="https://issues.apache.org/jira/browse/SPARK-20416">SPARK-20416</a>][<a href="https://issues.apache.org/jira/browse/SPARK-20668">SPARK-20668</a>] UDF enhancements</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20463">SPARK-20463</a>][<a href="https://issues.apache.org/jira/browse/SPARK-19951">SPARK-19951</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22934">SPARK-22934</a>][<a href="https://issues.apache.org/jira/browse/SPARK-21055">SPARK-21055</a>][<a href="https://issues.apache.org/jira/browse/SPARK-17729">SPARK-17729</a>][<a href="https://issues.apache.org/jira/browse/SPARK-20962">SPARK-20962</a>][<a href="https://is [...]
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20746">SPARK-20746</a>] More comprehensive SQL built-in functions</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21485">SPARK-21485</a>] Spark SQL documentation generation for built-in functions</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19810">SPARK-19810</a>] Remove support for Scala <code class="highlighter-rouge">2.10</code></li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22324">SPARK-22324</a>] Upgrade Arrow to <code class="highlighter-rouge">0.8.0</code> and Netty to <code class="highlighter-rouge">4.1.17</code></li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19810">SPARK-19810</a>] Remove support for Scala <code class="language-plaintext highlighter-rouge">2.10</code></li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22324">SPARK-22324</a>] Upgrade Arrow to <code class="language-plaintext highlighter-rouge">0.8.0</code> and Netty to <code class="language-plaintext highlighter-rouge">4.1.17</code></li>
     </ul>
   </li>
 </ul>
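
The SPARK-4131 entry above adds INSERT OVERWRITE DIRECTORY. A hedged sketch of what such a statement could look like when run through spark.sql; the output directory, table and columns are hypothetical:

    spark.sql("""
      INSERT OVERWRITE DIRECTORY '/tmp/export/events_csv'
      USING csv
      OPTIONS (header 'true')
      SELECT id, name
      FROM events
    """)
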
@@ -293,14 +293,14 @@
     <ul>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21866">SPARK-21866</a>]: Built-in support for reading images into a DataFrame (Scala/Java/Python)</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19634">SPARK-19634</a>]: DataFrame functions for descriptive summary statistics over vector columns (Scala/Java)</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-14516">SPARK-14516</a>]: <code class="highlighter-rouge">ClusteringEvaluator</code> for tuning clustering algorithms, supporting Cosine silhouette and squared Euclidean silhouette metrics (Scala/Java/Python)</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-14516">SPARK-14516</a>]: <code class="language-plaintext highlighter-rouge">ClusteringEvaluator</code> for tuning clustering algorithms, supporting Cosine silhouette and squared Euclidean silhouette metrics (Scala/Java/Python)</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-3181">SPARK-3181</a>]: Robust linear regression with Huber loss (Scala/Java/Python)</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-13969">SPARK-13969</a>]: <code class="highlighter-rouge">FeatureHasher</code> transformer (Scala/Java/Python)</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-13969">SPARK-13969</a>]: <code class="language-plaintext highlighter-rouge">FeatureHasher</code> transformer (Scala/Java/Python)</li>
       <li>Multiple column support for several feature transformers:
         <ul>
-          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-13030">SPARK-13030</a>]: <code class="highlighter-rouge">OneHotEncoderEstimator</code> (Scala/Java/Python)</li>
-          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22397">SPARK-22397</a>]: <code class="highlighter-rouge">QuantileDiscretizer</code> (Scala/Java)</li>
-          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20542">SPARK-20542</a>]: <code class="highlighter-rouge">Bucketizer</code> (Scala/Java/Python)</li>
+          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-13030">SPARK-13030</a>]: <code class="language-plaintext highlighter-rouge">OneHotEncoderEstimator</code> (Scala/Java/Python)</li>
+          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22397">SPARK-22397</a>]: <code class="language-plaintext highlighter-rouge">QuantileDiscretizer</code> (Scala/Java)</li>
+          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20542">SPARK-20542</a>]: <code class="language-plaintext highlighter-rouge">Bucketizer</code> (Scala/Java/Python)</li>
         </ul>
       </li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21633">SPARK-21633</a>] and <a href="https://issues.apache.org/jira/browse/SPARK-21542">SPARK-21542</a>]: Improved support for custom pipeline components in Python.</li>
@@ -308,26 +308,26 @@
   </li>
   <li><strong>New Features</strong>
     <ul>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21087">SPARK-21087</a>]: <code class="highlighter-rouge">CrossValidator</code> and <code class="highlighter-rouge">TrainValidationSplit</code> can collect all models when fitting (Scala/Java).  This allows you to inspect or save all fitted models.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19357">SPARK-19357</a>]: Meta-algorithms <code class="highlighter-rouge">CrossValidator</code>, <code class="highlighter-rouge">TrainValidationSplit, </code>OneVsRest` support a parallelism Param for fitting multiple sub-models in parallel Spark jobs</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21087">SPARK-21087</a>]: <code class="language-plaintext highlighter-rouge">CrossValidator</code> and <code class="language-plaintext highlighter-rouge">TrainValidationSplit</code> can collect all models when fitting (Scala/Java).  This allows you to inspect or save all fitted models.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19357">SPARK-19357</a>]: Meta-algorithms <code class="language-plaintext highlighter-rouge">CrossValidator</code>, <code class="language-plaintext highlighter-rouge">TrainValidationSplit, </code>OneVsRest` support a parallelism Param for fitting multiple sub-models in parallel Spark jobs</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-17139">SPARK-17139</a>]: Model summary for multinomial logistic regression (Scala/Java/Python)</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-18710">SPARK-18710</a>]: Add offset in GLM</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20199">SPARK-20199</a>]: Added <code class="highlighter-rouge">featureSubsetStrategy</code> Param to <code class="highlighter-rouge">GBTClassifier</code> and <code class="highlighter-rouge">GBTRegressor</code>.  Using this to subsample features can significantly improve training speed; this option has been a key strength of <code class="highlighter-rouge">xgboost</code>.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20199">SPARK-20199</a>]: Added <code class="language-plaintext highlighter-rouge">featureSubsetStrategy</code> Param to <code class="language-plaintext highlighter-rouge">GBTClassifier</code> and <code class="language-plaintext highlighter-rouge">GBTRegressor</code>.  Using this to subsample features can significantly improve training speed; this option has been a key strength of <code class="language-plaintext highlighter-r [...]
     </ul>
   </li>
   <li><strong>Other Notable Changes</strong>
     <ul>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22156">SPARK-22156</a>] Fixed <code class="highlighter-rouge">Word2Vec</code> learning rate scaling with <code class="highlighter-rouge">num</code> iterations.  The new learning rate is set to match the original <code class="highlighter-rouge">Word2Vec</code> C code and should give better results from training.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22289">SPARK-22289</a>] Add <code class="highlighter-rouge">JSON</code> support for Matrix parameters (This fixed a bug for ML persistence with <code class="highlighter-rouge">LogisticRegressionModel</code> when using bounds on coefficients.)</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22700">SPARK-22700</a>] <code class="highlighter-rouge">Bucketizer.transform</code> incorrectly drops row containing <code class="highlighter-rouge">NaN</code>.  When Param <code class="highlighter-rouge">handleInvalid</code> was set to “skip,” <code class="highlighter-rouge">Bucketizer</code> would drop a row with a valid value in the input column if another (irrelevant) column had a <code class="highlighter-rouge">NaN</cod [...]
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22446">SPARK-22446</a>] Catalyst optimizer sometimes caused <code class="highlighter-rouge">StringIndexerModel</code> to throw an incorrect “Unseen label” exception when <code class="highlighter-rouge">handleInvalid</code> was set to “error.”  This could happen for filtered data, due to predicate push-down, causing errors even after invalid rows had already been filtered from the input dataset.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22156">SPARK-22156</a>] Fixed <code class="language-plaintext highlighter-rouge">Word2Vec</code> learning rate scaling with <code class="language-plaintext highlighter-rouge">num</code> iterations.  The new learning rate is set to match the original <code class="language-plaintext highlighter-rouge">Word2Vec</code> C code and should give better results from training.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22289">SPARK-22289</a>] Add <code class="language-plaintext highlighter-rouge">JSON</code> support for Matrix parameters (This fixed a bug for ML persistence with <code class="language-plaintext highlighter-rouge">LogisticRegressionModel</code> when using bounds on coefficients.)</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22700">SPARK-22700</a>] <code class="language-plaintext highlighter-rouge">Bucketizer.transform</code> incorrectly drops row containing <code class="language-plaintext highlighter-rouge">NaN</code>.  When Param <code class="language-plaintext highlighter-rouge">handleInvalid</code> was set to “skip,” <code class="language-plaintext highlighter-rouge">Bucketizer</code> would drop a row with a valid value in the input column i [...]
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22446">SPARK-22446</a>] Catalyst optimizer sometimes caused <code class="language-plaintext highlighter-rouge">StringIndexerModel</code> to throw an incorrect “Unseen label” exception when <code class="language-plaintext highlighter-rouge">handleInvalid</code> was set to “error.”  This could happen for filtered data, due to predicate push-down, causing errors even after invalid rows had already been filtered from the input d [...]
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21681">SPARK-21681</a>] Fixed an edge case bug in multinomial logistic regression that resulted in incorrect coefficients when some features had zero variance.</li>
       <li>Major optimizations:
         <ul>
-          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22707">SPARK-22707</a>] Reduced memory consumption for <code class="highlighter-rouge">CrossValidator</code></li>
-          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22949">SPARK-22949</a>] Reduced memory consumption for <code class="highlighter-rouge">TrainValidationSplit</code></li>
-          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21690">SPARK-21690</a>] <code class="highlighter-rouge">Imputer</code> should train using a single pass over the data</li>
-          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-14371">SPARK-14371</a>] <code class="highlighter-rouge">OnlineLDAOptimizer</code> avoids collecting statistics to the driver for each mini-batch.</li>
+          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22707">SPARK-22707</a>] Reduced memory consumption for <code class="language-plaintext highlighter-rouge">CrossValidator</code></li>
+          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22949">SPARK-22949</a>] Reduced memory consumption for <code class="language-plaintext highlighter-rouge">TrainValidationSplit</code></li>
+          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21690">SPARK-21690</a>] <code class="language-plaintext highlighter-rouge">Imputer</code> should train using a single pass over the data</li>
+          <li>[<a href="https://issues.apache.org/jira/browse/SPARK-14371">SPARK-14371</a>] <code class="language-plaintext highlighter-rouge">OnlineLDAOptimizer</code> avoids collecting statistics to the driver for each mini-batch.</li>
         </ul>
       </li>
     </ul>
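
The SPARK-21087 and SPARK-19357 entries near the top of this hunk describe sub-model collection and parallel fitting. A hedged Scala sketch; trainingDf is a hypothetical labeled DataFrame:

    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.evaluation.BinaryClassificationEvaluator
    import org.apache.spark.ml.tuning.{CrossValidator, ParamGridBuilder}

    val lr = new LogisticRegression()
    val grid = new ParamGridBuilder()
      .addGrid(lr.regParam, Array(0.01, 0.1))
      .build()

    val cv = new CrossValidator()
      .setEstimator(lr)
      .setEvaluator(new BinaryClassificationEvaluator())
      .setEstimatorParamMaps(grid)
      .setNumFolds(3)
      .setParallelism(2)           // fit sub-models in parallel Spark jobs (SPARK-19357)
      .setCollectSubModels(true)   // keep every fitted sub-model for inspection (SPARK-21087)

    val cvModel = cv.fit(trainingDf)
    val subModels = cvModel.subModels   // one array of fitted models per fold
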
@@ -344,7 +344,7 @@
   <li><strong>Major features</strong>
     <ul>
       <li>Improved function parity between SQL and R</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22933">SPARK-22933</a>]: Structured Streaming APIs for <code class="highlighter-rouge">withWatermark</code>, <code class="highlighter-rouge">trigger</code>, <code class="highlighter-rouge">partitionBy</code> and stream-stream joins</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22933">SPARK-22933</a>]: Structured Streaming APIs for <code class="language-plaintext highlighter-rouge">withWatermark</code>, <code class="language-plaintext highlighter-rouge">trigger</code>, <code class="language-plaintext highlighter-rouge">partitionBy</code> and stream-stream joins</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21266">SPARK-21266</a>]: SparkR UDF with DDL-formatted schema support</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-20726">SPARK-20726</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22924">SPARK-22924</a>][<a href="https://issues.apache.org/jira/browse/SPARK-22843">SPARK-22843</a>] Several new Dataframe API Wrappers</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-15767">SPARK-15767</a>][<a href="https://issues.apache.org/jira/browse/SPARK-21622">SPARK-21622</a>][<a href="https://issues.apache.org/jira/browse/SPARK-20917">SPARK-20917</a>][<a href="https://issues.apache.org/jira/browse/SPARK-20307">SPARK-20307</a>][<a href="https://issues.apache.org/jira/browse/SPARK-20906">SPARK-20906</a>] Several new SparkML API Wrappers</li>
@@ -359,7 +359,7 @@
 <ul>
   <li><strong>Optimizations</strong>
     <ul>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-5484">SPARK-5484</a>] Pregel now checkpoints periodically to avoid <code class="highlighter-rouge">StackOverflowErrors</code></li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-5484">SPARK-5484</a>] Pregel now checkpoints periodically to avoid <code class="language-plaintext highlighter-rouge">StackOverflowErrors</code></li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21491">SPARK-21491</a>] Small performance improvement in several places</li>
     </ul>
   </li>
@@ -372,12 +372,12 @@
 <ul>
   <li><strong>Python</strong>
     <ul>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23122">SPARK-23122</a>] Deprecate <code class="highlighter-rouge">register*</code> for UDFs in <code class="highlighter-rouge">SQLContext</code> and <code class="highlighter-rouge">Catalog</code> in PySpark</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23122">SPARK-23122</a>] Deprecate <code class="language-plaintext highlighter-rouge">register*</code> for UDFs in <code class="language-plaintext highlighter-rouge">SQLContext</code> and <code class="language-plaintext highlighter-rouge">Catalog</code> in PySpark</li>
     </ul>
   </li>
   <li><strong>MLlib</strong>
     <ul>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-13030">SPARK-13030</a>] <code class="highlighter-rouge">OneHotEncoder</code> has been deprecated and will be removed in 3.0. It has been replaced by the new <code class="highlighter-rouge">OneHotEncoderEstimator</code>. Note that <code class="highlighter-rouge">OneHotEncoderEstimator</code> will be renamed to <code class="highlighter-rouge">OneHotEncoder</code> in 3.0 (but <code class="highlighter-rouge">OneHotEncoderEstimat [...]
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-13030">SPARK-13030</a>] <code class="language-plaintext highlighter-rouge">OneHotEncoder</code> has been deprecated and will be removed in 3.0. It has been replaced by the new <code class="language-plaintext highlighter-rouge">OneHotEncoderEstimator</code>. Note that <code class="language-plaintext highlighter-rouge">OneHotEncoderEstimator</code> will be renamed to <code class="language-plaintext highlighter-rouge">OneHotEnc [...]
     </ul>
   </li>
 </ul>
@@ -387,33 +387,33 @@
 <ul>
   <li><strong>SparkSQL</strong>
     <ul>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22036">SPARK-22036</a>]: By default arithmetic operations between decimals return a rounded value if an exact representation is not possible (instead of returning <code class="highlighter-rouge">NULL</code> in the prior versions)</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22937">SPARK-22937</a>]: When all inputs are binary, SQL <code class="highlighter-rouge">elt()</code> returns an output as binary. Otherwise, it returns as a string. In the prior versions, it always returns as a string despite of input types.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22036">SPARK-22036</a>]: By default arithmetic operations between decimals return a rounded value if an exact representation is not possible (instead of returning <code class="language-plaintext highlighter-rouge">NULL</code> in the prior versions)</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22937">SPARK-22937</a>]: When all inputs are binary, SQL <code class="language-plaintext highlighter-rouge">elt()</code> returns an output as binary. Otherwise, it returns as a string. In the prior versions, it always returns as a string despite of input types.</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22895">SPARK-22895</a>]: The Join/Filter&#8217;s deterministic predicates that are after the first non-deterministic predicates are also pushed down/through the child operators, if possible. In the prior versions, these filters were not eligible for predicate pushdown.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22771">SPARK-22771</a>]: When all inputs are binary, <code class="highlighter-rouge">functions.concat()</code> returns an output as binary. Otherwise, it returns as a string. In the prior versions, it always returns as a string despite of input types.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22771">SPARK-22771</a>]: When all inputs are binary, <code class="language-plaintext highlighter-rouge">functions.concat()</code> returns an output as binary. Otherwise, it returns as a string. In the prior versions, it always returns as a string despite of input types.</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22489">SPARK-22489</a>]: When either of the join sides is broadcastable, we prefer to broadcasting the table that is explicitly specified in a broadcast hint.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22165">SPARK-22165</a>]: Partition column inference previously found incorrect common type for different inferred types, for example, previously it ended up with <code class="highlighter-rouge">double</code> type as the common type for <code class="highlighter-rouge">double</code> type and <code class="highlighter-rouge">date</code> type. Now it finds the correct common type for such conflicts. For details, see the <a href=" [...]
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22100">SPARK-22100</a>]: The <code class="highlighter-rouge">percentile_approx</code> function previously accepted <code class="highlighter-rouge">numeric</code> type input and outputted <code class="highlighter-rouge">double</code> type results. Now it supports <code class="highlighter-rouge">date</code> type, <code class="highlighter-rouge">timestamp</code> type and <code class="highlighter-rouge">numeric</code> types as i [...]
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21610">SPARK-21610</a>]: the queries from raw JSON/CSV files are disallowed when the referenced columns only include the internal corrupt record column (named <code class="highlighter-rouge">_corrupt_record</code> by default). Instead, you can cache or save the parsed results and then send the same query.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22165">SPARK-22165</a>]: Partition column inference previously found incorrect common type for different inferred types, for example, previously it ended up with <code class="language-plaintext highlighter-rouge">double</code> type as the common type for <code class="language-plaintext highlighter-rouge">double</code> type and <code class="language-plaintext highlighter-rouge">date</code> type. Now it finds the correct commo [...]
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22100">SPARK-22100</a>]: The <code class="language-plaintext highlighter-rouge">percentile_approx</code> function previously accepted <code class="language-plaintext highlighter-rouge">numeric</code> type input and outputted <code class="language-plaintext highlighter-rouge">double</code> type results. Now it supports <code class="language-plaintext highlighter-rouge">date</code> type, <code class="language-plaintext highlig [...]
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21610">SPARK-21610</a>]: the queries from raw JSON/CSV files are disallowed when the referenced columns only include the internal corrupt record column (named <code class="language-plaintext highlighter-rouge">_corrupt_record</code> by default). Instead, you can cache or save the parsed results and then send the same query.</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23421">SPARK-23421</a>]: Since Spark 2.2.1 and 2.3.0, the schema is always inferred at runtime when the data source tables have the columns that exist in both partition schema and data schema. The inferred schema does not have the partitioned columns. When reading the table, Spark respects the partition values of these overlapping columns instead of the values stored in the data source files. In 2.2.0 and 2.1.x release, the  [...]
     </ul>
   </li>
   <li><strong>PySpark</strong>
     <ul>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19732">SPARK-19732</a>]: <code class="highlighter-rouge">na.fill()</code> or <code class="highlighter-rouge">fillna</code> also accepts boolean and replaces nulls with booleans. In prior Spark versions, PySpark just ignores it and returns the original Dataset/DataFrame.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22395">SPARK-22395</a>]: Pandas <code class="highlighter-rouge">0.19.2</code> or upper is required for using Pandas related functionalities, such as <code class="highlighter-rouge">toPandas</code>, <code class="highlighter-rouge">createDataFrame</code> from Pandas DataFrame, etc.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-19732">SPARK-19732</a>]: <code class="language-plaintext highlighter-rouge">na.fill()</code> or <code class="language-plaintext highlighter-rouge">fillna</code> also accepts boolean and replaces nulls with booleans. In prior Spark versions, PySpark just ignores it and returns the original Dataset/DataFrame.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22395">SPARK-22395</a>]: Pandas <code class="language-plaintext highlighter-rouge">0.19.2</code> or upper is required for using Pandas related functionalities, such as <code class="language-plaintext highlighter-rouge">toPandas</code>, <code class="language-plaintext highlighter-rouge">createDataFrame</code> from Pandas DataFrame, etc.</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-22395">SPARK-22395</a>]: The behavior of timestamp values for Pandas related functionalities was changed to respect session timezone, which is ignored in the prior versions.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23328">SPARK-23328</a>]: <code class="highlighter-rouge">df.replace</code> does not allow to omit <code class="highlighter-rouge">value</code> when <code class="highlighter-rouge">to_replace</code> is not a dictionary. Previously, <code class="highlighter-rouge">value</code> could be omitted in the other cases and had <code class="highlighter-rouge">None</code> by default, which is counter-intuitive and error prone.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23328">SPARK-23328</a>]: <code class="language-plaintext highlighter-rouge">df.replace</code> does not allow to omit <code class="language-plaintext highlighter-rouge">value</code> when <code class="language-plaintext highlighter-rouge">to_replace</code> is not a dictionary. Previously, <code class="language-plaintext highlighter-rouge">value</code> could be omitted in the other cases and had <code class="language-plaintext  [...]
     </ul>
   </li>
   <li><strong>MLlib</strong>
     <ul>
-      <li><strong>Breaking API Changes</strong>: The class and trait hierarchy for logistic regression model summaries was changed to be cleaner and better accommodate the addition of the multi-class summary. This is a breaking change for user code that casts a <code class="highlighter-rouge">LogisticRegressionTrainingSummary</code> to a <code class="highlighter-rouge">BinaryLogisticRegressionTrainingSummary</code>. Users should instead use the <code class="highlighter-rouge">model.binar [...]
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21806">SPARK-21806</a>]: <code class="highlighter-rouge">BinaryClassificationMetrics.pr()</code>: first point (0.0, 1.0) is misleading and has been replaced by (0.0, p) where precision p matches the lowest recall point.</li>
+      <li><strong>Breaking API Changes</strong>: The class and trait hierarchy for logistic regression model summaries was changed to be cleaner and better accommodate the addition of the multi-class summary. This is a breaking change for user code that casts a <code class="language-plaintext highlighter-rouge">LogisticRegressionTrainingSummary</code> to a <code class="language-plaintext highlighter-rouge">BinaryLogisticRegressionTrainingSummary</code>. Users should instead use the <code [...]
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21806">SPARK-21806</a>]: <code class="language-plaintext highlighter-rouge">BinaryClassificationMetrics.pr()</code>: first point (0.0, 1.0) is misleading and has been replaced by (0.0, p) where precision p matches the lowest recall point.</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-16957">SPARK-16957</a>]: Decision trees now use weighted midpoints when choosing split values.  This may change results from model training.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-14657">SPARK-14657</a>]: <code class="highlighter-rouge">RFormula</code> without an intercept now outputs the reference category when encoding string terms, in order to match native R behavior.  This may change results from model training.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21027">SPARK-21027</a>]: The default parallelism used in <code class="highlighter-rouge">OneVsRest</code> is now set to 1 (i.e. serial). In 2.2 and earlier versions, the level of parallelism was set to the default threadpool size in Scala.  This may change performance.</li>
-      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21523">SPARK-21523</a>]: Upgraded Breeze to <code class="highlighter-rouge">0.13.2</code>.  This included an important bug fix in strong Wolfe line search for L-BFGS.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-14657">SPARK-14657</a>]: <code class="language-plaintext highlighter-rouge">RFormula</code> without an intercept now outputs the reference category when encoding string terms, in order to match native R behavior.  This may change results from model training.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21027">SPARK-21027</a>]: The default parallelism used in <code class="language-plaintext highlighter-rouge">OneVsRest</code> is now set to 1 (i.e. serial). In 2.2 and earlier versions, the level of parallelism was set to the default threadpool size in Scala.  This may change performance.</li>
+      <li>[<a href="https://issues.apache.org/jira/browse/SPARK-21523">SPARK-21523</a>]: Upgraded Breeze to <code class="language-plaintext highlighter-rouge">0.13.2</code>.  This included an important bug fix in strong Wolfe line search for L-BFGS.</li>
       <li>[<a href="https://issues.apache.org/jira/browse/SPARK-15526">SPARK-15526</a>]: The JPMML dependency is now shaded.</li>
       <li>Also see the “Bug fixes” section for behavior changes resulting from fixing bugs.</li>
     </ul>
@@ -423,7 +423,7 @@
 <h3 id="known-issues">Known Issues</h3>
 
 <ul>
-  <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23523">SPARK-23523</a>][SQL] Incorrect result caused by the rule <code class="highlighter-rouge">OptimizeMetadataOnlyQuery</code></li>
+  <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23523">SPARK-23523</a>][SQL] Incorrect result caused by the rule <code class="language-plaintext highlighter-rouge">OptimizeMetadataOnlyQuery</code></li>
   <li>[<a href="https://issues.apache.org/jira/browse/SPARK-23406">SPARK-23406</a>] Bugs in stream-stream self-joins</li>
 </ul>
 
diff --git a/site/releases/spark-release-2-3-1.html b/site/releases/spark-release-2-3-1.html
index 37ad7a9..e6e0d2d 100644
--- a/site/releases/spark-release-2-3-1.html
+++ b/site/releases/spark-release-2-3-1.html
@@ -212,7 +212,7 @@
 <ul>
   <li><strong>SQL</strong>
     <ul>
-      <li>SPARK-23173: all fields from schemas provided to the <code class="highlighter-rouge">from_json()</code> are now forced to be nullable. The original behavior can be restored by setting <code class="highlighter-rouge">spark.sql.fromJsonForceNullableSchema=false</code>.</li>
+      <li>SPARK-23173: all fields from schemas provided to <code class="language-plaintext highlighter-rouge">from_json()</code> are now forced to be nullable. The original behavior can be restored by setting <code class="language-plaintext highlighter-rouge">spark.sql.fromJsonForceNullableSchema=false</code>.</li>
     </ul>
   </li>
 </ul>
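
A hedged Scala sketch of the SPARK-23173 change above; jsonDf, with a string column named value holding JSON, is hypothetical, and an active SparkSession named spark is assumed:

    import org.apache.spark.sql.functions.{col, from_json}
    import org.apache.spark.sql.types.{IntegerType, StructType}

    // Even though the field is declared non-nullable, 2.3.1 forces it to nullable.
    val schema = new StructType().add("a", IntegerType, nullable = false)
    val parsed = jsonDf.select(from_json(col("value"), schema).as("rec"))

    // To restore the previous behavior, as noted above:
    // spark.conf.set("spark.sql.fromJsonForceNullableSchema", "false")
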
diff --git a/site/releases/spark-release-2-4-0.html b/site/releases/spark-release-2-4-0.html
index 365ec11..46a1212 100644
--- a/site/releases/spark-release-2-4-0.html
+++ b/site/releases/spark-release-2-4-0.html
@@ -428,7 +428,7 @@
   <li>[<a href="https://issues.apache.org/jira/browse/SPARK-25271">SPARK-25271</a>] CTAS with Hive parquet tables should leverage native parquet source</li>
   <li>[<a href="https://issues.apache.org/jira/browse/SPARK-24935">SPARK-24935</a>] Problem with Executing Hive UDAF&#8217;s from Spark 2.2 Onwards</li>
   <li>[<a href="https://issues.apache.org/jira/browse/SPARK-25879">SPARK-25879</a>] Schema pruning fails when a nested field and top level field are selected</li>
-  <li>[<a href="https://issues.apache.org/jira/browse/SPARK-25906">SPARK-25906</a>] spark-shell cannot handle <code class="highlighter-rouge">-i</code> option correctly</li>
+  <li>[<a href="https://issues.apache.org/jira/browse/SPARK-25906">SPARK-25906</a>] spark-shell cannot handle <code class="language-plaintext highlighter-rouge">-i</code> option correctly</li>
   <li>[<a href="https://issues.apache.org/jira/browse/SPARK-25921">SPARK-25921</a>] Python worker reuse causes Barrier tasks to run without BarrierTaskContext</li>
   <li>[<a href="https://issues.apache.org/jira/browse/SPARK-25918">SPARK-25918</a>] LOAD DATA LOCAL INPATH should handle a relative path</li>
 </ul>
diff --git a/site/releases/spark-release-2-4-1.html b/site/releases/spark-release-2-4-1.html
index 2bea27e..48db4b2 100644
--- a/site/releases/spark-release-2-4-1.html
+++ b/site/releases/spark-release-2-4-1.html
@@ -240,7 +240,7 @@
 <ul>
   <li><strong>CORE</strong>
     <ul>
-      <li><a href="https://issues.apache.org/jira/browse/SPARK-27419">[SPARK-27419]</a>: if <code class="highlighter-rouge">spark.executor.heartbeatInterval</code> is less than one second, it will always be set to zero resulting timeout.</li>
+      <li><a href="https://issues.apache.org/jira/browse/SPARK-27419">[SPARK-27419]</a>: if <code class="language-plaintext highlighter-rouge">spark.executor.heartbeatInterval</code> is less than one second, it will always be set to zero resulting timeout.</li>
     </ul>
   </li>
 </ul>
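
Given the SPARK-27419 note above, a hedged sketch of configuring the interval with an explicit unit so it cannot be truncated to zero; the application name is hypothetical:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("heartbeat-sketch")
      .config("spark.executor.heartbeatInterval", "10s")   // at least one second, with an explicit unit
      .getOrCreate()
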
diff --git a/site/releases/spark-release-2-4-2.html b/site/releases/spark-release-2-4-2.html
index b97d844..3c4d6f6 100644
--- a/site/releases/spark-release-2-4-2.html
+++ b/site/releases/spark-release-2-4-2.html
@@ -211,7 +211,7 @@ Spark is still cross-published for 2.11 and 2.12 in Maven Central, and can be bu
 
 <h3 id="notable-changes">Notable changes</h3>
 <ul>
-  <li><a href="https://issues.apache.org/jira/browse/SPARK-27419">[SPARK-27419]</a>: When setting <code class="highlighter-rouge">spark.executor.heartbeatInterval</code> to a value less than 1 seconds, it will always fail because the value will be converted to 0 and the heartbeat will always timeout and finally kill the executor.</li>
+  <li><a href="https://issues.apache.org/jira/browse/SPARK-27419">[SPARK-27419]</a>: When setting <code class="language-plaintext highlighter-rouge">spark.executor.heartbeatInterval</code> to a value less than 1 seconds, it will always fail because the value will be converted to 0 and the heartbeat will always timeout and finally kill the executor.</li>
   <li>Revert <a href="https://issues.apache.org/jira/browse/SPARK-25250">[SPARK-25250]</a>: It may cause the job to hang forever, and is reverted in 2.4.2.</li>
 </ul>
 
diff --git a/site/releases/spark-release-3-0-0.html b/site/releases/spark-release-3-0-0.html
index 36fde12..cd37eb9 100644
--- a/site/releases/spark-release-3-0-0.html
+++ b/site/releases/spark-release-3-0-0.html
@@ -203,7 +203,7 @@
     <h2>Spark Release 3.0.0</h2>
 
 
-<p>Apache Spark 3.0.0 is the first release of the 3.x line. The vote passed on the 10th of June, 2020. This release is based on git tag <code class="highlighter-rouge">v3.0.0</code> which includes all commits up to June 10. Apache Spark 3.0 builds on many of the innovations from Spark 2.x, bringing new ideas as well as continuing long-term projects that have been in development. With the help of tremendous contributions from the open-source community, this release resolved more than 3400 [...]
+<p>Apache Spark 3.0.0 is the first release of the 3.x line. The vote passed on the 10th of June, 2020. This release is based on git tag <code class="language-plaintext highlighter-rouge">v3.0.0</code> which includes all commits up to June 10. Apache Spark 3.0 builds on many of the innovations from Spark 2.x, bringing new ideas as well as continuing long-term projects that have been in development. With the help of tremendous contributions from the open-source community, this release reso [...]
 
 <p>This year is Spark&#8217;s 10-year anniversary as an open source project. Since its initial release in 2010, Spark has grown to be one of the most active open source projects. Nowadays, Spark is the de facto unified engine for big data processing, data science, machine learning and data analytics workloads.</p>
 
@@ -281,7 +281,7 @@
   <li>Build Spark’s own datetime pattern definition (<a href="https://issues.apache.org/jira/browse/SPARK-31408">SPARK-31408</a>)</li>
   <li>Introduce ANSI store assignment policy for table insertion (<a href="https://issues.apache.org/jira/browse/SPARK-28495">SPARK-28495</a>)</li>
   <li>Follow ANSI store assignment rule in table insertion by default (<a href="https://issues.apache.org/jira/browse/SPARK-28885">SPARK-28885</a>)</li>
-  <li>Add a SQLConf <code class="highlighter-rouge">spark.sql.ansi.enabled</code> (<a href="https://issues.apache.org/jira/browse/SPARK-28989">SPARK-28989</a>)</li>
+  <li>Add a SQLConf <code class="language-plaintext highlighter-rouge">spark.sql.ansi.enabled</code> (<a href="https://issues.apache.org/jira/browse/SPARK-28989">SPARK-28989</a>)</li>
   <li>Support ANSI SQL filter clause for aggregate expression (<a href="https://issues.apache.org/jira/browse/SPARK-27986">SPARK-27986</a>)</li>
   <li>Support ANSI SQL OVERLAY function (<a href="https://issues.apache.org/jira/browse/SPARK-28077">SPARK-28077</a>)</li>
   <li>Support ANSI nested bracketed comments (<a href="https://issues.apache.org/jira/browse/SPARK-28880">SPARK-28880</a>)</li>
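
A hedged sketch of opting in to the SQLConf added by SPARK-28989 (see the item above); an active SparkSession named spark is assumed, and the query merely illustrates the stricter behavior:

    // Enable ANSI-compliant behavior in Spark 3.0.
    spark.conf.set("spark.sql.ansi.enabled", "true")

    // Under ANSI mode an invalid cast raises an error instead of silently returning NULL.
    spark.sql("SELECT CAST('not-a-number' AS INT)").show()
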
@@ -327,7 +327,7 @@
 <ul>
   <li>Support High Performance S3A committers (<a href="https://issues.apache.org/jira/browse/SPARK-23977">SPARK-23977</a>)</li>
   <li>Column pruning through nondeterministic expressions (<a href="https://issues.apache.org/jira/browse/SPARK-29768">SPARK-29768</a>)</li>
-  <li>Support <code class="highlighter-rouge">spark.sql.statistics.fallBackToHdfs</code> in data source tables (<a href="https://issues.apache.org/jira/browse/SPARK-25474">SPARK-25474</a>)</li>
+  <li>Support <code class="language-plaintext highlighter-rouge">spark.sql.statistics.fallBackToHdfs</code> in data source tables (<a href="https://issues.apache.org/jira/browse/SPARK-25474">SPARK-25474</a>)</li>
   <li>Allow partition pruning with subquery filters on file source (<a href="https://issues.apache.org/jira/browse/SPARK-26893">SPARK-26893</a>)</li>
   <li>Avoid pushdown of subqueries in data source filters (<a href="https://issues.apache.org/jira/browse/SPARK-25482">SPARK-25482</a>)</li>
   <li>Recursive data loading from file sources (<a href="https://issues.apache.org/jira/browse/SPARK-27990">SPARK-27990</a>)</li>
@@ -506,8 +506,8 @@
 <p>A few other behavior changes that are missed in the migration guide:</p>
 
 <ul>
-  <li>In Spark 3.0, the deprecated class <code class="highlighter-rouge">org.apache.spark.sql.streaming.ProcessingTime</code> has been removed. Use <code class="highlighter-rouge">org.apache.spark.sql.streaming.Trigger.ProcessingTime</code> instead. Likewise, <code class="highlighter-rouge">org.apache.spark.sql.execution.streaming.continuous.ContinuousTrigger</code> has been removed in favor of <code class="highlighter-rouge">Trigger.Continuous</code>, and <code class="highlighter-rouge" [...]
-  <li>Due to the upgrade of Scala 2.12, <code class="highlighter-rouge">DataStreamWriter.foreachBatch</code> is not source compatible for Scala program. You need to update your Scala source code to disambiguate between Scala function and  Java lambda. (<a href="https://issues.apache.org/jira/browse/SPARK-26132">SPARK-26132</a>)</li>
+  <li>In Spark 3.0, the deprecated class <code class="language-plaintext highlighter-rouge">org.apache.spark.sql.streaming.ProcessingTime</code> has been removed. Use <code class="language-plaintext highlighter-rouge">org.apache.spark.sql.streaming.Trigger.ProcessingTime</code> instead. Likewise, <code class="language-plaintext highlighter-rouge">org.apache.spark.sql.execution.streaming.continuous.ContinuousTrigger</code> has been removed in favor of <code class="language-plaintext highl [...]
+  <li>Due to the upgrade to Scala 2.12, <code class="language-plaintext highlighter-rouge">DataStreamWriter.foreachBatch</code> is not source compatible for Scala programs. You need to update your Scala source code to disambiguate between the Scala function and the Java lambda. (<a href="https://issues.apache.org/jira/browse/SPARK-26132">SPARK-26132</a>)</li>
 </ul>
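
For the first item above, a hedged sketch of the replacement trigger API; eventsDf is a hypothetical streaming DataFrame:

    import org.apache.spark.sql.streaming.Trigger

    val query = eventsDf.writeStream
      .format("console")
      .trigger(Trigger.ProcessingTime("10 seconds"))   // replaces the removed ProcessingTime class
      .start()
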
 
 <p><em>Programming guides: <a href="https://spark.apache.org/docs/3.0.0/rdd-programming-guide.html">Spark RDD Programming Guide</a> and <a href="https://spark.apache.org/docs/3.0.0/sql-programming-guide.html">Spark SQL, DataFrames and Datasets Guide</a> and <a href="https://spark.apache.org/docs/3.0.0/structured-streaming-programming-guide.html">Structured Streaming Programming Guide</a>.</em></p>
@@ -577,9 +577,9 @@
 <h3 id="known-issues">Known Issues</h3>
 
 <ul>
-  <li>Streaming queries with <code class="highlighter-rouge">dropDuplicates</code> operator may not be able to restart with the checkpoint written by Spark 2.x. This will be fixed in Spark 3.0.1. (<a href="https://issues.apache.org/jira/browse/SPARK-31990">SPARK-31990</a>)</li>
+  <li>Streaming queries with <code class="language-plaintext highlighter-rouge">dropDuplicates</code> operator may not be able to restart with the checkpoint written by Spark 2.x. This will be fixed in Spark 3.0.1. (<a href="https://issues.apache.org/jira/browse/SPARK-31990">SPARK-31990</a>)</li>
   <li>In Web UI, the job list page may hang for more than 40 seconds. This will be fixed in Spark 3.0.1. (<a href="https://issues.apache.org/jira/browse/SPARK-31967">SPARK-31967</a>)</li>
-  <li>Set <code class="highlighter-rouge">io.netty.tryReflectionSetAccessible</code> for Arrow on JDK9+ (<a href="https://issues.apache.org/jira/browse/SPARK-29923">SPARK-29923</a>)</li>
+  <li>Set <code class="language-plaintext highlighter-rouge">io.netty.tryReflectionSetAccessible</code> for Arrow on JDK9+ (<a href="https://issues.apache.org/jira/browse/SPARK-29923">SPARK-29923</a>)</li>
    <li>With the AWS SDK upgrade to 1.11.655, we strongly encourage users of the S3N file system (the open-source NativeS3FileSystem based on the jets3t library) on Hadoop 2.7.3 to upgrade to AWS Signature V4 and set the bucket endpoint, or to migrate to S3A (“s3a://” prefix) - the jets3t library uses AWS v2 by default and s3.amazonaws.com as an endpoint. Otherwise, a 403 Forbidden error may be thrown in the following cases:
     <ul>
       <li>If a user accesses an S3 path that contains “+” characters and uses the legacy S3N file system, e.g. s3n://bucket/path/+file.</li>
@@ -588,7 +588,7 @@
 
     <p>Note that if you use S3AFileSystem, e.g. (“s3a://bucket/path”) to access S3 in S3Select or SQS connectors, then everything will work as expected. (<a href="https://issues.apache.org/jira/browse/SPARK-30968">SPARK-30968</a>)</p>
   </li>
-  <li>Parsing day of year using pattern letter &#8216;D&#8217; returns the wrong result if the year field is missing. This can happen in SQL functions like <code class="highlighter-rouge">to_timestamp</code> which parses datetime string to datetime values using a pattern string. This will be fixed in Spark 3.0.1. (<a href="https://issues.apache.org/jira/browse/SPARK-31939">SPARK-31939</a>)</li>
+  <li>Parsing day of year using pattern letter &#8216;D&#8217; returns the wrong result if the year field is missing. This can happen in SQL functions such as <code class="language-plaintext highlighter-rouge">to_timestamp</code>, which parse datetime strings to datetime values using a pattern string. This will be fixed in Spark 3.0.1. (<a href="https://issues.apache.org/jira/browse/SPARK-31939">SPARK-31939</a>)</li>
   <li>Join/Window/Aggregate inside subqueries may lead to wrong results if the keys have values -0.0 and 0.0. This will be fixed in Spark 3.0.1. (<a href="https://issues.apache.org/jira/browse/SPARK-31958">SPARK-31958</a>)</li>
   <li>A window query may fail with ambiguous self-join error unexpectedly. This will be fixed in Spark 3.0.1. (<a href="https://issues.apache.org/jira/browse/SPARK-31956">SPARK-31956</a>)</li>
 </ul>
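A minimal Scala sketch for two of the items above, with the application name, bucket, and object key as placeholders. The Netty flag must reach the JVMs as a system property; only the executor side can be set programmatically here, while the driver-side option has to be supplied at launch time (for example via spark-submit --conf spark.driver.extraJavaOptions or spark-defaults.conf):

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder()
      .appName("jdk11-arrow-sketch")  // placeholder application name
      // Arrow on JDK 9+ needs io.netty.tryReflectionSetAccessible set on the executor JVMs as well.
      .config("spark.executor.extraJavaOptions", "-Dio.netty.tryReflectionSetAccessible=true")
      .getOrCreate()

    // Prefer the S3A connector ("s3a://"), which uses AWS Signature V4, over the legacy S3N file system.
    val df = spark.read.text("s3a://my-bucket/path/+file")  // bucket and object key are placeholders
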
diff --git a/site/releases/spark-release-3-0-2.html b/site/releases/spark-release-3-0-2.html
index d3b50f4..cff944d 100644
--- a/site/releases/spark-release-3-0-2.html
+++ b/site/releases/spark-release-3-0-2.html
@@ -224,7 +224,7 @@
   <li><a href="https://issues.apache.org/jira/browse/SPARK-33591">[SPARK-33591]</a>: NULL is recognized as the &#8220;null&#8221; string in partition specs</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-33593">[SPARK-33593]</a>: Vector reader got incorrect data with binary partition value</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-33726">[SPARK-33726]</a>: Duplicate field names causes wrong answers during aggregation</li>
-  <li><a href="https://issues.apache.org/jira/browse/SPARK-33819">[SPARK-33819]</a>: SingleFileEventLogFileReader/RollingEventLogFilesFileReader should be <code class="highlighter-rouge"><span class="k">package</span> <span class="n">private</span></code></li>
+  <li><a href="https://issues.apache.org/jira/browse/SPARK-33819">[SPARK-33819]</a>: SingleFileEventLogFileReader/RollingEventLogFilesFileReader should be <code class="language-plaintext highlighter-rouge">package private</code></li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-33950">[SPARK-33950]</a>: ALTER TABLE .. DROP PARTITION doesn&#8217;t refresh cache</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-34011">[SPARK-34011]</a>: ALTER TABLE .. RENAME TO PARTITION doesn&#8217;t refresh cache</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-34027">[SPARK-34027]</a>: ALTER TABLE .. RECOVER PARTITIONS doesn&#8217;t refresh cache</li>
diff --git a/site/screencasts/index.html b/site/screencasts/index.html
index 4bc1c6c..00f54c9 100644
--- a/site/screencasts/index.html
+++ b/site/screencasts/index.html
@@ -208,7 +208,6 @@
       <div class="entry-meta">August 26, 2013</div>
     </header>
     <div class="entry-content"><p>In this Spark screencast, we create a standalone Apache Spark job in Scala. In the job, we create a spark context and read a file into an RDD of strings; then apply transformations and actions to the RDD and print out the results.</p>
-
 </div>
   </article>
 
@@ -218,7 +217,6 @@
       <div class="entry-meta">April 16, 2013</div>
     </header>
     <div class="entry-content"><p>In this third Spark screencast, we demonstrate more advanced use of RDD actions and transformations, as well as caching RDDs in memory.</p>
-
 </div>
   </article>
 
@@ -228,7 +226,6 @@
       <div class="entry-meta">April 11, 2013</div>
     </header>
     <div class="entry-content"><p>This is our 2nd Spark screencast. In it, we take a tour of the documentation available for Spark users online.</p>
-
 </div>
   </article>
 
@@ -242,7 +239,6 @@
   <li>Download and build Spark on a local machine (running OS X, but should be a similar process for Linux or Unix).</li>
   <li>Introduce the API using the Spark interactive shell to explore a file.</li>
 </ol>
-
 </div>
   </article>
 
diff --git a/site/security.html b/site/security.html
index a68a011..fa21cda 100644
--- a/site/security.html
+++ b/site/security.html
@@ -206,7 +206,7 @@
 for reporting vulnerabilities. Note that vulnerabilities should not be publicly disclosed until the project has
 responded.</p>
 
-<p>To report a possible security vulnerability, please email <code class="highlighter-rouge">security@spark.apache.org</code>. This is a
+<p>To report a possible security vulnerability, please email <code class="language-plaintext highlighter-rouge">security@spark.apache.org</code>. This is a
 non-public list that will reach the Apache Security team, as well as the Spark PMC.</p>
 
 <h2>Known Security Issues</h2>
@@ -226,7 +226,7 @@ non-public list that will reach the Apache Security team, as well as the Spark P
 <p>Description:</p>
 
 <p>In Apache Spark 2.4.5 and earlier, a standalone resource manager&#8217;s master may
-be configured to require authentication (<code class="highlighter-rouge">spark.authenticate</code>) via a
+be configured to require authentication (<code class="language-plaintext highlighter-rouge">spark.authenticate</code>) via a
 shared secret. When enabled, however, a specially-crafted RPC to the
 master can succeed in starting an application&#8217;s resources on the Spark
 cluster, even without the shared key. This can be leveraged to execute
@@ -262,7 +262,7 @@ shell commands on the host machine.</p>
 
 <p>Description:</p>
 
-<p>Prior to Spark 2.3.3, in certain situations Spark would write user data to local disk unencrypted, even if <code class="highlighter-rouge">spark.io.encryption.enabled=true</code>.  This includes cached blocks that are fetched to disk (controlled by <code class="highlighter-rouge">spark.maxRemoteBlockSizeFetchToMem</code>); in SparkR, using parallelize; in Pyspark, using broadcast and parallelize; and use of python udfs.</p>
+<p>Prior to Spark 2.3.3, in certain situations Spark would write user data to local disk unencrypted, even if <code class="language-plaintext highlighter-rouge">spark.io.encryption.enabled=true</code>.  This includes cached blocks that are fetched to disk (controlled by <code class="language-plaintext highlighter-rouge">spark.maxRemoteBlockSizeFetchToMem</code>); in SparkR, using parallelize; in Pyspark, using broadcast and parallelize; and use of python udfs.</p>
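For reference, the two properties mentioned above as they might be set programmatically; the fetch-to-disk threshold value is only an illustrative assumption:

    import org.apache.spark.SparkConf

    val conf = new SparkConf()
      .set("spark.io.encryption.enabled", "true")         // encrypt data Spark writes to local disk
      .set("spark.maxRemoteBlockSizeFetchToMem", "200m")  // remote blocks above this size are fetched to disk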
 
 <p>Mitigation:</p>
 
@@ -334,7 +334,7 @@ than a worker, the execution of code on the master is nevertheless unexpected.</
 <p>Mitigation:</p>
 
 <p>Enable authentication on any Spark standalone cluster that is not otherwise secured
-from unwanted access, for example by network-level restrictions. Use <code class="highlighter-rouge">spark.authenticate</code>
+from unwanted access, for example by network-level restrictions. Use <code class="language-plaintext highlighter-rouge">spark.authenticate</code>
 and related security properties described at https://spark.apache.org/docs/latest/security.html</p>
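A minimal sketch of the mitigation above, assuming the shared secret is supplied through a hypothetical SPARK_AUTH_SECRET environment variable:

    import org.apache.spark.SparkConf

    // The same secret must be configured on the master, the workers, and every submitting application.
    val conf = new SparkConf()
      .set("spark.authenticate", "true")
      .set("spark.authenticate.secret", sys.env.getOrElse("SPARK_AUTH_SECRET", "change-me"))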
 
 <p>Credit:</p>
@@ -395,22 +395,22 @@ source code.</p>
 <p>Description:</p>
 
 <p>From version 1.3.0 onward, Spark&#8217;s standalone master exposes a REST API for job submission, in addition 
-to the submission mechanism used by <code class="highlighter-rouge">spark-submit</code>. In standalone, the config property 
-<code class="highlighter-rouge">spark.authenticate.secret</code> establishes a shared secret for authenticating requests to submit jobs via 
-<code class="highlighter-rouge">spark-submit</code>. However, the REST API does not use this or any other authentication mechanism, and this is 
+to the submission mechanism used by <code class="language-plaintext highlighter-rouge">spark-submit</code>. In standalone, the config property 
+<code class="language-plaintext highlighter-rouge">spark.authenticate.secret</code> establishes a shared secret for authenticating requests to submit jobs via 
+<code class="language-plaintext highlighter-rouge">spark-submit</code>. However, the REST API does not use this or any other authentication mechanism, and this is 
 not adequately documented. In this case, a user would be able to run a driver program without authenticating, 
 but not launch executors, using the REST API. This REST API is also used by Mesos, when set up to run in 
-cluster mode (i.e., when also running <code class="highlighter-rouge">MesosClusterDispatcher</code>), for job submission. Future versions of Spark 
-will improve documentation on these points, and prohibit setting <code class="highlighter-rouge">spark.authenticate.secret</code> when running 
+cluster mode (i.e., when also running <code class="language-plaintext highlighter-rouge">MesosClusterDispatcher</code>), for job submission. Future versions of Spark 
+will improve documentation on these points, and prohibit setting <code class="language-plaintext highlighter-rouge">spark.authenticate.secret</code> when running 
 the REST APIs, to make this clear. Future versions will also disable the REST API by default in the 
-standalone master by changing the default value of <code class="highlighter-rouge">spark.master.rest.enabled</code> to <code class="highlighter-rouge">false</code>.</p>
+standalone master by changing the default value of <code class="language-plaintext highlighter-rouge">spark.master.rest.enabled</code> to <code class="language-plaintext highlighter-rouge">false</code>.</p>
 
 <p>Mitigation:</p>
 
-<p>For standalone masters, disable the REST API by setting <code class="highlighter-rouge">spark.master.rest.enabled</code> to <code class="highlighter-rouge">false</code> if it is unused, 
+<p>For standalone masters, disable the REST API by setting <code class="language-plaintext highlighter-rouge">spark.master.rest.enabled</code> to <code class="language-plaintext highlighter-rouge">false</code> if it is unused, 
 and/or ensure that all network access to the REST API (port 6066 by default) is restricted to hosts that are 
-trusted to submit jobs. Mesos users can stop the <code class="highlighter-rouge">MesosClusterDispatcher</code>, though that will prevent them 
-from running jobs in cluster mode. Alternatively, they can ensure access to the <code class="highlighter-rouge">MesosRestSubmissionServer</code> 
+trusted to submit jobs. Mesos users can stop the <code class="language-plaintext highlighter-rouge">MesosClusterDispatcher</code>, though that will prevent them 
+from running jobs in cluster mode. Alternatively, they can ensure access to the <code class="language-plaintext highlighter-rouge">MesosRestSubmissionServer</code> 
 (port 7077 by default) is restricted to trusted hosts.</p>
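A sketch of the mitigation above; on a standalone master this property would normally live in conf/spark-defaults.conf on the master host (or be passed via SPARK_MASTER_OPTS) rather than in application code:

    import org.apache.spark.SparkConf

    // Disable the unauthenticated REST submission API if it is unused; otherwise restrict network
    // access to its port (6066 by default) to hosts trusted to submit jobs.
    val conf = new SparkConf().set("spark.master.rest.enabled", "false")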
 
 <p>Credit:</p>
@@ -555,7 +555,7 @@ Spark web UIs.</p>
 
 <p>Request:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET /app/?appId=Content-Type:%20multipart/related;%20boundary=_AppScan%0d%0a--
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>GET /app/?appId=Content-Type:%20multipart/related;%20boundary=_AppScan%0d%0a--
 _AppScan%0d%0aContent-Location:foo%0d%0aContent-Transfer-
 Encoding:base64%0d%0a%0d%0aPGh0bWw%2bPHNjcmlwdD5hbGVydCgiWFNTIik8L3NjcmlwdD48L2h0bWw%2b%0d%0a
 HTTP/1.1
@@ -563,7 +563,7 @@ HTTP/1.1
 
 <p>Excerpt from response:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;div class="row-fluid"&gt;No running application with ID Content-Type: multipart/related;
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;div class="row-fluid"&gt;No running application with ID Content-Type: multipart/related;
 boundary=_AppScan
 --_AppScan
 Content-Location:foo
@@ -574,7 +574,7 @@ PGh0bWw+PHNjcmlwdD5hbGVydCgiWFNTIik8L3NjcmlwdD48L2h0bWw+
 
 <p>Result: In the above payload the BASE64 data decodes as:</p>
 
-<div class="highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nt">&lt;html&gt;&lt;script&gt;</span><span class="nx">alert</span><span class="p">(</span><span class="s2">"XSS"</span><span class="p">)</span><span class="nt">&lt;/script&gt;&lt;/html&gt;</span>
+<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>&lt;html&gt;&lt;script&gt;alert("XSS")&lt;/script&gt;&lt;/html&gt;
 </code></pre></div></div>
 
 <p>Credit:</p>
diff --git a/site/third-party-projects.html b/site/third-party-projects.html
index eab81cb..949a333 100644
--- a/site/third-party-projects.html
+++ b/site/third-party-projects.html
@@ -205,7 +205,7 @@
 <p>To add a project, open a pull request against the <a href="https://github.com/apache/spark-website">spark-website</a> 
 repository. Add an entry to 
 <a href="https://github.com/apache/spark-website/blob/asf-site/third-party-projects.md">this markdown file</a>, 
-then run <code class="highlighter-rouge">jekyll build</code> to generate the HTML too. Include
+then run <code class="language-plaintext highlighter-rouge">jekyll build</code> to generate the HTML too. Include
 both in your pull request. See the README in this repo for more information.</p>
 
 <p>Note that all project and product names should follow <a href="/trademarks.html">trademark guidelines</a>.</p>
diff --git a/site/versioning-policy.html b/site/versioning-policy.html
index bd35c2d..56c4a90 100644
--- a/site/versioning-policy.html
+++ b/site/versioning-policy.html
@@ -208,7 +208,7 @@ These small differences account for Spark&#8217;s nature as a multi-module proje
 
 <h3>Spark Versions</h3>
 
-<p>Each Spark release will be versioned: <code class="highlighter-rouge">[MAJOR].[FEATURE].[MAINTENANCE]</code></p>
+<p>Each Spark release will be versioned: <code class="language-plaintext highlighter-rouge">[MAJOR].[FEATURE].[MAINTENANCE]</code></p>
 
 <ul>
   <li><strong>MAJOR</strong>: All releases with the same major version number will have API compatibility.


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org