Posted to commits@spark.apache.org by ma...@apache.org on 2016/08/04 02:13:23 UTC

[3/3] spark-website git commit: Trademarks page and some FAQ cleanup

Trademarks page and some FAQ cleanup


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/9700f2f4
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/9700f2f4
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/9700f2f4

Branch: refs/heads/asf-site
Commit: 9700f2f4afe566412bdb73b443b3aad99b375af1
Parents: 6123834
Author: Matei Zaharia <ma...@databricks.com>
Authored: Wed Aug 3 19:12:55 2016 -0700
Committer: Matei Zaharia <ma...@databricks.com>
Committed: Wed Aug 3 19:12:55 2016 -0700

----------------------------------------------------------------------
 _layouts/global.html                            |   2 +-
 faq.md                                          |  25 +-
 site/community.html                             |   2 +-
 site/documentation.html                         |   7 +-
 site/downloads.html                             |   2 +-
 site/examples.html                              |  62 ++---
 site/faq.html                                   |  27 +-
 site/graphx/index.html                          |   2 +-
 site/index.html                                 |   2 +-
 site/mailing-lists.html                         |   2 +-
 site/mllib/index.html                           |   2 +-
 site/news/amp-camp-2013-registration-ope.html   |   2 +-
 .../news/announcing-the-first-spark-summit.html |   2 +-
 .../news/fourth-spark-screencast-published.html |   2 +-
 site/news/index.html                            |  46 +++-
 site/news/nsdi-paper.html                       |   2 +-
 site/news/one-month-to-spark-summit-2015.html   |   2 +-
 .../proposals-open-for-spark-summit-east.html   |   2 +-
 ...registration-open-for-spark-summit-east.html |   2 +-
 .../news/run-spark-and-shark-on-amazon-emr.html |   2 +-
 site/news/spark-0-6-1-and-0-5-2-released.html   |   2 +-
 site/news/spark-0-6-2-released.html             |   2 +-
 site/news/spark-0-7-0-released.html             |   2 +-
 site/news/spark-0-7-2-released.html             |   2 +-
 site/news/spark-0-7-3-released.html             |   2 +-
 site/news/spark-0-8-0-released.html             |   2 +-
 site/news/spark-0-8-1-released.html             |   2 +-
 site/news/spark-0-9-0-released.html             |   2 +-
 site/news/spark-0-9-1-released.html             |   4 +-
 site/news/spark-0-9-2-released.html             |   4 +-
 site/news/spark-1-0-0-released.html             |   2 +-
 site/news/spark-1-0-1-released.html             |   2 +-
 site/news/spark-1-0-2-released.html             |   2 +-
 site/news/spark-1-1-0-released.html             |   4 +-
 site/news/spark-1-1-1-released.html             |   2 +-
 site/news/spark-1-2-0-released.html             |   2 +-
 site/news/spark-1-2-1-released.html             |   2 +-
 site/news/spark-1-2-2-released.html             |   4 +-
 site/news/spark-1-3-0-released.html             |   2 +-
 site/news/spark-1-4-0-released.html             |   2 +-
 site/news/spark-1-4-1-released.html             |   2 +-
 site/news/spark-1-5-0-released.html             |   2 +-
 site/news/spark-1-5-1-released.html             |   2 +-
 site/news/spark-1-5-2-released.html             |   2 +-
 site/news/spark-1-6-0-released.html             |   2 +-
 site/news/spark-1-6-1-released.html             |   2 +-
 site/news/spark-1-6-2-released.html             |   2 +-
 site/news/spark-2-0-0-released.html             |   2 +-
 site/news/spark-2.0.0-preview.html              |   2 +-
 .../spark-accepted-into-apache-incubator.html   |   2 +-
 site/news/spark-and-shark-in-the-news.html      |   4 +-
 site/news/spark-becomes-tlp.html                |   2 +-
 site/news/spark-featured-in-wired.html          |   2 +-
 .../spark-mailing-lists-moving-to-apache.html   |   2 +-
 site/news/spark-meetups.html                    |   2 +-
 site/news/spark-screencasts-published.html      |   2 +-
 site/news/spark-summit-2013-is-a-wrap.html      |   2 +-
 site/news/spark-summit-2014-videos-posted.html  |   2 +-
 site/news/spark-summit-2015-videos-posted.html  |   2 +-
 site/news/spark-summit-agenda-posted.html       |   2 +-
 .../spark-summit-east-2015-videos-posted.html   |   4 +-
 .../spark-summit-east-2016-cfp-closing.html     |   2 +-
 site/news/spark-summit-east-agenda-posted.html  |   2 +-
 .../news/spark-summit-europe-agenda-posted.html |   2 +-
 site/news/spark-summit-europe.html              |   2 +-
 .../spark-summit-june-2016-agenda-posted.html   |   2 +-
 site/news/spark-tips-from-quantifind.html       |   2 +-
 .../spark-user-survey-and-powered-by-page.html  |   2 +-
 site/news/spark-version-0-6-0-released.html     |   2 +-
 ...-wins-daytona-gray-sort-100tb-benchmark.html |   2 +-
 .../strata-exercises-now-available-online.html  |   2 +-
 .../news/submit-talks-to-spark-summit-2014.html |   2 +-
 .../news/submit-talks-to-spark-summit-2016.html |   2 +-
 .../submit-talks-to-spark-summit-east-2016.html |   2 +-
 .../submit-talks-to-spark-summit-eu-2016.html   |   2 +-
 site/news/two-weeks-to-spark-summit-2014.html   |   2 +-
 ...deo-from-first-spark-development-meetup.html |   2 +-
 site/releases/spark-release-0-3.html            |   2 +-
 site/releases/spark-release-0-5-0.html          |   2 +-
 site/releases/spark-release-0-5-1.html          |   2 +-
 site/releases/spark-release-0-5-2.html          |   2 +-
 site/releases/spark-release-0-6-0.html          |   2 +-
 site/releases/spark-release-0-6-1.html          |   2 +-
 site/releases/spark-release-0-6-2.html          |   2 +-
 site/releases/spark-release-0-7-0.html          |   2 +-
 site/releases/spark-release-0-7-2.html          |   2 +-
 site/releases/spark-release-0-7-3.html          |   2 +-
 site/releases/spark-release-0-8-0.html          |   6 +-
 site/releases/spark-release-0-8-1.html          |   2 +-
 site/releases/spark-release-0-9-0.html          |   2 +-
 site/releases/spark-release-0-9-1.html          |  22 +-
 site/releases/spark-release-0-9-2.html          |   2 +-
 site/releases/spark-release-1-0-0.html          |   2 +-
 site/releases/spark-release-1-0-1.html          |  10 +-
 site/releases/spark-release-1-0-2.html          |   4 +-
 site/releases/spark-release-1-1-0.html          |   8 +-
 site/releases/spark-release-1-1-1.html          |   2 +-
 site/releases/spark-release-1-2-0.html          |   4 +-
 site/releases/spark-release-1-2-1.html          |   2 +-
 site/releases/spark-release-1-2-2.html          |   2 +-
 site/releases/spark-release-1-3-0.html          |   8 +-
 site/releases/spark-release-1-3-1.html          |   8 +-
 site/releases/spark-release-1-4-0.html          |   6 +-
 site/releases/spark-release-1-4-1.html          |   2 +-
 site/releases/spark-release-1-5-0.html          |  32 +--
 site/releases/spark-release-1-5-1.html          |   2 +-
 site/releases/spark-release-1-5-2.html          |   2 +-
 site/releases/spark-release-1-6-0.html          |  22 +-
 site/releases/spark-release-1-6-1.html          |   2 +-
 site/releases/spark-release-1-6-2.html          |   2 +-
 site/releases/spark-release-2-0-0.html          |  38 +--
 site/research.html                              |   2 +-
 site/screencasts/1-first-steps-with-spark.html  |   2 +-
 .../2-spark-documentation-overview.html         |   2 +-
 .../3-transformations-and-caching.html          |   2 +-
 .../4-a-standalone-job-in-spark.html            |   2 +-
 site/screencasts/index.html                     |   2 +-
 site/sql/index.html                             |   2 +-
 site/streaming/index.html                       |   2 +-
 site/trademarks.html                            | 261 +++++++++++++++++++
 trademarks.md                                   |  62 +++++
 121 files changed, 618 insertions(+), 256 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/_layouts/global.html
----------------------------------------------------------------------
diff --git a/_layouts/global.html b/_layouts/global.html
index 8e69ab6..7e36392 100644
--- a/_layouts/global.html
+++ b/_layouts/global.html
@@ -199,7 +199,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="{{site.url}}trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/faq.md
----------------------------------------------------------------------
diff --git a/faq.md b/faq.md
index d8de575..f5c8565 100644
--- a/faq.md
+++ b/faq.md
@@ -33,21 +33,14 @@ Spark is a fast and general processing engine compatible with Hadoop data. It ca
 <p class="question">Do I need Hadoop to run Spark?</p>
 <p class="answer">No, but if you run on a cluster, you will need some form of shared file system (for example, NFS mounted at the same path on each node). If you have this type of filesystem, you can just deploy Spark in standalone mode.</p>
 
-<p class="question">How can I access data in S3?</p>
-<p class="answer">Use the <code>s3n://</code> URI scheme (<code>s3n://bucket/path</code>). You will also need to set your Amazon security credentials, either by setting the environment variables <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> before your program runs, or by setting <code>fs.s3.awsAccessKeyId</code> and <code>fs.s3.awsSecretAccessKey</code> in <code>SparkContext.hadoopConfiguration</code>.</p>
-
 <p class="question">Does Spark require modified versions of Scala or Python?</p>
 <p class="answer">No. Spark requires no changes to Scala or compiler plugins. The Python API uses the standard CPython implementation, and can call into existing C libraries for Python such as NumPy.</p>
 
-<p class="question">What are good resources for learning Scala?</p>
-<p class="answer">Check out <a href="http://www.artima.com/scalazine/articles/steps.html">First Steps to Scala</a> for a quick introduction, the <a href="http://www.scala-lang.org/docu/files/ScalaTutorial.pdf">Scala tutorial for Java programmers</a>, or the free online book <a href="http://www.artima.com/pins1ed/">Programming in Scala</a>. Scala is easy to transition to if you have Java experience or experience in a similarly high-level language (e.g. Ruby).</p>
-
-
-<p>In addition, Spark also has <a href="{{site.url}}docs/latest/java-programming-guide.html">Java</a> and <a href="{{site.url}}docs/latest/python-programming-guide.html">Python</a> APIs.</p>
-
 <p class="question">I understand Spark Streaming uses micro-batching. Does this increase latency?</p>
 
-While Spark does use a micro-batch execution model, this does not have much impact on applications, because the batches can be as short as 0.5 seconds. In most applications of streaming big data, the analytics is done over a larger window (say 10 minutes), or the latency to get data in is higher (e.g. sensors collect readings every 10 seconds). The benefit of Spark's micro-batch model is that it enables <a href="http://people.csail.mit.edu/matei/papers/2013/sosp_spark_streaming.pdf">exactly-once semantics</a>, meaning the system can recover all intermediate state and results on failure.
+<p class="answer">
+While Spark does use a micro-batch execution model, this does not have much impact on applications, because the batches can be as short as 0.5 seconds. In most applications of streaming big data, the analytics is done over a larger window (say 10 minutes), or the latency to get data in is higher (e.g. sensors collect readings every 10 seconds). Spark's model enables <a href="http://people.csail.mit.edu/matei/papers/2013/sosp_spark_streaming.pdf">exactly-once semantics and consistency</a>, meaning the system gives correct results despite slow nodes or failures.
+</p>
 
 <p class="question">Where can I find high-resolution versions of the Spark logo?</p>
 
@@ -60,6 +53,18 @@ While Spark does use a micro-batch execution model, this does not have much impa
   in all uses of these logos.
 </p>
 
+<p class="question">Can I provide commercial software or services based on Spark?</p>
+
+<p class="answer">
+Yes, as long as you respect the Apache Software Foundation's
+<a href="https://www.apache.org/licenses/">software license</a>
+and <a href="https://www.apache.org/foundation/marks/">trademark policy</a>.
+In particular, note that there are strong restrictions about how third-party products
+use the "Spark" name (names based on Spark are generally not allowed).
+Please also refer to our
+<a href="{{site.url}}trademarks.html">trademark policy summary</a>.
+</p>
+
 <p class="question">How can I contribute to Spark?</p>
 
 <p class="answer">See the <a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark wiki</a> for more information.</p>
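The micro-batch model described in the new FAQ answer above can be illustrated outside Spark. The following is a minimal plain-Python sketch, not Spark's API: the batch size, the input lines, and the word-count job are assumptions chosen for the example. The point it demonstrates is that each batch is a pure function of its input records, so state can be recomputed after a failure, which is the property behind exactly-once semantics.

```python
from collections import Counter
from itertools import islice

def micro_batches(stream, batch_size):
    """Yield successive fixed-size batches from an iterator of records."""
    it = iter(stream)
    while True:
        batch = list(islice(it, batch_size))
        if not batch:
            return
        yield batch

def run_word_count(stream, batch_size=3):
    """Process each micro-batch deterministically, merging into running state."""
    totals = Counter()
    for batch in micro_batches(stream, batch_size):
        # Each batch is a small, deterministic job over a few records.
        batch_counts = Counter(word for line in batch for word in line.split())
        totals.update(batch_counts)
    return totals

lines = ["spark streaming", "micro batch", "spark", "exactly once", "spark"]
print(run_word_count(lines))
```

With a batch size of 3, the five input lines are processed as two short batches; because re-running either batch yields the same counts, a recovered node can replay a lost batch without double-counting.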

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/community.html
----------------------------------------------------------------------
diff --git a/site/community.html b/site/community.html
index 146335e..9671b8d 100644
--- a/site/community.html
+++ b/site/community.html
@@ -346,7 +346,7 @@ A wide range of contributors now develop the project (over 400 developers from 1
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/documentation.html
----------------------------------------------------------------------
diff --git a/site/documentation.html b/site/documentation.html
index 4290b04..652281d 100644
--- a/site/documentation.html
+++ b/site/documentation.html
@@ -253,13 +253,12 @@
 </ul>
 
 <h4><a name="meetup-videos"></a>Meetup Talk Videos</h4>
-<p>In addition to the videos listed below, you can also view <a href="http://www.meetup.com/spark-users/files/">all slides from Bay Area meetups here</a>.</p>
+<p>In addition to the videos listed below, you can also view <a href="http://www.meetup.com/spark-users/files/">all slides from Bay Area meetups here</a>.
 <style type="text/css">
   .video-meta-info {
     font-size: 0.95em;
   }
-</style>
-
+</style></p>
 <ul>
   <li><a href="http://www.youtube.com/watch?v=NUQ-8to2XAk&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Spark 1.0 and Beyond</a> (<a href="http://files.meetup.com/3138542/Spark%201.0%20Meetup.ppt">slides</a>) <span class="video-meta-info">by Patrick Wendell, at Cisco in San Jose, 2014-04-23</span></li>
 
@@ -369,7 +368,7 @@ The <a href="/research.html">research page</a> lists some of the original motiva
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/downloads.html
----------------------------------------------------------------------
diff --git a/site/downloads.html b/site/downloads.html
index 12ca088..e8c8867 100644
--- a/site/downloads.html
+++ b/site/downloads.html
@@ -269,7 +269,7 @@ git clone git://github.com/apache/spark.git -b branch-2.0
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/examples.html
----------------------------------------------------------------------
diff --git a/site/examples.html b/site/examples.html
index 7fc7dd1..5431f5d 100644
--- a/site/examples.html
+++ b/site/examples.html
@@ -213,11 +213,11 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">text_file</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="p">)</span>
+<div class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">text_file</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="p">)</span>
 <span class="n">counts</span> <span class="o">=</span> <span class="n">text_file</span><span class="o">.</span><span class="n">flatMap</span><span class="p">(</span><span class="k">lambda</span> <span class="n">line</span><span class="p">:</span> <span class="n">line</span><span class="o">.</span><span class="n">split</span><span class="p">(</span><span class="s">&quot; &quot;</span><span class="p">))</span> \
              <span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">word</span><span class="p">:</span> <span class="p">(</span><span class="n">word</span><span class="p">,</span> <span class="mi">1</span><span class="p">))</span> \
              <span class="o">.</span><span class="n">reduceByKey</span><span class="p">(</span><span class="k">lambda</span> <span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">:</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span><span class="p">)</span>
-<span class="n">counts</span><span class="o">.</span><span class="n">saveAsTextFile</span><span class="p">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="p">)</span></code></pre></figure>
+<span class="n">counts</span><span class="o">.</span><span class="n">saveAsTextFile</span><span class="p">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="p">)</span></code></pre></div>
 
 </div>
 </div>
@@ -225,11 +225,11 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">textFile</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">textFile</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">)</span>
 <span class="k">val</span> <span class="n">counts</span> <span class="k">=</span> <span class="n">textFile</span><span class="o">.</span><span class="n">flatMap</span><span class="o">(</span><span class="n">line</span> <span class="k">=&gt;</span> <span class="n">line</span><span class="o">.</span><span class="n">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">))</span>
                  <span class="o">.</span><span class="n">map</span><span class="o">(</span><span class="n">word</span> <span class="k">=&gt;</span> <span class="o">(</span><span class="n">word</span><span class="o">,</span> <span class="mi">1</span><span class="o">))</span>
                  <span class="o">.</span><span class="n">reduceByKey</span><span class="o">(</span><span class="k">_</span> <span class="o">+</span> <span class="k">_</span><span class="o">)</span>
-<span class="n">counts</span><span class="o">.</span><span class="n">saveAsTextFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">)</span></code></pre></figure>
+<span class="n">counts</span><span class="o">.</span><span class="n">saveAsTextFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">)</span></code></pre></div>
 
 </div>
 </div>
@@ -237,7 +237,7 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-java">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">);</span>
 <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">words</span> <span class="o">=</span> <span class="n">textFile</span><span class="o">.</span><span class="na">flatMap</span><span class="o">(</span><span class="k">new</span> <span class="n">FlatMapFunction</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">String</span><span class="o">&gt;()</span> <span class="o">{</span>
   <span class="kd">public</span> <span class="n">Iterable</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="nf">call</span><span class="o">(</span><span class="n">String</span> <span class="n">s</span><span class="o">)</span> <span class="o">{</span> <span class="k">return</span> <span class="n">Arrays</span><span class="o">.</span><span class="na">asList</span><span class="o">(</span><span class="n">s</span><span class="o">.</span><span class="na">split</span><span class="o">(</span><span class="s">&quot; &quot;</span><span class="o">));</span> <span class="o">}</span>
 <span class="o">});</span>
@@ -247,7 +247,7 @@ In this page, we will show examples using RDD API as well as examples using high
 <span class="n">JavaPairRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;</span> <span class="n">counts</span> <span class="o">=</span> <span class="n">pairs</span><span class="o">.</span><span class="na">reduceByKey</span><span class="o">(</span><span class="k">new</span> <span class="n">Function2</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">,</span> <span class="n">Integer</span><span class="o">,</span> <span class="n">Integer</span><span class="o">&gt;()</span> <span class="o">{</span>
   <span class="kd">public</span> <span class="n">Integer</span> <span class="nf">call</span><span class="o">(</span><span class="n">Integer</span> <span class="n">a</span><span class="o">,</span> <span class="n">Integer</span> <span class="n">b</span><span class="o">)</span> <span class="o">{</span> <span class="k">return</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span><span class="o">;</span> <span class="o">}</span>
 <span class="o">});</span>
-<span class="n">counts</span><span class="o">.</span><span class="na">saveAsTextFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">);</span></code></pre></figure>
+<span class="n">counts</span><span class="o">.</span><span class="na">saveAsTextFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">);</span></code></pre></div>
 
 </div>
 </div>
@@ -266,13 +266,13 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">sample</span><span class="p">(</span><span class="n">p</span><span class="p">):</span>
+<div class="highlight"><pre><code class="language-python" data-lang="python"><span class="k">def</span> <span class="nf">sample</span><span class="p">(</span><span class="n">p</span><span class="p">):</span>
     <span class="n">x</span><span class="p">,</span> <span class="n">y</span> <span class="o">=</span> <span class="n">random</span><span class="p">(),</span> <span class="n">random</span><span class="p">()</span>
     <span class="k">return</span> <span class="mi">1</span> <span class="k">if</span> <span class="n">x</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">y</span><span class="o">*</span><span class="n">y</span> <span class="o">&lt;</span> <span class="mi">1</span> <span class="k">else</span> <span class="mi">0</span>
 
 <span class="n">count</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="p">(</span><span class="nb">xrange</span><span class="p">(</span><span class="mi">0</span><span class="p">,</span> <span class="n">NUM_SAMPLES</span><span class="p">))</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="n">sample</span><span class="p">)</span> \
              <span class="o">.</span><span class="n">reduce</span><span class="p">(</span><span class="k">lambda</span> <span class="n">a</span><span class="p">,</span> <span class="n">b</span><span class="p">:</span> <span class="n">a</span> <span class="o">+</span> <span class="n">b</span><span class="p">)</span>
-<span class="k">print</span> <span class="s">&quot;Pi is roughly </span><span class="si">%f</span><span class="s">&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="n">NUM_SAMPLES</span><span class="p">)</span></code></pre></figure>
+<span class="k">print</span> <span class="s">&quot;Pi is roughly </span><span class="si">%f</span><span class="s">&quot;</span> <span class="o">%</span> <span class="p">(</span><span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="n">NUM_SAMPLES</span><span class="p">)</span></code></pre></div>
 
 </div>
 </div>
@@ -280,12 +280,12 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">count</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="o">(</span><span class="mi">1</span> <span class="n">to</span> <span class="nc">NUM_SAMPLES</span><span class="o">).</span><span class="n">map</span><span class="o">{</span><span class="n">i</span> <span class="k">=&gt;</span>
+<div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">count</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">parallelize</span><span class="o">(</span><span class="mi">1</span> <span class="n">to</span> <span class="nc">NUM_SAMPLES</span><span class="o">).</span><span class="n">map</span><span class="o">{</span><span class="n">i</span> <span class="k">=&gt;</span>
   <span class="k">val</span> <span class="n">x</span> <span class="k">=</span> <span class="nc">Math</span><span class="o">.</span><span class="n">random</span><span class="o">()</span>
   <span class="k">val</span> <span class="n">y</span> <span class="k">=</span> <span class="nc">Math</span><span class="o">.</span><span class="n">random</span><span class="o">()</span>
   <span class="k">if</span> <span class="o">(</span><span class="n">x</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">y</span><span class="o">*</span><span class="n">y</span> <span class="o">&lt;</span> <span class="mi">1</span><span class="o">)</span> <span class="mi">1</span> <span class="k">else</span> <span class="mi">0</span>
 <span class="o">}.</span><span class="n">reduce</span><span class="o">(</span><span class="k">_</span> <span class="o">+</span> <span class="k">_</span><span class="o">)</span>
-<span class="n">println</span><span class="o">(</span><span class="s">&quot;Pi is roughly &quot;</span> <span class="o">+</span> <span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="nc">NUM_SAMPLES</span><span class="o">)</span></code></pre></figure>
+<span class="n">println</span><span class="o">(</span><span class="s">&quot;Pi is roughly &quot;</span> <span class="o">+</span> <span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="nc">NUM_SAMPLES</span><span class="o">)</span></code></pre></div>
 
 </div>
 </div>
@@ -293,7 +293,7 @@ In this page, we will show examples using RDD API as well as examples using high
 <div class="tab-pane tab-pane-java">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">List</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;</span> <span class="n">l</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;(</span><span class="n">NUM_SAMPLES</span><span class="o">);</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="n">List</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;</span> <span class="n">l</span> <span class="o">=</span> <span class="k">new</span> <span class="n">ArrayList</span><span class="o">&lt;</span><span class="n">Integer</span><span class="o">&gt;(</span><span class="n">NUM_SAMPLES</span><span class="o">);</span>
 <span class="k">for</span> <span class="o">(</span><span class="kt">int</span> <span class="n">i</span> <span class="o">=</span> <span class="mi">0</span><span class="o">;</span> <span class="n">i</span> <span class="o">&lt;</span> <span class="n">NUM_SAMPLES</span><span class="o">;</span> <span class="n">i</span><span class="o">++)</span> <span class="o">{</span>
   <span class="n">l</span><span class="o">.</span><span class="na">add</span><span class="o">(</span><span class="n">i</span><span class="o">);</span>
 <span class="o">}</span>
@@ -305,7 +305,7 @@ In this page, we will show examples using RDD API as well as examples using high
     <span class="k">return</span> <span class="n">x</span><span class="o">*</span><span class="n">x</span> <span class="o">+</span> <span class="n">y</span><span class="o">*</span><span class="n">y</span> <span class="o">&lt;</span> <span class="mi">1</span><span class="o">;</span>
   <span class="o">}</span>
 <span class="o">}).</span><span class="na">count</span><span class="o">();</span>
-<span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Pi is roughly &quot;</span> <span class="o">+</span> <span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="n">NUM_SAMPLES</span><span class="o">);</span></code></pre></figure>
+<span class="n">System</span><span class="o">.</span><span class="na">out</span><span class="o">.</span><span class="na">println</span><span class="o">(</span><span class="s">&quot;Pi is roughly &quot;</span> <span class="o">+</span> <span class="mf">4.0</span> <span class="o">*</span> <span class="n">count</span> <span class="o">/</span> <span class="n">NUM_SAMPLES</span><span class="o">);</span></code></pre></div>
 
 </div>
 </div>
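The Java snippet in the hunk above estimates Pi by sampling random points in the unit square and counting how many fall inside the unit circle. The same Monte Carlo computation can be sketched in plain Python without a Spark cluster; `NUM_SAMPLES` here is an illustrative stand-in for the constant in the example:

```python
import random

NUM_SAMPLES = 100_000  # illustrative stand-in for the example's constant

def inside(_):
    # A point (x, y) drawn uniformly from [0, 1)^2 lies inside the
    # quarter unit circle when x^2 + y^2 < 1.
    x, y = random.random(), random.random()
    return x * x + y * y < 1

# filter(...).count() in the Spark example reduces to counting hits.
count = sum(1 for i in range(NUM_SAMPLES) if inside(i))
print("Pi is roughly", 4.0 * count / NUM_SAMPLES)
```

The fraction of hits approximates the quarter-circle area Pi/4, so multiplying by 4 recovers Pi; Spark's version distributes the same sampling across a cluster.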
@@ -333,7 +333,7 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="p">)</span>
+<div class="highlight"><pre><code class="language-python" data-lang="python"><span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="p">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="p">)</span>
 
 <span class="c"># Creates a DataFrame having a single column named &quot;line&quot;</span>
 <span class="n">df</span> <span class="o">=</span> <span class="n">textFile</span><span class="o">.</span><span class="n">map</span><span class="p">(</span><span class="k">lambda</span> <span class="n">r</span><span class="p">:</span> <span class="n">Row</span><span class="p">(</span><span class="n">r</span><span class="p">))</span><span class="o">.</span><span class="n">toDF</span><span class="p">([</span><span class="s">&quot;line&quot;</span><span class="p">])</span>
@@ -343,7 +343,7 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <span class="c"># Counts errors mentioning MySQL</span>
 <span class="n">errors</span><span class="o">.</span><span class="n">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">&quot;line&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">like</span><span class="p">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="p">))</span><span class="o">.</span><span class="n">count</span><span class="p">()</span>
 <span class="c"># Fetches the MySQL errors as an array of strings</span>
-<span class="n">errors</span><span class="o">.</span><span class="n">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">&quot;line&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">like</span><span class="p">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="p">))</span><span class="o">.</span><span class="n">collect</span><span class="p">()</span></code></pre></figure>
+<span class="n">errors</span><span class="o">.</span><span class="n">filter</span><span class="p">(</span><span class="n">col</span><span class="p">(</span><span class="s">&quot;line&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">like</span><span class="p">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="p">))</span><span class="o">.</span><span class="n">collect</span><span class="p">()</span></code></pre></div>
 
 </div>
 </div>
@@ -351,7 +351,7 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">textFile</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">)</span>
+<div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="k">val</span> <span class="n">textFile</span> <span class="k">=</span> <span class="n">sc</span><span class="o">.</span><span class="n">textFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">)</span>
 
 <span class="c1">// Creates a DataFrame having a single column named &quot;line&quot;</span>
 <span class="k">val</span> <span class="n">df</span> <span class="k">=</span> <span class="n">textFile</span><span class="o">.</span><span class="n">toDF</span><span class="o">(</span><span class="s">&quot;line&quot;</span><span class="o">)</span>
@@ -361,7 +361,7 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <span class="c1">// Counts errors mentioning MySQL</span>
 <span class="n">errors</span><span class="o">.</span><span class="n">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">&quot;line&quot;</span><span class="o">).</span><span class="n">like</span><span class="o">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="o">)).</span><span class="n">count</span><span class="o">()</span>
 <span class="c1">// Fetches the MySQL errors as an array of strings</span>
-<span class="n">errors</span><span class="o">.</span><span class="n">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">&quot;line&quot;</span><span class="o">).</span><span class="n">like</span><span class="o">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="o">)).</span><span class="n">collect</span><span class="o">()</span></code></pre></figure>
+<span class="n">errors</span><span class="o">.</span><span class="n">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">&quot;line&quot;</span><span class="o">).</span><span class="n">like</span><span class="o">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="o">)).</span><span class="n">collect</span><span class="o">()</span></code></pre></div>
 
 </div>
 </div>
@@ -369,7 +369,7 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <div class="tab-pane tab-pane-java">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Creates a DataFrame having a single column named &quot;line&quot;</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Creates a DataFrame having a single column named &quot;line&quot;</span>
 <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">String</span><span class="o">&gt;</span> <span class="n">textFile</span> <span class="o">=</span> <span class="n">sc</span><span class="o">.</span><span class="na">textFile</span><span class="o">(</span><span class="s">&quot;hdfs://...&quot;</span><span class="o">);</span>
 <span class="n">JavaRDD</span><span class="o">&lt;</span><span class="n">Row</span><span class="o">&gt;</span> <span class="n">rowRDD</span> <span class="o">=</span> <span class="n">textFile</span><span class="o">.</span><span class="na">map</span><span class="o">(</span>
   <span class="k">new</span> <span class="n">Function</span><span class="o">&lt;</span><span class="n">String</span><span class="o">,</span> <span class="n">Row</span><span class="o">&gt;()</span> <span class="o">{</span>
@@ -388,7 +388,7 @@ Also, programs based on DataFrame API will be automatically optimized by Spark
 <span class="c1">// Counts errors mentioning MySQL</span>
 <span class="n">errors</span><span class="o">.</span><span class="na">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">&quot;line&quot;</span><span class="o">).</span><span class="na">like</span><span class="o">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="o">)).</span><span class="na">count</span><span class="o">();</span>
 <span class="c1">// Fetches the MySQL errors as an array of strings</span>
-<span class="n">errors</span><span class="o">.</span><span class="na">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">&quot;line&quot;</span><span class="o">).</span><span class="na">like</span><span class="o">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="o">)).</span><span class="na">collect</span><span class="o">();</span></code></pre></figure>
+<span class="n">errors</span><span class="o">.</span><span class="na">filter</span><span class="o">(</span><span class="n">col</span><span class="o">(</span><span class="s">&quot;line&quot;</span><span class="o">).</span><span class="na">like</span><span class="o">(</span><span class="s">&quot;%MySQL%&quot;</span><span class="o">)).</span><span class="na">collect</span><span class="o">();</span></code></pre></div>
 
 </div>
 </div>
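The DataFrame pipeline in the hunks above reads text lines, keeps the error lines, and then counts and collects the ones mentioning MySQL. As a plain-Python sketch (an in-memory list stands in for the HDFS file, and the sample lines are made up for illustration), the `like("%MySQL%")` predicate with wildcards on both sides reduces to a substring test:

```python
# Illustrative stand-in for textFile("hdfs://..."): a few log lines.
lines = [
    "INFO starting up",
    "ERROR MySQL connection refused",
    "ERROR disk full",
    "ERROR MySQL timeout",
]

# Keep only the error lines, as the DataFrame filter does.
errors = [line for line in lines if line.startswith("ERROR")]

# col("line").like("%MySQL%") is a SQL LIKE match; with % on both
# sides it is equivalent to a substring check.
mysql_errors = [line for line in errors if "MySQL" in line]

print(len(mysql_errors))  # the count() step
print(mysql_errors)       # the collect() step
```

Spark runs the same filter lazily and in parallel, and the DataFrame version additionally benefits from query optimization.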
@@ -412,7 +412,7 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c"># Creates a DataFrame based on a table named &quot;people&quot;</span>
+<div class="highlight"><pre><code class="language-python" data-lang="python"><span class="c"># Creates a DataFrame based on a table named &quot;people&quot;</span>
 <span class="c"># stored in a MySQL database.</span>
 <span class="n">url</span> <span class="o">=</span> \
   <span class="s">&quot;jdbc:mysql://yourIP:yourPort/test?user=yourUsername;password=yourPassword&quot;</span>
@@ -431,7 +431,7 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <span class="n">countsByAge</span><span class="o">.</span><span class="n">show</span><span class="p">()</span>
 
 <span class="c"># Saves countsByAge to S3 in the JSON format.</span>
-<span class="n">countsByAge</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="s">&quot;json&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s">&quot;s3a://...&quot;</span><span class="p">)</span></code></pre></figure>
+<span class="n">countsByAge</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="n">format</span><span class="p">(</span><span class="s">&quot;json&quot;</span><span class="p">)</span><span class="o">.</span><span class="n">save</span><span class="p">(</span><span class="s">&quot;s3a://...&quot;</span><span class="p">)</span></code></pre></div>
 
 </div>
 </div>
@@ -439,7 +439,7 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// Creates a DataFrame based on a table named &quot;people&quot;</span>
+<div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// Creates a DataFrame based on a table named &quot;people&quot;</span>
 <span class="c1">// stored in a MySQL database.</span>
 <span class="k">val</span> <span class="n">url</span> <span class="k">=</span>
   <span class="s">&quot;jdbc:mysql://yourIP:yourPort/test?user=yourUsername;password=yourPassword&quot;</span>
@@ -458,7 +458,7 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <span class="n">countsByAge</span><span class="o">.</span><span class="n">show</span><span class="o">()</span>
 
 <span class="c1">// Saves countsByAge to S3 in the JSON format.</span>
-<span class="n">countsByAge</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="n">format</span><span class="o">(</span><span class="s">&quot;json&quot;</span><span class="o">).</span><span class="n">save</span><span class="o">(</span><span class="s">&quot;s3a://...&quot;</span><span class="o">)</span></code></pre></figure>
+<span class="n">countsByAge</span><span class="o">.</span><span class="n">write</span><span class="o">.</span><span class="n">format</span><span class="o">(</span><span class="s">&quot;json&quot;</span><span class="o">).</span><span class="n">save</span><span class="o">(</span><span class="s">&quot;s3a://...&quot;</span><span class="o">)</span></code></pre></div>
 
 </div>
 </div>
@@ -466,7 +466,7 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <div class="tab-pane tab-pane-java">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Creates a DataFrame based on a table named &quot;people&quot;</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Creates a DataFrame based on a table named &quot;people&quot;</span>
 <span class="c1">// stored in a MySQL database.</span>
 <span class="n">String</span> <span class="n">url</span> <span class="o">=</span>
   <span class="s">&quot;jdbc:mysql://yourIP:yourPort/test?user=yourUsername;password=yourPassword&quot;</span><span class="o">;</span>
@@ -485,7 +485,7 @@ A simple MySQL table "people" is used in the example and this table has two colu
 <span class="n">countsByAge</span><span class="o">.</span><span class="na">show</span><span class="o">();</span>
 
 <span class="c1">// Saves countsByAge to S3 in the JSON format.</span>
-<span class="n">countsByAge</span><span class="o">.</span><span class="na">write</span><span class="o">().</span><span class="na">format</span><span class="o">(</span><span class="s">&quot;json&quot;</span><span class="o">).</span><span class="na">save</span><span class="o">(</span><span class="s">&quot;s3a://...&quot;</span><span class="o">);</span></code></pre></figure>
+<span class="n">countsByAge</span><span class="o">.</span><span class="na">write</span><span class="o">().</span><span class="na">format</span><span class="o">(</span><span class="s">&quot;json&quot;</span><span class="o">).</span><span class="na">save</span><span class="o">(</span><span class="s">&quot;s3a://...&quot;</span><span class="o">);</span></code></pre></div>
 
 </div>
 </div>
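The JDBC examples above group the rows of the "people" table by age and count each group. Ignoring the database and S3 plumbing, the core `groupBy("age").count()` step can be sketched in plain Python (the rows below are made-up stand-ins for the table's name and age columns):

```python
from collections import Counter

# Illustrative stand-in rows for the MySQL "people" table (name, age).
people = [("alice", 30), ("bob", 25), ("carol", 30), ("dave", 25), ("erin", 41)]

# groupBy("age").count() reduces to tallying occurrences of each age.
counts_by_age = Counter(age for _name, age in people)

# The show() step: print one (age, count) row per group.
for age, count in sorted(counts_by_age.items()):
    print(age, count)
```

In Spark the same aggregation is executed distributed across partitions, and `write.format("json").save(...)` would then persist the result.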
@@ -516,7 +516,7 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 <div class="tab-pane tab-pane-python active">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-python" data-lang="python"><span class="c"># Every record of this DataFrame contains the label and</span>
+<div class="highlight"><pre><code class="language-python" data-lang="python"><span class="c"># Every record of this DataFrame contains the label and</span>
 <span class="c"># features represented by a vector.</span>
 <span class="n">df</span> <span class="o">=</span> <span class="n">sqlContext</span><span class="o">.</span><span class="n">createDataFrame</span><span class="p">(</span><span class="n">data</span><span class="p">,</span> <span class="p">[</span><span class="s">&quot;label&quot;</span><span class="p">,</span> <span class="s">&quot;features&quot;</span><span class="p">])</span>
 
@@ -528,7 +528,7 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 <span class="n">model</span> <span class="o">=</span> <span class="n">lr</span><span class="o">.</span><span class="n">fit</span><span class="p">(</span><span class="n">df</span><span class="p">)</span>
 
 <span class="c"># Given a dataset, predict each point&#39;s label, and show the results.</span>
-<span class="n">model</span><span class="o">.</span><span class="n">transform</span><span class="p">(</span><span class="n">df</span><span class="p">)</span><span class="o">.</span><span class="n">show</span><span class="p">()</span></code></pre></figure>
+<span class="n">model</span><span class="o">.</span><span class="n">transform</span><span class="p">(</span><span class="n">df</span><span class="p">)</span><span class="o">.</span><span class="n">show</span><span class="p">()</span></code></pre></div>
 
 </div>
 </div>
@@ -536,7 +536,7 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 <div class="tab-pane tab-pane-scala">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// Every record of this DataFrame contains the label and</span>
+<div class="highlight"><pre><code class="language-scala" data-lang="scala"><span class="c1">// Every record of this DataFrame contains the label and</span>
 <span class="c1">// features represented by a vector.</span>
 <span class="k">val</span> <span class="n">df</span> <span class="k">=</span> <span class="n">sqlContext</span><span class="o">.</span><span class="n">createDataFrame</span><span class="o">(</span><span class="n">data</span><span class="o">).</span><span class="n">toDF</span><span class="o">(</span><span class="s">&quot;label&quot;</span><span class="o">,</span> <span class="s">&quot;features&quot;</span><span class="o">)</span>
 
@@ -551,7 +551,7 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 <span class="k">val</span> <span class="n">weights</span> <span class="k">=</span> <span class="n">model</span><span class="o">.</span><span class="n">weights</span>
 
 <span class="c1">// Given a dataset, predict each point&#39;s label, and show the results.</span>
-<span class="n">model</span><span class="o">.</span><span class="n">transform</span><span class="o">(</span><span class="n">df</span><span class="o">).</span><span class="n">show</span><span class="o">()</span></code></pre></figure>
+<span class="n">model</span><span class="o">.</span><span class="n">transform</span><span class="o">(</span><span class="n">df</span><span class="o">).</span><span class="n">show</span><span class="o">()</span></code></pre></div>
 
 </div>
 </div>
@@ -559,7 +559,7 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 <div class="tab-pane tab-pane-java">
 <div class="code code-tab">
 
-<figure class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Every record of this DataFrame contains the label and</span>
+<div class="highlight"><pre><code class="language-java" data-lang="java"><span class="c1">// Every record of this DataFrame contains the label and</span>
 <span class="c1">// features represented by a vector.</span>
 <span class="n">StructType</span> <span class="n">schema</span> <span class="o">=</span> <span class="k">new</span> <span class="nf">StructType</span><span class="o">(</span><span class="k">new</span> <span class="n">StructField</span><span class="o">[]{</span>
   <span class="k">new</span> <span class="nf">StructField</span><span class="o">(</span><span class="s">&quot;label&quot;</span><span class="o">,</span> <span class="n">DataTypes</span><span class="o">.</span><span class="na">DoubleType</span><span class="o">,</span> <span class="kc">false</span><span class="o">,</span> <span class="n">Metadata</span><span class="o">.</span><span class="na">empty</span><span class="o">()),</span>
@@ -578,7 +578,7 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 <span class="n">Vector</span> <span class="n">weights</span> <span class="o">=</span> <span class="n">model</span><span class="o">.</span><span class="na">weights</span><span class="o">();</span>
 
 <span class="c1">// Given a dataset, predict each point&#39;s label, and show the results.</span>
-<span class="n">model</span><span class="o">.</span><span class="na">transform</span><span class="o">(</span><span class="n">df</span><span class="o">).</span><span class="na">show</span><span class="o">();</span></code></pre></figure>
+<span class="n">model</span><span class="o">.</span><span class="na">transform</span><span class="o">(</span><span class="n">df</span><span class="o">).</span><span class="na">show</span><span class="o">();</span></code></pre></div>
 
 </div>
 </div>
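The ML examples above fit a logistic regression model to (label, features) records and then predict each point's label. The flow can be sketched in plain Python with batch gradient descent on the logistic loss; the toy data, learning rate, and iteration count below are all illustrative and mimic the fit/transform steps rather than Spark MLlib's actual optimizer:

```python
import math

# Illustrative (label, features) records standing in for the DataFrame.
data = [
    (0.0, [0.2, 0.1]),
    (0.0, [0.4, 0.2]),
    (1.0, [2.0, 0.1]),
    (1.0, [1.8, 0.3]),
]

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def log_loss(weights):
    # Negative log-likelihood of the data under the current weights.
    total = 0.0
    for label, features in data:
        p = sigmoid(sum(w * x for w, x in zip(weights, features)))
        total += -math.log(p) if label == 1.0 else -math.log(1.0 - p)
    return total

# The "fit" step: batch gradient descent on the logistic loss.
weights = [0.0, 0.0]
initial_loss = log_loss(weights)
for _ in range(100):
    grad = [0.0, 0.0]
    for label, features in data:
        p = sigmoid(sum(w * x for w, x in zip(weights, features)))
        for j, x in enumerate(features):
            grad[j] += (p - label) * x
    weights = [w - 0.5 * g for w, g in zip(weights, grad)]

# The "transform + show" step: predict each point's label at threshold 0.5.
for label, features in data:
    p = sigmoid(sum(w * x for w, x in zip(weights, features)))
    print(label, 1.0 if p > 0.5 else 0.0)
print("loss:", initial_loss, "->", log_loss(weights))
```

The training loop drives the loss down on this convex objective; MLlib's `LogisticRegression` does the same in principle but with distributed, regularized optimization.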
@@ -601,7 +601,7 @@ We learn to predict the labels from feature vectors using the Logistic Regressio
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/faq.html
----------------------------------------------------------------------
diff --git a/site/faq.html b/site/faq.html
index 3d263e7..38bb109 100644
--- a/site/faq.html
+++ b/site/faq.html
@@ -209,21 +209,14 @@ Spark is a fast and general processing engine compatible with Hadoop data. It ca
 <p class="question">Do I need Hadoop to run Spark?</p>
 <p class="answer">No, but if you run on a cluster, you will need some form of shared file system (for example, NFS mounted at the same path on each node). If you have this type of filesystem, you can just deploy Spark in standalone mode.</p>
 
-<p class="question">How can I access data in S3?</p>
-<p class="answer">Use the <code>s3n://</code> URI scheme (<code>s3n://bucket/path</code>). You will also need to set your Amazon security credentials, either by setting the environment variables <code>AWS_ACCESS_KEY_ID</code> and <code>AWS_SECRET_ACCESS_KEY</code> before your program runs, or by setting <code>fs.s3.awsAccessKeyId</code> and <code>fs.s3.awsSecretAccessKey</code> in <code>SparkContext.hadoopConfiguration</code>.</p>
-
 <p class="question">Does Spark require modified versions of Scala or Python?</p>
 <p class="answer">No. Spark requires no changes to Scala or compiler plugins. The Python API uses the standard CPython implementation, and can call into existing C libraries for Python such as NumPy.</p>
 
-<p class="question">What are good resources for learning Scala?</p>
-<p class="answer">Check out <a href="http://www.artima.com/scalazine/articles/steps.html">First Steps to Scala</a> for a quick introduction, the <a href="http://www.scala-lang.org/docu/files/ScalaTutorial.pdf">Scala tutorial for Java programmers</a>, or the free online book <a href="http://www.artima.com/pins1ed/">Programming in Scala</a>. Scala is easy to transition to if you have Java experience or experience in a similarly high-level language (e.g. Ruby).</p>
-
-
-<p>In addition, Spark also has <a href="/docs/latest/java-programming-guide.html">Java</a> and <a href="/docs/latest/python-programming-guide.html">Python</a> APIs.</p>
-
 <p class="question">I understand Spark Streaming uses micro-batching. Does this increase latency?</p>
 
-While Spark does use a micro-batch execution model, this does not have much impact on applications, because the batches can be as short as 0.5 seconds. In most applications of streaming big data, the analytics is done over a larger window (say 10 minutes), or the latency to get data in is higher (e.g. sensors collect readings every 10 seconds). The benefit of Spark's micro-batch model is that it enables <a href="http://people.csail.mit.edu/matei/papers/2013/sosp_spark_streaming.pdf">exactly-once semantics</a>, meaning the system can recover all intermediate state and results on failure.
+<p class="answer">
+While Spark does use a micro-batch execution model, this does not have much impact on applications, because the batches can be as short as 0.5 seconds. In most applications of streaming big data, the analytics is done over a larger window (say 10 minutes), or the latency to get data in is higher (e.g. sensors collect readings every 10 seconds). Spark's model enables <a href="http://people.csail.mit.edu/matei/papers/2013/sosp_spark_streaming.pdf">exactly-once semantics and consistency</a>, meaning the system gives correct results despite slow nodes or failures.
+</p>
 
 <p class="question">Where can I find high-resolution versions of the Spark logo?</p>
 
@@ -236,6 +229,18 @@ While Spark does use a micro-batch execution model, this does not have much impa
   in all uses of these logos.
 </p>
 
+<p class="question">Can I provide commercial software or services based on Spark?</p>
+
+<p class="answer">
+Yes, as long as you respect the Apache Software Foundation's
+<a href="https://www.apache.org/licenses/">software license</a>
+and <a href="https://www.apache.org/foundation/marks/">trademark policy</a>.
+In particular, note that there are strong restrictions about how third-party products
+use the "Spark" name (names based on Spark are generally not allowed).
+Please also refer to our
+<a href="/trademarks.html">trademark policy summary</a>.
+</p>
+
 <p class="question">How can I contribute to Spark?</p>
 
 <p class="answer">See the <a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark wiki</a> for more information.</p>
@@ -251,7 +256,7 @@ While Spark does use a micro-batch execution model, this does not have much impa
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/graphx/index.html
----------------------------------------------------------------------
diff --git a/site/graphx/index.html b/site/graphx/index.html
index 4ef0894..8929622 100644
--- a/site/graphx/index.html
+++ b/site/graphx/index.html
@@ -303,7 +303,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/index.html
----------------------------------------------------------------------
diff --git a/site/index.html b/site/index.html
index c33e6b2..9b0df71 100644
--- a/site/index.html
+++ b/site/index.html
@@ -365,7 +365,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/mailing-lists.html
----------------------------------------------------------------------
diff --git a/site/mailing-lists.html b/site/mailing-lists.html
index 00769c0..d5b78ee 100644
--- a/site/mailing-lists.html
+++ b/site/mailing-lists.html
@@ -195,7 +195,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/mllib/index.html
----------------------------------------------------------------------
diff --git a/site/mllib/index.html b/site/mllib/index.html
index 71a7042..08c1d24 100644
--- a/site/mllib/index.html
+++ b/site/mllib/index.html
@@ -331,7 +331,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/news/amp-camp-2013-registration-ope.html
----------------------------------------------------------------------
diff --git a/site/news/amp-camp-2013-registration-ope.html b/site/news/amp-camp-2013-registration-ope.html
index 51a10bd..ea9c702 100644
--- a/site/news/amp-camp-2013-registration-ope.html
+++ b/site/news/amp-camp-2013-registration-ope.html
@@ -201,7 +201,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/news/announcing-the-first-spark-summit.html
----------------------------------------------------------------------
diff --git a/site/news/announcing-the-first-spark-summit.html b/site/news/announcing-the-first-spark-summit.html
index c2d848f..1d1c1c5 100644
--- a/site/news/announcing-the-first-spark-summit.html
+++ b/site/news/announcing-the-first-spark-summit.html
@@ -205,7 +205,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/news/fourth-spark-screencast-published.html
----------------------------------------------------------------------
diff --git a/site/news/fourth-spark-screencast-published.html b/site/news/fourth-spark-screencast-published.html
index 5f4505d..783dbc5 100644
--- a/site/news/fourth-spark-screencast-published.html
+++ b/site/news/fourth-spark-screencast-published.html
@@ -205,7 +205,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/news/index.html
----------------------------------------------------------------------
diff --git a/site/news/index.html b/site/news/index.html
index 9f75820..0a099ed 100644
--- a/site/news/index.html
+++ b/site/news/index.html
@@ -191,6 +191,7 @@
       <div class="entry-date">July 26, 2016</div>
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">Spark 2.0.0</a>! Visit the <a href="/releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">release notes</a> to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>
+
 </div>
   </article>
 
@@ -210,6 +211,7 @@
       <div class="entry-date">June 16, 2016</div>
     </header>
     <div class="entry-content"><p>Call for presentations is now open for <a href="https://spark-summit.org/eu-2016/">Spark Summit EU</a>! The event will take place on October 25-27 in Brussels. Submissions are welcome across a variety of Spark-related topics, including applications, development, data science, enterprise, spark ecosystem and research. Please submit by July 1 to be considered.</p>
+
 </div>
   </article>
 
@@ -229,6 +231,7 @@
       <div class="entry-date">April 17, 2016</div>
     </header>
     <div class="entry-content"><p>The agenda for <a href="https://spark-summit.org/2016/">Spark Summit 2016</a> is now available! The summit kicks off on June 6th with a full day of Spark training followed by over 90+ talks featuring speakers from Airbnb, Baidu, Bloomberg, Databricks, Duke, IBM, Microsoft, Netflix, Uber, UC Berkeley. Check out the full <a href="https://spark-summit.org/2016/schedule/">schedule</a> and <a href="https://spark-summit.org/2016/register/">register</a> to attend!</p>
+
 </div>
   </article>
 
@@ -248,6 +251,7 @@
       <div class="entry-date">February 11, 2016</div>
     </header>
     <div class="entry-content"><p>Call for presentations is now open for <a href="https://spark-summit.org/2016/">Spark Summit San Francisco</a>! The event will take place on June 6-8 in San Francisco. Submissions are welcome across a variety of Spark-related topics, including applications, development, data science, business value, spark ecosystem and research. Please submit by February 29th to be considered.</p>
+
 </div>
   </article>
 
@@ -257,6 +261,7 @@
       <div class="entry-date">January 14, 2016</div>
     </header>
     <div class="entry-content"><p>The <a href="https://spark-summit.org/east-2016/schedule/">agenda for Spark Summit East</a> is now posted, with 60 talks from organizations including Netflix, Comcast, Blackrock, Bloomberg and others. The 2nd annual Spark Summit East will run February 16-18th in NYC and feature a full program of speakers along with Spark training opportunities. More details are available on the <a href="https://spark-summit.org/east-2016/schedule/">Spark Summit East website</a>, where you can also <a href="http://www.prevalentdesignevents.com/sparksummit2016/east/registration.aspx?source=header">register to attend</a>.</p>
+
 </div>
   </article>
 
@@ -279,6 +284,7 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">November 19, 2015</div>
     </header>
     <div class="entry-content"><p>Call for presentations is closing soon for <a href="https://spark-summit.org/east-2016/">Spark Summit East</a>! The event will take place on February 16th-18th in New York City. Submissions are welcome across a variety of Spark-related topics, including applications, development, data science, enterprise, and research. Please submit by November 22nd to be considered.</p>
+
 </div>
   </article>
 
@@ -298,6 +304,7 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">October 14, 2015</div>
     </header>
     <div class="entry-content"><p>Abstract submissions are now open for the 2nd <a href="https://spark-summit.org/east-2016/">Spark Summit East</a>! The event will take place on February 16th-18th in New York City. Submissions are welcome across a variety of Spark-related topics, including applications, development, data science, enterprise, and research.</p>
+
 </div>
   </article>
 
@@ -327,6 +334,7 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">September 7, 2015</div>
     </header>
     <div class="entry-content"><p>The <a href="http://spark-summit.org/eu-2015/schedule">agenda for Spark Summit Europe</a> is now posted, with 38 talks from organizations including Barclays, Netflix, Elsevier, Intel and others. This inaugural Spark conference in Europe will run October 27th-29th 2015 in Amsterdam and feature a full program of speakers along with Spark training opportunities. More details are available on the <a href="https://spark-summit.org/eu-2015/">Spark Summit Europe website</a>, where you can also <a href="https://www.prevalentdesignevents.com/sparksummit2015/europe/registration.aspx?source=header">register</a> to attend.</p>
+
 </div>
   </article>
 
@@ -346,6 +354,7 @@ With this release the Spark community continues to grow, with contributions from
       <div class="entry-date">June 29, 2015</div>
     </header>
     <div class="entry-content"><p>The videos and slides for Spark Summit 2015 are now all <a href="http://spark-summit.org/2015/#day-1">available online</a>! The talks include technical roadmap discussions, deep dives on Spark components, and use cases built on top of Spark.</p>
+
 </div>
   </article>
 
@@ -362,7 +371,7 @@ With this release the Spark community continues to grow, with contributions from
 <article class="hentry">
     <header class="entry-header">
       <h3 class="entry-title"><a href="/news/one-month-to-spark-summit-2015.html">One month to Spark Summit 2015 in San Francisco</a></h3>
-      <div class="entry-date">May 15, 2015</div>
+      <div class="entry-date">May 14, 2015</div>
     </header>
     <div class="entry-content"><p>There is one month left until <a href="https://spark-summit.org/2015/">Spark Summit 2015</a>, which
 will be held in San Francisco on June 15th to 17th.
@@ -374,9 +383,10 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
 <article class="hentry">
     <header class="entry-header">
       <h3 class="entry-title"><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a></h3>
-      <div class="entry-date">May 15, 2015</div>
+      <div class="entry-date">May 14, 2015</div>
     </header>
     <div class="entry-content"><p>Abstract submissions are now open for the first ever <a href="https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/">Spark Summit Europe</a>. The event will take place on October 27th to 29th in Amsterdam. Submissions are welcome across a variety of Spark related topics, including use cases and ongoing development.</p>
+
 </div>
   </article>
 
@@ -385,7 +395,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <h3 class="entry-title"><a href="/news/spark-summit-east-2015-videos-posted.html">Spark Summit East 2015 Videos Posted</a></h3>
       <div class="entry-date">April 20, 2015</div>
     </header>
-    <div class="entry-content"><p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top.</p>
+    <div class="entry-content"><p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top. </p>
 
 </div>
   </article>
@@ -395,7 +405,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <h3 class="entry-title"><a href="/news/spark-1-2-2-released.html">Spark 1.2.2 and 1.3.1 released</a></h3>
       <div class="entry-date">April 17, 2015</div>
     </header>
-    <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers.</p>
+    <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers. </p>
 
 </div>
   </article>
@@ -507,7 +517,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">
 Spark 0.9.2</a>! Apache Spark 0.9.2 is a maintenance release with bug fixes. We recommend all 0.9.x users to upgrade to this stable release. 
-Contributions to this release came from 28 developers.</p>
+Contributions to this release came from 28 developers. </p>
 
 </div>
   </article>
@@ -578,7 +588,7 @@ about the latest happenings in Spark.</p>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">
 Spark 0.9.1</a>! Apache Spark 0.9.1 is a maintenance release with bug fixes, performance improvements, better stability with YARN and 
 improved parity of the Scala and Python API. We recommend all 0.9.0 users to upgrade to this stable release. 
-Contributions to this release came from 37 developers.</p>
+Contributions to this release came from 37 developers. </p>
 
 </div>
   </article>
@@ -626,6 +636,7 @@ hardened YARN support.</p>
       <div class="entry-date">December 19, 2013</div>
     </header>
     <div class="entry-content"><p>We&#8217;ve just posted <a href="/releases/spark-release-0-8-1.html" title="Spark Release 0.8.1">Spark Release 0.8.1</a>, a maintenance and performance release for the Scala 2.9 version of Spark. 0.8.1 includes support for YARN 2.2, a high availability mode for the standalone scheduler, optimizations to the shuffle, and many other improvements. We recommend that all users update to this release. Visit the <a href="/releases/spark-release-0-8-1.html" title="Spark Release 0.8.1">release notes</a> to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>
+
 </div>
   </article>
 
@@ -656,6 +667,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">September 25, 2013</div>
     </header>
     <div class="entry-content"><p>We&#8217;re proud to announce the release of <a href="/releases/spark-release-0-8-0.html" title="Spark Release 0.8.0">Apache Spark 0.8.0</a>. Spark 0.8.0 is a major release that includes many new capabilities and usability improvements. It\u2019s also our first release under the Apache incubator. It is the largest Spark release yet, with contributions from 67 developers and 24 companies. Major new features include an expanded monitoring framework and UI, a machine learning library, and support for running Spark inside of YARN.</p>
+
 </div>
   </article>
 
@@ -685,6 +697,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">July 23, 2013</div>
     </header>
     <div class="entry-content"><p>Want to learn how to use Spark, Shark, GraphX, and related technologies in person? The AMP Lab is hosting a two-day training workshop for them on August 29th and 30th in Berkeley. The workshop will include tutorials, talks from users, and over four hours of hands-on exercises. <a href="http://ampcamp.berkeley.edu/amp-camp-three-berkeley-2013/">Registration is now open on the AMP Camp website</a>, for a price of $250 per person. We recommend signing up early because last year&#8217;s workshop was sold out.</p>
+
 </div>
   </article>
 
@@ -705,6 +718,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
 </ul>
 
 <p>Most users will probably want the User list, but individuals interested in contributing code to the project should also subscribe to the Dev list.</p>
+
 </div>
   </article>
 
@@ -714,6 +728,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">July 16, 2013</div>
     </header>
     <div class="entry-content"><p>We&#8217;ve just posted <a href="/releases/spark-release-0-7-3.html" title="Spark Release 0.7.3">Spark Release 0.7.3</a>, a maintenance release that contains several fixes, including streaming API updates and new functionality for adding JARs to a <code>spark-shell</code> session. We recommend that all users update to this release. Visit the <a href="/releases/spark-release-0-7-3.html" title="Spark Release 0.7.3">release notes</a> to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>
+
 </div>
   </article>
 
@@ -723,6 +738,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">June 21, 2013</div>
     </header>
     <div class="entry-content"><p>Spark, its creators at the AMP Lab, and some of its users were featured in a <a href="http://www.wired.com/wiredenterprise/2013/06/yahoo-amazon-amplab-spark/all/">Wired Enterprise article</a> a few days ago. Read on to learn a little about how Spark is being used in industry.</p>
+
 </div>
   </article>
 
@@ -732,6 +748,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">June 21, 2013</div>
     </header>
     <div class="entry-content"><p>Spark was recently <a href="http://mail-archives.apache.org/mod_mbox/incubator-general/201306.mbox/%3CCDE7B773.E9A48%25chris.a.mattmann%40jpl.nasa.gov%3E">accepted</a> into the <a href="http://incubator.apache.org">Apache Incubator</a>, which will serve as the long-term home for the project. While moving the source code and issue tracking to Apache will take some time, we are excited to be joining the community at Apache. Stay tuned on this site for updates on how the project hosting will change.</p>
+
 </div>
   </article>
 
@@ -741,6 +758,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">June 2, 2013</div>
     </header>
     <div class="entry-content"><p>We&#8217;re happy to announce the release of <a href="/releases/spark-release-0-7-2.html" title="Spark Release 0.7.2">Spark 0.7.2</a>, a new maintenance release that includes several bug fixes and improvements, as well as new code examples and API features. We recommend that all users update to this release. Head over to the <a href="/releases/spark-release-0-7-2.html" title="Spark Release 0.7.2">release notes</a> to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>
+
 </div>
   </article>
 
@@ -756,6 +774,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
 <p>The second screencast is a 2 minute <a href="/screencasts/2-spark-documentation-overview.html">overview of the Spark documentation</a>.</p>
 
 <p>We hope you find these screencasts useful.</p>
+
 </div>
   </article>
 
@@ -765,6 +784,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">March 17, 2013</div>
     </header>
     <div class="entry-content"><p>At this year&#8217;s <a href="http://strataconf.com/strata2013">Strata</a> conference, the AMP Lab hosted a full day of tutorials on Spark, Shark, and Spark Streaming, including online exercises on Amazon EC2. Those exercises are now <a href="http://ampcamp.berkeley.edu/big-data-mini-course/">available online</a>, letting you learn Spark and Shark at your own pace on an EC2 cluster with real data. They are a great resource for learning the systems. You can also find <a href="http://ampcamp.berkeley.edu/amp-camp-two-strata-2013/">slides</a> from the Strata tutorials online, as well as <a href="http://ampcamp.berkeley.edu/amp-camp-one-berkeley-2012/">videos</a> from the AMP Camp workshop we held at Berkeley in August.</p>
+
 </div>
   </article>
 
@@ -774,6 +794,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">February 27, 2013</div>
     </header>
     <div class="entry-content"><p>We&#8217;re proud to announce the release of <a href="/releases/spark-release-0-7-0.html" title="Spark Release 0.7.0">Spark 0.7.0</a>, a new major version of Spark that adds several key features, including a <a href="/docs/latest/python-programming-guide.html">Python API</a> for Spark and an <a href="/docs/latest/streaming-programming-guide.html">alpha of Spark Streaming</a>. This release is the result of the largest group of contributors yet behind a Spark release &#8211; 31 contributors from inside and outside Berkeley. Head over to the <a href="/releases/spark-release-0-7-0.html" title="Spark Release 0.7.0">release notes</a> to read more about the new features, or <a href="/downloads.html">download</a> the release today.</p>
+
 </div>
   </article>
 
@@ -783,6 +804,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">February 24, 2013</div>
     </header>
     <div class="entry-content"><p>This weekend, Amazon posted an <a href="http://aws.amazon.com/articles/Elastic-MapReduce/4926593393724923">article</a> and code that make it easy to launch Spark and Shark on Elastic MapReduce. The article includes examples of how to run both interactive Scala commands and SQL queries from Shark on data in S3. Head over to the <a href="http://aws.amazon.com/articles/Elastic-MapReduce/4926593393724923">Amazon article</a> for details. We&#8217;re very excited because, to our knowledge, this makes Spark the first non-Hadoop engine that you can launch with EMR.</p>
+
 </div>
   </article>
 
@@ -792,6 +814,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">February 7, 2013</div>
     </header>
     <div class="entry-content"><p>We recently released <a href="/releases/spark-release-0-6-2.html" title="Spark Release 0.6.2">Spark 0.6.2</a>, a new version of Spark. This is a maintenance release that includes several bug fixes and usability improvements (see the <a href="/releases/spark-release-0-6-2.html" title="Spark Release 0.6.2">release notes</a>). We recommend that all users upgrade to this release.</p>
+
 </div>
   </article>
 
@@ -806,6 +829,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
 <li><a href="http://blog.quantifind.com/posts/logging-post/">Configuring Spark's logs</a></li>
 </ul>
 <p>Thanks for sharing this, and looking forward to see others!</p>
+
 </div>
   </article>
 
@@ -815,6 +839,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">December 21, 2012</div>
     </header>
     <div class="entry-content"><p>On December 18th, we held the first of a series of Spark development meetups, for people interested in learning the Spark codebase and contributing to the project. There was quite a bit more demand than we anticipated, with over 80 people signing up and 64 attending. The first meetup was an <a href="http://www.meetup.com/spark-users/events/94101942/">introduction to Spark internals</a>. Thanks to one of the attendees, there&#8217;s now a <a href="http://www.youtube.com/watch?v=49Hr5xZyTEA">video of the meetup</a> on YouTube. We&#8217;ve also posted the <a href="http://files.meetup.com/3138542/dev-meetup-dec-2012.pptx">slides</a>. Look to see more development meetups on Spark and Shark in the future.</p>
+
 </div>
   </article>
 
@@ -833,7 +858,8 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
 <li><a href="http://data-informed.com/spark-an-open-source-engine-for-iterative-data-mining/">DataInformed</a> interviewed two Spark users and wrote about their applications in anomaly detection, predictive analytics and data mining.</li>
 </ul>
 
-<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>.</p>
+<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>. </p>
+
 </div>
   </article>
 
@@ -843,6 +869,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">November 22, 2012</div>
     </header>
     <div class="entry-content"><p>Today we&#8217;ve made available two maintenance releases for Spark: <a href="/releases/spark-release-0-6-1.html" title="Spark Release 0.6.1">0.6.1</a> and <a href="/releases/spark-release-0-5-2.html" title="Spark Release 0.5.2">0.5.2</a>. They both contain important bug fixes as well as some new features, such as the ability to build against Hadoop 2 distributions. We recommend that users update to the latest version for their branch; for new users, we recommend <a href="/releases/spark-release-0-6-1.html" title="Spark Release 0.6.1">0.6.1</a>.</p>
+
 </div>
   </article>
 
@@ -852,6 +879,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">October 15, 2012</div>
     </header>
     <div class="entry-content"><p><a href="/releases/spark-release-0-6-0.html">Spark version 0.6.0</a> was released today, a major release that brings a wide range of performance improvements and new features, including a simpler standalone deploy mode and a Java API. Read more about it in the <a href="/releases/spark-release-0-6-0.html">release notes</a>.</p>
+
 </div>
   </article>
 
@@ -861,6 +889,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">April 25, 2012</div>
     </header>
     <div class="entry-content"><p>Our <a href="http://www.cs.berkeley.edu/~matei/papers/2012/nsdi_spark.pdf">paper on Spark</a> won the Best Paper Award at the <a href="http://www.usenix.org/nsdi12/">USENIX NSDI conference</a>. You can see a video of the talk, as well as slides, online on the <a href="https://www.usenix.org/conference/nsdi12/resilient-distributed-datasets-fault-tolerant-abstraction-memory-cluster-computing">NSDI website</a>.</p>
+
 </div>
   </article>
 
@@ -870,6 +899,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
       <div class="entry-date">January 10, 2012</div>
     </header>
     <div class="entry-content"><p>We&#8217;ve started hosting a regular <a href="http://www.meetup.com/spark-users/">Bay Area Spark User Meetup</a>. Sign up on the meetup.com page to be notified about events and meet other Spark developers and users.</p>
+
 </div>
   </article>
 
@@ -881,7 +911,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/news/nsdi-paper.html
----------------------------------------------------------------------
diff --git a/site/news/nsdi-paper.html b/site/news/nsdi-paper.html
index 400c437..cd4f693 100644
--- a/site/news/nsdi-paper.html
+++ b/site/news/nsdi-paper.html
@@ -201,7 +201,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/9700f2f4/site/news/one-month-to-spark-summit-2015.html
----------------------------------------------------------------------
diff --git a/site/news/one-month-to-spark-summit-2015.html b/site/news/one-month-to-spark-summit-2015.html
index 512b467..6bad0e3 100644
--- a/site/news/one-month-to-spark-summit-2015.html
+++ b/site/news/one-month-to-spark-summit-2015.html
@@ -207,7 +207,7 @@ online to attend in person. We hope you enjoy the event!</p>
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 


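The bulk of this commit is one mechanical change repeated across the generated pages: the footer's trademark link is swapped from the external ASF marks page to the new local /trademarks.html page. A sitewide edit like that can be scripted rather than done by hand; the sketch below is illustrative (the sample file and its path are hypothetical, not part of this commit), assuming GNU sed and generated pages under site/:

```shell
# Hypothetical reproduction of this commit's footer-link swap.
# Create a sample generated page with the old footer link.
mkdir -p site/news
cat > site/news/example.html <<'EOF'
<footer class="small">
  Apache Spark, Spark, Apache, and the Spark logo are <a href="https://www.apache.org/foundation/marks/">trademarks</a> of
  <a href="http://www.apache.org">The Apache Software Foundation</a>.
</footer>
EOF

# Rewrite the external ASF marks URL to the new local trademarks page
# in every generated HTML file under site/.
find site -name '*.html' -exec \
  sed -i 's|https://www.apache.org/foundation/marks/|/trademarks.html|g' {} +

grep -c '/trademarks.html' site/news/example.html  # prints 1
```

In the actual repo the same effect comes from editing the footer once in _layouts/global.html and regenerating the site with Jekyll; the regenerated pages are what this diff shows.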
---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org