Posted to commits@spark.apache.org by sr...@apache.org on 2016/11/15 18:23:07 UTC

[2/3] spark-website git commit: Use site.baseurl, not site.url, to work with Jekyll 3.3. Require Jekyll 3.3. Again commit HTML consistent with Jekyll 3.3 output. Fix date problem with news posts that set date: by removing date:.
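
A note on the pattern, for anyone skimming the diff: every Liquid reference of
the form {{site.url}}path is rewritten to {{site.baseurl}}/path. A minimal
sketch of the distinction, assuming Jekyll's standard url/baseurl semantics
(the values below are illustrative, not spark-website's actual configuration):

    # _config.yml (illustrative values)
    url: "https://spark.apache.org"   # scheme and host; Jekyll 3.3 manages this
                                      # itself (localhost:4000 under `jekyll serve`)
    baseurl: ""                       # path below the host, e.g. "/site"

    <!-- before: links carry whatever site.url happens to be -->
    <a href="{{site.url}}downloads.html">download</a>

    <!-- after: the host-independent form applied in every hunk below -->
    <a href="{{site.baseurl}}/downloads.html">download</a>

The date fix mentioned above follows the same logic of trusting Jekyll's
defaults: a post's date is already derived from its filename (for example
2015-10-02-spark-1-5-1-released.md), so the date: front-matter key that was
producing wrong dates on news posts under Jekyll 3.3 can simply be removed.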

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-10-02-spark-1-5-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-10-02-spark-1-5-1-released.md b/news/_posts/2015-10-02-spark-1-5-1-released.md
index f525cbf..d098de6 100644
--- a/news/_posts/2015-10-02-spark-1-5-1-released.md
+++ b/news/_posts/2015-10-02-spark-1-5-1-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">Spark 1.5.1</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">Spark 1.5.1</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-11-09-spark-1-5-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-11-09-spark-1-5-2-released.md b/news/_posts/2015-11-09-spark-1-5-2-released.md
index 21696c5..fbc6c71 100644
--- a/news/_posts/2015-11-09-spark-1-5-2-released.md
+++ b/news/_posts/2015-11-09-spark-1-5-2-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">Spark 1.5.2</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">Spark 1.5.2</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-01-04-spark-1-6-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-01-04-spark-1-6-0-released.md b/news/_posts/2016-01-04-spark-1-6-0-released.md
index 4e47772..b399ade 100644
--- a/news/_posts/2016-01-04-spark-1-6-0-released.md
+++ b/news/_posts/2016-01-04-spark-1-6-0-released.md
@@ -12,9 +12,9 @@ meta:
   _wpas_done_all: '1'
 ---
 We are happy to announce the availability of 
-<a href="{{site.url}}releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">Spark 1.6.0</a>! 
+<a href="{{site.baseurl}}/releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">Spark 1.6.0</a>! 
 Spark 1.6.0 is the seventh release on the API-compatible 1.X line. 
 With this release the Spark community continues to grow, with contributions from 248 developers!
 
-Visit the <a href="{{site.url}}releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">release notes</a> 
-to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">release notes</a> 
+to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-03-09-spark-1-6-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-03-09-spark-1-6-1-released.md b/news/_posts/2016-03-09-spark-1-6-1-released.md
index adc2735..6e15537 100644
--- a/news/_posts/2016-03-09-spark-1-6-1-released.md
+++ b/news/_posts/2016-03-09-spark-1-6-1-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">Spark 1.6.1</a>! This maintenance release includes fixes across several areas of Spark, including significant updates to the experimental Dataset API.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">Spark 1.6.1</a>! This maintenance release includes fixes across several areas of Spark, including significant updates to the experimental Dataset API.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-06-25-spark-1-6-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-06-25-spark-1-6-2-released.md b/news/_posts/2016-06-25-spark-1-6-2-released.md
index d3d2beb..3c9bbf3 100644
--- a/news/_posts/2016-06-25-spark-1-6-2-released.md
+++ b/news/_posts/2016-06-25-spark-1-6-2-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">Spark 1.6.2</a>! This maintenance release includes fixes across several areas of Spark.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">Spark 1.6.2</a>! This maintenance release includes fixes across several areas of Spark.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-07-26-spark-2-0-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-07-26-spark-2-0-0-released.md b/news/_posts/2016-07-26-spark-2-0-0-released.md
index a9597e7..29c74c5 100644
--- a/news/_posts/2016-07-26-spark-2-0-0-released.md
+++ b/news/_posts/2016-07-26-spark-2-0-0-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">Spark 2.0.0</a>! Visit the <a href="{{site.url}}releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">Spark 2.0.0</a>! Visit the <a href="{{site.baseurl}}/releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-10-03-spark-2-0-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-10-03-spark-2-0-1-released.md b/news/_posts/2016-10-03-spark-2-0-1-released.md
index b13fb18..7cbca1a 100644
--- a/news/_posts/2016-10-03-spark-2-0-1-released.md
+++ b/news/_posts/2016-10-03-spark-2-0-1-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-2-0-1.html" title="Spark Release 2.0.1">Apache Spark 2.0.1</a>! Visit the <a href="{{site.url}}releases/spark-release-2-0-1.html" title="Spark Release 2.0.1">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-2-0-1.html" title="Spark Release 2.0.1">Apache Spark 2.0.1</a>! Visit the <a href="{{site.baseurl}}/releases/spark-release-2-0-1.html" title="Spark Release 2.0.1">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-11-07-spark-1-6-3-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-11-07-spark-1-6-3-released.md b/news/_posts/2016-11-07-spark-1-6-3-released.md
index d8957d2..b0f4498 100644
--- a/news/_posts/2016-11-07-spark-1-6-3-released.md
+++ b/news/_posts/2016-11-07-spark-1-6-3-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">Spark 1.6.3</a>! This maintenance release includes fixes across several areas of Spark.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">Spark 1.6.3</a>! This maintenance release includes fixes across several areas of Spark.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-11-14-spark-2-0-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-11-14-spark-2-0-2-released.md b/news/_posts/2016-11-14-spark-2-0-2-released.md
index 4570ec3..1f5c7e5 100644
--- a/news/_posts/2016-11-14-spark-2-0-2-released.md
+++ b/news/_posts/2016-11-14-spark-2-0-2-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">Apache Spark 2.0.2</a>! This maintenance release includes fixes across several areas of Spark, as well as Kafka 0.10 and runtime metrics support for Structured Streaming.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">Apache Spark 2.0.2</a>! This maintenance release includes fixes across several areas of Spark, as well as Kafka 0.10 and runtime metrics support for Structured Streaming.
 
-Visit the <a href="{{site.url}}releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2013-09-25-spark-release-0-8-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2013-09-25-spark-release-0-8-0.md b/releases/_posts/2013-09-25-spark-release-0-8-0.md
index 6ca6ecb..d74806a 100644
--- a/releases/_posts/2013-09-25-spark-release-0-8-0.md
+++ b/releases/_posts/2013-09-25-spark-release-0-8-0.md
@@ -19,7 +19,7 @@ You can download Spark 0.8.0 as either a <a href="http://spark-project.org/downl
 Spark now displays a variety of monitoring data in a web UI (by default at port 4040 on the driver node). A new job dashboard contains information about running, succeeded, and failed jobs, including percentile statistics covering task runtime, shuffled data, and garbage collection. The existing storage dashboard has been extended, and additional pages have been added to display total storage and task information per-executor. Finally, a new metrics library exposes internal Spark metrics through various APIs including JMX and Ganglia.
 
 <p style="text-align: center;">
-<img src="{{site.url}}images/0.8.0-ui-screenshot.png" style="width:90%;">
+<img src="{{site.baseurl}}/images/0.8.0-ui-screenshot.png" style="width:90%;">
 </p>
 
 ### Machine Learning Library

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2013-12-19-spark-release-0-8-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2013-12-19-spark-release-0-8-1.md b/releases/_posts/2013-12-19-spark-release-0-8-1.md
index 89248d9..4dbe34c 100644
--- a/releases/_posts/2013-12-19-spark-release-0-8-1.md
+++ b/releases/_posts/2013-12-19-spark-release-0-8-1.md
@@ -15,10 +15,10 @@ meta:
 Apache Spark 0.8.1 is a maintenance and performance release for the Scala 2.9 version of Spark. It also adds several new features, such as standalone mode high availability, that will appear in Spark 0.9 but that developers wanted to have in Scala 2.9. Contributions to 0.8.1 came from 41 developers.
 
 ### YARN 2.2 Support
-Support has been added for running Spark on YARN 2.2 and newer. Due to a change in the YARN API between previous versions and 2.2+, this was not supported in Spark 0.8.0. See the <a href="{{site.url}}docs/0.8.1/running-on-yarn.html">YARN documentation</a> for specific instructions on how to build Spark for YARN 2.2+. We've also included a pre-compiled binary for YARN 2.2.
+Support has been added for running Spark on YARN 2.2 and newer. Due to a change in the YARN API between previous versions and 2.2+, this was not supported in Spark 0.8.0. See the <a href="{{site.baseurl}}/docs/0.8.1/running-on-yarn.html">YARN documentation</a> for specific instructions on how to build Spark for YARN 2.2+. We've also included a pre-compiled binary for YARN 2.2.
 
 ### High Availability Mode for Standalone Cluster Manager
-The standalone cluster manager now has a high availability (H/A) mode which can tolerate master failures. This is particularly useful for long-running applications such as streaming jobs and the shark server, where the scheduler master previously represented a single point of failure. Instructions for deploying H/A mode are included <a href="{{site.url}}docs/0.8.1/spark-standalone.html#high-availability">in the documentation</a>. The current implementation uses Zookeeper for coordination.
+The standalone cluster manager now has a high availability (H/A) mode which can tolerate master failures. This is particularly useful for long-running applications such as streaming jobs and the shark server, where the scheduler master previously represented a single point of failure. Instructions for deploying H/A mode are included <a href="{{site.baseurl}}/docs/0.8.1/spark-standalone.html#high-availability">in the documentation</a>. The current implementation uses Zookeeper for coordination.
 
 ### Performance Optimizations
 This release adds several performance optimizations:

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-02-02-spark-release-0-9-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-02-02-spark-release-0-9-0.md b/releases/_posts/2014-02-02-spark-release-0-9-0.md
index edcce3a..7f9e107 100644
--- a/releases/_posts/2014-02-02-spark-release-0-9-0.md
+++ b/releases/_posts/2014-02-02-spark-release-0-9-0.md
@@ -11,7 +11,7 @@ meta:
   _wpas_done_all: '1'
 ---
 
-Spark 0.9.0 is a major release that adds significant new features. It updates Spark to Scala 2.10, simplifies high availability, and updates numerous components of the project. This release includes a first version of [GraphX]({{site.url}}graphx/), a powerful new framework for graph processing that comes with a library of standard algorithms. In addition, [Spark Streaming]({{site.url}}streaming/) is now out of alpha, and includes significant optimizations and simplified high availability deployment.
+Spark 0.9.0 is a major release that adds significant new features. It updates Spark to Scala 2.10, simplifies high availability, and updates numerous components of the project. This release includes a first version of [GraphX]({{site.baseurl}}/graphx/), a powerful new framework for graph processing that comes with a library of standard algorithms. In addition, [Spark Streaming]({{site.baseurl}}/streaming/) is now out of alpha, and includes significant optimizations and simplified high availability deployment.
 
 You can download Spark 0.9.0 as either a
 <a href="http://d3kbcqa49mib13.cloudfront.net/spark-0.9.0-incubating.tgz" onClick="trackOutboundLink(this, 'Release Download Links', 'cloudfront_spark-0.9.0-incubating.tgz'); return false;">source package</a>
@@ -27,16 +27,16 @@ Spark now runs on Scala 2.10, letting users benefit from the language and librar
 
 ### Configuration System
 
-The new [SparkConf]({{site.url}}docs/latest/api/core/index.html#org.apache.spark.SparkConf) class is now the preferred way to configure advanced settings on your SparkContext, though the previous Java system property method still works. SparkConf is especially useful in tests to make sure properties don’t stay set across tests.
+The new [SparkConf]({{site.baseurl}}/docs/latest/api/core/index.html#org.apache.spark.SparkConf) class is now the preferred way to configure advanced settings on your SparkContext, though the previous Java system property method still works. SparkConf is especially useful in tests to make sure properties don’t stay set across tests.
 
 ### Spark Streaming Improvements
 
 Spark Streaming is now out of alpha, and comes with simplified high availability and several optimizations.
 
-* When running on a Spark standalone cluster with the [standalone cluster high availability mode]({{site.url}}docs/0.9.0/spark-standalone.html#high-availability), you can submit a Spark Streaming driver application to the cluster and have it automatically recovered if either the driver or the cluster master crashes.
+* When running on a Spark standalone cluster with the [standalone cluster high availability mode]({{site.baseurl}}/docs/0.9.0/spark-standalone.html#high-availability), you can submit a Spark Streaming driver application to the cluster and have it automatically recovered if either the driver or the cluster master crashes.
 * Windowed operators have been sped up by 30-50%.
 * Spark Streaming’s input source plugins (e.g. for Twitter, Kafka and Flume) are now separate Maven modules, making it easier to pull in only the dependencies you need.
-* A new [StreamingListener]({{site.url}}docs/0.9.0/api/streaming/index.html#org.apache.spark.streaming.scheduler.StreamingListener) interface has been added for monitoring statistics about the streaming computation.
+* A new [StreamingListener]({{site.baseurl}}/docs/0.9.0/api/streaming/index.html#org.apache.spark.streaming.scheduler.StreamingListener) interface has been added for monitoring statistics about the streaming computation.
 * A few aspects of the API have been improved:
    * `DStream` and `PairDStream` classes have been moved from `org.apache.spark.streaming` to `org.apache.spark.streaming.dstream` to keep it consistent with `org.apache.spark.rdd.RDD`.
    * `DStream.foreach` has been renamed to `foreachRDD` to make it explicit that it works for every RDD, not every element
@@ -45,22 +45,22 @@ Spark Streaming is now out of alpha, and comes with simplified high availability
 
 ### GraphX Alpha
 
-[GraphX]({{site.url}}graphx/) is a new framework for graph processing that uses recent advances in graph-parallel computation. It lets you build a graph within a Spark program using the standard Spark operators, then process it with new graph operators that are optimized for distributed computation. It includes [basic transformations]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.Graph), a [Pregel API]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.Pregel$) for iterative computation, and a standard library of [graph loaders]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.util.GraphGenerators$) and [analytics algorithms]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.package). By offering these features *within* the Spark engine, GraphX can significantly speed up processing pipelines compared to workflows that use different engines.
+[GraphX]({{site.baseurl}}/graphx/) is a new framework for graph processing that uses recent advances in graph-parallel computation. It lets you build a graph within a Spark program using the standard Spark operators, then process it with new graph operators that are optimized for distributed computation. It includes [basic transformations]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.Graph), a [Pregel API]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.Pregel$) for iterative computation, and a standard library of [graph loaders]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.util.GraphGenerators$) and [analytics algorithms]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.package). By offering these features *within* the Spark engine, GraphX can significantly speed up processing pipelines compared to workflows that use different engines.
 
 GraphX features in this release include:
 
 * Building graphs from arbitrary Spark RDDs
 * Basic operations to transform graphs or extract subgraphs
 * An optimized Pregel API that takes advantage of graph partitioning and indexing
-* Standard algorithms including [PageRank]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.PageRank$), [connected components]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.ConnectedComponents$), [strongly connected components]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.StronglyConnectedComponents$), [SVD++]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.SVDPlusPlus$), and [triangle counting]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.TriangleCount$)
+* Standard algorithms including [PageRank]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.PageRank$), [connected components]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.ConnectedComponents$), [strongly connected components]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.StronglyConnectedComponents$), [SVD++]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.SVDPlusPlus$), and [triangle counting]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.TriangleCount$)
 * Interactive use from the Spark shell
 
 GraphX is still marked as alpha in this first release, but we recommend that new users use it instead of the more limited Bagel API.
 
 ### MLlib Improvements
 
-* Spark’s machine learning library (MLlib) is now [available in Python]({{site.url}}docs/0.9.0/mllib-guide.html#using-mllib-in-python), where it operates on NumPy data (currently requires Python 2.7 and NumPy 1.7)
-* A new algorithm has been added for [Naive Bayes classification]({{site.url}}docs/0.9.0/api/mllib/index.html#org.apache.spark.mllib.classification.NaiveBayes)
+* Spark’s machine learning library (MLlib) is now [available in Python]({{site.baseurl}}/docs/0.9.0/mllib-guide.html#using-mllib-in-python), where it operates on NumPy data (currently requires Python 2.7 and NumPy 1.7)
+* A new algorithm has been added for [Naive Bayes classification]({{site.baseurl}}/docs/0.9.0/api/mllib/index.html#org.apache.spark.mllib.classification.NaiveBayes)
 * Alternating Least Squares models can now be used to predict ratings for multiple items in parallel
 * MLlib’s documentation was expanded to include more examples in Scala, Java and Python
 
@@ -77,7 +77,7 @@ GraphX is still marked as alpha in this first release, but we recommend for new
 
 ### Core Engine
 
-* Spark’s standalone mode now supports submitting a driver program to run on the cluster instead of on the external machine submitting it. You can access this functionality through the [org.apache.spark.deploy.Client]({{site.url}}docs/0.9.0/spark-standalone.html#launching-applications-inside-the-cluster) class.
+* Spark’s standalone mode now supports submitting a driver program to run on the cluster instead of on the external machine submitting it. You can access this functionality through the [org.apache.spark.deploy.Client]({{site.baseurl}}/docs/0.9.0/spark-standalone.html#launching-applications-inside-the-cluster) class.
 * Large reduce operations now automatically spill data to disk if it does not fit in memory.
 * Users of standalone mode can now limit how many cores an application will use by default if the application writer didn’t configure its size. Previously, such applications took all available cores on the cluster.
 * `spark-shell` now supports the `-i` option to run a script on startup.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-05-30-spark-release-1-0-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-05-30-spark-release-1-0-0.md b/releases/_posts/2014-05-30-spark-release-1-0-0.md
index acb6b3e..22d59f6 100644
--- a/releases/_posts/2014-05-30-spark-release-1-0-0.md
+++ b/releases/_posts/2014-05-30-spark-release-1-0-0.md
@@ -11,7 +11,7 @@ meta:
   _wpas_done_all: '1'
 ---
 
-Spark 1.0.0 is a major release marking the start of the 1.X line. This release brings both a variety of new features and strong API compatibility guarantees throughout the 1.X line. Spark 1.0 adds a new major component, [Spark SQL]({{site.url}}docs/latest/sql-programming-guide.html), for loading and manipulating structured data in Spark. It includes major extensions to all of Spark’s existing standard libraries ([ML]({{site.url}}docs/latest/mllib-guide.html), [Streaming]({{site.url}}docs/latest/streaming-programming-guide.html), and [GraphX]({{site.url}}docs/latest/graphx-programming-guide.html)) while also enhancing language support in Java and Python. Finally, Spark 1.0 brings operational improvements including full support for the Hadoop/YARN security model and a unified submission process for all supported cluster managers.
+Spark 1.0.0 is a major release marking the start of the 1.X line. This release brings both a variety of new features and strong API compatibility guarantees throughout the 1.X line. Spark 1.0 adds a new major component, [Spark SQL]({{site.baseurl}}/docs/latest/sql-programming-guide.html), for loading and manipulating structured data in Spark. It includes major extensions to all of Spark’s existing standard libraries ([ML]({{site.baseurl}}/docs/latest/mllib-guide.html), [Streaming]({{site.baseurl}}/docs/latest/streaming-programming-guide.html), and [GraphX]({{site.baseurl}}/docs/latest/graphx-programming-guide.html)) while also enhancing language support in Java and Python. Finally, Spark 1.0 brings operational improvements including full support for the Hadoop/YARN security model and a unified submission process for all supported cluster managers.
 
 You can download Spark 1.0.0 as either a 
 <a href="http://d3kbcqa49mib13.cloudfront.net/spark-1.0.0.tgz" onClick="trackOutboundLink(this, 'Release Download Links', 'cloudfront_spark-1.0.0.tgz'); return false;">source package</a>
@@ -28,13 +28,13 @@ Spark 1.0.0 is the first release in the 1.X major line. Spark is guaranteeing st
 For users running in secured Hadoop environments, Spark now integrates with the Hadoop/YARN security model. Spark will authenticate job submission, securely transfer HDFS credentials, and authenticate communication between components.
 
 ### Operational and Packaging Improvements
-This release significantly simplifies the process of bundling and submitting a Spark application. A new [spark-submit tool]({{site.url}}docs/latest/submitting-applications.html) allows users to submit an application to any Spark cluster, including local clusters, Mesos, or YARN, through a common process. The documentation for bundling Spark applications has been substantially expanded. We’ve also added a history server for Spark’s web UI, allowing users to view Spark application data after individual applications are finished.
+This release significantly simplifies the process of bundling and submitting a Spark application. A new [spark-submit tool]({{site.baseurl}}/docs/latest/submitting-applications.html) allows users to submit an application to any Spark cluster, including local clusters, Mesos, or YARN, through a common process. The documentation for bundling Spark applications has been substantially expanded. We’ve also added a history server for Spark’s web UI, allowing users to view Spark application data after individual applications are finished.
 
 ### Spark SQL
-This release introduces [Spark SQL]({{site.url}}docs/latest/sql-programming-guide.html) as a new alpha component. Spark SQL provides support for loading and manipulating structured data in Spark, either from external structured data sources (currently Hive and Parquet) or by adding a schema to an existing RDD. Spark SQL’s API interoperates with the RDD data model, allowing users to interleave Spark code with SQL statements. Under the hood, Spark SQL uses the Catalyst optimizer to choose an efficient execution plan, and can automatically push predicates into storage formats like Parquet. In future releases, Spark SQL will also provide a common API to other storage systems.
+This release introduces [Spark SQL]({{site.baseurl}}/docs/latest/sql-programming-guide.html) as a new alpha component. Spark SQL provides support for loading and manipulating structured data in Spark, either from external structured data sources (currently Hive and Parquet) or by adding a schema to an existing RDD. Spark SQL’s API interoperates with the RDD data model, allowing users to interleave Spark code with SQL statements. Under the hood, Spark SQL uses the Catalyst optimizer to choose an efficient execution plan, and can automatically push predicates into storage formats like Parquet. In future releases, Spark SQL will also provide a common API to other storage systems.
 
 ### MLlib Improvements
-In 1.0.0, Spark’s MLlib adds support for sparse feature vectors in Scala, Java, and Python. It takes advantage of sparsity in both storage and computation in linear methods, k-means, and naive Bayes. In addition, this release adds several new algorithms: scalable decision trees for both classification and regression, distributed matrix algorithms including SVD and PCA, model evaluation functions, and L-BFGS as an optimization primitive. The [MLlib programming guide]({{site.url}}docs/latest/mllib-guide.html) and code examples have also been greatly expanded.
+In 1.0.0, Spark’s MLlib adds support for sparse feature vectors in Scala, Java, and Python. It takes advantage of sparsity in both storage and computation in linear methods, k-means, and naive Bayes. In addition, this release adds several new algorithms: scalable decision trees for both classification and regression, distributed matrix algorithms including SVD and PCA, model evaluation functions, and L-BFGS as an optimization primitive. The [MLlib programming guide]({{site.baseurl}}/docs/latest/mllib-guide.html) and code examples have also been greatly expanded.
 
 ### GraphX and Streaming Improvements
 In addition to usability and maintainability improvements, GraphX in Spark 1.0 brings substantial performance boosts in graph loading, edge reversal, and neighborhood computation. These operations now require less communication and produce simpler RDD graphs. Spark\u2019s Streaming module has added performance optimizations for stateful stream transformations, along with improved Flume support, and automated state cleanup for long running jobs.
@@ -43,7 +43,7 @@ In addition to usability and maintainability improvements, GraphX in Spark 1.0 b
 Spark 1.0 adds support for Java 8 [new lambda syntax](http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html) in its Java bindings. Java 8 supports a concise syntax for writing anonymous functions, similar to the closure syntax in Scala and Python. This change requires small changes for users of the current Java API, which are noted in the documentation. Spark’s Python API has been extended to support several new functions. We’ve also included several stability improvements in the Python API, particularly for large datasets. PySpark now supports running on YARN as well.
 
 ### Documentation
-Spark's [programming guide]({{site.url}}docs/latest/programming-guide.html) has been significantly expanded to centrally cover all supported languages and discuss more operators and aspects of the development life cycle. The [MLlib guide]({{site.url}}docs/latest/mllib-guide.html) has also been expanded with significantly more detail and examples for each algorithm, while documents on configuration, YARN and Mesos have also been revamped.
+Spark's [programming guide]({{site.baseurl}}/docs/latest/programming-guide.html) has been significantly expanded to centrally cover all supported languages and discuss more operators and aspects of the development life cycle. The [MLlib guide]({{site.baseurl}}/docs/latest/mllib-guide.html) has also been expanded with significantly more detail and examples for each algorithm, while documents on configuration, YARN and Mesos have also been revamped.
 
 ### Smaller Changes
 - PySpark now works with more Python versions than before -- Python 2.6+ instead of 2.7+, and NumPy 1.4+ instead of 1.7+.
@@ -52,12 +52,12 @@ Spark's [programming guide]({{site.url}}docs/latest/programming-guide.html) has
 - Support for off-heap storage in Tachyon has been added via a special build target.
 - Datasets persisted with `DISK_ONLY` now write directly to disk, significantly improving memory usage for large datasets.
 - Intermediate state created during a Spark job is now garbage collected when the corresponding RDDs become unreferenced, improving performance.
-- Spark now includes a [Javadoc version]({{site.url}}docs/latest/api/java/index.html) of all its API docs and a [unified Scaladoc]({{site.url}}docs/latest/api/scala/index.html) for all modules.
+- Spark now includes a [Javadoc version]({{site.baseurl}}/docs/latest/api/java/index.html) of all its API docs and a [unified Scaladoc]({{site.baseurl}}/docs/latest/api/scala/index.html) for all modules.
 - A new SparkContext.wholeTextFiles method lets you operate on small text files as individual records.
 
 
 ### Migrating to Spark 1.0
-While most of the Spark API remains the same as in 0.x versions, a few changes have been made for long-term flexibility, especially in the Java API (to support Java 8 lambdas). The documentation includes [migration information]({{site.url}}docs/latest/programming-guide.html#migrating-from-pre-10-versions-of-spark) to upgrade your applications.
+While most of the Spark API remains the same as in 0.x versions, a few changes have been made for long-term flexibility, especially in the Java API (to support Java 8 lambdas). The documentation includes [migration information]({{site.baseurl}}/docs/latest/programming-guide.html#migrating-from-pre-10-versions-of-spark) to upgrade your applications.
 
 ### Contributors
 The following developers contributed to this release:

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-09-11-spark-release-1-1-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-09-11-spark-release-1-1-0.md b/releases/_posts/2014-09-11-spark-release-1-1-0.md
index f4878a6..b12a727 100644
--- a/releases/_posts/2014-09-11-spark-release-1-1-0.md
+++ b/releases/_posts/2014-09-11-spark-release-1-1-0.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.1.0 is the first minor release on the 1.X line. This release brings operational and performance improvements in Spark core along with significant extensions to Spark’s newest libraries: MLlib and Spark SQL. It also builds out Spark’s Python support and adds new components to the Spark Streaming module. Spark 1.1 represents the work of 171 contributors, the most to ever contribute to a Spark release!
 
-To download Spark 1.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Performance and Usability Improvements
 Across the board, Spark 1.1 adds features for improved stability and performance, particularly for large-scale workloads. Spark now performs [disk spilling for skewed blocks](https://issues.apache.org/jira/browse/SPARK-1777) during cache operations, guarding against memory overflows if a single RDD partition is large. Disk spilling during aggregations, introduced in Spark 1.0, has been [ported to PySpark](https://issues.apache.org/jira/browse/SPARK-2538). This release introduces a [new shuffle implementation](https://issues.apache.org/jira/browse/SPARK-2045) optimized for very large scale shuffles. This “sort-based shuffle” will become the default in the next release, and is now available to users. For jobs with large numbers of reducers, we recommend turning this on. This release also adds several usability improvements for monitoring the performance of long running or complex jobs. Among the changes are better [named accumulators](https://issues.apache.org/jira/browse/SPARK-2380) that display in Spark’s UI, [dynamic updating of metrics](https://issues.apache.org/jira/browse/SPARK-2099) for in-progress tasks, and [reporting of input metrics](https://issues.apache.org/jira/browse/SPARK-1683) for tasks that read input data.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-11-26-spark-release-1-1-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-11-26-spark-release-1-1-1.md b/releases/_posts/2014-11-26-spark-release-1-1-1.md
index 4153942..ab067ee 100644
--- a/releases/_posts/2014-11-26-spark-release-1-1-1.md
+++ b/releases/_posts/2014-11-26-spark-release-1-1-1.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.1.1 is a maintenance release with bug fixes. This release is based on the [branch-1.1](https://github.com/apache/spark/tree/branch-1.1) maintenance branch of Spark. We recommend all 1.1.0 users to upgrade to this stable release. Contributions to this release came from 55 developers.
 
-To download Spark 1.1.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.1.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.1.1 contains bug fixes in several components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](http://s.apache.org/z9h) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-12-18-spark-release-1-2-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-12-18-spark-release-1-2-0.md b/releases/_posts/2014-12-18-spark-release-1-2-0.md
index d9dab5c..bb9a01c 100644
--- a/releases/_posts/2014-12-18-spark-release-1-2-0.md
+++ b/releases/_posts/2014-12-18-spark-release-1-2-0.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.2.0 is the third release on the 1.X line. This release brings performance and usability improvements in Spark’s core engine, a major new API for MLlib, expanded ML support in Python, a fully H/A mode in Spark Streaming, and much more. GraphX has seen major performance and API improvements and graduates from an alpha component. Spark 1.2 represents the work of 172 contributors from more than 60 institutions in more than 1000 individual patches.
 
-To download Spark 1.2 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.2 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Spark Core
 In 1.2 Spark core upgrades two major subsystems to improve the performance and stability of very large scale shuffles. The first is Spark’s communication manager used during bulk transfers, which upgrades to a [netty-based implementation](https://issues.apache.org/jira/browse/SPARK-2468). The second is Spark’s shuffle mechanism, which upgrades to the [“sort based” shuffle initially released in Spark 1.1](https://issues.apache.org/jira/browse/SPARK-3280). These both improve the performance and stability of very large scale shuffles. Spark also adds an [elastic scaling mechanism](https://issues.apache.org/jira/browse/SPARK-3174) designed to improve cluster utilization during long running ETL-style jobs. This is currently supported on YARN and will make its way to other cluster managers in future versions. Finally, Spark 1.2 adds support for Scala 2.11. For instructions on building for Scala 2.11 see the [build documentation](/docs/1.2.0/building-spark.html#building-for-scala-211).

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-02-09-spark-release-1-2-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-02-09-spark-release-1-2-1.md b/releases/_posts/2015-02-09-spark-release-1-2-1.md
index 8bd5aef..3f5c579 100644
--- a/releases/_posts/2015-02-09-spark-release-1-2-1.md
+++ b/releases/_posts/2015-02-09-spark-release-1-2-1.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.2.1 is a maintenance release containing stability fixes. This release is based on the [branch-1.2](https://github.com/apache/spark/tree/branch-1.2) maintenance branch of Spark. We recommend all 1.2.0 users to upgrade to this stable release. Contributions to this release came from 69 developers.
 
-To download Spark 1.2.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.2.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.2.1 contains bug fixes in several components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](http://s.apache.org/Mpn) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-03-13-spark-release-1-3-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-03-13-spark-release-1-3-0.md b/releases/_posts/2015-03-13-spark-release-1-3-0.md
index bc9c4db..03230fa 100644
--- a/releases/_posts/2015-03-13-spark-release-1-3-0.md
+++ b/releases/_posts/2015-03-13-spark-release-1-3-0.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.3.0 is the fourth release on the 1.X line. This release brings a new DataFrame API alongside the graduation of Spark SQL from an alpha project. It also brings usability improvements in Spark’s core engine and expansion of MLlib and Spark Streaming. Spark 1.3 represents the work of 174 contributors from more than 60 institutions in more than 1000 individual patches.
 
-To download Spark 1.3 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.3 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Spark Core
 Spark 1.3 sees a handful of usability improvements in the core engine. The core API now supports [multi level aggregation trees](https://issues.apache.org/jira/browse/SPARK-5430) to help speed up expensive reduce operations. [Improved error reporting](https://issues.apache.org/jira/browse/SPARK-5063) has been added for certain gotcha operations. Spark's Jetty dependency is [now shaded](https://issues.apache.org/jira/browse/SPARK-3996) to help avoid conflicts with user programs. Spark now supports [SSL encryption](https://issues.apache.org/jira/browse/SPARK-3883) for some communication endpoints. Finally, realtime [GC metrics](https://issues.apache.org/jira/browse/SPARK-3428) and [record counts](https://issues.apache.org/jira/browse/SPARK-4874) have been added to the UI.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-04-17-spark-release-1-2-2.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-04-17-spark-release-1-2-2.md b/releases/_posts/2015-04-17-spark-release-1-2-2.md
index e118849..2bc3974 100644
--- a/releases/_posts/2015-04-17-spark-release-1-2-2.md
+++ b/releases/_posts/2015-04-17-spark-release-1-2-2.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.2.2 is a maintenance release containing stability fixes. This release is based on the [branch-1.2](https://github.com/apache/spark/tree/branch-1.2) maintenance branch of Spark. We recommend all 1.2.1 users to upgrade to this stable release. Contributions to this release came from 39 developers.
 
-To download Spark 1.2.2 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.2.2 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.2.2 contains bug fixes in several components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%201.2.2%20ORDER%20BY%20priority%2C%20component) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-04-17-spark-release-1-3-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-04-17-spark-release-1-3-1.md b/releases/_posts/2015-04-17-spark-release-1-3-1.md
index dc7c5d4..40ce957 100644
--- a/releases/_posts/2015-04-17-spark-release-1-3-1.md
+++ b/releases/_posts/2015-04-17-spark-release-1-3-1.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.3.1 is a maintenance release containing stability fixes. This release is based on the [branch-1.3](https://github.com/apache/spark/tree/branch-1.3) maintenance branch of Spark. We recommend all 1.3.0 users to upgrade to this stable release. Contributions to this release came from 60 developers.
 
-To download Spark 1.3.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.3.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.3.1 contains several bug fixes in Spark SQL and assorted fixes in other components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%201.3.1%20ORDER%20BY%20priority%2C%20component) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-06-11-spark-release-1-4-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-06-11-spark-release-1-4-0.md b/releases/_posts/2015-06-11-spark-release-1-4-0.md
index b7c315a..e02310f 100644
--- a/releases/_posts/2015-06-11-spark-release-1-4-0.md
+++ b/releases/_posts/2015-06-11-spark-release-1-4-0.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.4.0 is the fifth release on the 1.X line. This release brings an R API to Spark. It also brings usability improvements in Spark’s core engine and expansion of MLlib and Spark Streaming. Spark 1.4 represents the work of more than 210 contributors from more than 70 institutions in more than 1000 individual patches.
 
-To download Spark 1.4 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.4 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### SparkR
 Spark 1.4 is the first release to package SparkR, an R binding for Spark based

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-07-15-spark-release-1-4-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-07-15-spark-release-1-4-1.md b/releases/_posts/2015-07-15-spark-release-1-4-1.md
index 58b53b8..7664355 100644
--- a/releases/_posts/2015-07-15-spark-release-1-4-1.md
+++ b/releases/_posts/2015-07-15-spark-release-1-4-1.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.4.1 is a maintenance release containing stability fixes. This release is based on the [branch-1.4](https://github.com/apache/spark/tree/branch-1.4) maintenance branch of Spark. We recommend all 1.4.0 users to upgrade to this stable release. 85 developers contributed to this release.
 
-To download Spark 1.4.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.4.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.4.1 contains several bug fixes in Spark's DataFrame and data source support and assorted fixes in other components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%201.4.1%20ORDER%20BY%20priority%2C%20component) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-09-09-spark-release-1-5-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-09-09-spark-release-1-5-0.md b/releases/_posts/2015-09-09-spark-release-1-5-0.md
index b527f7f..70d3368 100644
--- a/releases/_posts/2015-09-09-spark-release-1-5-0.md
+++ b/releases/_posts/2015-09-09-spark-release-1-5-0.md
@@ -11,7 +11,7 @@ meta:
   _wpas_done_all: '1'
 ---
 
-Spark 1.5.0 is the sixth release on the 1.x line. This release represents 1400+ patches from 230+ contributors and 80+ institutions. To download Spark 1.5.0 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+Spark 1.5.0 is the sixth release on the 1.x line. This release represents 1400+ patches from 230+ contributors and 80+ institutions. To download Spark 1.5.0 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 You can consult JIRA for the [detailed changes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315420&version=12332078). We have curated a list of high level changes here:
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md
----------------------------------------------------------------------
diff --git a/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md b/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md
index 5889a35..3236467 100644
--- a/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md
+++ b/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md
@@ -20,6 +20,6 @@ This screencast marks the beginning of a series of hands-on screencasts we will
 
 <div class="video-container video-square shadow"><iframe width="755" height="705" src="//www.youtube.com/embed/bWorBGOFBWY?autohide=0&showinfo=0&list=PL-x35fyliRwhKT-NpTKprPW1bkbdDcTTW" frameborder="0" allowfullscreen></iframe></div>
 
-Check out the next spark screencast in the series, <a href="{{site.url}}screencasts/2-spark-documentation-overview.html">Spark Screencast #2 - Overview of Spark Documentation</a>.
+Check out the next spark screencast in the series, <a href="{{site.baseurl}}/screencasts/2-spark-documentation-overview.html">Spark Screencast #2 - Overview of Spark Documentation</a>.
 
-For more information and links to other Spark screencasts, check out the <a href="{{site.url}}documentation.html">Spark documentation page</a>.
+For more information and links to other Spark screencasts, check out the <a href="{{site.baseurl}}/documentation.html">Spark documentation page</a>.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md
----------------------------------------------------------------------
diff --git a/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md b/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md
index af5d281..1fd7b7d 100644
--- a/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md
+++ b/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md
@@ -12,11 +12,11 @@ This is our 2nd Spark screencast. In it, we take a tour of the documentation ava
 
 <div class="video-container video-square shadow"><iframe width="755" height="705" src="//www.youtube.com/embed/Dbqe_rv-NJQ?autohide=0&showinfo=0&list=PL-x35fyliRwhKT-NpTKprPW1bkbdDcTTW" frameborder="0" allowfullscreen></iframe></div>
 
-Check out the next spark screencast in the series, <a href="{{site.url}}screencasts/3-transformations-and-caching.html">Spark Screencast #3 - Transformations and Caching</a>.
+Check out the next spark screencast in the series, <a href="{{site.baseurl}}/screencasts/3-transformations-and-caching.html">Spark Screencast #3 - Transformations and Caching</a>.
 
 
 And here are links to the documentation shown in the video:
 <ul>
-  <li><a href="{{site.url}}documentation.html">Spark documentation page</a></li>
+  <li><a href="{{site.baseurl}}/documentation.html">Spark documentation page</a></li>
   <li><a href="http://ampcamp.berkeley.edu/big-data-mini-course-home">Amp Camp Mini Course</a></li>
 </ul>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/screencasts/_posts/2013-04-16-3-transformations-and-caching.md
----------------------------------------------------------------------
diff --git a/screencasts/_posts/2013-04-16-3-transformations-and-caching.md b/screencasts/_posts/2013-04-16-3-transformations-and-caching.md
index bb8e367..7a4cb36 100644
--- a/screencasts/_posts/2013-04-16-3-transformations-and-caching.md
+++ b/screencasts/_posts/2013-04-16-3-transformations-and-caching.md
@@ -12,6 +12,6 @@ In this third Spark screencast, we demonstrate more advanced use of RDD actions
 
 <div class="video-container video-square shadow"><iframe width="755" height="705" src="//www.youtube.com/embed/TtvxKzO9jXE?autohide=0&showinfo=0&list=PL-x35fyliRwhKT-NpTKprPW1bkbdDcTTW" frameborder="0" allowfullscreen></iframe></div>
 
-Check out the next spark screencast in the series, <a href="{{site.url}}screencasts/4-a-standalone-job-in-spark.html">Spark Screencast #4 - A Standalone Job in Scala</a>.
+Check out the next spark screencast in the series, <a href="{{site.baseurl}}/screencasts/4-a-standalone-job-in-spark.html">Spark Screencast #4 - A Standalone Job in Scala</a>.
 
-For more information and links to other Spark screencasts, check out the <a href="{{site.url}}documentation.html">Spark documentation page</a>.
+For more information and links to other Spark screencasts, check out the <a href="{{site.baseurl}}/documentation.html">Spark documentation page</a>.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md
----------------------------------------------------------------------
diff --git a/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md b/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md
index 2bd3c2c..96fb8dc 100644
--- a/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md
+++ b/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md
@@ -13,4 +13,4 @@ In this Spark screencast, we create a standalone Apache Spark job in Scala. In t
 
 <div class="video-container video-16x9 shadow"><iframe width="755" height="425" src="//www.youtube.com/embed/GaBn-YjlR8Q?autohide=0&showinfo=0&list=PL-x35fyliRwhKT-NpTKprPW1bkbdDcTTW" frameborder="0" allowfullscreen></iframe></div>
 
-For more information and links to other Spark screencasts, check out the <a href="{{site.url}}documentation.html">Spark documentation page</a>.
+For more information and links to other Spark screencasts, check out the <a href="{{site.baseurl}}/documentation.html">Spark documentation page</a>.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/documentation.html
----------------------------------------------------------------------
diff --git a/site/documentation.html b/site/documentation.html
index 60c1b59..2852976 100644
--- a/site/documentation.html
+++ b/site/documentation.html
@@ -255,12 +255,13 @@
 </ul>
 
 <h4><a name="meetup-videos"></a>Meetup Talk Videos</h4>
-<p>In addition to the videos listed below, you can also view <a href="http://www.meetup.com/spark-users/files/">all slides from Bay Area meetups here</a>.
+<p>In addition to the videos listed below, you can also view <a href="http://www.meetup.com/spark-users/files/">all slides from Bay Area meetups here</a>.</p>
 <style type="text/css">
   .video-meta-info {
     font-size: 0.95em;
   }
-</style></p>
+</style>
+
 <ul>
   <li><a href="http://www.youtube.com/watch?v=NUQ-8to2XAk&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Spark 1.0 and Beyond</a> (<a href="http://files.meetup.com/3138542/Spark%201.0%20Meetup.ppt">slides</a>) <span class="video-meta-info">by Patrick Wendell, at Cisco in San Jose, 2014-04-23</span></li>
 
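The documentation.html hunk above closes the paragraph before the <style>
element begins: <style> is not phrasing content, so it may not sit inside a
<p>, and the stray </p> after the style block is removed. A minimal sketch of
the corrected structure (element contents abbreviated):

    <p>In addition to the videos listed below, ...</p>
    <style type="text/css">
      .video-meta-info { font-size: 0.95em; }
    </style>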

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/index.html
----------------------------------------------------------------------
diff --git a/site/news/index.html b/site/news/index.html
index 5d9d27d..bfc8a8e 100644
--- a/site/news/index.html
+++ b/site/news/index.html
@@ -390,22 +390,22 @@ With this release the Spark community continues to grow, with contributions from
 
 <article class="hentry">
     <header class="entry-header">
-      <h3 class="entry-title"><a href="/news/one-month-to-spark-summit-2015.html">One month to Spark Summit 2015 in San Francisco</a></h3>
-      <div class="entry-date">May 14, 2015</div>
+      <h3 class="entry-title"><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a></h3>
+      <div class="entry-date">May 15, 2015</div>
     </header>
-    <div class="entry-content"><p>There is one month left until <a href="https://spark-summit.org/2015/">Spark Summit 2015</a>, which
-will be held in San Francisco on June 15th to 17th.
-The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presentations</a> from over 50 organizations using Spark, focused on use cases and ongoing development.</p>
-
+    <div class="entry-content"><p>Abstract submissions are now open for the first ever <a href="https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/">Spark Summit Europe</a>. The event will take place on October 27th to 29th in Amsterdam. Submissions are welcome across a variety of Spark related topics, including use cases and ongoing development.</p>
 </div>
   </article>
 
 <article class="hentry">
     <header class="entry-header">
-      <h3 class="entry-title"><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a></h3>
-      <div class="entry-date">May 14, 2015</div>
+      <h3 class="entry-title"><a href="/news/one-month-to-spark-summit-2015.html">One month to Spark Summit 2015 in San Francisco</a></h3>
+      <div class="entry-date">May 15, 2015</div>
     </header>
-    <div class="entry-content"><p>Abstract submissions are now open for the first ever <a href="https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/">Spark Summit Europe</a>. The event will take place on October 27th to 29th in Amsterdam. Submissions are welcome across a variety of Spark related topics, including use cases and ongoing development.</p>
+    <div class="entry-content"><p>There is one month left until <a href="https://spark-summit.org/2015/">Spark Summit 2015</a>, which
+will be held in San Francisco on June 15th to 17th.
+The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presentations</a> from over 50 organizations using Spark, focused on use cases and ongoing development.</p>
+
 </div>
   </article>
 
@@ -414,7 +414,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <h3 class="entry-title"><a href="/news/spark-summit-east-2015-videos-posted.html">Spark Summit East 2015 Videos Posted</a></h3>
       <div class="entry-date">April 20, 2015</div>
     </header>
-    <div class="entry-content"><p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top. </p>
+    <div class="entry-content"><p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top.</p>
 
 </div>
   </article>
@@ -424,7 +424,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <h3 class="entry-title"><a href="/news/spark-1-2-2-released.html">Spark 1.2.2 and 1.3.1 released</a></h3>
       <div class="entry-date">April 17, 2015</div>
     </header>
-    <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers. </p>
+    <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers.</p>
 
 </div>
   </article>
@@ -536,7 +536,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">
 Spark 0.9.2</a>! Apache Spark 0.9.2 is a maintenance release with bug fixes. We recommend all 0.9.x users to upgrade to this stable release. 
-Contributions to this release came from 28 developers. </p>
+Contributions to this release came from 28 developers.</p>
 
 </div>
   </article>
@@ -607,7 +607,7 @@ about the latest happenings in Spark.</p>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">
 Spark 0.9.1</a>! Apache Spark 0.9.1 is a maintenance release with bug fixes, performance improvements, better stability with YARN and 
 improved parity of the Scala and Python API. We recommend all 0.9.0 users to upgrade to this stable release. 
-Contributions to this release came from 37 developers. </p>
+Contributions to this release came from 37 developers.</p>
 
 </div>
   </article>
@@ -780,11 +780,6 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
     </header>
     <div class="entry-content"><p>We have released the first two screencasts in a series of short hands-on video training courses we will be publishing to help new users get up and running with Spark in minutes.</p>
 
-<p>The first Spark screencast is called <a href="/screencasts/1-first-steps-with-spark.html">First Steps With Spark</a> and walks you through downloading and building Spark, as well as using the Spark shell, all in less than 10 minutes!</p>
-
-<p>The second screencast is a 2 minute <a href="/screencasts/2-spark-documentation-overview.html">overview of the Spark documentation</a>.</p>
-
-<p>We hope you find these screencasts useful.</p>
 </div>
   </article>
 
@@ -862,7 +857,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
 <li><a href="http://data-informed.com/spark-an-open-source-engine-for-iterative-data-mining/">DataInformed</a> interviewed two Spark users and wrote about their applications in anomaly detection, predictive analytics and data mining.</li>
 </ul>
 
-<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>. </p>
+<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>.</p>
 </div>
   </article>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-0-9-1-released.html
----------------------------------------------------------------------
diff --git a/site/news/spark-0-9-1-released.html b/site/news/spark-0-9-1-released.html
index 24669a9..40152b7 100644
--- a/site/news/spark-0-9-1-released.html
+++ b/site/news/spark-0-9-1-released.html
@@ -189,7 +189,7 @@
 <p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">
 Spark 0.9.1</a>! Apache Spark 0.9.1 is a maintenance release with bug fixes, performance improvements, better stability with YARN and 
 improved parity of the Scala and Python API. We recommend all 0.9.0 users to upgrade to this stable release. 
-Contributions to this release came from 37 developers. </p>
+Contributions to this release came from 37 developers.</p>
 
 <p>Visit the <a href="/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">release notes</a> 
 to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-0-9-2-released.html
----------------------------------------------------------------------
diff --git a/site/news/spark-0-9-2-released.html b/site/news/spark-0-9-2-released.html
index 7c4ee38..70104b4 100644
--- a/site/news/spark-0-9-2-released.html
+++ b/site/news/spark-0-9-2-released.html
@@ -188,7 +188,7 @@
 
 <p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">
 Spark 0.9.2</a>! Apache Spark 0.9.2 is a maintenance release with bug fixes. We recommend all 0.9.x users to upgrade to this stable release. 
-Contributions to this release came from 28 developers. </p>
+Contributions to this release came from 28 developers.</p>
 
 <p>Visit the <a href="/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">release notes</a> 
 to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-1-1-0-released.html
----------------------------------------------------------------------
diff --git a/site/news/spark-1-1-0-released.html b/site/news/spark-1-1-0-released.html
index 55bcdf0..42ae590 100644
--- a/site/news/spark-1-1-0-released.html
+++ b/site/news/spark-1-1-0-released.html
@@ -188,7 +188,7 @@
 
 <p>We are happy to announce the availability of <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 171 developers!</p>
 
-<p>This release brings operational and performance improvements in Spark core including a new implementation of the Spark shuffle designed for very large scale workloads. Spark 1.1 adds significant extensions to the newest Spark modules, MLlib and Spark SQL. Spark SQL introduces a JDBC server, byte code generation for fast expression evaluation, a public types API, JSON support, and other features and optimizations. MLlib introduces a new statistics libary along with several new algorithms and optimizations. Spark 1.1 also builds out Spark’s Python support and adds new components to the Spark Streaming module. </p>
+<p>This release brings operational and performance improvements in Spark core including a new implementation of the Spark shuffle designed for very large scale workloads. Spark 1.1 adds significant extensions to the newest Spark modules, MLlib and Spark SQL. Spark SQL introduces a JDBC server, byte code generation for fast expression evaluation, a public types API, JSON support, and other features and optimizations. MLlib introduces a new statistics libary along with several new algorithms and optimizations. Spark 1.1 also builds out Spark’s Python support and adds new components to the Spark Streaming module.</p>
 
 <p>Visit the <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">release notes</a> to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-1-2-2-released.html
----------------------------------------------------------------------
diff --git a/site/news/spark-1-2-2-released.html b/site/news/spark-1-2-2-released.html
index f03b507..28ca3b1 100644
--- a/site/news/spark-1-2-2-released.html
+++ b/site/news/spark-1-2-2-released.html
@@ -186,7 +186,7 @@
     <h2>Spark 1.2.2 and 1.3.1 released</h2>
 
 
-<p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers. </p>
+<p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers.</p>
 
 <p>To download either release, visit the <a href="/downloads.html">downloads</a> page.</p>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-and-shark-in-the-news.html
----------------------------------------------------------------------
diff --git a/site/news/spark-and-shark-in-the-news.html b/site/news/spark-and-shark-in-the-news.html
index 7c964f7..3dac0cb 100644
--- a/site/news/spark-and-shark-in-the-news.html
+++ b/site/news/spark-and-shark-in-the-news.html
@@ -196,7 +196,7 @@
 <li><a href="http://data-informed.com/spark-an-open-source-engine-for-iterative-data-mining/">DataInformed</a> interviewed two Spark users and wrote about their applications in anomaly detection, predictive analytics and data mining.</li>
 </ul>
 
-<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>. </p>
+<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>.</p>
 
 
 <p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-summit-east-2015-videos-posted.html
----------------------------------------------------------------------
diff --git a/site/news/spark-summit-east-2015-videos-posted.html b/site/news/spark-summit-east-2015-videos-posted.html
index e0cd003..fed7c12 100644
--- a/site/news/spark-summit-east-2015-videos-posted.html
+++ b/site/news/spark-summit-east-2015-videos-posted.html
@@ -186,7 +186,7 @@
     <h2>Spark Summit East 2015 Videos Posted</h2>
 
 
-<p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top. </p>
+<p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top.</p>
 
 <p>If you like what you see, consider joining us at the <a href="http://spark-summit.org/2015/agenda">2015 Spark Summit</a> in San Francisco.</p>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-0-8-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-0-8-0.html b/site/releases/spark-release-0-8-0.html
index 4e0a4f9..5a5dbd5 100644
--- a/site/releases/spark-release-0-8-0.html
+++ b/site/releases/spark-release-0-8-0.html
@@ -210,13 +210,13 @@
 <p>Spark’s internal job scheduler has been refactored and extended to include more sophisticated scheduling policies. In particular, a <a href="http://spark.incubator.apache.org/docs/0.8.0/job-scheduling.html#scheduling-within-an-application">fair scheduler</a> implementation now allows multiple users to share an instance of Spark, which helps users running shorter jobs to achieve good performance, even when longer-running jobs are running in parallel. Support for topology-aware scheduling has been extended, including the ability to take into account rack locality and support for multiple executors on a single machine.</p>
 
 <h3 id="easier-deployment-and-linking">Easier Deployment and Linking</h3>
-<p>User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of <code>spark-core</code> specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a>. </p>
+<p>User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of <code>spark-core</code> specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a>.</p>
 
 <h3 id="expanded-ec2-capabilities">Expanded EC2 Capabilities</h3>
 <p>Spark’s EC2 scripts now support launching in any availability zone. Support has also been added for EC2 instance types which use the newer “HVM” architecture. This includes the cluster compute (cc1/cc2) family of instance types. We’ve also added support for running newer versions of HDFS alongside Spark. Finally, we’ve added the ability to launch clusters with maintenance releases of Spark in addition to launching the newest release.</p>
 
 <h3 id="improved-documentation">Improved Documentation</h3>
-<p>This release adds documentation about cluster hardware provisioning and inter-operation with common Hadoop distributions. Docs are also included to cover the MLlib machine learning functions and new cluster monitoring features. Existing documentation has been updated to reflect changes in building and deploying Spark. </p>
+<p>This release adds documentation about cluster hardware provisioning and inter-operation with common Hadoop distributions. Docs are also included to cover the MLlib machine learning functions and new cluster monitoring features. Existing documentation has been updated to reflect changes in building and deploying Spark.</p>
 
 <h3 id="other-improvements">Other Improvements</h3>
 <ul>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-0-9-1.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-0-9-1.html b/site/releases/spark-release-0-9-1.html
index fbc1d66..89a92d3 100644
--- a/site/releases/spark-release-0-9-1.html
+++ b/site/releases/spark-release-0-9-1.html
@@ -201,9 +201,9 @@
   <li>Fixed hash collision bug in external spilling [<a href="https://issues.apache.org/jira/browse/SPARK-1113">SPARK-1113</a>]</li>
   <li>Fixed conflict with Spark’s log4j for users relying on other logging backends [<a href="https://issues.apache.org/jira/browse/SPARK-1190">SPARK-1190</a>]</li>
   <li>Fixed Graphx missing from Spark assembly jar in maven builds</li>
-  <li>Fixed silent failures due to map output status exceeding Akka frame size [<a href="https://issues.apache.org/jira/browse/SPARK-1244">SPARK-1244</a>] </li>
-  <li>Removed Spark’s unnecessary direct dependency on ASM [<a href="https://issues.apache.org/jira/browse/SPARK-782">SPARK-782</a>] </li>
-  <li>Removed metrics-ganglia from default build due to LGPL license conflict [<a href="https://issues.apache.org/jira/browse/SPARK-1167">SPARK-1167</a>] </li>
+  <li>Fixed silent failures due to map output status exceeding Akka frame size [<a href="https://issues.apache.org/jira/browse/SPARK-1244">SPARK-1244</a>]</li>
+  <li>Removed Spark’s unnecessary direct dependency on ASM [<a href="https://issues.apache.org/jira/browse/SPARK-782">SPARK-782</a>]</li>
+  <li>Removed metrics-ganglia from default build due to LGPL license conflict [<a href="https://issues.apache.org/jira/browse/SPARK-1167">SPARK-1167</a>]</li>
   <li>Fixed bug in distribution tarball not containing spark assembly jar [<a href="https://issues.apache.org/jira/browse/SPARK-1184">SPARK-1184</a>]</li>
   <li>Fixed bug causing infinite NullPointerException failures due to a null in map output locations [<a href="https://issues.apache.org/jira/browse/SPARK-1124">SPARK-1124</a>]</li>
   <li>Fixed bugs in post-job cleanup of scheduler’s data structures</li>
@@ -219,7 +219,7 @@
   <li>Fixed bug making Spark application stall when YARN registration fails [<a href="https://issues.apache.org/jira/browse/SPARK-1032">SPARK-1032</a>]</li>
   <li>Race condition in getting HDFS delegation tokens in yarn-client mode [<a href="https://issues.apache.org/jira/browse/SPARK-1203">SPARK-1203</a>]</li>
   <li>Fixed bug in yarn-client mode not exiting properly [<a href="https://issues.apache.org/jira/browse/SPARK-1049">SPARK-1049</a>]</li>
-  <li>Fixed regression bug in ADD_JAR environment variable not correctly adding custom jars [<a href="https://issues.apache.org/jira/browse/SPARK-1089">SPARK-1089</a>] </li>
+  <li>Fixed regression bug in ADD_JAR environment variable not correctly adding custom jars [<a href="https://issues.apache.org/jira/browse/SPARK-1089">SPARK-1089</a>]</li>
 </ul>
 
 <h3 id="improvements-to-other-deployment-scenarios">Improvements to other deployment scenarios</h3>
@@ -230,19 +230,19 @@
 
 <h3 id="optimizations-to-mllib">Optimizations to MLLib</h3>
 <ul>
-  <li>Optimized memory usage of ALS [<a href="https://issues.apache.org/jira/browse/MLLIB-25">MLLIB-25</a>] </li>
+  <li>Optimized memory usage of ALS [<a href="https://issues.apache.org/jira/browse/MLLIB-25">MLLIB-25</a>]</li>
   <li>Optimized computation of YtY for implicit ALS [<a href="https://issues.apache.org/jira/browse/SPARK-1237">SPARK-1237</a>]</li>
   <li>Support for negative implicit input in ALS [<a href="https://issues.apache.org/jira/browse/MLLIB-22">MLLIB-22</a>]</li>
   <li>Setting of a random seed in ALS [<a href="https://issues.apache.org/jira/browse/SPARK-1238">SPARK-1238</a>]</li>
-  <li>Faster construction of features with intercept [<a href="https://issues.apache.org/jira/browse/SPARK-1260">SPARK-1260</a>] </li>
+  <li>Faster construction of features with intercept [<a href="https://issues.apache.org/jira/browse/SPARK-1260">SPARK-1260</a>]</li>
   <li>Check for intercept and weight in GLM’s addIntercept [<a href="https://issues.apache.org/jira/browse/SPARK-1327">SPARK-1327</a>]</li>
 </ul>
 
 <h3 id="bug-fixes-and-better-api-parity-for-pyspark">Bug fixes and better API parity for PySpark</h3>
 <ul>
   <li>Fixed bug in Python de-pickling [<a href="https://issues.apache.org/jira/browse/SPARK-1135">SPARK-1135</a>]</li>
-  <li>Fixed bug in serialization of strings longer than 64K [<a href="https://issues.apache.org/jira/browse/SPARK-1043">SPARK-1043</a>] </li>
-  <li>Fixed bug that made jobs hang when base file is not available [<a href="https://issues.apache.org/jira/browse/SPARK-1025">SPARK-1025</a>] </li>
+  <li>Fixed bug in serialization of strings longer than 64K [<a href="https://issues.apache.org/jira/browse/SPARK-1043">SPARK-1043</a>]</li>
+  <li>Fixed bug that made jobs hang when base file is not available [<a href="https://issues.apache.org/jira/browse/SPARK-1025">SPARK-1025</a>]</li>
   <li>Added Missing RDD operations to PySpark - top, zip, foldByKey, repartition, coalesce, getStorageLevel, setName and toDebugString</li>
 </ul>
 
@@ -274,13 +274,13 @@
   <li>Kay Ousterhout - Multiple bug fixes in scheduler&#8217;s handling of task failures</li>
   <li>Kousuke Saruta - Use of https to access github</li>
   <li>Mark Grover  - Bug fix in distribution tar.gz</li>
-  <li>Matei Zaharia - Bug fixes in handling of task failures due to NPE,  and cleaning up of scheduler data structures </li>
+  <li>Matei Zaharia - Bug fixes in handling of task failures due to NPE,  and cleaning up of scheduler data structures</li>
   <li>Nan Zhu - Bug fixes in PySpark RDD.takeSample and adding of JARs using ADD_JAR -  and improvements to docs</li>
   <li>Nick Lanham - Added ability to make distribution tarballs with Tachyon</li>
   <li>Patrick Wendell - Bug fixes in ASM shading, fixes for log4j initialization, removing Ganglia due to LGPL license, and other miscallenous bug fixes</li>
   <li>Prabin Banka - RDD.zip and other missing RDD operations in PySpark</li>
   <li>Prashant Sharma - RDD.foldByKey in PySpark, and other PySpark doc improvements</li>
-  <li>Qiuzhuang - Bug fix in standalone worker </li>
+  <li>Qiuzhuang - Bug fix in standalone worker</li>
   <li>Raymond Liu - Changed working directory in ZookeeperPersistenceEngine</li>
   <li>Reynold Xin  - Improvements to docs and test infrastructure</li>
   <li>Sandy Ryza - Multiple important Yarn bug fixes and improvements</li>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-0-1.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-0-1.html b/site/releases/spark-release-1-0-1.html
index 4f9e0f9..78c88ea 100644
--- a/site/releases/spark-release-1-0-1.html
+++ b/site/releases/spark-release-1-0-1.html
@@ -258,8 +258,8 @@
   <li>Cheng Hao &#8211; SQL features</li>
   <li>Cheng Lian &#8211; SQL features</li>
   <li>Christian Tzolov &#8211; build improvmenet</li>
-  <li>Clément MATHIEU &#8211; doc updates </li>
-  <li>CodingCat &#8211; doc updates and bug fix </li>
+  <li>Clément MATHIEU &#8211; doc updates</li>
+  <li>CodingCat &#8211; doc updates and bug fix</li>
   <li>Colin McCabe &#8211; bug fix</li>
   <li>Daoyuan &#8211; SQL joins</li>
   <li>David Lemieux &#8211; bug fix</li>
@@ -275,7 +275,7 @@
   <li>Kan Zhang &#8211; PySpark SQL features</li>
   <li>Kay Ousterhout &#8211; documentation fix</li>
   <li>LY Lai &#8211; bug fix</li>
-  <li>Lars Albertsson &#8211; bug fix </li>
+  <li>Lars Albertsson &#8211; bug fix</li>
   <li>Lei Zhang &#8211; SQL fix and feature</li>
   <li>Mark Hamstra &#8211; bug fix</li>
   <li>Matei Zaharia &#8211; doc updates and bug fix</li>
@@ -297,7 +297,7 @@
   <li>Shixiong Zhu &#8211; code clean-up</li>
   <li>Szul, Piotr &#8211; bug fix</li>
   <li>Takuya UESHIN &#8211; bug fixes and SQL features</li>
-  <li>Thomas Graves &#8211; bug fix </li>
+  <li>Thomas Graves &#8211; bug fix</li>
   <li>Uri Laserson &#8211; bug fix</li>
   <li>Vadim Chekan &#8211; bug fix</li>
   <li>Varakhedi Sujeet &#8211; ec2 r3 support</li>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-0-2.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-0-2.html b/site/releases/spark-release-1-0-2.html
index fe36880..33b5cb1 100644
--- a/site/releases/spark-release-1-0-2.html
+++ b/site/releases/spark-release-1-0-2.html
@@ -268,7 +268,7 @@
   <li>johnnywalleye - Bug fixes in MLlib</li>
   <li>joyyoj - Bug fix in Streaming</li>
   <li>kballou - Doc fix</li>
-  <li>lianhuiwang - Doc fix </li>
+  <li>lianhuiwang - Doc fix</li>
   <li>witgo - Bug fix in sbt</li>
 </ul>
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org