Posted to commits@spark.apache.org by sr...@apache.org on 2016/11/15 18:23:06 UTC

[1/3] spark-website git commit: Use site.baseurl, not site.url, to work with Jekyll 3.3. Require Jekyll 3.3. Again commit HTML consistent with Jekyll 3.3 output. Fix date problem with news posts that set date: by removing date:.
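For context, the change the commit message describes amounts to prefixing site-relative links with `site.baseurl` instead of `site.url` in the Jekyll templates. A hedged sketch of the Liquid usage (illustrative only, not the exact spark-website templates; under Jekyll 3.3, `site.url` can resolve to a localhost development host, while `site.baseurl` is the path prefix the site is served under):

```liquid
{% comment %} Before: relies on site.url, which Jekyll 3.3 may set to a dev host {% endcomment %}
<link rel="stylesheet" href="{{ site.url }}/css/main.css">

{% comment %} After: site.baseurl, the subpath prefix, works for both dev and production {% endcomment %}
<link rel="stylesheet" href="{{ site.baseurl }}/css/main.css">
```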

Repository: spark-website
Updated Branches:
  refs/heads/asf-site 4e10a1ac1 -> d82e37220


http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-1-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-1-0.html b/site/releases/spark-release-1-1-0.html
index bf0f9e2..0df9f0d 100644
--- a/site/releases/spark-release-1-1-0.html
+++ b/site/releases/spark-release-1-1-0.html
@@ -197,7 +197,7 @@
 <p>Spark SQL adds a number of new features and performance improvements in this release. A <a href="http://spark.apache.org/docs/1.1.0/sql-programming-guide.html#running-the-thrift-jdbc-server">JDBC/ODBC server</a> allows users to connect to SparkSQL from many different applications and provides shared access to cached tables. A new module provides <a href="http://spark.apache.org/docs/1.1.0/sql-programming-guide.html#json-datasets">support for loading JSON data</a> directly into Spark’s SchemaRDD format, including automatic schema inference. Spark SQL introduces <a href="http://spark.apache.org/docs/1.1.0/sql-programming-guide.html#other-configuration-options">dynamic bytecode generation</a> in this release, a technique which significantly speeds up execution for queries that perform complex expression evaluation.  This release also adds support for registering Python, Scala, and Java lambda functions as UDFs, which can then be called directly in SQL. Spark 1.1 adds a <a href="http://spark.apache.org/docs/1.1.0/sql-programming-guide.html#programmatically-specifying-the-schema">public types API to allow users to create SchemaRDD’s from custom data sources</a>. Finally, many optimizations have been added to the native Parquet support as well as throughout the engine.</p>
 
 <h3 id="mllib">MLlib</h3>
-<p>MLlib adds several new algorithms and optimizations in this release. 1.1 introduces a <a href="https://issues.apache.org/jira/browse/SPARK-2359">new library of statistical packages</a> which provides exploratory analytic functions. These include stratified sampling, correlations, chi-squared tests and support for creating random datasets. This release adds utilities for feature extraction (<a href="https://issues.apache.org/jira/browse/SPARK-2510">Word2Vec</a> and <a href="https://issues.apache.org/jira/browse/SPARK-2511">TF-IDF</a>) and feature transformation (<a href="https://issues.apache.org/jira/browse/SPARK-2272">normalization and standard scaling</a>). Also new are support for <a href="https://issues.apache.org/jira/browse/SPARK-1553">nonnegative matrix factorization</a> and <a href="https://issues.apache.org/jira/browse/SPARK-1782">SVD via Lanczos</a>. The decision tree algorithm has been <a href="https://issues.apache.org/jira/browse/SPARK-2478">added in Python and Java</a>. A tree aggregation primitive has been added to help optimize many existing algorithms. Performance improves across the board in MLlib 1.1, with improvements of around 2-3X for many algorithms and up to 5X for large scale decision tree problems. </p>
+<p>MLlib adds several new algorithms and optimizations in this release. 1.1 introduces a <a href="https://issues.apache.org/jira/browse/SPARK-2359">new library of statistical packages</a> which provides exploratory analytic functions. These include stratified sampling, correlations, chi-squared tests and support for creating random datasets. This release adds utilities for feature extraction (<a href="https://issues.apache.org/jira/browse/SPARK-2510">Word2Vec</a> and <a href="https://issues.apache.org/jira/browse/SPARK-2511">TF-IDF</a>) and feature transformation (<a href="https://issues.apache.org/jira/browse/SPARK-2272">normalization and standard scaling</a>). Also new is support for <a href="https://issues.apache.org/jira/browse/SPARK-1553">nonnegative matrix factorization</a> and <a href="https://issues.apache.org/jira/browse/SPARK-1782">SVD via Lanczos</a>. The decision tree algorithm has been <a href="https://issues.apache.org/jira/browse/SPARK-2478">added in Python and Java</a>. A tree aggregation primitive has been added to help optimize many existing algorithms. Performance improves across the board in MLlib 1.1, with improvements of around 2-3X for many algorithms and up to 5X for large scale decision tree problems.</p>
 
 <h3 id="graphx-and-spark-streaming">GraphX and Spark Streaming</h3>
 <p>Spark streaming adds a new data source <a href="https://issues.apache.org/jira/browse/SPARK-1981">Amazon Kinesis</a>. For the Apache Flume, a new mode is supported which <a href="https://issues.apache.org/jira/browse/SPARK-1729">pulls data from Flume</a>, simplifying deployment and providing high availability. The first of a set of <a href="https://issues.apache.org/jira/browse/SPARK-2438">streaming machine learning algorithms</a> is introduced with streaming linear regression. Finally, <a href="https://issues.apache.org/jira/browse/SPARK-1341">rate limiting</a> has been added for streaming inputs. GraphX adds <a href="https://issues.apache.org/jira/browse/SPARK-1991">custom storage levels for vertices and edges</a> along with <a href="https://issues.apache.org/jira/browse/SPARK-2748">improved numerical precision</a> across the board. Finally, GraphX adds a new label propagation algorithm.</p>
@@ -215,7 +215,7 @@
 
 <ul>
   <li>The default value of <code>spark.io.compression.codec</code> is now <code>snappy</code> for improved memory usage. Old behavior can be restored by switching to <code>lzf</code>.</li>
-  <li>The default value of <code>spark.broadcast.factory</code> is now <code>org.apache.spark.broadcast.TorrentBroadcastFactory</code> for improved efficiency of broadcasts. Old behavior can be restored by switching to <code>org.apache.spark.broadcast.HttpBroadcastFactory</code>. </li>
+  <li>The default value of <code>spark.broadcast.factory</code> is now <code>org.apache.spark.broadcast.TorrentBroadcastFactory</code> for improved efficiency of broadcasts. Old behavior can be restored by switching to <code>org.apache.spark.broadcast.HttpBroadcastFactory</code>.</li>
   <li>PySpark now performs external spilling during aggregations. Old behavior can be restored by setting <code>spark.shuffle.spill</code> to <code>false</code>.</li>
   <li>PySpark uses a new heuristic for determining the parallelism of shuffle operations. Old behavior can be restored by setting <code>spark.default.parallelism</code> to the number of cores in the cluster.</li>
 </ul>
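The 1.1 default changes listed above can each be reverted through configuration; a sketch of a `spark-defaults.conf` restoring the pre-1.1 behaviors (values taken from the list above; the parallelism value is a placeholder you would set to your cluster's core count):

```
spark.io.compression.codec   lzf
spark.broadcast.factory      org.apache.spark.broadcast.HttpBroadcastFactory
spark.shuffle.spill          false
spark.default.parallelism    <number of cores in the cluster>
```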
@@ -275,7 +275,7 @@
   <li>Daneil Darabos &#8211; bug fixes and UI enhancements</li>
   <li>Daoyuan Wang &#8211; SQL fixes</li>
   <li>David Lemieux &#8211; bug fix</li>
-  <li>Davies Liu &#8211; PySpark fixes and spilling </li>
+  <li>Davies Liu &#8211; PySpark fixes and spilling</li>
   <li>DB Tsai &#8211; online summaries in MLlib and other MLlib features</li>
   <li>Derek Ma &#8211; bug fix</li>
   <li>Doris Xin &#8211; MLlib stats library and several fixes</li>

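The MLlib notes above mention a tree aggregation primitive added to optimize many algorithms. A minimal pure-Python sketch of the idea, combining per-partition results level by level instead of in one flat fold on a single node (an illustration only, not MLlib's implementation):

```python
from functools import reduce

def tree_aggregate(partitions, seq_op, comb_op, zero, depth=2):
    """Fold each partition locally, then combine results pairwise for
    `depth` levels so no single step merges everything at once."""
    results = [reduce(seq_op, part, zero) for part in partitions]
    for _ in range(depth):
        if len(results) <= 1:
            break
        # Merge neighbors; an odd element at the end passes through.
        results = [
            comb_op(results[i], results[i + 1]) if i + 1 < len(results) else results[i]
            for i in range(0, len(results), 2)
        ]
    return reduce(comb_op, results, zero)

# Example: sum four partitions of numbers.
parts = [[1, 2], [3, 4], [5, 6], [7, 8]]
total = tree_aggregate(parts, lambda a, b: a + b, lambda a, b: a + b, 0)
print(total)  # 36
```

The benefit in a distributed setting is that intermediate combines happen on executors, shrinking what the driver must merge.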
http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-2-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-2-0.html b/site/releases/spark-release-1-2-0.html
index 09e4007..2986afb 100644
--- a/site/releases/spark-release-1-2-0.html
+++ b/site/releases/spark-release-1-2-0.html
@@ -194,7 +194,7 @@
 <p>In 1.2 Spark core upgrades two major subsystems to improve the performance and stability of very large scale shuffles. The first is Spark’s communication manager used during bulk transfers, which upgrades to a <a href="https://issues.apache.org/jira/browse/SPARK-2468">netty-based implementation</a>. The second is Spark’s shuffle mechanism, which upgrades to the <a href="https://issues.apache.org/jira/browse/SPARK-3280">“sort based” shuffle initially released in Spark 1.1</a>. These both improve the performance and stability of very large scale shuffles. Spark also adds an <a href="https://issues.apache.org/jira/browse/SPARK-3174">elastic scaling mechanism</a> designed to improve cluster utilization during long running ETL-style jobs. This is currently supported on YARN and will make its way to other cluster managers in future versions. Finally, Spark 1.2 adds support for Scala 2.11. For instructions on building for Scala 2.11 see the <a href="/docs/1.2.0/building-spark.html#building-for-scala-211">build documentation</a>.</p>
 
 <h3 id="spark-streaming">Spark Streaming</h3>
-<p>This release includes two major feature additions to Spark’s streaming library, a Python API and a write ahead log for full driver H/A. The <a href="https://issues.apache.org/jira/browse/SPARK-2377">Python API</a> covers almost all the DStream transformations and output operations. Input sources based on text files and text over sockets are currently supported. Support for Kafka and Flume input streams in Python will be added in the next release. Second, Spark streaming now features H/A driver support through a <a href="https://issues.apache.org/jira/browse/SPARK-3129">write ahead log (WAL)</a>. In Spark 1.1 and earlier, some buffered (received but not yet processed) data can be lost during driver restarts. To prevent this Spark 1.2 adds an optional WAL, which buffers received data into a fault-tolerant file system (e.g. HDFS). See the <a href="/docs/1.2.0/streaming-programming-guide.html">streaming programming guide</a> for more details. </p>
+<p>This release includes two major feature additions to Spark’s streaming library, a Python API and a write ahead log for full driver H/A. The <a href="https://issues.apache.org/jira/browse/SPARK-2377">Python API</a> covers almost all the DStream transformations and output operations. Input sources based on text files and text over sockets are currently supported. Support for Kafka and Flume input streams in Python will be added in the next release. Second, Spark streaming now features H/A driver support through a <a href="https://issues.apache.org/jira/browse/SPARK-3129">write ahead log (WAL)</a>. In Spark 1.1 and earlier, some buffered (received but not yet processed) data can be lost during driver restarts. To prevent this, Spark 1.2 adds an optional WAL, which buffers received data into a fault-tolerant file system (e.g. HDFS). See the <a href="/docs/1.2.0/streaming-programming-guide.html">streaming programming guide</a> for more details.</p>
 
 <h3 id="mllib">MLLib</h3>
 <p>Spark 1.2 previews a new set of machine learning API’s in a package called spark.ml that <a href="https://issues.apache.org/jira/browse/SPARK-3530">supports learning pipelines</a>, where multiple algorithms are run in sequence with varying parameters. This type of pipeline is common in practical machine learning deployments. The new ML package uses Spark’s SchemaRDD to represent <a href="https://issues.apache.org/jira/browse/SPARK-3573">ML datasets</a>, providing direct interoperability with Spark SQL. In addition to the new API, Spark 1.2 extends decision trees with two tree ensemble methods: <a href="https://issues.apache.org/jira/browse/SPARK-1545">random forests</a> and <a href="https://issues.apache.org/jira/browse/SPARK-1547">gradient-boosted trees</a>, among the most successful tree-based models for classification and regression. Finally, MLlib&#8217;s Python implementation receives a major update in 1.2 to simplify the process of adding Python APIs, along with better Python API coverage.</p>
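The write ahead log described in the streaming section above buffers received data durably before it is processed, so a restarted driver can replay it. A toy pure-Python sketch of the recovery idea (a local file with `fsync` stands in for a fault-tolerant file system like HDFS; this is not Spark's implementation):

```python
import os
import tempfile

class WriteAheadLog:
    """Append records durably before processing them, so a crashed
    process can replay anything received but not yet handled."""

    def __init__(self, path):
        self.path = path

    def append(self, record):
        with open(self.path, "a") as f:
            f.write(record + "\n")
            f.flush()
            os.fsync(f.fileno())  # force the record to stable storage

    def replay(self):
        if not os.path.exists(self.path):
            return []
        with open(self.path) as f:
            return [line.rstrip("\n") for line in f]

# Receive two records, simulate a crash, then recover on "restart".
log_path = os.path.join(tempfile.mkdtemp(), "receiver.wal")
wal = WriteAheadLog(log_path)
wal.append("event-1")
wal.append("event-2")
recovered = WriteAheadLog(log_path).replay()
print(recovered)  # ['event-1', 'event-2']
```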

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-3-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-3-0.html b/site/releases/spark-release-1-3-0.html
index 45180a7..ecfe27b 100644
--- a/site/releases/spark-release-1-3-0.html
+++ b/site/releases/spark-release-1-3-0.html
@@ -191,7 +191,7 @@
 <p>To download Spark 1.3 visit the <a href="/downloads.html">downloads</a> page.</p>
 
 <h3 id="spark-core">Spark Core</h3>
-<p>Spark 1.3 sees a handful of usability improvements in the core engine. The core API now supports <a href="https://issues.apache.org/jira/browse/SPARK-5430">multi level aggregation trees</a> to help speed up expensive reduce operations. <a href="https://issues.apache.org/jira/browse/SPARK-5063">Improved error reporting</a> has been added for certain gotcha operations. Spark&#8217;s Jetty dependency is <a href="https://issues.apache.org/jira/browse/SPARK-3996">now shaded</a> to help avoid conflicts with user programs. Spark now supports <a href="https://issues.apache.org/jira/browse/SPARK-3883">SSL encryption</a> for some communication endpoints. Finaly, realtime <a href="https://issues.apache.org/jira/browse/SPARK-3428">GC metrics</a> and <a href="https://issues.apache.org/jira/browse/SPARK-4874">record counts</a> have been added to the UI. </p>
+<p>Spark 1.3 sees a handful of usability improvements in the core engine. The core API now supports <a href="https://issues.apache.org/jira/browse/SPARK-5430">multi-level aggregation trees</a> to help speed up expensive reduce operations. <a href="https://issues.apache.org/jira/browse/SPARK-5063">Improved error reporting</a> has been added for certain gotcha operations. Spark&#8217;s Jetty dependency is <a href="https://issues.apache.org/jira/browse/SPARK-3996">now shaded</a> to help avoid conflicts with user programs. Spark now supports <a href="https://issues.apache.org/jira/browse/SPARK-3883">SSL encryption</a> for some communication endpoints. Finally, real-time <a href="https://issues.apache.org/jira/browse/SPARK-3428">GC metrics</a> and <a href="https://issues.apache.org/jira/browse/SPARK-4874">record counts</a> have been added to the UI.</p>
 
 <h3 id="dataframe-api">DataFrame API</h3>
 <p>Spark 1.3 adds a new <a href="/docs/1.3.0/sql-programming-guide.html#dataframes">DataFrames API</a> that provides powerful and convenient operators when working with structured datasets. The DataFrame is an evolution of the base RDD API that includes named fields along with schema information. It’s easy to construct a DataFrame from sources such as Hive tables, JSON data, a JDBC database, or any implementation of Spark’s new data source API. Data frames will become a common interchange format between Spark components and when importing and exporting data to other systems. Data frames are supported in Python, Scala, and Java.</p>
@@ -203,7 +203,7 @@
 <p>In this release Spark MLlib introduces several new algorithms: latent Dirichlet allocation (LDA) for <a href="https://issues.apache.org/jira/browse/SPARK-1405">topic modeling</a>, <a href="https://issues.apache.org/jira/browse/SPARK-2309">multinomial logistic regression</a> for multiclass classification, <a href="https://issues.apache.org/jira/browse/SPARK-5012">Gaussian mixture model (GMM)</a> and <a href="https://issues.apache.org/jira/browse/SPARK-4259">power iteration clustering</a> for clustering, <a href="https://issues.apache.org/jira/browse/SPARK-4001">FP-growth</a> for frequent pattern mining, and <a href="https://issues.apache.org/jira/browse/SPARK-4409">block matrix abstraction</a> for distributed linear algebra. Initial support has been added for <a href="https://issues.apache.org/jira/browse/SPARK-4587">model import/export</a> in exchangeable format, which will be expanded in future versions to cover more model types in Java/Python/Scala. The implementations of k-means and ALS receive <a href="https://issues.apache.org/jira/browse/SPARK-3424, https://issues.apache.org/jira/browse/SPARK-3541">updates</a> that lead to significant performance gain. PySpark now supports the <a href="https://issues.apache.org/jira/browse/SPARK-4586">ML pipeline API</a> added in Spark 1.2, and <a href="https://issues.apache.org/jira/browse/SPARK-5094">gradient boosted trees</a> and <a href="https://issues.apache.org/jira/browse/SPARK-5012">Gaussian mixture model</a>. Finally, the ML pipeline API has been ported to support the new DataFrames abstraction.</p>
 
 <h3 id="spark-streaming">Spark Streaming</h3>
-<p>Spark 1.3 introduces a new <a href="https://issues.apache.org/jira/browse/SPARK-4964"><em>direct</em> Kafka API</a> (<a href="http://spark.apache.org/docs/1.3.0/streaming-kafka-integration.html">docs</a>) which enables exactly-once delivery without the use of write ahead logs. It also adds a <a href="https://issues.apache.org/jira/browse/SPARK-5047">Python Kafka API</a> along with infrastructure for additional Python API’s in future releases. An online version of <a href="https://issues.apache.org/jira/browse/SPARK-4979">logistic regression</a> and the ability to read <a href="https://issues.apache.org/jira/browse/SPARK-4969">binary records</a> have also been added. For stateful operations, support has been added for loading of an <a href="https://issues.apache.org/jira/browse/SPARK-3660">initial state RDD</a>. Finally, the streaming programming guide has been updated to include information about SQL and DataFrame operations within streaming applications, and important clarifications to the fault-tolerance semantics. </p>
+<p>Spark 1.3 introduces a new <a href="https://issues.apache.org/jira/browse/SPARK-4964"><em>direct</em> Kafka API</a> (<a href="http://spark.apache.org/docs/1.3.0/streaming-kafka-integration.html">docs</a>) which enables exactly-once delivery without the use of write ahead logs. It also adds a <a href="https://issues.apache.org/jira/browse/SPARK-5047">Python Kafka API</a> along with infrastructure for additional Python APIs in future releases. An online version of <a href="https://issues.apache.org/jira/browse/SPARK-4979">logistic regression</a> and the ability to read <a href="https://issues.apache.org/jira/browse/SPARK-4969">binary records</a> have also been added. For stateful operations, support has been added for loading of an <a href="https://issues.apache.org/jira/browse/SPARK-3660">initial state RDD</a>. Finally, the streaming programming guide has been updated to include information about SQL and DataFrame operations within streaming applications, and important clarifications to the fault-tolerance semantics.</p>
 
 <h3 id="graphx">GraphX</h3>
 <p>GraphX adds a handful of utility functions in this release, including conversion into a <a href="https://issues.apache.org/jira/browse/SPARK-4917">canonical edge graph</a>.</p>
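The direct Kafka API described above achieves exactly-once delivery by tracking consumed offset ranges alongside results, so replaying a failed batch cannot double-count. A simplified pure-Python sketch of that offset-tracking idea (a list stands in for a Kafka partition; illustrative only, not Spark's implementation):

```python
def process_batch(records, start, end, state):
    """Process records[start:end] exactly once: the committed offset in
    `state` lets a retried batch skip work it already did."""
    if end <= state["offset"]:           # batch was already fully processed
        return state
    begin = max(start, state["offset"])  # resume inside a partial batch
    for rec in records[begin:end]:
        state["total"] += rec
    state["offset"] = end                # commit offset together with result
    return state

records = [10, 20, 30, 40]
state = {"offset": 0, "total": 0}
process_batch(records, 0, 2, state)  # first batch
process_batch(records, 0, 2, state)  # retried batch: no double counting
process_batch(records, 2, 4, state)  # next batch
print(state["total"])  # 100
```

The key design point is committing the offset atomically with the output, rather than logging the raw data as a WAL would.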
@@ -219,7 +219,7 @@
 <ul>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6194">SPARK-6194</a>: A memory leak in PySPark&#8217;s <code>collect()</code>.</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6222">SPARK-6222</a>: An issue with failure recovery in Spark Streaming.</li>
-  <li><a href="https://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>: Spark SQL can&#8217;t read parquet data generated with Spark 1.1. </li>
+  <li><a href="https://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>: Spark SQL can&#8217;t read parquet data generated with Spark 1.1.</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6247">SPARK-6247</a>: Errors analyzing certain join types in Spark SQL.</li>
 </ul>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-3-1.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-3-1.html b/site/releases/spark-release-1-3-1.html
index b83a12f..ad353d9 100644
--- a/site/releases/spark-release-1-3-1.html
+++ b/site/releases/spark-release-1-3-1.html
@@ -196,10 +196,10 @@
 <h4 id="spark-sql">Spark SQL</h4>
 <ul>
   <li>Unable to use reserved words in DDL (<a href="http://issues.apache.org/jira/browse/SPARK-6250">SPARK-6250</a>)</li>
-  <li>Parquet no longer caches metadata (<a href="http://issues.apache.org/jira/browse/SPARK-6575">SPARK-6575</a>) </li>
+  <li>Parquet no longer caches metadata (<a href="http://issues.apache.org/jira/browse/SPARK-6575">SPARK-6575</a>)</li>
   <li>Bug when joining two Parquet tables (<a href="http://issues.apache.org/jira/browse/SPARK-6851">SPARK-6851</a>)</li>
-  <li>Unable to read parquet data generated by Spark 1.1.1 (<a href="http://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>) </li>
-  <li>Parquet data source may use wrong Hadoop FileSystem (<a href="http://issues.apache.org/jira/browse/SPARK-6330">SPARK-6330</a>) </li>
+  <li>Unable to read parquet data generated by Spark 1.1.1 (<a href="http://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>)</li>
+  <li>Parquet data source may use wrong Hadoop FileSystem (<a href="http://issues.apache.org/jira/browse/SPARK-6330">SPARK-6330</a>)</li>
 </ul>
 
 <h4 id="spark-streaming">Spark Streaming</h4>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-4-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-4-0.html b/site/releases/spark-release-1-4-0.html
index 434105b..64ef70f 100644
--- a/site/releases/spark-release-1-4-0.html
+++ b/site/releases/spark-release-1-4-0.html
@@ -250,7 +250,7 @@ Python coverage. MLlib also adds several new algorithms.</p>
 </ul>
 
 <h3 id="spark-streaming">Spark Streaming</h3>
-<p>Spark streaming adds visual instrumentation graphs and significantly improved debugging information in the UI. It also enhances support for both Kafka and Kinesis. </p>
+<p>Spark streaming adds visual instrumentation graphs and significantly improved debugging information in the UI. It also enhances support for both Kafka and Kinesis.</p>
 
 <ul>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-7602">SPARK-7602</a>: Visualization and monitoring in the streaming UI including batch drill down (<a href="https://issues.apache.org/jira/browse/SPARK-6796">SPARK-6796</a>, <a href="https://issues.apache.org/jira/browse/SPARK-6862">SPARK-6862</a>)</li>
@@ -276,7 +276,7 @@ Python coverage. MLlib also adds several new algorithms.</p>
 
 <h4 id="test-partners">Test Partners</h4>
 
-<p>Thanks to The following organizations, who helped benchmark or integration test release candidates: <br /> Intel, Palantir, Cloudera, Mesosphere, Huawei, Shopify, Netflix, Yahoo, UC Berkeley and Databricks. </p>
+<p>Thanks to the following organizations, which helped benchmark or integration test release candidates: <br /> Intel, Palantir, Cloudera, Mesosphere, Huawei, Shopify, Netflix, Yahoo, UC Berkeley and Databricks.</p>
 
 <h4 id="contributors">Contributors</h4>
 <ul>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-5-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-5-0.html b/site/releases/spark-release-1-5-0.html
index 42ea443..397348b 100644
--- a/site/releases/spark-release-1-5-0.html
+++ b/site/releases/spark-release-1-5-0.html
@@ -191,25 +191,25 @@
 <p>You can consult JIRA for the <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315420&amp;version=12332078">detailed changes</a>. We have curated a list of high level changes here:</p>
 
 <ul id="markdown-toc">
-  <li><a href="#apis-rdd-dataframe-and-sql">APIs: RDD, DataFrame and SQL</a></li>
-  <li><a href="#backend-execution-dataframe-and-sql">Backend Execution: DataFrame and SQL</a></li>
-  <li><a href="#integrations-data-sources-hive-hadoop-mesos-and-cluster-management">Integrations: Data Sources, Hive, Hadoop, Mesos and Cluster Management</a></li>
-  <li><a href="#r-language">R Language</a></li>
-  <li><a href="#machine-learning-and-advanced-analytics">Machine Learning and Advanced Analytics</a></li>
-  <li><a href="#spark-streaming">Spark Streaming</a></li>
-  <li><a href="#deprecations-removals-configs-and-behavior-changes">Deprecations, Removals, Configs, and Behavior Changes</a>    <ul>
-      <li><a href="#spark-core">Spark Core</a></li>
-      <li><a href="#spark-sql--dataframes">Spark SQL &amp; DataFrames</a></li>
-      <li><a href="#spark-streaming-1">Spark Streaming</a></li>
-      <li><a href="#mllib">MLlib</a></li>
+  <li><a href="#apis-rdd-dataframe-and-sql" id="markdown-toc-apis-rdd-dataframe-and-sql">APIs: RDD, DataFrame and SQL</a></li>
+  <li><a href="#backend-execution-dataframe-and-sql" id="markdown-toc-backend-execution-dataframe-and-sql">Backend Execution: DataFrame and SQL</a></li>
+  <li><a href="#integrations-data-sources-hive-hadoop-mesos-and-cluster-management" id="markdown-toc-integrations-data-sources-hive-hadoop-mesos-and-cluster-management">Integrations: Data Sources, Hive, Hadoop, Mesos and Cluster Management</a></li>
+  <li><a href="#r-language" id="markdown-toc-r-language">R Language</a></li>
+  <li><a href="#machine-learning-and-advanced-analytics" id="markdown-toc-machine-learning-and-advanced-analytics">Machine Learning and Advanced Analytics</a></li>
+  <li><a href="#spark-streaming" id="markdown-toc-spark-streaming">Spark Streaming</a></li>
+  <li><a href="#deprecations-removals-configs-and-behavior-changes" id="markdown-toc-deprecations-removals-configs-and-behavior-changes">Deprecations, Removals, Configs, and Behavior Changes</a>    <ul>
+      <li><a href="#spark-core" id="markdown-toc-spark-core">Spark Core</a></li>
+      <li><a href="#spark-sql--dataframes" id="markdown-toc-spark-sql--dataframes">Spark SQL &amp; DataFrames</a></li>
+      <li><a href="#spark-streaming-1" id="markdown-toc-spark-streaming-1">Spark Streaming</a></li>
+      <li><a href="#mllib" id="markdown-toc-mllib">MLlib</a></li>
     </ul>
   </li>
-  <li><a href="#known-issues">Known Issues</a>    <ul>
-      <li><a href="#sqldataframe">SQL/DataFrame</a></li>
-      <li><a href="#streaming">Streaming</a></li>
+  <li><a href="#known-issues" id="markdown-toc-known-issues">Known Issues</a>    <ul>
+      <li><a href="#sqldataframe" id="markdown-toc-sqldataframe">SQL/DataFrame</a></li>
+      <li><a href="#streaming" id="markdown-toc-streaming">Streaming</a></li>
     </ul>
   </li>
-  <li><a href="#credits">Credits</a></li>
+  <li><a href="#credits" id="markdown-toc-credits">Credits</a></li>
 </ul>
 
 <h3 id="apis-rdd-dataframe-and-sql">APIs: RDD, DataFrame and SQL</h3>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-6-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-6-0.html b/site/releases/spark-release-1-6-0.html
index ac240fc..6dcac58 100644
--- a/site/releases/spark-release-1-6-0.html
+++ b/site/releases/spark-release-1-6-0.html
@@ -191,13 +191,13 @@
 <p>You can consult JIRA for the <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12333083&amp;projectId=12315420">detailed changes</a>. We have curated a list of high level changes here:</p>
 
 <ul id="markdown-toc">
-  <li><a href="#spark-coresql">Spark Core/SQL</a></li>
-  <li><a href="#spark-streaming">Spark Streaming</a></li>
-  <li><a href="#mllib">MLlib</a></li>
-  <li><a href="#deprecations">Deprecations</a></li>
-  <li><a href="#changes-of-behavior">Changes of behavior</a></li>
-  <li><a href="#known-issues">Known issues</a></li>
-  <li><a href="#credits">Credits</a></li>
+  <li><a href="#spark-coresql" id="markdown-toc-spark-coresql">Spark Core/SQL</a></li>
+  <li><a href="#spark-streaming" id="markdown-toc-spark-streaming">Spark Streaming</a></li>
+  <li><a href="#mllib" id="markdown-toc-mllib">MLlib</a></li>
+  <li><a href="#deprecations" id="markdown-toc-deprecations">Deprecations</a></li>
+  <li><a href="#changes-of-behavior" id="markdown-toc-changes-of-behavior">Changes of behavior</a></li>
+  <li><a href="#known-issues" id="markdown-toc-known-issues">Known issues</a></li>
+  <li><a href="#credits" id="markdown-toc-credits">Credits</a></li>
 </ul>
 
 <h3 id="spark-coresql">Spark Core/SQL</h3>
@@ -220,7 +220,7 @@
     <ul>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-10000">SPARK-10000</a> <strong>Unified Memory Management</strong>  - Shared memory for execution and caching instead of exclusive division of the regions.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-11787">SPARK-11787</a> <strong>Parquet Performance</strong> - Improve Parquet scan performance when using flat schemas.</li>
-      <li><a href="https://issues.apache.org/jira/browse/SPARK-9241">SPARK-9241&#160;</a> <strong>Improved query planner for queries having distinct aggregations</strong> - Query plans of distinct aggregations are more robust when distinct columns have high cardinality. </li>
+      <li><a href="https://issues.apache.org/jira/browse/SPARK-9241">SPARK-9241&#160;</a> <strong>Improved query planner for queries having distinct aggregations</strong> - Query plans of distinct aggregations are more robust when distinct columns have high cardinality.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-9858">SPARK-9858&#160;</a> <strong>Adaptive query execution</strong> - Initial support for automatically selecting the number of reducers for joins and aggregations.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-10978">SPARK-10978</a> <strong>Avoiding double filters in Data Source API</strong> - When implementing a data source with filter pushdown, developers can now tell Spark SQL to avoid double evaluating a pushed-down filter.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-11111">SPARK-11111</a> <strong>Fast null-safe joins</strong> - Joins using null-safe equality (<code>&lt;=&gt;</code>) will now execute using SortMergeJoin instead of computing a cartisian product.</li>
@@ -233,7 +233,7 @@
 <h3 id="spark-streaming">Spark Streaming</h3>
 
 <ul>
-  <li><strong>API Updates</strong> 
+  <li><strong>API Updates</strong>
     <ul>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-2629">SPARK-2629&#160;</a> <strong>New improved state management</strong> - <code>mapWithState</code> - a DStream transformation for stateful stream processing, supercedes <code>updateStateByKey</code> in functionality and performance.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-11198">SPARK-11198</a> <strong>Kinesis record deaggregation</strong> - Kinesis streams have been upgraded to use KCL 1.4.0 and supports transparent deaggregation of KPL-aggregated records.</li>
@@ -244,7 +244,7 @@
   <li><strong>UI Improvements</strong>
     <ul>
       <li>Made failures visible in the streaming tab, in the timelines, batch list, and batch details page.</li>
-      <li>Made output operations visible in the streaming tab as progress bars. </li>
+      <li>Made output operations visible in the streaming tab as progress bars.</li>
     </ul>
   </li>
 </ul>
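`mapWithState`, mentioned in the API updates above, keeps per-key state across micro-batches. A minimal pure-Python analogue of a stateful running count (an illustration of the idea, not Spark's DStream API):

```python
def map_with_state(batches, update):
    """Feed each (key, value) through update(key, value, old_state),
    carrying the returned state per key across batches."""
    state = {}
    emitted = []
    for batch in batches:
        for key, value in batch:
            new_state = update(key, value, state.get(key))
            state[key] = new_state
            emitted.append((key, new_state))
    return state, emitted

# Running word counts across two micro-batches.
batches = [[("spark", 1), ("kafka", 1)], [("spark", 1)]]

def counter(key, value, old):
    return (old or 0) + value

final_state, outputs = map_with_state(batches, counter)
print(final_state)  # {'spark': 2, 'kafka': 1}
```

Unlike a full recomputation per batch, only the keys seen in a batch are touched, which is the performance advantage the release notes attribute to `mapWithState` over `updateStateByKey`.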

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-2-0-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-2-0-0.html b/site/releases/spark-release-2-0-0.html
index 05da019..d859bb0 100644
--- a/site/releases/spark-release-2-0-0.html
+++ b/site/releases/spark-release-2-0-0.html
@@ -191,30 +191,30 @@
 <p>To download Apache Spark 2.0.0, visit the <a href="http://spark.apache.org/downloads.html">downloads</a> page. You can consult JIRA for the <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315420&amp;version=12329449">detailed changes</a>. We have curated a list of high level changes here, grouped by major modules.</p>
 
 <ul id="markdown-toc">
-  <li><a href="#api-stability">API Stability</a></li>
-  <li><a href="#core-and-spark-sql">Core and Spark SQL</a>    <ul>
-      <li><a href="#programming-apis">Programming APIs</a></li>
-      <li><a href="#sql">SQL</a></li>
-      <li><a href="#new-features">New Features</a></li>
-      <li><a href="#performance-and-runtime">Performance and Runtime</a></li>
+  <li><a href="#api-stability" id="markdown-toc-api-stability">API Stability</a></li>
+  <li><a href="#core-and-spark-sql" id="markdown-toc-core-and-spark-sql">Core and Spark SQL</a>    <ul>
+      <li><a href="#programming-apis" id="markdown-toc-programming-apis">Programming APIs</a></li>
+      <li><a href="#sql" id="markdown-toc-sql">SQL</a></li>
+      <li><a href="#new-features" id="markdown-toc-new-features">New Features</a></li>
+      <li><a href="#performance-and-runtime" id="markdown-toc-performance-and-runtime">Performance and Runtime</a></li>
     </ul>
   </li>
-  <li><a href="#mllib">MLlib</a>    <ul>
-      <li><a href="#new-features-1">New features</a></li>
-      <li><a href="#speedscaling">Speed/scaling</a></li>
+  <li><a href="#mllib" id="markdown-toc-mllib">MLlib</a>    <ul>
+      <li><a href="#new-features-1" id="markdown-toc-new-features-1">New features</a></li>
+      <li><a href="#speedscaling" id="markdown-toc-speedscaling">Speed/scaling</a></li>
     </ul>
   </li>
-  <li><a href="#sparkr">SparkR</a></li>
-  <li><a href="#streaming">Streaming</a></li>
-  <li><a href="#dependency-packaging-and-operations">Dependency, Packaging, and Operations</a></li>
-  <li><a href="#removals-behavior-changes-and-deprecations">Removals, Behavior Changes and Deprecations</a>    <ul>
-      <li><a href="#removals">Removals</a></li>
-      <li><a href="#behavior-changes">Behavior Changes</a></li>
-      <li><a href="#deprecations">Deprecations</a></li>
+  <li><a href="#sparkr" id="markdown-toc-sparkr">SparkR</a></li>
+  <li><a href="#streaming" id="markdown-toc-streaming">Streaming</a></li>
+  <li><a href="#dependency-packaging-and-operations" id="markdown-toc-dependency-packaging-and-operations">Dependency, Packaging, and Operations</a></li>
+  <li><a href="#removals-behavior-changes-and-deprecations" id="markdown-toc-removals-behavior-changes-and-deprecations">Removals, Behavior Changes and Deprecations</a>    <ul>
+      <li><a href="#removals" id="markdown-toc-removals">Removals</a></li>
+      <li><a href="#behavior-changes" id="markdown-toc-behavior-changes">Behavior Changes</a></li>
+      <li><a href="#deprecations" id="markdown-toc-deprecations">Deprecations</a></li>
     </ul>
   </li>
-  <li><a href="#known-issues">Known Issues</a></li>
-  <li><a href="#credits">Credits</a></li>
+  <li><a href="#known-issues" id="markdown-toc-known-issues">Known Issues</a></li>
+  <li><a href="#credits" id="markdown-toc-credits">Credits</a></li>
 </ul>
 
 <h3 id="api-stability">API Stability</h3>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/sql/index.md
----------------------------------------------------------------------
diff --git a/sql/index.md b/sql/index.md
index 6978863..3e5c210 100644
--- a/sql/index.md
+++ b/sql/index.md
@@ -72,7 +72,7 @@ subproject: SQL
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="width: 100%; max-width: 323px; display: inline-block">
-      <img src="{{site.url}}images/sql-hive-arch.png" style="width: 100%; max-width: 323px;">
+      <img src="{{site.baseurl}}/images/sql-hive-arch.png" style="width: 100%; max-width: 323px;">
       <div class="caption">Spark SQL can use existing Hive metastores, SerDes, and UDFs.</div>
     </div>
   </div>
@@ -90,7 +90,7 @@ subproject: SQL
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="width: 100%; max-width: 323px; display: inline-block">
-      <img src="{{site.url}}images/jdbc.png" style="width: 75%; max-width: 323px;">
+      <img src="{{site.baseurl}}/images/jdbc.png" style="width: 75%; max-width: 323px;">
       <div class="caption">Use your existing BI tools to query big data.</div>
     </div>
   </div>
@@ -110,7 +110,7 @@ subproject: SQL
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="width: 100%; max-width: 272px; display: inline-block; text-align: center;">
-      <img src="{{site.url}}images/sqlperf.png" style="width: 100%; max-width: 250px;">
+      <img src="{{site.baseurl}}/images/sqlperf.png" style="width: 100%; max-width: 250px;">
       <div class="caption" style="min-width: 272px;">Performance comparison between Shark and Spark SQL</div>
     </div>
   </div>
@@ -135,7 +135,7 @@ subproject: SQL
     </p>
     <p>
       If you have questions about the system, ask on the
-      <a href="{{site.url}}community.html#mailing-lists">Spark mailing lists</a>.
+      <a href="{{site.baseurl}}/community.html#mailing-lists">Spark mailing lists</a>.
     </p>
     <p>
       The Spark SQL developers welcome contributions. If you'd like to help out,
@@ -150,15 +150,15 @@ subproject: SQL
       To get started with Spark SQL:
     </p>
     <ul class="list-narrow">
-      <li><a href="{{site.url}}downloads.html">Download Spark</a>. It includes Spark SQL as a module.</li>
-      <li>Read the <a href="{{site.url}}docs/latest/sql-programming-guide.html">Spark SQL and DataFrame guide</a> to learn the API.</li>
+      <li><a href="{{site.baseurl}}/downloads.html">Download Spark</a>. It includes Spark SQL as a module.</li>
+      <li>Read the <a href="{{site.baseurl}}/docs/latest/sql-programming-guide.html">Spark SQL and DataFrame guide</a> to learn the API.</li>
     </ul>
   </div>
 </div>
 
 <div class="row">
   <div class="col-sm-12 col-center">
-    <a href="{{site.url}}downloads.html" class="btn btn-success btn-lg btn-multiline">
+    <a href="{{site.baseurl}}/downloads.html" class="btn btn-success btn-lg btn-multiline">
       Download Apache Spark<br/><span class="small">Includes Spark SQL</span>
     </a>
   </div>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/streaming/index.md
----------------------------------------------------------------------
diff --git a/streaming/index.md b/streaming/index.md
index 459d84f..37985a1 100644
--- a/streaming/index.md
+++ b/streaming/index.md
@@ -22,7 +22,7 @@ subproject: Streaming
     </p>
     <p>
       Spark Streaming brings Apache Spark's
-      <a href="{{site.url}}docs/latest/streaming-programming-guide.html">language-integrated API</a>
+      <a href="{{site.baseurl}}/docs/latest/streaming-programming-guide.html">language-integrated API</a>
       to stream processing, letting you write streaming jobs the same way you write batch jobs.
       It supports Java, Scala and Python.
     </p>
@@ -53,7 +53,7 @@ subproject: Streaming
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="width: 100%; max-width: 300px; display: inline-block;">
-      <img src="{{site.url}}images/spark-streaming-recovery.png" style="width: 100%; max-width: 300px;">
+      <img src="{{site.baseurl}}/images/spark-streaming-recovery.png" style="width: 100%; max-width: 300px;">
     </div>
   </div>
 </div>
@@ -97,8 +97,8 @@ subproject: Streaming
       You can also define your own custom data sources.
     </p>
     <p>
-      You can run Spark Streaming on Spark's <a href="{{site.url}}docs/latest/spark-standalone.html">standalone cluster mode</a>
-      or <a href="{{site.url}}docs/latest/ec2-scripts.html">EC2</a>.
+      You can run Spark Streaming on Spark's <a href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone cluster mode</a>
+      or <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html">EC2</a>.
       It also includes a local run mode for development.
       In production,
       Spark Streaming uses <a href="http://zookeeper.apache.org">ZooKeeper</a> and <a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a> for high availability.
@@ -113,7 +113,7 @@ subproject: Streaming
     </p>
     <p>
       If you have questions about the system, ask on the
-      <a href="{{site.url}}community.html#mailing-lists">Spark mailing lists</a>.
+      <a href="{{site.baseurl}}/community.html#mailing-lists">Spark mailing lists</a>.
     </p>
     <p>
       The Spark Streaming developers welcome contributions. If you'd like to help out,
@@ -128,8 +128,8 @@ subproject: Streaming
       To get started with Spark Streaming:
     </p>
     <ul class="list-narrow">
-      <li><a href="{{site.url}}downloads.html">Download Spark</a>. It includes Streaming as a module.</li>
-      <li>Read the <a href="{{site.url}}docs/latest/streaming-programming-guide.html">Spark Streaming programming guide</a>, which includes a tutorial and describes system architecture, configuration and high availability.</li>
+      <li><a href="{{site.baseurl}}/downloads.html">Download Spark</a>. It includes Streaming as a module.</li>
+      <li>Read the <a href="{{site.baseurl}}/docs/latest/streaming-programming-guide.html">Spark Streaming programming guide</a>, which includes a tutorial and describes system architecture, configuration and high availability.</li>
       <li>Check out example programs in <a href="https://github.com/apache/spark/tree/master/examples/src/main/scala/org/apache/spark/examples/streaming">Scala</a> and <a href="https://github.com/apache/spark/tree/master/examples/src/main/java/org/apache/spark/examples/streaming">Java</a>.</li>
     </ul>
   </div>
@@ -137,7 +137,7 @@ subproject: Streaming
 
 <div class="row">
   <div class="col-sm-12 col-center">
-    <a href="{{site.url}}downloads.html" class="btn btn-success btn-lg btn-multiline">
+    <a href="{{site.baseurl}}/downloads.html" class="btn btn-success btn-lg btn-multiline">
       Download Apache Spark<br/><span class="small">Includes Spark Streaming</span>
     </a>
   </div>


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org
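The repeated `{{site.url}}` → `{{site.baseurl}}/` substitution in the hunks above is the heart of this commit: under Jekyll 3.3, `site.url` holds the absolute host (with no trailing slash, and it may be empty during local `jekyll serve`), while `site.baseurl` holds the path prefix. A toy stand-in for the Liquid substitution (an illustration only, not Jekyll's actual renderer) shows why the old pattern glues the host and page name together:

```python
def render(template: str, site: dict) -> str:
    """Minimal stand-in for Liquid variable substitution (illustration only)."""
    return (template
            .replace("{{site.url}}", site.get("url", ""))
            .replace("{{site.baseurl}}", site.get("baseurl", "")))

# Jekyll 3.3: site.url has no trailing slash, so the old pattern drops the
# path separator; the new pattern supplies the "/" explicitly.
site = {"url": "http://spark.apache.org", "baseurl": ""}

old = render('<a href="{{site.url}}community.html">', site)
new = render('<a href="{{site.baseurl}}/community.html">', site)

print(old)  # <a href="http://spark.apache.orgcommunity.html">  (broken)
print(new)  # <a href="/community.html">                        (root-relative)
```

This is why the fix is mechanical across every `.md` file in the diff: each occurrence gains an explicit `/` after `{{site.baseurl}}`.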


[2/3] spark-website git commit: Use site.baseurl, not site.url, to work with Jekyll 3.3. Require Jekyll 3.3. Again commit HTML consistent with Jekyll 3.3 output. Fix date problem with news posts that set date: by removing date:.

Posted by sr...@apache.org.
http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-10-02-spark-1-5-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-10-02-spark-1-5-1-released.md b/news/_posts/2015-10-02-spark-1-5-1-released.md
index f525cbf..d098de6 100644
--- a/news/_posts/2015-10-02-spark-1-5-1-released.md
+++ b/news/_posts/2015-10-02-spark-1-5-1-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">Spark 1.5.1</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">Spark 1.5.1</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-5-1.html" title="Spark Release 1.5.1">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-11-09-spark-1-5-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-11-09-spark-1-5-2-released.md b/news/_posts/2015-11-09-spark-1-5-2-released.md
index 21696c5..fbc6c71 100644
--- a/news/_posts/2015-11-09-spark-1-5-2-released.md
+++ b/news/_posts/2015-11-09-spark-1-5-2-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">Spark 1.5.2</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">Spark 1.5.2</a>! This maintenance release includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, R, Spark SQL, and MLlib.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-5-2.html" title="Spark Release 1.5.2">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-01-04-spark-1-6-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-01-04-spark-1-6-0-released.md b/news/_posts/2016-01-04-spark-1-6-0-released.md
index 4e47772..b399ade 100644
--- a/news/_posts/2016-01-04-spark-1-6-0-released.md
+++ b/news/_posts/2016-01-04-spark-1-6-0-released.md
@@ -12,9 +12,9 @@ meta:
   _wpas_done_all: '1'
 ---
 We are happy to announce the availability of 
-<a href="{{site.url}}releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">Spark 1.6.0</a>! 
+<a href="{{site.baseurl}}/releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">Spark 1.6.0</a>! 
 Spark 1.6.0 is the seventh release on the API-compatible 1.X line. 
 With this release the Spark community continues to grow, with contributions from 248 developers!
 
-Visit the <a href="{{site.url}}releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">release notes</a> 
-to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-6-0.html" title="Spark Release 1.6.0">release notes</a> 
+to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-03-09-spark-1-6-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-03-09-spark-1-6-1-released.md b/news/_posts/2016-03-09-spark-1-6-1-released.md
index adc2735..6e15537 100644
--- a/news/_posts/2016-03-09-spark-1-6-1-released.md
+++ b/news/_posts/2016-03-09-spark-1-6-1-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">Spark 1.6.1</a>! This maintenance release includes fixes across several areas of Spark, including significant updates to the experimental Dataset API.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">Spark 1.6.1</a>! This maintenance release includes fixes across several areas of Spark, including significant updates to the experimental Dataset API.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-6-1.html" title="Spark Release 1.6.1">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-06-25-spark-1-6-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-06-25-spark-1-6-2-released.md b/news/_posts/2016-06-25-spark-1-6-2-released.md
index d3d2beb..3c9bbf3 100644
--- a/news/_posts/2016-06-25-spark-1-6-2-released.md
+++ b/news/_posts/2016-06-25-spark-1-6-2-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">Spark 1.6.2</a>! This maintenance release includes fixes across several areas of Spark.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">Spark 1.6.2</a>! This maintenance release includes fixes across several areas of Spark.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-6-2.html" title="Spark Release 1.6.2">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-07-26-spark-2-0-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-07-26-spark-2-0-0-released.md b/news/_posts/2016-07-26-spark-2-0-0-released.md
index a9597e7..29c74c5 100644
--- a/news/_posts/2016-07-26-spark-2-0-0-released.md
+++ b/news/_posts/2016-07-26-spark-2-0-0-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">Spark 2.0.0</a>! Visit the <a href="{{site.url}}releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">Spark 2.0.0</a>! Visit the <a href="{{site.baseurl}}/releases/spark-release-2-0-0.html" title="Spark Release 2.0.0">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-10-03-spark-2-0-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-10-03-spark-2-0-1-released.md b/news/_posts/2016-10-03-spark-2-0-1-released.md
index b13fb18..7cbca1a 100644
--- a/news/_posts/2016-10-03-spark-2-0-1-released.md
+++ b/news/_posts/2016-10-03-spark-2-0-1-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-2-0-1.html" title="Spark Release 2.0.1">Apache Spark 2.0.1</a>! Visit the <a href="{{site.url}}releases/spark-release-2-0-1.html" title="Spark Release 2.0.1">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-2-0-1.html" title="Spark Release 2.0.1">Apache Spark 2.0.1</a>! Visit the <a href="{{site.baseurl}}/releases/spark-release-2-0-1.html" title="Spark Release 2.0.1">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-11-07-spark-1-6-3-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-11-07-spark-1-6-3-released.md b/news/_posts/2016-11-07-spark-1-6-3-released.md
index d8957d2..b0f4498 100644
--- a/news/_posts/2016-11-07-spark-1-6-3-released.md
+++ b/news/_posts/2016-11-07-spark-1-6-3-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">Spark 1.6.3</a>! This maintenance release includes fixes across several areas of Spark.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">Spark 1.6.3</a>! This maintenance release includes fixes across several areas of Spark.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-6-3.html" title="Spark Release 1.6.3">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2016-11-14-spark-2-0-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2016-11-14-spark-2-0-2-released.md b/news/_posts/2016-11-14-spark-2-0-2-released.md
index 4570ec3..1f5c7e5 100644
--- a/news/_posts/2016-11-14-spark-2-0-2-released.md
+++ b/news/_posts/2016-11-14-spark-2-0-2-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">Apache Spark 2.0.2</a>! This maintenance release includes fixes across several areas of Spark, as well as Kafka 0.10 and runtime metrics support for Structured Streaming.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">Apache Spark 2.0.2</a>! This maintenance release includes fixes across several areas of Spark, as well as Kafka 0.10 and runtime metrics support for Structured Streaming.
 
-Visit the <a href="{{site.url}}releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-2-0-2.html" title="Spark Release 2.0.2">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2013-09-25-spark-release-0-8-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2013-09-25-spark-release-0-8-0.md b/releases/_posts/2013-09-25-spark-release-0-8-0.md
index 6ca6ecb..d74806a 100644
--- a/releases/_posts/2013-09-25-spark-release-0-8-0.md
+++ b/releases/_posts/2013-09-25-spark-release-0-8-0.md
@@ -19,7 +19,7 @@ You can download Spark 0.8.0 as either a <a href="http://spark-project.org/downl
 Spark now displays a variety of monitoring data in a web UI (by default at port 4040 on the driver node). A new job dashboard contains information about running, succeeded, and failed jobs, including percentile statistics covering task runtime, shuffled data, and garbage collection. The existing storage dashboard has been extended, and additional pages have been added to display total storage and task information per-executor. Finally, a new metrics library exposes internal Spark metrics through various APIs including JMX and Ganglia.
 
 <p style="text-align: center;">
-<img src="{{site.url}}images/0.8.0-ui-screenshot.png" style="width:90%;">
+<img src="{{site.baseurl}}/images/0.8.0-ui-screenshot.png" style="width:90%;">
 </p>
 
 ### Machine Learning Library

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2013-12-19-spark-release-0-8-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2013-12-19-spark-release-0-8-1.md b/releases/_posts/2013-12-19-spark-release-0-8-1.md
index 89248d9..4dbe34c 100644
--- a/releases/_posts/2013-12-19-spark-release-0-8-1.md
+++ b/releases/_posts/2013-12-19-spark-release-0-8-1.md
@@ -15,10 +15,10 @@ meta:
 Apache Spark 0.8.1 is a maintenance and performance release for the Scala 2.9 version of Spark. It also adds several new features, such as standalone mode high availability, that will appear in Spark 0.9 but developers wanted to have in Scala 2.9. Contributions to 0.8.1 came from 41 developers.
 
 ### YARN 2.2 Support
-Support has been added for running Spark on YARN 2.2 and newer. Due to a change in the YARN API between previous versions and 2.2+, this was not supported in Spark 0.8.0. See the <a href="{{site.url}}docs/0.8.1/running-on-yarn.html">YARN documentation</a> for specific instructions on how to build Spark for YARN 2.2+. We've also included a pre-compiled binary for YARN 2.2.
+Support has been added for running Spark on YARN 2.2 and newer. Due to a change in the YARN API between previous versions and 2.2+, this was not supported in Spark 0.8.0. See the <a href="{{site.baseurl}}/docs/0.8.1/running-on-yarn.html">YARN documentation</a> for specific instructions on how to build Spark for YARN 2.2+. We've also included a pre-compiled binary for YARN 2.2.
 
 ### High Availability Mode for Standalone Cluster Manager
-The standalone cluster manager now has a high availability (H/A) mode which can tolerate master failures. This is particularly useful for long-running applications such as streaming jobs and the shark server, where the scheduler master previously represented a single point of failure. Instructions for deploying H/A mode are included <a href="{{site.url}}docs/0.8.1/spark-standalone.html#high-availability">in the documentation</a>. The current implementation uses Zookeeper for coordination.
+The standalone cluster manager now has a high availability (H/A) mode which can tolerate master failures. This is particularly useful for long-running applications such as streaming jobs and the shark server, where the scheduler master previously represented a single point of failure. Instructions for deploying H/A mode are included <a href="{{site.baseurl}}/docs/0.8.1/spark-standalone.html#high-availability">in the documentation</a>. The current implementation uses Zookeeper for coordination.
 
 ### Performance Optimizations
 This release adds several performance optimizations:

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-02-02-spark-release-0-9-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-02-02-spark-release-0-9-0.md b/releases/_posts/2014-02-02-spark-release-0-9-0.md
index edcce3a..7f9e107 100644
--- a/releases/_posts/2014-02-02-spark-release-0-9-0.md
+++ b/releases/_posts/2014-02-02-spark-release-0-9-0.md
@@ -11,7 +11,7 @@ meta:
   _wpas_done_all: '1'
 ---
 
-Spark 0.9.0 is a major release that adds significant new features. It updates Spark to Scala 2.10, simplifies high availability, and updates numerous components of the project. This release includes a first version of [GraphX]({{site.url}}graphx/), a powerful new framework for graph processing that comes with a library of standard algorithms. In addition, [Spark Streaming]({{site.url}}streaming/) is now out of alpha, and includes significant optimizations and simplified high availability deployment.
+Spark 0.9.0 is a major release that adds significant new features. It updates Spark to Scala 2.10, simplifies high availability, and updates numerous components of the project. This release includes a first version of [GraphX]({{site.baseurl}}/graphx/), a powerful new framework for graph processing that comes with a library of standard algorithms. In addition, [Spark Streaming]({{site.baseurl}}/streaming/) is now out of alpha, and includes significant optimizations and simplified high availability deployment.
 
 You can download Spark 0.9.0 as either a
 <a href="http://d3kbcqa49mib13.cloudfront.net/spark-0.9.0-incubating.tgz" onClick="trackOutboundLink(this, 'Release Download Links', 'cloudfront_spark-0.9.0-incubating.tgz'); return false;">source package</a>
@@ -27,16 +27,16 @@ Spark now runs on Scala 2.10, letting users benefit from the language and librar
 
 ### Configuration System
 
-The new [SparkConf]({{site.url}}docs/latest/api/core/index.html#org.apache.spark.SparkConf) class is now the preferred way to configure advanced settings on your SparkContext, though the previous Java system property method still works. SparkConf is especially useful in tests to make sure properties don’t stay set across tests.
+The new [SparkConf]({{site.baseurl}}/docs/latest/api/core/index.html#org.apache.spark.SparkConf) class is now the preferred way to configure advanced settings on your SparkContext, though the previous Java system property method still works. SparkConf is especially useful in tests to make sure properties don’t stay set across tests.
 
 ### Spark Streaming Improvements
 
 Spark Streaming is now out of alpha, and comes with simplified high availability and several optimizations.
 
-* When running on a Spark standalone cluster with the [standalone cluster high availability mode]({{site.url}}docs/0.9.0/spark-standalone.html#high-availability), you can submit a Spark Streaming driver application to the cluster and have it automatically recovered if either the driver or the cluster master crashes.
+* When running on a Spark standalone cluster with the [standalone cluster high availability mode]({{site.baseurl}}/docs/0.9.0/spark-standalone.html#high-availability), you can submit a Spark Streaming driver application to the cluster and have it automatically recovered if either the driver or the cluster master crashes.
 * Windowed operators have been sped up by 30-50%.
 * Spark Streaming’s input source plugins (e.g. for Twitter, Kafka and Flume) are now separate Maven modules, making it easier to pull in only the dependencies you need.
-* A new [StreamingListener]({{site.url}}docs/0.9.0/api/streaming/index.html#org.apache.spark.streaming.scheduler.StreamingListener) interface has been added for monitoring statistics about the streaming computation.
+* A new [StreamingListener]({{site.baseurl}}/docs/0.9.0/api/streaming/index.html#org.apache.spark.streaming.scheduler.StreamingListener) interface has been added for monitoring statistics about the streaming computation.
 * A few aspects of the API have been improved:
    * `DStream` and `PairDStream` classes have been moved from `org.apache.spark.streaming` to `org.apache.spark.streaming.dstream` to keep it consistent with `org.apache.spark.rdd.RDD`.
    * `DStream.foreach` has been renamed to `foreachRDD` to make it explicit that it works for every RDD, not every element
@@ -45,22 +45,22 @@ Spark Streaming is now out of alpha, and comes with simplified high availability
 
 ### GraphX Alpha
 
-[GraphX]({{site.url}}graphx/) is a new framework for graph processing that uses recent advances in graph-parallel computation. It lets you build a graph within a Spark program using the standard Spark operators, then process it with new graph operators that are optimized for distributed computation. It includes [basic transformations]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.Graph), a [Pregel API]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.Pregel$) for iterative computation, and a standard library of [graph loaders]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.util.GraphGenerators$) and [analytics algorithms]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.package). By offering these features *within* the Spark engine, GraphX can significantly speed up processing pipelines compared to workflows that use different engines.
+[GraphX]({{site.baseurl}}/graphx/) is a new framework for graph processing that uses recent advances in graph-parallel computation. It lets you build a graph within a Spark program using the standard Spark operators, then process it with new graph operators that are optimized for distributed computation. It includes [basic transformations]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.Graph), a [Pregel API]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.Pregel$) for iterative computation, and a standard library of [graph loaders]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.util.GraphGenerators$) and [analytics algorithms]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.package). By offering these features *within* the Spark engine, GraphX can significantly speed up processing pipelines compared to workflows that use different engines.
 
 GraphX features in this release include:
 
 * Building graphs from arbitrary Spark RDDs
 * Basic operations to transform graphs or extract subgraphs
 * An optimized Pregel API that takes advantage of graph partitioning and indexing
-* Standard algorithms including [PageRank]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.PageRank$), [connected components]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.ConnectedComponents$), [strongly connected components]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.StronglyConnectedComponents$), [SVD++]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.SVDPlusPlus$), and [triangle counting]({{site.url}}docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.TriangleCount$)
+* Standard algorithms including [PageRank]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.PageRank$), [connected components]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.ConnectedComponents$), [strongly connected components]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.StronglyConnectedComponents$), [SVD++]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.SVDPlusPlus$), and [triangle counting]({{site.baseurl}}/docs/0.9.0/api/graphx/index.html#org.apache.spark.graphx.lib.TriangleCount$)
 * Interactive use from the Spark shell
 
 GraphX is still marked as alpha in this first release, but we recommend that new users use it instead of the more limited Bagel API.
 
 ### MLlib Improvements
 
-* Spark’s machine learning library (MLlib) is now [available in Python]({{site.url}}docs/0.9.0/mllib-guide.html#using-mllib-in-python), where it operates on NumPy data (currently requires Python 2.7 and NumPy 1.7)
-* A new algorithm has been added for [Naive Bayes classification]({{site.url}}docs/0.9.0/api/mllib/index.html#org.apache.spark.mllib.classification.NaiveBayes)
+* Spark’s machine learning library (MLlib) is now [available in Python]({{site.baseurl}}/docs/0.9.0/mllib-guide.html#using-mllib-in-python), where it operates on NumPy data (currently requires Python 2.7 and NumPy 1.7)
+* A new algorithm has been added for [Naive Bayes classification]({{site.baseurl}}/docs/0.9.0/api/mllib/index.html#org.apache.spark.mllib.classification.NaiveBayes)
 * Alternating Least Squares models can now be used to predict ratings for multiple items in parallel
 * MLlib’s documentation was expanded to include more examples in Scala, Java and Python
 
@@ -77,7 +77,7 @@ GraphX is still marked as alpha in this first release, but we recommend for new
 
 ### Core Engine
 
-* Spark’s standalone mode now supports submitting a driver program to run on the cluster instead of on the external machine submitting it. You can access this functionality through the [org.apache.spark.deploy.Client]({{site.url}}docs/0.9.0/spark-standalone.html#launching-applications-inside-the-cluster) class.
+* Spark’s standalone mode now supports submitting a driver program to run on the cluster instead of on the external machine submitting it. You can access this functionality through the [org.apache.spark.deploy.Client]({{site.baseurl}}/docs/0.9.0/spark-standalone.html#launching-applications-inside-the-cluster) class.
 * Large reduce operations now automatically spill data to disk if it does not fit in memory.
 * Users of standalone mode can now limit how many cores an application will use by default if the application writer didn’t configure its size. Previously, such applications took all available cores on the cluster.
 * `spark-shell` now supports the `-i` option to run a script on startup.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-05-30-spark-release-1-0-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-05-30-spark-release-1-0-0.md b/releases/_posts/2014-05-30-spark-release-1-0-0.md
index acb6b3e..22d59f6 100644
--- a/releases/_posts/2014-05-30-spark-release-1-0-0.md
+++ b/releases/_posts/2014-05-30-spark-release-1-0-0.md
@@ -11,7 +11,7 @@ meta:
   _wpas_done_all: '1'
 ---
 
-Spark 1.0.0 is a major release marking the start of the 1.X line. This release brings both a variety of new features and strong API compatibility guarantees throughout the 1.X line. Spark 1.0 adds a new major component, [Spark SQL]({{site.url}}docs/latest/sql-programming-guide.html), for loading and manipulating structured data in Spark. It includes major extensions to all of Spark’s existing standard libraries ([ML]({{site.url}}docs/latest/mllib-guide.html), [Streaming]({{site.url}}docs/latest/streaming-programming-guide.html), and [GraphX]({{site.url}}docs/latest/graphx-programming-guide.html)) while also enhancing language support in Java and Python. Finally, Spark 1.0 brings operational improvements including full support for the Hadoop/YARN security model and a unified submission process for all supported cluster managers.
+Spark 1.0.0 is a major release marking the start of the 1.X line. This release brings both a variety of new features and strong API compatibility guarantees throughout the 1.X line. Spark 1.0 adds a new major component, [Spark SQL]({{site.baseurl}}/docs/latest/sql-programming-guide.html), for loading and manipulating structured data in Spark. It includes major extensions to all of Spark’s existing standard libraries ([ML]({{site.baseurl}}/docs/latest/mllib-guide.html), [Streaming]({{site.baseurl}}/docs/latest/streaming-programming-guide.html), and [GraphX]({{site.baseurl}}/docs/latest/graphx-programming-guide.html)) while also enhancing language support in Java and Python. Finally, Spark 1.0 brings operational improvements including full support for the Hadoop/YARN security model and a unified submission process for all supported cluster managers.
 
 You can download Spark 1.0.0 as either a 
 <a href="http://d3kbcqa49mib13.cloudfront.net/spark-1.0.0.tgz" onClick="trackOutboundLink(this, 'Release Download Links', 'cloudfront_spark-1.0.0.tgz'); return false;">source package</a>
@@ -28,13 +28,13 @@ Spark 1.0.0 is the first release in the 1.X major line. Spark is guaranteeing st
 For users running in secured Hadoop environments, Spark now integrates with the Hadoop/YARN security model. Spark will authenticate job submission, securely transfer HDFS credentials, and authenticate communication between components.
 
 ### Operational and Packaging Improvements
-This release significantly simplifies the process of bundling and submitting a Spark application. A new [spark-submit tool]({{site.url}}docs/latest/submitting-applications.html) allows users to submit an application to any Spark cluster, including local clusters, Mesos, or YARN, through a common process. The documentation for bundling Spark applications has been substantially expanded. We’ve also added a history server for Spark’s web UI, allowing users to view Spark application data after individual applications are finished.
+This release significantly simplifies the process of bundling and submitting a Spark application. A new [spark-submit tool]({{site.baseurl}}/docs/latest/submitting-applications.html) allows users to submit an application to any Spark cluster, including local clusters, Mesos, or YARN, through a common process. The documentation for bundling Spark applications has been substantially expanded. We’ve also added a history server for Spark’s web UI, allowing users to view Spark application data after individual applications are finished.
 
 ### Spark SQL
-This release introduces [Spark SQL]({{site.url}}docs/latest/sql-programming-guide.html) as a new alpha component. Spark SQL provides support for loading and manipulating structured data in Spark, either from external structured data sources (currently Hive and Parquet) or by adding a schema to an existing RDD. Spark SQL’s API interoperates with the RDD data model, allowing users to interleave Spark code with SQL statements. Under the hood, Spark SQL uses the Catalyst optimizer to choose an efficient execution plan, and can automatically push predicates into storage formats like Parquet. In future releases, Spark SQL will also provide a common API to other storage systems.
+This release introduces [Spark SQL]({{site.baseurl}}/docs/latest/sql-programming-guide.html) as a new alpha component. Spark SQL provides support for loading and manipulating structured data in Spark, either from external structured data sources (currently Hive and Parquet) or by adding a schema to an existing RDD. Spark SQL’s API interoperates with the RDD data model, allowing users to interleave Spark code with SQL statements. Under the hood, Spark SQL uses the Catalyst optimizer to choose an efficient execution plan, and can automatically push predicates into storage formats like Parquet. In future releases, Spark SQL will also provide a common API to other storage systems.
 
 ### MLlib Improvements
-In 1.0.0, Spark’s MLlib adds support for sparse feature vectors in Scala, Java, and Python. It takes advantage of sparsity in both storage and computation in linear methods, k-means, and naive Bayes. In addition, this release adds several new algorithms: scalable decision trees for both classification and regression, distributed matrix algorithms including SVD and PCA, model evaluation functions, and L-BFGS as an optimization primitive. The [MLlib programming guide]({{site.url}}docs/latest/mllib-guide.html) and code examples have also been greatly expanded.
+In 1.0.0, Spark’s MLlib adds support for sparse feature vectors in Scala, Java, and Python. It takes advantage of sparsity in both storage and computation in linear methods, k-means, and naive Bayes. In addition, this release adds several new algorithms: scalable decision trees for both classification and regression, distributed matrix algorithms including SVD and PCA, model evaluation functions, and L-BFGS as an optimization primitive. The [MLlib programming guide]({{site.baseurl}}/docs/latest/mllib-guide.html) and code examples have also been greatly expanded.
 
 ### GraphX and Streaming Improvements
 In addition to usability and maintainability improvements, GraphX in Spark 1.0 brings substantial performance boosts in graph loading, edge reversal, and neighborhood computation. These operations now require less communication and produce simpler RDD graphs. Spark\u2019s Streaming module has added performance optimizations for stateful stream transformations, along with improved Flume support, and automated state cleanup for long running jobs.
@@ -43,7 +43,7 @@ In addition to usability and maintainability improvements, GraphX in Spark 1.0 b
 Spark 1.0 adds support for Java 8 [new lambda syntax](http://docs.oracle.com/javase/tutorial/java/javaOO/lambdaexpressions.html) in its Java bindings. Java 8 supports a concise syntax for writing anonymous functions, similar to the closure syntax in Scala and Python. This requires small changes for users of the current Java API, which are noted in the documentation. Spark’s Python API has been extended to support several new functions. We’ve also included several stability improvements in the Python API, particularly for large datasets. PySpark now supports running on YARN as well.
 
 ### Documentation
-Spark's [programming guide]({{site.url}}docs/latest/programming-guide.html) has been significantly expanded to centrally cover all supported languages and discuss more operators and aspects of the development life cycle. The [MLlib guide]({{site.url}}docs/latest/mllib-guide.html) has also been expanded with significantly more detail and examples for each algorithm, while documents on configuration, YARN and Mesos have also been revamped.
+Spark's [programming guide]({{site.baseurl}}/docs/latest/programming-guide.html) has been significantly expanded to centrally cover all supported languages and discuss more operators and aspects of the development life cycle. The [MLlib guide]({{site.baseurl}}/docs/latest/mllib-guide.html) has also been expanded with significantly more detail and examples for each algorithm, while documents on configuration, YARN and Mesos have also been revamped.
 
 ### Smaller Changes
 - PySpark now works with more Python versions than before -- Python 2.6+ instead of 2.7+, and NumPy 1.4+ instead of 1.7+.
@@ -52,12 +52,12 @@ Spark's [programming guide]({{site.url}}docs/latest/programming-guide.html) has
 - Support for off-heap storage in Tachyon has been added via a special build target.
 - Datasets persisted with `DISK_ONLY` now write directly to disk, significantly improving memory usage for large datasets.
 - Intermediate state created during a Spark job is now garbage collected when the corresponding RDDs become unreferenced, improving performance.
-- Spark now includes a [Javadoc version]({{site.url}}docs/latest/api/java/index.html) of all its API docs and a [unified Scaladoc]({{site.url}}docs/latest/api/scala/index.html) for all modules.
+- Spark now includes a [Javadoc version]({{site.baseurl}}/docs/latest/api/java/index.html) of all its API docs and a [unified Scaladoc]({{site.baseurl}}/docs/latest/api/scala/index.html) for all modules.
 - A new SparkContext.wholeTextFiles method lets you operate on small text files as individual records.
 
 
 ### Migrating to Spark 1.0
-While most of the Spark API remains the same as in 0.x versions, a few changes have been made for long-term flexibility, especially in the Java API (to support Java 8 lambdas). The documentation includes [migration information]({{site.url}}docs/latest/programming-guide.html#migrating-from-pre-10-versions-of-spark) to upgrade your applications.
+While most of the Spark API remains the same as in 0.x versions, a few changes have been made for long-term flexibility, especially in the Java API (to support Java 8 lambdas). The documentation includes [migration information]({{site.baseurl}}/docs/latest/programming-guide.html#migrating-from-pre-10-versions-of-spark) to upgrade your applications.
 
 ### Contributors
 The following developers contributed to this release:

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-09-11-spark-release-1-1-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-09-11-spark-release-1-1-0.md b/releases/_posts/2014-09-11-spark-release-1-1-0.md
index f4878a6..b12a727 100644
--- a/releases/_posts/2014-09-11-spark-release-1-1-0.md
+++ b/releases/_posts/2014-09-11-spark-release-1-1-0.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.1.0 is the first minor release on the 1.X line. This release brings operational and performance improvements in Spark core along with significant extensions to Spark’s newest libraries: MLlib and Spark SQL. It also builds out Spark’s Python support and adds new components to the Spark Streaming module. Spark 1.1 represents the work of 171 contributors, the most to ever contribute to a Spark release!
 
-To download Spark 1.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Performance and Usability Improvements
 Across the board, Spark 1.1 adds features for improved stability and performance, particularly for large-scale workloads. Spark now performs [disk spilling for skewed blocks](https://issues.apache.org/jira/browse/SPARK-1777) during cache operations, guarding against memory overflows if a single RDD partition is large. Disk spilling during aggregations, introduced in Spark 1.0, has been [ported to PySpark](https://issues.apache.org/jira/browse/SPARK-2538). This release introduces a [new shuffle implementation](https://issues.apache.org/jira/browse/SPARK-2045) optimized for very large scale shuffles. This “sort-based shuffle” will become the default in the next release, and is now available to users. For jobs with large numbers of reducers, we recommend turning this on. This release also adds several usability improvements for monitoring the performance of long running or complex jobs. Among the changes are better [named accumulators](https://issues.apache.org/jira/browse/SPARK-2380) that display in Spark’s UI, [dynamic updating of metrics](https://issues.apache.org/jira/browse/SPARK-2099) for progress tasks, and [reporting of input metrics](https://issues.apache.org/jira/browse/SPARK-1683) for tasks that read input data.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-11-26-spark-release-1-1-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-11-26-spark-release-1-1-1.md b/releases/_posts/2014-11-26-spark-release-1-1-1.md
index 4153942..ab067ee 100644
--- a/releases/_posts/2014-11-26-spark-release-1-1-1.md
+++ b/releases/_posts/2014-11-26-spark-release-1-1-1.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.1.1 is a maintenance release with bug fixes. This release is based on the [branch-1.1](https://github.com/apache/spark/tree/branch-1.1) maintenance branch of Spark. We recommend that all 1.1.0 users upgrade to this stable release. Contributions to this release came from 55 developers.
 
-To download Spark 1.1.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.1.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.1.1 contains bug fixes in several components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](http://s.apache.org/z9h) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2014-12-18-spark-release-1-2-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2014-12-18-spark-release-1-2-0.md b/releases/_posts/2014-12-18-spark-release-1-2-0.md
index d9dab5c..bb9a01c 100644
--- a/releases/_posts/2014-12-18-spark-release-1-2-0.md
+++ b/releases/_posts/2014-12-18-spark-release-1-2-0.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.2.0 is the third release on the 1.X line. This release brings performance and usability improvements in Spark’s core engine, a major new API for MLlib, expanded ML support in Python, a fully H/A mode in Spark Streaming, and much more. GraphX has seen major performance and API improvements and graduates from an alpha component. Spark 1.2 represents the work of 172 contributors from more than 60 institutions in more than 1000 individual patches.
 
-To download Spark 1.2 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.2 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Spark Core
 In 1.2 Spark core upgrades two major subsystems to improve the performance and stability of very large scale shuffles. The first is Spark’s communication manager used during bulk transfers, which upgrades to a [netty-based implementation](https://issues.apache.org/jira/browse/SPARK-2468). The second is Spark’s shuffle mechanism, which upgrades to the [“sort based” shuffle initially released in Spark 1.1](https://issues.apache.org/jira/browse/SPARK-3280). These both improve the performance and stability of very large scale shuffles. Spark also adds an [elastic scaling mechanism](https://issues.apache.org/jira/browse/SPARK-3174) designed to improve cluster utilization during long running ETL-style jobs. This is currently supported on YARN and will make its way to other cluster managers in future versions. Finally, Spark 1.2 adds support for Scala 2.11. For instructions on building for Scala 2.11 see the [build documentation](/docs/1.2.0/building-spark.html#building-for-scala-211).

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-02-09-spark-release-1-2-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-02-09-spark-release-1-2-1.md b/releases/_posts/2015-02-09-spark-release-1-2-1.md
index 8bd5aef..3f5c579 100644
--- a/releases/_posts/2015-02-09-spark-release-1-2-1.md
+++ b/releases/_posts/2015-02-09-spark-release-1-2-1.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.2.1 is a maintenance release containing stability fixes. This release is based on the [branch-1.2](https://github.com/apache/spark/tree/branch-1.2) maintenance branch of Spark. We recommend that all 1.2.0 users upgrade to this stable release. Contributions to this release came from 69 developers.
 
-To download Spark 1.2.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.2.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.2.1 contains bug fixes in several components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](http://s.apache.org/Mpn) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-03-13-spark-release-1-3-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-03-13-spark-release-1-3-0.md b/releases/_posts/2015-03-13-spark-release-1-3-0.md
index bc9c4db..03230fa 100644
--- a/releases/_posts/2015-03-13-spark-release-1-3-0.md
+++ b/releases/_posts/2015-03-13-spark-release-1-3-0.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.3.0 is the fourth release on the 1.X line. This release brings a new DataFrame API alongside the graduation of Spark SQL from an alpha project. It also brings usability improvements in Spark’s core engine and expansion of MLlib and Spark Streaming. Spark 1.3 represents the work of 174 contributors from more than 60 institutions in more than 1000 individual patches.
 
-To download Spark 1.3 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.3 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Spark Core
 Spark 1.3 sees a handful of usability improvements in the core engine. The core API now supports [multi-level aggregation trees](https://issues.apache.org/jira/browse/SPARK-5430) to help speed up expensive reduce operations. [Improved error reporting](https://issues.apache.org/jira/browse/SPARK-5063) has been added for certain gotcha operations. Spark's Jetty dependency is [now shaded](https://issues.apache.org/jira/browse/SPARK-3996) to help avoid conflicts with user programs. Spark now supports [SSL encryption](https://issues.apache.org/jira/browse/SPARK-3883) for some communication endpoints. Finally, real-time [GC metrics](https://issues.apache.org/jira/browse/SPARK-3428) and [record counts](https://issues.apache.org/jira/browse/SPARK-4874) have been added to the UI.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-04-17-spark-release-1-2-2.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-04-17-spark-release-1-2-2.md b/releases/_posts/2015-04-17-spark-release-1-2-2.md
index e118849..2bc3974 100644
--- a/releases/_posts/2015-04-17-spark-release-1-2-2.md
+++ b/releases/_posts/2015-04-17-spark-release-1-2-2.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.2.2 is a maintenance release containing stability fixes. This release is based on the [branch-1.2](https://github.com/apache/spark/tree/branch-1.2) maintenance branch of Spark. We recommend that all 1.2.1 users upgrade to this stable release. Contributions to this release came from 39 developers.
 
-To download Spark 1.2.2 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.2.2 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.2.2 contains bug fixes in several components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%201.2.2%20ORDER%20BY%20priority%2C%20component) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-04-17-spark-release-1-3-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-04-17-spark-release-1-3-1.md b/releases/_posts/2015-04-17-spark-release-1-3-1.md
index dc7c5d4..40ce957 100644
--- a/releases/_posts/2015-04-17-spark-release-1-3-1.md
+++ b/releases/_posts/2015-04-17-spark-release-1-3-1.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.3.1 is a maintenance release containing stability fixes. This release is based on the [branch-1.3](https://github.com/apache/spark/tree/branch-1.3) maintenance branch of Spark. We recommend that all 1.3.0 users upgrade to this stable release. Contributions to this release came from 60 developers.
 
-To download Spark 1.3.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.3.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.3.1 contains several bug fixes in Spark SQL and assorted fixes in other components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%201.3.1%20ORDER%20BY%20priority%2C%20component) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-06-11-spark-release-1-4-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-06-11-spark-release-1-4-0.md b/releases/_posts/2015-06-11-spark-release-1-4-0.md
index b7c315a..e02310f 100644
--- a/releases/_posts/2015-06-11-spark-release-1-4-0.md
+++ b/releases/_posts/2015-06-11-spark-release-1-4-0.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.4.0 is the fifth release on the 1.X line. This release brings an R API to Spark. It also brings usability improvements in Spark’s core engine and expansion of MLlib and Spark Streaming. Spark 1.4 represents the work of more than 210 contributors from more than 70 institutions in more than 1000 individual patches.
 
-To download Spark 1.4 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.4 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### SparkR
 Spark 1.4 is the first release to package SparkR, an R binding for Spark based

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-07-15-spark-release-1-4-1.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-07-15-spark-release-1-4-1.md b/releases/_posts/2015-07-15-spark-release-1-4-1.md
index 58b53b8..7664355 100644
--- a/releases/_posts/2015-07-15-spark-release-1-4-1.md
+++ b/releases/_posts/2015-07-15-spark-release-1-4-1.md
@@ -13,7 +13,7 @@ meta:
 
 Spark 1.4.1 is a maintenance release containing stability fixes. This release is based on the [branch-1.4](https://github.com/apache/spark/tree/branch-1.4) maintenance branch of Spark. We recommend that all 1.4.0 users upgrade to this stable release. 85 developers contributed to this release.
 
-To download Spark 1.4.1 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download Spark 1.4.1 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 ### Fixes
 Spark 1.4.1 contains several bug fixes in Spark's DataFrame and data source support and assorted fixes in other components. Some of the more important fixes are highlighted below. You can visit the [Spark issue tracker](https://issues.apache.org/jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%201.4.1%20ORDER%20BY%20priority%2C%20component) for the full list of fixes.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/releases/_posts/2015-09-09-spark-release-1-5-0.md
----------------------------------------------------------------------
diff --git a/releases/_posts/2015-09-09-spark-release-1-5-0.md b/releases/_posts/2015-09-09-spark-release-1-5-0.md
index b527f7f..70d3368 100644
--- a/releases/_posts/2015-09-09-spark-release-1-5-0.md
+++ b/releases/_posts/2015-09-09-spark-release-1-5-0.md
@@ -11,7 +11,7 @@ meta:
   _wpas_done_all: '1'
 ---
 
-Spark 1.5.0 is the sixth release on the 1.x line. This release represents 1400+ patches from 230+ contributors and 80+ institutions. To download Spark 1.5.0 visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+Spark 1.5.0 is the sixth release on the 1.x line. This release represents 1400+ patches from 230+ contributors and 80+ institutions. To download Spark 1.5.0 visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.
 
 You can consult JIRA for the [detailed changes](https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315420&version=12332078). We have curated a list of high level changes here:
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md
----------------------------------------------------------------------
diff --git a/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md b/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md
index 5889a35..3236467 100644
--- a/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md
+++ b/screencasts/_posts/2013-04-10-1-first-steps-with-spark.md
@@ -20,6 +20,6 @@ This screencast marks the beginning of a series of hands-on screencasts we will
 
 <div class="video-container video-square shadow"><iframe width="755" height="705" src="//www.youtube.com/embed/bWorBGOFBWY?autohide=0&showinfo=0&list=PL-x35fyliRwhKT-NpTKprPW1bkbdDcTTW" frameborder="0" allowfullscreen></iframe></div>
 
-Check out the next Spark screencast in the series, <a href="{{site.url}}screencasts/2-spark-documentation-overview.html">Spark Screencast #2 - Overview of Spark Documentation</a>.
+Check out the next Spark screencast in the series, <a href="{{site.baseurl}}/screencasts/2-spark-documentation-overview.html">Spark Screencast #2 - Overview of Spark Documentation</a>.
 
-For more information and links to other Spark screencasts, check out the <a href="{{site.url}}documentation.html">Spark documentation page</a>.
+For more information and links to other Spark screencasts, check out the <a href="{{site.baseurl}}/documentation.html">Spark documentation page</a>.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md
----------------------------------------------------------------------
diff --git a/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md b/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md
index af5d281..1fd7b7d 100644
--- a/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md
+++ b/screencasts/_posts/2013-04-11-2-spark-documentation-overview.md
@@ -12,11 +12,11 @@ This is our 2nd Spark screencast. In it, we take a tour of the documentation ava
 
 <div class="video-container video-square shadow"><iframe width="755" height="705" src="//www.youtube.com/embed/Dbqe_rv-NJQ?autohide=0&showinfo=0&list=PL-x35fyliRwhKT-NpTKprPW1bkbdDcTTW" frameborder="0" allowfullscreen></iframe></div>
 
-Check out the next spark screencast in the series, <a href="{{site.url}}screencasts/3-transformations-and-caching.html">Spark Screencast #3 - Transformations and Caching</a>.
+Check out the next spark screencast in the series, <a href="{{site.baseurl}}/screencasts/3-transformations-and-caching.html">Spark Screencast #3 - Transformations and Caching</a>.
 
 
 And here are links to the documentation shown in the video:
 <ul>
-  <li><a href="{{site.url}}documentation.html">Spark documentation page</a></li>
+  <li><a href="{{site.baseurl}}/documentation.html">Spark documentation page</a></li>
   <li><a href="http://ampcamp.berkeley.edu/big-data-mini-course-home">Amp Camp Mini Course</a></li>
 </ul>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/screencasts/_posts/2013-04-16-3-transformations-and-caching.md
----------------------------------------------------------------------
diff --git a/screencasts/_posts/2013-04-16-3-transformations-and-caching.md b/screencasts/_posts/2013-04-16-3-transformations-and-caching.md
index bb8e367..7a4cb36 100644
--- a/screencasts/_posts/2013-04-16-3-transformations-and-caching.md
+++ b/screencasts/_posts/2013-04-16-3-transformations-and-caching.md
@@ -12,6 +12,6 @@ In this third Spark screencast, we demonstrate more advanced use of RDD actions
 
 <div class="video-container video-square shadow"><iframe width="755" height="705" src="//www.youtube.com/embed/TtvxKzO9jXE?autohide=0&showinfo=0&list=PL-x35fyliRwhKT-NpTKprPW1bkbdDcTTW" frameborder="0" allowfullscreen></iframe></div>
 
-Check out the next spark screencast in the series, <a href="{{site.url}}screencasts/4-a-standalone-job-in-spark.html">Spark Screencast #4 - A Standalone Job in Scala</a>.
+Check out the next spark screencast in the series, <a href="{{site.baseurl}}/screencasts/4-a-standalone-job-in-spark.html">Spark Screencast #4 - A Standalone Job in Scala</a>.
 
-For more information and links to other Spark screencasts, check out the <a href="{{site.url}}documentation.html">Spark documentation page</a>.
+For more information and links to other Spark screencasts, check out the <a href="{{site.baseurl}}/documentation.html">Spark documentation page</a>.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md
----------------------------------------------------------------------
diff --git a/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md b/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md
index 2bd3c2c..96fb8dc 100644
--- a/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md
+++ b/screencasts/_posts/2013-08-26-4-a-standalone-job-in-spark.md
@@ -13,4 +13,4 @@ In this Spark screencast, we create a standalone Apache Spark job in Scala. In t
 
 <div class="video-container video-16x9 shadow"><iframe width="755" height="425" src="//www.youtube.com/embed/GaBn-YjlR8Q?autohide=0&showinfo=0&list=PL-x35fyliRwhKT-NpTKprPW1bkbdDcTTW" frameborder="0" allowfullscreen></iframe></div>
 
-For more information and links to other Spark screencasts, check out the <a href="{{site.url}}documentation.html">Spark documentation page</a>.
+For more information and links to other Spark screencasts, check out the <a href="{{site.baseurl}}/documentation.html">Spark documentation page</a>.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/documentation.html
----------------------------------------------------------------------
diff --git a/site/documentation.html b/site/documentation.html
index 60c1b59..2852976 100644
--- a/site/documentation.html
+++ b/site/documentation.html
@@ -255,12 +255,13 @@
 </ul>
 
 <h4><a name="meetup-videos"></a>Meetup Talk Videos</h4>
-<p>In addition to the videos listed below, you can also view <a href="http://www.meetup.com/spark-users/files/">all slides from Bay Area meetups here</a>.
+<p>In addition to the videos listed below, you can also view <a href="http://www.meetup.com/spark-users/files/">all slides from Bay Area meetups here</a>.</p>
 <style type="text/css">
   .video-meta-info {
     font-size: 0.95em;
   }
-</style></p>
+</style>
+
 <ul>
   <li><a href="http://www.youtube.com/watch?v=NUQ-8to2XAk&amp;list=PL-x35fyliRwiP3YteXbnhk0QGOtYLBT3a">Spark 1.0 and Beyond</a> (<a href="http://files.meetup.com/3138542/Spark%201.0%20Meetup.ppt">slides</a>) <span class="video-meta-info">by Patrick Wendell, at Cisco in San Jose, 2014-04-23</span></li>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/index.html
----------------------------------------------------------------------
diff --git a/site/news/index.html b/site/news/index.html
index 5d9d27d..bfc8a8e 100644
--- a/site/news/index.html
+++ b/site/news/index.html
@@ -390,22 +390,22 @@ With this release the Spark community continues to grow, with contributions from
 
 <article class="hentry">
     <header class="entry-header">
-      <h3 class="entry-title"><a href="/news/one-month-to-spark-summit-2015.html">One month to Spark Summit 2015 in San Francisco</a></h3>
-      <div class="entry-date">May 14, 2015</div>
+      <h3 class="entry-title"><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a></h3>
+      <div class="entry-date">May 15, 2015</div>
     </header>
-    <div class="entry-content"><p>There is one month left until <a href="https://spark-summit.org/2015/">Spark Summit 2015</a>, which
-will be held in San Francisco on June 15th to 17th.
-The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presentations</a> from over 50 organizations using Spark, focused on use cases and ongoing development.</p>
-
+    <div class="entry-content"><p>Abstract submissions are now open for the first ever <a href="https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/">Spark Summit Europe</a>. The event will take place on October 27th to 29th in Amsterdam. Submissions are welcome across a variety of Spark related topics, including use cases and ongoing development.</p>
 </div>
   </article>
 
 <article class="hentry">
     <header class="entry-header">
-      <h3 class="entry-title"><a href="/news/spark-summit-europe.html">Announcing Spark Summit Europe</a></h3>
-      <div class="entry-date">May 14, 2015</div>
+      <h3 class="entry-title"><a href="/news/one-month-to-spark-summit-2015.html">One month to Spark Summit 2015 in San Francisco</a></h3>
+      <div class="entry-date">May 15, 2015</div>
     </header>
-    <div class="entry-content"><p>Abstract submissions are now open for the first ever <a href="https://www.prevalentdesignevents.com/sparksummit2015/europe/speaker/">Spark Summit Europe</a>. The event will take place on October 27th to 29th in Amsterdam. Submissions are welcome across a variety of Spark related topics, including use cases and ongoing development.</p>
+    <div class="entry-content"><p>There is one month left until <a href="https://spark-summit.org/2015/">Spark Summit 2015</a>, which
+will be held in San Francisco on June 15th to 17th.
+The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presentations</a> from over 50 organizations using Spark, focused on use cases and ongoing development.</p>
+
 </div>
   </article>
 
@@ -414,7 +414,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <h3 class="entry-title"><a href="/news/spark-summit-east-2015-videos-posted.html">Spark Summit East 2015 Videos Posted</a></h3>
       <div class="entry-date">April 20, 2015</div>
     </header>
-    <div class="entry-content"><p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top. </p>
+    <div class="entry-content"><p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top.</p>
 
 </div>
   </article>
@@ -424,7 +424,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
       <h3 class="entry-title"><a href="/news/spark-1-2-2-released.html">Spark 1.2.2 and 1.3.1 released</a></h3>
       <div class="entry-date">April 17, 2015</div>
     </header>
-    <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers. </p>
+    <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers.</p>
 
 </div>
   </article>
@@ -536,7 +536,7 @@ The Summit will contain <a href="https://spark-summit.org/2015/schedule/">presen
     </header>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">
 Spark 0.9.2</a>! Apache Spark 0.9.2 is a maintenance release with bug fixes. We recommend all 0.9.x users to upgrade to this stable release. 
-Contributions to this release came from 28 developers. </p>
+Contributions to this release came from 28 developers.</p>
 
 </div>
   </article>
@@ -607,7 +607,7 @@ about the latest happenings in Spark.</p>
     <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">
 Spark 0.9.1</a>! Apache Spark 0.9.1 is a maintenance release with bug fixes, performance improvements, better stability with YARN and 
 improved parity of the Scala and Python API. We recommend all 0.9.0 users to upgrade to this stable release. 
-Contributions to this release came from 37 developers. </p>
+Contributions to this release came from 37 developers.</p>
 
 </div>
   </article>
@@ -780,11 +780,6 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
     </header>
     <div class="entry-content"><p>We have released the first two screencasts in a series of short hands-on video training courses we will be publishing to help new users get up and running with Spark in minutes.</p>
 
-<p>The first Spark screencast is called <a href="/screencasts/1-first-steps-with-spark.html">First Steps With Spark</a> and walks you through downloading and building Spark, as well as using the Spark shell, all in less than 10 minutes!</p>
-
-<p>The second screencast is a 2 minute <a href="/screencasts/2-spark-documentation-overview.html">overview of the Spark documentation</a>.</p>
-
-<p>We hope you find these screencasts useful.</p>
 </div>
   </article>
 
@@ -862,7 +857,7 @@ Over 450 Spark developers and enthusiasts from 13 countries and more than 180 co
 <li><a href="http://data-informed.com/spark-an-open-source-engine-for-iterative-data-mining/">DataInformed</a> interviewed two Spark users and wrote about their applications in anomaly detection, predictive analytics and data mining.</li>
 </ul>
 
-<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>. </p>
+<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>.</p>
 </div>
   </article>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-0-9-1-released.html
----------------------------------------------------------------------
diff --git a/site/news/spark-0-9-1-released.html b/site/news/spark-0-9-1-released.html
index 24669a9..40152b7 100644
--- a/site/news/spark-0-9-1-released.html
+++ b/site/news/spark-0-9-1-released.html
@@ -189,7 +189,7 @@
 <p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">
 Spark 0.9.1</a>! Apache Spark 0.9.1 is a maintenance release with bug fixes, performance improvements, better stability with YARN and 
 improved parity of the Scala and Python API. We recommend all 0.9.0 users to upgrade to this stable release. 
-Contributions to this release came from 37 developers. </p>
+Contributions to this release came from 37 developers.</p>
 
 <p>Visit the <a href="/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">release notes</a> 
 to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-0-9-2-released.html
----------------------------------------------------------------------
diff --git a/site/news/spark-0-9-2-released.html b/site/news/spark-0-9-2-released.html
index 7c4ee38..70104b4 100644
--- a/site/news/spark-0-9-2-released.html
+++ b/site/news/spark-0-9-2-released.html
@@ -188,7 +188,7 @@
 
 <p>We are happy to announce the availability of <a href="/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">
 Spark 0.9.2</a>! Apache Spark 0.9.2 is a maintenance release with bug fixes. We recommend all 0.9.x users to upgrade to this stable release. 
-Contributions to this release came from 28 developers. </p>
+Contributions to this release came from 28 developers.</p>
 
 <p>Visit the <a href="/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">release notes</a> 
 to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-1-1-0-released.html
----------------------------------------------------------------------
diff --git a/site/news/spark-1-1-0-released.html b/site/news/spark-1-1-0-released.html
index 55bcdf0..42ae590 100644
--- a/site/news/spark-1-1-0-released.html
+++ b/site/news/spark-1-1-0-released.html
@@ -188,7 +188,7 @@
 
 <p>We are happy to announce the availability of <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 171 developers!</p>
 
-<p>This release brings operational and performance improvements in Spark core including a new implementation of the Spark shuffle designed for very large scale workloads. Spark 1.1 adds significant extensions to the newest Spark modules, MLlib and Spark SQL. Spark SQL introduces a JDBC server, byte code generation for fast expression evaluation, a public types API, JSON support, and other features and optimizations. MLlib introduces a new statistics library along with several new algorithms and optimizations. Spark 1.1 also builds out Spark’s Python support and adds new components to the Spark Streaming module. </p>
+<p>This release brings operational and performance improvements in Spark core including a new implementation of the Spark shuffle designed for very large scale workloads. Spark 1.1 adds significant extensions to the newest Spark modules, MLlib and Spark SQL. Spark SQL introduces a JDBC server, byte code generation for fast expression evaluation, a public types API, JSON support, and other features and optimizations. MLlib introduces a new statistics library along with several new algorithms and optimizations. Spark 1.1 also builds out Spark’s Python support and adds new components to the Spark Streaming module.</p>
 
 <p>Visit the <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">release notes</a> to read about the new features, or <a href="/downloads.html">download</a> the release today.</p>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-1-2-2-released.html
----------------------------------------------------------------------
diff --git a/site/news/spark-1-2-2-released.html b/site/news/spark-1-2-2-released.html
index f03b507..28ca3b1 100644
--- a/site/news/spark-1-2-2-released.html
+++ b/site/news/spark-1-2-2-released.html
@@ -186,7 +186,7 @@
     <h2>Spark 1.2.2 and 1.3.1 released</h2>
 
 
-<p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers. </p>
+<p>We are happy to announce the availability of <a href="/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers.</p>
 
 <p>To download either release, visit the <a href="/downloads.html">downloads</a> page.</p>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-and-shark-in-the-news.html
----------------------------------------------------------------------
diff --git a/site/news/spark-and-shark-in-the-news.html b/site/news/spark-and-shark-in-the-news.html
index 7c964f7..3dac0cb 100644
--- a/site/news/spark-and-shark-in-the-news.html
+++ b/site/news/spark-and-shark-in-the-news.html
@@ -196,7 +196,7 @@
 <li><a href="http://data-informed.com/spark-an-open-source-engine-for-iterative-data-mining/">DataInformed</a> interviewed two Spark users and wrote about their applications in anomaly detection, predictive analytics and data mining.</li>
 </ul>
 
-<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>. </p>
+<p>In other news, there will be a full day of tutorials on Spark and Shark at the <a href="http://strataconf.com/strata2013">O&#8217;Reilly Strata conference</a> in February. They include a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27438">introduction to Spark, Shark and BDAS</a> Tuesday morning, and a three-hour <a href="http://strataconf.com/strata2013/public/schedule/detail/27440">hands-on exercise session</a>.</p>
 
 
 <p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/news/spark-summit-east-2015-videos-posted.html
----------------------------------------------------------------------
diff --git a/site/news/spark-summit-east-2015-videos-posted.html b/site/news/spark-summit-east-2015-videos-posted.html
index e0cd003..fed7c12 100644
--- a/site/news/spark-summit-east-2015-videos-posted.html
+++ b/site/news/spark-summit-east-2015-videos-posted.html
@@ -186,7 +186,7 @@
     <h2>Spark Summit East 2015 Videos Posted</h2>
 
 
-<p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top. </p>
+<p>The videos and slides for Spark Summit East 2015 are now all <a href="http://spark-summit.org/east/2015">available online</a>. Watch them to get the latest news from the Spark community as well as use cases and applications built on top.</p>
 
 <p>If you like what you see, consider joining us at the <a href="http://spark-summit.org/2015/agenda">2015 Spark Summit</a> in San Francisco.</p>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-0-8-0.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-0-8-0.html b/site/releases/spark-release-0-8-0.html
index 4e0a4f9..5a5dbd5 100644
--- a/site/releases/spark-release-0-8-0.html
+++ b/site/releases/spark-release-0-8-0.html
@@ -210,13 +210,13 @@
 <p>Spark’s internal job scheduler has been refactored and extended to include more sophisticated scheduling policies. In particular, a <a href="http://spark.incubator.apache.org/docs/0.8.0/job-scheduling.html#scheduling-within-an-application">fair scheduler</a> implementation now allows multiple users to share an instance of Spark, which helps users running shorter jobs to achieve good performance, even when longer-running jobs are running in parallel. Support for topology-aware scheduling has been extended, including the ability to take into account rack locality and support for multiple executors on a single machine.</p>
 
 <h3 id="easier-deployment-and-linking">Easier Deployment and Linking</h3>
-<p>User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of <code>spark-core</code> specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a>. </p>
+<p>User programs can now link to Spark no matter which Hadoop version they need, without having to publish a version of <code>spark-core</code> specifically for that Hadoop version. An explanation of how to link against different Hadoop versions is provided <a href="http://spark.incubator.apache.org/docs/0.8.0/scala-programming-guide.html#linking-with-spark">here</a>.</p>
 
 <h3 id="expanded-ec2-capabilities">Expanded EC2 Capabilities</h3>
 <p>Spark’s EC2 scripts now support launching in any availability zone. Support has also been added for EC2 instance types which use the newer “HVM” architecture. This includes the cluster compute (cc1/cc2) family of instance types. We’ve also added support for running newer versions of HDFS alongside Spark. Finally, we’ve added the ability to launch clusters with maintenance releases of Spark in addition to launching the newest release.</p>
 
 <h3 id="improved-documentation">Improved Documentation</h3>
-<p>This release adds documentation about cluster hardware provisioning and inter-operation with common Hadoop distributions. Docs are also included to cover the MLlib machine learning functions and new cluster monitoring features. Existing documentation has been updated to reflect changes in building and deploying Spark. </p>
+<p>This release adds documentation about cluster hardware provisioning and inter-operation with common Hadoop distributions. Docs are also included to cover the MLlib machine learning functions and new cluster monitoring features. Existing documentation has been updated to reflect changes in building and deploying Spark.</p>
 
 <h3 id="other-improvements">Other Improvements</h3>
 <ul>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-0-9-1.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-0-9-1.html b/site/releases/spark-release-0-9-1.html
index fbc1d66..89a92d3 100644
--- a/site/releases/spark-release-0-9-1.html
+++ b/site/releases/spark-release-0-9-1.html
@@ -201,9 +201,9 @@
   <li>Fixed hash collision bug in external spilling [<a href="https://issues.apache.org/jira/browse/SPARK-1113">SPARK-1113</a>]</li>
  <li>Fixed conflict with Spark’s log4j for users relying on other logging backends [<a href="https://issues.apache.org/jira/browse/SPARK-1190">SPARK-1190</a>]</li>
   <li>Fixed Graphx missing from Spark assembly jar in maven builds</li>
-  <li>Fixed silent failures due to map output status exceeding Akka frame size [<a href="https://issues.apache.org/jira/browse/SPARK-1244">SPARK-1244</a>] </li>
-  <li>Removed Spark’s unnecessary direct dependency on ASM [<a href="https://issues.apache.org/jira/browse/SPARK-782">SPARK-782</a>] </li>
-  <li>Removed metrics-ganglia from default build due to LGPL license conflict [<a href="https://issues.apache.org/jira/browse/SPARK-1167">SPARK-1167</a>] </li>
+  <li>Fixed silent failures due to map output status exceeding Akka frame size [<a href="https://issues.apache.org/jira/browse/SPARK-1244">SPARK-1244</a>]</li>
+  <li>Removed Spark’s unnecessary direct dependency on ASM [<a href="https://issues.apache.org/jira/browse/SPARK-782">SPARK-782</a>]</li>
+  <li>Removed metrics-ganglia from default build due to LGPL license conflict [<a href="https://issues.apache.org/jira/browse/SPARK-1167">SPARK-1167</a>]</li>
   <li>Fixed bug in distribution tarball not containing spark assembly jar [<a href="https://issues.apache.org/jira/browse/SPARK-1184">SPARK-1184</a>]</li>
   <li>Fixed bug causing infinite NullPointerException failures due to a null in map output locations [<a href="https://issues.apache.org/jira/browse/SPARK-1124">SPARK-1124</a>]</li>
   <li>Fixed bugs in post-job cleanup of scheduler\u2019s data structures</li>
@@ -219,7 +219,7 @@
   <li>Fixed bug making Spark application stall when YARN registration fails [<a href="https://issues.apache.org/jira/browse/SPARK-1032">SPARK-1032</a>]</li>
   <li>Race condition in getting HDFS delegation tokens in yarn-client mode [<a href="https://issues.apache.org/jira/browse/SPARK-1203">SPARK-1203</a>]</li>
   <li>Fixed bug in yarn-client mode not exiting properly [<a href="https://issues.apache.org/jira/browse/SPARK-1049">SPARK-1049</a>]</li>
-  <li>Fixed regression bug in ADD_JAR environment variable not correctly adding custom jars [<a href="https://issues.apache.org/jira/browse/SPARK-1089">SPARK-1089</a>] </li>
+  <li>Fixed regression bug in ADD_JAR environment variable not correctly adding custom jars [<a href="https://issues.apache.org/jira/browse/SPARK-1089">SPARK-1089</a>]</li>
 </ul>
 
 <h3 id="improvements-to-other-deployment-scenarios">Improvements to other deployment scenarios</h3>
@@ -230,19 +230,19 @@
 
 <h3 id="optimizations-to-mllib">Optimizations to MLLib</h3>
 <ul>
-  <li>Optimized memory usage of ALS [<a href="https://issues.apache.org/jira/browse/MLLIB-25">MLLIB-25</a>] </li>
+  <li>Optimized memory usage of ALS [<a href="https://issues.apache.org/jira/browse/MLLIB-25">MLLIB-25</a>]</li>
   <li>Optimized computation of YtY for implicit ALS [<a href="https://issues.apache.org/jira/browse/SPARK-1237">SPARK-1237</a>]</li>
   <li>Support for negative implicit input in ALS [<a href="https://issues.apache.org/jira/browse/MLLIB-22">MLLIB-22</a>]</li>
   <li>Setting of a random seed in ALS [<a href="https://issues.apache.org/jira/browse/SPARK-1238">SPARK-1238</a>]</li>
-  <li>Faster construction of features with intercept [<a href="https://issues.apache.org/jira/browse/SPARK-1260">SPARK-1260</a>] </li>
+  <li>Faster construction of features with intercept [<a href="https://issues.apache.org/jira/browse/SPARK-1260">SPARK-1260</a>]</li>
  <li>Check for intercept and weight in GLM’s addIntercept [<a href="https://issues.apache.org/jira/browse/SPARK-1327">SPARK-1327</a>]</li>
 </ul>
 
 <h3 id="bug-fixes-and-better-api-parity-for-pyspark">Bug fixes and better API parity for PySpark</h3>
 <ul>
   <li>Fixed bug in Python de-pickling [<a href="https://issues.apache.org/jira/browse/SPARK-1135">SPARK-1135</a>]</li>
-  <li>Fixed bug in serialization of strings longer than 64K [<a href="https://issues.apache.org/jira/browse/SPARK-1043">SPARK-1043</a>] </li>
-  <li>Fixed bug that made jobs hang when base file is not available [<a href="https://issues.apache.org/jira/browse/SPARK-1025">SPARK-1025</a>] </li>
+  <li>Fixed bug in serialization of strings longer than 64K [<a href="https://issues.apache.org/jira/browse/SPARK-1043">SPARK-1043</a>]</li>
+  <li>Fixed bug that made jobs hang when base file is not available [<a href="https://issues.apache.org/jira/browse/SPARK-1025">SPARK-1025</a>]</li>
   <li>Added Missing RDD operations to PySpark - top, zip, foldByKey, repartition, coalesce, getStorageLevel, setName and toDebugString</li>
 </ul>
 
@@ -274,13 +274,13 @@
   <li>Kay Ousterhout - Multiple bug fixes in scheduler&#8217;s handling of task failures</li>
   <li>Kousuke Saruta - Use of https to access github</li>
   <li>Mark Grover  - Bug fix in distribution tar.gz</li>
-  <li>Matei Zaharia - Bug fixes in handling of task failures due to NPE,  and cleaning up of scheduler data structures </li>
+  <li>Matei Zaharia - Bug fixes in handling of task failures due to NPE,  and cleaning up of scheduler data structures</li>
   <li>Nan Zhu - Bug fixes in PySpark RDD.takeSample and adding of JARs using ADD_JAR -  and improvements to docs</li>
   <li>Nick Lanham - Added ability to make distribution tarballs with Tachyon</li>
  <li>Patrick Wendell - Bug fixes in ASM shading, fixes for log4j initialization, removing Ganglia due to LGPL license, and other miscellaneous bug fixes</li>
   <li>Prabin Banka - RDD.zip and other missing RDD operations in PySpark</li>
   <li>Prashant Sharma - RDD.foldByKey in PySpark, and other PySpark doc improvements</li>
-  <li>Qiuzhuang - Bug fix in standalone worker </li>
+  <li>Qiuzhuang - Bug fix in standalone worker</li>
   <li>Raymond Liu - Changed working directory in ZookeeperPersistenceEngine</li>
   <li>Reynold Xin  - Improvements to docs and test infrastructure</li>
   <li>Sandy Ryza - Multiple important Yarn bug fixes and improvements</li>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-0-1.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-0-1.html b/site/releases/spark-release-1-0-1.html
index 4f9e0f9..78c88ea 100644
--- a/site/releases/spark-release-1-0-1.html
+++ b/site/releases/spark-release-1-0-1.html
@@ -258,8 +258,8 @@
   <li>Cheng Hao &#8211; SQL features</li>
   <li>Cheng Lian &#8211; SQL features</li>
  <li>Christian Tzolov &#8211; build improvement</li>
-  <li>Clément MATHIEU &#8211; doc updates </li>
-  <li>CodingCat &#8211; doc updates and bug fix </li>
+  <li>Clément MATHIEU &#8211; doc updates</li>
+  <li>CodingCat &#8211; doc updates and bug fix</li>
   <li>Colin McCabe &#8211; bug fix</li>
   <li>Daoyuan &#8211; SQL joins</li>
   <li>David Lemieux &#8211; bug fix</li>
@@ -275,7 +275,7 @@
   <li>Kan Zhang &#8211; PySpark SQL features</li>
   <li>Kay Ousterhout &#8211; documentation fix</li>
   <li>LY Lai &#8211; bug fix</li>
-  <li>Lars Albertsson &#8211; bug fix </li>
+  <li>Lars Albertsson &#8211; bug fix</li>
   <li>Lei Zhang &#8211; SQL fix and feature</li>
   <li>Mark Hamstra &#8211; bug fix</li>
   <li>Matei Zaharia &#8211; doc updates and bug fix</li>
@@ -297,7 +297,7 @@
   <li>Shixiong Zhu &#8211; code clean-up</li>
   <li>Szul, Piotr &#8211; bug fix</li>
   <li>Takuya UESHIN &#8211; bug fixes and SQL features</li>
-  <li>Thomas Graves &#8211; bug fix </li>
+  <li>Thomas Graves &#8211; bug fix</li>
   <li>Uri Laserson &#8211; bug fix</li>
   <li>Vadim Chekan &#8211; bug fix</li>
   <li>Varakhedi Sujeet &#8211; ec2 r3 support</li>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/site/releases/spark-release-1-0-2.html
----------------------------------------------------------------------
diff --git a/site/releases/spark-release-1-0-2.html b/site/releases/spark-release-1-0-2.html
index fe36880..33b5cb1 100644
--- a/site/releases/spark-release-1-0-2.html
+++ b/site/releases/spark-release-1-0-2.html
@@ -268,7 +268,7 @@
   <li>johnnywalleye - Bug fixes in MLlib</li>
   <li>joyyoj - Bug fix in Streaming</li>
   <li>kballou - Doc fix</li>
-  <li>lianhuiwang - Doc fix </li>
+  <li>lianhuiwang - Doc fix</li>
   <li>witgo - Bug fix in sbt</li>
 </ul>
 


---------------------------------------------------------------------
To unsubscribe, e-mail: commits-unsubscribe@spark.apache.org
For additional commands, e-mail: commits-help@spark.apache.org


[3/3] spark-website git commit: Use site.baseurl, not site.url, to work with Jekyll 3.3. Require Jekyll 3.3. Again commit HTML consistent with Jekyll 3.3 output. Fix date problem with news posts that set date: by removing date:.

Posted by sr...@apache.org.
Use site.baseurl, not site.url, to work with Jekyll 3.3. Require Jekyll 3.3. Again commit HTML consistent with Jekyll 3.3 output. Fix date problem with news posts that set date: by removing date:.
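The core of the change can be illustrated with a minimal before/after sketch of the Liquid markup, taken from the template diffs in this commit (illustrative only; the full set of changed files is listed below):

    <!-- Before: under Jekyll 3.3, site.url no longer carries a trailing
         slash, so concatenated paths resolve incorrectly -->
    <link href="{{site.url}}css/custom.css" rel="stylesheet">

    <!-- After: site.baseurl plus an explicit "/" yields a correct
         site-relative path under Jekyll 3.3 -->
    <link href="{{site.baseurl}}/css/custom.css" rel="stylesheet">

The same `{{site.url}}X` to `{{site.baseurl}}/X` substitution is applied across the layouts, markdown pages, and generated HTML.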


Project: http://git-wip-us.apache.org/repos/asf/spark-website/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark-website/commit/d82e3722
Tree: http://git-wip-us.apache.org/repos/asf/spark-website/tree/d82e3722
Diff: http://git-wip-us.apache.org/repos/asf/spark-website/diff/d82e3722

Branch: refs/heads/asf-site
Commit: d82e3722043aa2c2c2d5af6d1e68f16a83101d73
Parents: 4e10a1a
Author: Sean Owen <so...@cloudera.com>
Authored: Fri Nov 11 19:56:10 2016 +0000
Committer: Sean Owen <so...@cloudera.com>
Committed: Tue Nov 15 17:56:22 2016 +0100

----------------------------------------------------------------------
 README.md                                       | 32 ++++++----
 _layouts/global.html                            | 62 ++++++++++----------
 _layouts/post.html                              |  2 +-
 community.md                                    |  2 +-
 documentation.md                                | 58 +++++++++---------
 downloads.md                                    |  2 +-
 faq.md                                          |  6 +-
 graphx/index.md                                 | 16 ++---
 index.md                                        | 30 +++++-----
 mllib/index.md                                  | 22 +++----
 .../2012-10-15-spark-version-0-6-0-released.md  |  2 +-
 ...2012-11-22-spark-0-6-1-and-0-5-2-released.md |  2 +-
 news/_posts/2013-02-07-spark-0-6-2-released.md  |  2 +-
 news/_posts/2013-02-27-spark-0-7-0-released.md  |  2 +-
 .../2013-04-16-spark-screencasts-published.md   | 12 ++--
 news/_posts/2013-06-02-spark-0-7-2-released.md  |  2 +-
 news/_posts/2013-07-16-spark-0-7-3-released.md  |  2 +-
 ...3-08-27-fourth-spark-screencast-published.md |  2 +-
 news/_posts/2013-09-25-spark-0-8-0-released.md  |  2 +-
 news/_posts/2013-12-19-spark-0-8-1-released.md  |  2 +-
 news/_posts/2014-02-02-spark-0-9-0-released.md  |  6 +-
 news/_posts/2014-02-27-spark-becomes-tlp.md     |  2 +-
 news/_posts/2014-04-09-spark-0-9-1-released.md  |  6 +-
 news/_posts/2014-05-30-spark-1-0-0-released.md  |  4 +-
 news/_posts/2014-07-11-spark-1-0-1-released.md  |  4 +-
 news/_posts/2014-07-23-spark-0-9-2-released.md  |  6 +-
 news/_posts/2014-08-05-spark-1-0-2-released.md  |  4 +-
 news/_posts/2014-09-11-spark-1-1-0-released.md  |  4 +-
 news/_posts/2014-11-26-spark-1-1-1-released.md  |  4 +-
 news/_posts/2014-12-18-spark-1-2-0-released.md  |  4 +-
 news/_posts/2015-02-09-spark-1-2-1-released.md  |  4 +-
 news/_posts/2015-03-13-spark-1-3-0-released.md  |  4 +-
 news/_posts/2015-04-17-spark-1-2-2-released.md  |  4 +-
 ...2015-05-15-one-month-to-spark-summit-2015.md |  1 -
 news/_posts/2015-05-15-spark-summit-europe.md   |  1 -
 news/_posts/2015-06-11-spark-1-4-0-released.md  |  4 +-
 news/_posts/2015-07-15-spark-1-4-1-released.md  |  4 +-
 news/_posts/2015-09-09-spark-1-5-0-released.md  |  4 +-
 news/_posts/2015-10-02-spark-1-5-1-released.md  |  4 +-
 news/_posts/2015-11-09-spark-1-5-2-released.md  |  4 +-
 news/_posts/2016-01-04-spark-1-6-0-released.md  |  6 +-
 news/_posts/2016-03-09-spark-1-6-1-released.md  |  4 +-
 news/_posts/2016-06-25-spark-1-6-2-released.md  |  4 +-
 news/_posts/2016-07-26-spark-2-0-0-released.md  |  2 +-
 news/_posts/2016-10-03-spark-2-0-1-released.md  |  2 +-
 news/_posts/2016-11-07-spark-1-6-3-released.md  |  4 +-
 news/_posts/2016-11-14-spark-2-0-2-released.md  |  4 +-
 .../_posts/2013-09-25-spark-release-0-8-0.md    |  2 +-
 .../_posts/2013-12-19-spark-release-0-8-1.md    |  4 +-
 .../_posts/2014-02-02-spark-release-0-9-0.md    | 18 +++---
 .../_posts/2014-05-30-spark-release-1-0-0.md    | 14 ++---
 .../_posts/2014-09-11-spark-release-1-1-0.md    |  2 +-
 .../_posts/2014-11-26-spark-release-1-1-1.md    |  2 +-
 .../_posts/2014-12-18-spark-release-1-2-0.md    |  2 +-
 .../_posts/2015-02-09-spark-release-1-2-1.md    |  2 +-
 .../_posts/2015-03-13-spark-release-1-3-0.md    |  2 +-
 .../_posts/2015-04-17-spark-release-1-2-2.md    |  2 +-
 .../_posts/2015-04-17-spark-release-1-3-1.md    |  2 +-
 .../_posts/2015-06-11-spark-release-1-4-0.md    |  2 +-
 .../_posts/2015-07-15-spark-release-1-4-1.md    |  2 +-
 .../_posts/2015-09-09-spark-release-1-5-0.md    |  2 +-
 .../2013-04-10-1-first-steps-with-spark.md      |  4 +-
 ...2013-04-11-2-spark-documentation-overview.md |  4 +-
 .../2013-04-16-3-transformations-and-caching.md |  4 +-
 .../2013-08-26-4-a-standalone-job-in-spark.md   |  2 +-
 site/documentation.html                         |  5 +-
 site/news/index.html                            | 33 +++++------
 site/news/spark-0-9-1-released.html             |  2 +-
 site/news/spark-0-9-2-released.html             |  2 +-
 site/news/spark-1-1-0-released.html             |  2 +-
 site/news/spark-1-2-2-released.html             |  2 +-
 site/news/spark-and-shark-in-the-news.html      |  2 +-
 .../spark-summit-east-2015-videos-posted.html   |  2 +-
 site/releases/spark-release-0-8-0.html          |  4 +-
 site/releases/spark-release-0-9-1.html          | 20 +++----
 site/releases/spark-release-1-0-1.html          |  8 +--
 site/releases/spark-release-1-0-2.html          |  2 +-
 site/releases/spark-release-1-1-0.html          |  6 +-
 site/releases/spark-release-1-2-0.html          |  2 +-
 site/releases/spark-release-1-3-0.html          |  6 +-
 site/releases/spark-release-1-3-1.html          |  6 +-
 site/releases/spark-release-1-4-0.html          |  4 +-
 site/releases/spark-release-1-5-0.html          | 30 +++++-----
 site/releases/spark-release-1-6-0.html          | 20 +++----
 site/releases/spark-release-2-0-0.html          | 36 ++++++------
 sql/index.md                                    | 14 ++---
 streaming/index.md                              | 16 ++---
 87 files changed, 333 insertions(+), 329 deletions(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/README.md
----------------------------------------------------------------------
diff --git a/README.md b/README.md
index 15f6e50..51a7cb4 100644
--- a/README.md
+++ b/README.md
@@ -1,30 +1,40 @@
-Welcome to the Spark website.
-
 ## Generating the website HTML
 
-In this directory you will find text files formatted using Markdown, with an ".md" suffix.
+In this directory you will find text files formatted using Markdown, with an `.md` suffix.
 
-Building the site requires Jekyll 1.0.0 or newer (because we use the keep_files config option. The easiest way to install jekyll is via a Ruby Gem. This will create a directory called `site` containing index.html as well as the rest of the compiled directories and files. Read more about Jekyll at http://jekyllrb.com/docs
+Building the site requires [Jekyll](http://jekyllrb.com/docs) 3.3.0 or newer. 
+The easiest way to install jekyll is via a Ruby Gem. This will create a directory called `site` 
+containing `index.html` as well as the rest of the compiled directories and files.
 
-To install Jekyll and its required dependencies, execute `sudo gem install jekyll pygments.rb` and `sudo pip install Pygments`. See also https://github.com/apache/spark/blob/master/docs/README.md
+To install Jekyll and its required dependencies, execute `sudo gem install jekyll pygments.rb` 
+and `sudo pip install Pygments`.
+See also https://github.com/apache/spark/blob/master/docs/README.md
 
-You can generate the html website by running `jekyll build` in this directory. Use the --watch flag to have jekyll recompile your files as you save changes.
+You can generate the html website by running `jekyll build` in this directory. Use the `--watch` 
+flag to have jekyll recompile your files as you save changes.
 
-In addition to generating the site as html from the markdown files, jekyll can serve the site via a web server. To build the site and run a web server use the command `jekyll serve` which runs the web server on port 4000, then visit the site at http://localhost:4000.
+In addition to generating the site as HTML from the markdown files, jekyll can serve the site via 
+a web server. To build the site and run a web server use the command `jekyll serve` which runs 
+the web server on port 4000, then visit the site at http://localhost:4000.
 
 ## Docs sub-dir
 
-The docs are not generated as part of the website. They are built separately for each release of Spark from the Spark source repository and then copied to the website under the docs directory. See the instructions for building those in the readme in the SPARK_SOURCE/docs directory.
+The docs are not generated as part of the website. They are built separately for each release 
+of Spark from the Spark source repository and then copied to the website under the docs 
+directory. See the instructions for building those in the readme in the Spark 
+project's `/docs` directory.
 
 ## Pygments
 
-We also use pygments (http://pygments.org) for syntax highlighting in documentation markdown pages.
+We also use [pygments](http://pygments.org) for syntax highlighting in documentation markdown pages.
 
-To mark a block of code in your markdown to be syntax highlighted by jekyll during the compile phase, use the following syntax:
+To mark a block of code in your markdown to be syntax highlighted by `jekyll` during the 
+compile phase, use the following syntax:
 
     {% highlight scala %}
     // Your scala code goes here, you can replace scala with many other
     // supported languages too.
     {% endhighlight %}
 
- You probably don't need to install that unless you want to regenerate the pygments css file. It requires Python, and can be installed by running `sudo easy_install Pygments`.
\ No newline at end of file
+ You probably don't need to install that unless you want to regenerate the pygments CSS file. 
+ It requires Python, and can be installed by running `sudo easy_install Pygments`.
\ No newline at end of file

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/_layouts/global.html
----------------------------------------------------------------------
diff --git a/_layouts/global.html b/_layouts/global.html
index c100330..24174ce 100644
--- a/_layouts/global.html
+++ b/_layouts/global.html
@@ -12,8 +12,8 @@
   </title>
 
   {% if page.redirect %}
-    <meta http-equiv="refresh" content="0; url={{site.url}}{{page.redirect}}">
-    <link rel="canonical" href="{{site.url}}{{page.redirect}}" />
+    <meta http-equiv="refresh" content="0; url={{site.baseurl}}/{{page.redirect}}">
+    <link rel="canonical" href="{{site.url}}{{site.baseurl}}{{page.redirect}}" />
   {% endif %}
 
   {% if page.description %}
@@ -21,11 +21,11 @@
   {% endif %}
 
   <!-- Bootstrap core CSS -->
-  <link href="{{site.url}}css/cerulean.min.css" rel="stylesheet">
-  <link href="{{site.url}}css/custom.css" rel="stylesheet">
+  <link href="{{site.baseurl}}/css/cerulean.min.css" rel="stylesheet">
+  <link href="{{site.baseurl}}/css/custom.css" rel="stylesheet">
 
   <!-- Code highlighter CSS -->
-  <link href="{{site.url}}css/pygments-default.css" rel="stylesheet">
+  <link href="{{site.baseurl}}/css/pygments-default.css" rel="stylesheet">
 
   <script type="text/javascript">
   <!-- Google Analytics initialization -->
@@ -61,16 +61,16 @@
 
 <script src="https://code.jquery.com/jquery.js"></script>
 <script src="//netdna.bootstrapcdn.com/bootstrap/3.0.3/js/bootstrap.min.js"></script>
-<script src="{{site.url}}js/lang-tabs.js"></script>
-<script src="{{site.url}}js/downloads.js"></script>
+<script src="{{site.baseurl}}/js/lang-tabs.js"></script>
+<script src="{{site.baseurl}}/js/downloads.js"></script>
 
 <div class="container" style="max-width: 1200px;">
 
 <div class="masthead">
   {% if page.subproject %}
     <p class="lead">
-      <a href="{{site.url}}">
-      <img src="{{site.url}}images/spark-logo-trademark.png"
+      <a href="{{site.baseurl}}/">
+      <img src="{{site.baseurl}}/images/spark-logo-trademark.png"
       style="height:100px; width:auto; vertical-align: bottom; margin-top: 20px;"></a>
       <a href="#"><span class="subproject">
         {{ page.subproject }}
@@ -78,8 +78,8 @@
     </p>
   {% else %}
     <p class="lead">
-      <a href="{{site.url}}">
-      <img src="{{site.url}}images/spark-logo-trademark.png"
+      <a href="{{site.baseurl}}/">
+      <img src="{{site.baseurl}}/images/spark-logo-trademark.png"
         style="height:100px; width:auto; vertical-align: bottom; margin-top: 20px;"></a><span class="tagline">
           Lightning-fast cluster computing
       </span>
@@ -102,16 +102,16 @@
   <!-- Collect the nav links, forms, and other content for toggling -->
   <div class="collapse navbar-collapse" id="navbar-collapse-1">
     <ul class="nav navbar-nav">
-      <li><a href="{{site.url}}downloads.html">Download</a></li>
+      <li><a href="{{site.baseurl}}/downloads.html">Download</a></li>
       <li class="dropdown">
         <a href="#" class="dropdown-toggle" data-toggle="dropdown">
           Libraries <b class="caret"></b>
         </a>
         <ul class="dropdown-menu">
-          <li><a href="{{site.url}}sql/">SQL and DataFrames</a></li>
-          <li><a href="{{site.url}}streaming/">Spark Streaming</a></li>
-          <li><a href="{{site.url}}mllib/">MLlib (machine learning)</a></li>
-          <li><a href="{{site.url}}graphx/">GraphX (graph)</a></li>
+          <li><a href="{{site.baseurl}}/sql/">SQL and DataFrames</a></li>
+          <li><a href="{{site.baseurl}}/streaming/">Spark Streaming</a></li>
+          <li><a href="{{site.baseurl}}/mllib/">MLlib (machine learning)</a></li>
+          <li><a href="{{site.baseurl}}/graphx/">GraphX (graph)</a></li>
           <li class="divider"></li>
           <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects">Third-Party Packages</a></li>
         </ul>
@@ -121,25 +121,25 @@
           Documentation <b class="caret"></b>
         </a>
         <ul class="dropdown-menu">
-          <li><a href="{{site.url}}docs/latest/">Latest Release (Spark 2.0.2)</a></li>
-          <li><a href="{{site.url}}documentation.html">Older Versions and Other Resources</a></li>
+          <li><a href="{{site.baseurl}}/docs/latest/">Latest Release (Spark 2.0.2)</a></li>
+          <li><a href="{{site.baseurl}}/documentation.html">Older Versions and Other Resources</a></li>
         </ul>
       </li>
-      <li><a href="{{site.url}}examples.html">Examples</a></li>
+      <li><a href="{{site.baseurl}}/examples.html">Examples</a></li>
       <li class="dropdown">
-        <a href="{{site.url}}community.html" class="dropdown-toggle" data-toggle="dropdown">
+        <a href="{{site.baseurl}}/community.html" class="dropdown-toggle" data-toggle="dropdown">
           Community <b class="caret"></b>
         </a>
         <ul class="dropdown-menu">
-          <li><a href="{{site.url}}community.html">Mailing Lists</a></li>
-          <li><a href="{{site.url}}community.html#events">Events and Meetups</a></li>
-          <li><a href="{{site.url}}community.html#history">Project History</a></li>
+          <li><a href="{{site.baseurl}}/community.html">Mailing Lists</a></li>
+          <li><a href="{{site.baseurl}}/community.html#events">Events and Meetups</a></li>
+          <li><a href="{{site.baseurl}}/community.html#history">Project History</a></li>
           <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Powered+By+Spark">Powered By</a></li>
           <li><a href="https://cwiki.apache.org/confluence/display/SPARK/Committers">Project Committers</a></li>
           <li><a href="https://issues.apache.org/jira/browse/SPARK">Issue Tracker</a></li>
         </ul>
       </li>
-      <li><a href="{{site.url}}faq.html">FAQ</a></li>
+      <li><a href="{{site.baseurl}}/faq.html">FAQ</a></li>
     </ul>
     <ul class="nav navbar-nav navbar-right">
       <li class="dropdown">
@@ -169,20 +169,20 @@
           <span class="small">({{post.date| date:"%b %d, %Y"}})</span></li>
         {% endfor %}
       </ul>
-      <p class="small" style="text-align: right;"><a href="{{site.url}}news/index.html">Archive</a></p>
+      <p class="small" style="text-align: right;"><a href="{{site.baseurl}}/news/index.html">Archive</a></p>
     </div>
     <div class="hidden-xs hidden-sm">
-      <a href="{{site.url}}downloads.html" class="btn btn-success btn-lg btn-block" style="margin-bottom: 30px;">
+      <a href="{{site.baseurl}}/downloads.html" class="btn btn-success btn-lg btn-block" style="margin-bottom: 30px;">
         Download Spark
       </a>
       <p style="font-size: 16px; font-weight: 500; color: #555;">
         Built-in Libraries:
       </p>
       <ul class="list-none">
-        <li><a href="{{site.url}}sql/">SQL and DataFrames</a></li>
-        <li><a href="{{site.url}}streaming/">Spark Streaming</a></li>
-        <li><a href="{{site.url}}mllib/">MLlib (machine learning)</a></li>
-        <li><a href="{{site.url}}graphx/">GraphX (graph)</a></li>
+        <li><a href="{{site.baseurl}}/sql/">SQL and DataFrames</a></li>
+        <li><a href="{{site.baseurl}}/streaming/">Spark Streaming</a></li>
+        <li><a href="{{site.baseurl}}/mllib/">MLlib (machine learning)</a></li>
+        <li><a href="{{site.baseurl}}/graphx/">GraphX (graph)</a></li>
       </ul>
       <a href="https://cwiki.apache.org/confluence/display/SPARK/Third+Party+Projects">Third-Party Packages</a>
     </div>
@@ -199,7 +199,7 @@
 
 <footer class="small">
   <hr>
-  Apache Spark, Spark, Apache, and the Spark logo are <a href="{{site.url}}trademarks.html">trademarks</a> of
+  Apache Spark, Spark, Apache, and the Spark logo are <a href="{{site.baseurl}}/trademarks.html">trademarks</a> of
   <a href="http://www.apache.org">The Apache Software Foundation</a>.
 </footer>
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/_layouts/post.html
----------------------------------------------------------------------
diff --git a/_layouts/post.html b/_layouts/post.html
index 60b45e9..f2c9f2e 100644
--- a/_layouts/post.html
+++ b/_layouts/post.html
@@ -9,5 +9,5 @@ type: singular
 
 <p>
 <br/>
-<a href="{{site.url}}news/">Spark News Archive</a>
+<a href="{{site.baseurl}}/news/">Spark News Archive</a>
 </p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/community.md
----------------------------------------------------------------------
diff --git a/community.md b/community.md
index b0c5b3a..480d05a 100644
--- a/community.md
+++ b/community.md
@@ -159,7 +159,7 @@ Spark Meetups are grass-roots events organized and hosted by leaders and champio
 Spark started as a research project at the <a href="https://amplab.cs.berkeley.edu">UC Berkeley AMPLab</a>
 in 2009, and was open sourced in early 2010.
 Many of the ideas behind the system are presented in various
-<a href="{{site.url}}research.html">research papers</a>.
+<a href="{{site.baseurl}}/research.html">research papers</a>.
 </p>
 
 <p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/documentation.md
----------------------------------------------------------------------
diff --git a/documentation.md b/documentation.md
index 0ff8ed2..c2f4506 100644
--- a/documentation.md
+++ b/documentation.md
@@ -12,38 +12,38 @@ navigation:
 <p>Setup instructions, programming guides, and other documentation are available for each stable version of Spark below:</p>
 
 <ul>
-  <li><a href="{{site.url}}docs/2.0.1/">Spark 2.0.1 (latest release)</a></li>
-  <li><a href="{{site.url}}docs/2.0.0/">Spark 2.0.0</a></li>
-  <li><a href="{{site.url}}docs/1.6.3/">Spark 1.6.3</a></li>
-  <li><a href="{{site.url}}docs/1.6.2/">Spark 1.6.2</a></li>
-  <li><a href="{{site.url}}docs/1.6.1/">Spark 1.6.1</a></li>
-  <li><a href="{{site.url}}docs/1.6.0/">Spark 1.6.0</a></li>
-  <li><a href="{{site.url}}docs/1.5.2/">Spark 1.5.2</a></li>
-  <li><a href="{{site.url}}docs/1.5.1/">Spark 1.5.1</a></li>
-  <li><a href="{{site.url}}docs/1.5.0/">Spark 1.5.0</a></li>
-  <li><a href="{{site.url}}docs/1.4.1/">Spark 1.4.1</a></li>
-  <li><a href="{{site.url}}docs/1.4.0/">Spark 1.4.0</a></li>
-  <li><a href="{{site.url}}docs/1.3.1/">Spark 1.3.1</a></li>
-  <li><a href="{{site.url}}docs/1.3.0/">Spark 1.3.0</a></li>
-  <li><a href="{{site.url}}docs/1.2.1/">Spark 1.2.1</a></li>
-  <li><a href="{{site.url}}docs/1.1.1/">Spark 1.1.1</a></li>
-  <li><a href="{{site.url}}docs/1.0.2/">Spark 1.0.2</a></li>
-  <li><a href="{{site.url}}docs/0.9.2/">Spark 0.9.2</a></li>
-  <li><a href="{{site.url}}docs/0.8.1/">Spark 0.8.1</a></li>
-  <li><a href="{{site.url}}docs/0.7.3/">Spark 0.7.3</a></li>
-  <li><a href="{{site.url}}docs/0.6.2/">Spark 0.6.2</a></li>
+  <li><a href="{{site.baseurl}}/docs/2.0.1/">Spark 2.0.1 (latest release)</a></li>
+  <li><a href="{{site.baseurl}}/docs/2.0.0/">Spark 2.0.0</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.6.3/">Spark 1.6.3</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.6.2/">Spark 1.6.2</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.6.1/">Spark 1.6.1</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.6.0/">Spark 1.6.0</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.5.2/">Spark 1.5.2</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.5.1/">Spark 1.5.1</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.5.0/">Spark 1.5.0</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.4.1/">Spark 1.4.1</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.4.0/">Spark 1.4.0</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.3.1/">Spark 1.3.1</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.3.0/">Spark 1.3.0</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.2.1/">Spark 1.2.1</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.1.1/">Spark 1.1.1</a></li>
+  <li><a href="{{site.baseurl}}/docs/1.0.2/">Spark 1.0.2</a></li>
+  <li><a href="{{site.baseurl}}/docs/0.9.2/">Spark 0.9.2</a></li>
+  <li><a href="{{site.baseurl}}/docs/0.8.1/">Spark 0.8.1</a></li>
+  <li><a href="{{site.baseurl}}/docs/0.7.3/">Spark 0.7.3</a></li>
+  <li><a href="{{site.baseurl}}/docs/0.6.2/">Spark 0.6.2</a></li>
 </ul>
 
 <!--
 <p>Documentation for preview releases:</p>
 
 <ul>
-  <li><a href="{{site.url}}docs/2.0.0-preview/">Spark 2.0.0 preview</a></li>
+  <li><a href="{{site.baseurl}}/docs/2.0.0-preview/">Spark 2.0.0 preview</a></li>
 </ul>
 -->
 
-<p>The documentation linked to above covers getting started with Spark, as well the built-in components <a href="{{site.url}}docs/latest/mllib-guide.html">MLlib</a>,
-<a href="{{site.url}}docs/latest/streaming-programming-guide.html">Spark Streaming</a>, and <a href="{{site.url}}docs/latest/graphx-guide.html">GraphX</a>.</p>
+<p>The documentation linked to above covers getting started with Spark, as well the built-in components <a href="{{site.baseurl}}/docs/latest/mllib-guide.html">MLlib</a>,
+<a href="{{site.baseurl}}/docs/latest/streaming-programming-guide.html">Spark Streaming</a>, and <a href="{{site.baseurl}}/docs/latest/graphx-guide.html">GraphX</a>.</p>
 
 <p>In addition, this page lists other resources for learning Spark.</p>
 
@@ -52,10 +52,10 @@ See the <a href="http://www.youtube.com/channel/UCRzsq7k4-kT-h3TDUBQ82-w">Apache
 
 <h4>Screencast Tutorial Videos</h4>
 <ul>
-  <li><a href="{{site.url}}screencasts/1-first-steps-with-spark.html">Screencast 1: First Steps with Spark</a></li>
-  <li><a href="{{site.url}}screencasts/2-spark-documentation-overview.html">Screencast 2: Spark Documentation Overview</a></li>
-<li><a href="{{site.url}}screencasts/3-transformations-and-caching.html">Screencast 3: Transformations and Caching</a></li>
-<li><a href="{{site.url}}screencasts/4-a-standalone-job-in-spark.html">Screencast 4: A Spark Standalone Job in Scala</a></li>
+  <li><a href="{{site.baseurl}}/screencasts/1-first-steps-with-spark.html">Screencast 1: First Steps with Spark</a></li>
+  <li><a href="{{site.baseurl}}/screencasts/2-spark-documentation-overview.html">Screencast 2: Spark Documentation Overview</a></li>
+<li><a href="{{site.baseurl}}/screencasts/3-transformations-and-caching.html">Screencast 3: Transformations and Caching</a></li>
+<li><a href="{{site.baseurl}}/screencasts/4-a-standalone-job-in-spark.html">Screencast 4: A Spark Standalone Job in Scala</a></li>
 
 </ul>
 
@@ -174,7 +174,7 @@ Slides, videos and EC2-based exercises from each of these are available online:
 <h3>Examples</h3>
 
 <ul>
-  <li>The <a href="{{site.url}}examples.html">Spark examples page</a> shows the basic API in Scala, Java and Python.</li>
+  <li>The <a href="{{site.baseurl}}/examples.html">Spark examples page</a> shows the basic API in Scala, Java and Python.</li>
 </ul>
 
 <h3>Wiki</h3>
@@ -188,5 +188,5 @@ information for developers, such as architecture documents and how to <a href="h
 
 <p>
 Spark was initially developed as a UC Berkeley research project, and much of the design is documented in papers.
-The <a href="{{site.url}}research.html">research page</a> lists some of the original motivation and direction.
+The <a href="{{site.baseurl}}/research.html">research page</a> lists some of the original motivation and direction.
 </p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/downloads.md
----------------------------------------------------------------------
diff --git a/downloads.md b/downloads.md
index 0031a05..17e1c7c 100644
--- a/downloads.md
+++ b/downloads.md
@@ -62,7 +62,7 @@ If you are interested in working with the newest under-development code or contr
     # 2.0 maintenance branch with stability fixes on top of Spark 2.0.2
     git clone git://github.com/apache/spark.git -b branch-2.0
 
-Once you've downloaded Spark, you can find instructions for installing and building it on the <a href="{{site.url}}documentation.html">documentation page</a>.
+Once you've downloaded Spark, you can find instructions for installing and building it on the <a href="{{site.baseurl}}/documentation.html">documentation page</a>.
 
 ### Release Notes for Stable Releases
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/faq.md
----------------------------------------------------------------------
diff --git a/faq.md b/faq.md
index f5c8565..f8aa072 100644
--- a/faq.md
+++ b/faq.md
@@ -23,10 +23,10 @@ Spark is a fast and general processing engine compatible with Hadoop data. It ca
 
 <p class="question">Does my data need to fit in memory to use Spark?</p>
 
-<p class="answer">No. Spark's operators spill data to disk if it does not fit in memory, allowing it to run well on any sized data. Likewise, cached datasets that do not fit in memory are either spilled to disk or recomputed on the fly when needed, as determined by the RDD's <a href="{{site.url}}docs/latest/scala-programming-guide.html#rdd-persistence">storage level</a>.
+<p class="answer">No. Spark's operators spill data to disk if it does not fit in memory, allowing it to run well on any sized data. Likewise, cached datasets that do not fit in memory are either spilled to disk or recomputed on the fly when needed, as determined by the RDD's <a href="{{site.baseurl}}/docs/latest/scala-programming-guide.html#rdd-persistence">storage level</a>.
 
 <p class="question">How can I run Spark on a cluster?</p>
-<p class="answer">You can use either the <a href="{{site.url}}docs/latest/spark-standalone.html">standalone deploy mode</a>, which only needs Java to be installed on each node, or the <a href="{{site.url}}docs/latest/running-on-mesos.html">Mesos</a> and <a href="{{site.url}}docs/latest/running-on-yarn.html">YARN</a> cluster managers. If you'd like to run on Amazon EC2, Spark provides <a href="{{site.url}}docs/latest/ec2-scripts.html}}">EC2 scripts</a> to automatically launch a cluster.</p>
+<p class="answer">You can use either the <a href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone deploy mode</a>, which only needs Java to be installed on each node, or the <a href="{{site.baseurl}}/docs/latest/running-on-mesos.html">Mesos</a> and <a href="{{site.baseurl}}/docs/latest/running-on-yarn.html">YARN</a> cluster managers. If you'd like to run on Amazon EC2, Spark provides <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html}}">EC2 scripts</a> to automatically launch a cluster.</p>
 
 <p>Note that you can also run Spark locally (possibly on multiple cores) without any special setup by just passing <code>local[N]</code> as the master URL, where <code>N</code> is the number of parallel threads you want.</p>
 
@@ -62,7 +62,7 @@ and <a href="https://www.apache.org/foundation/marks/">trademark policy</a>.
 In particular, note that there are strong restrictions about how third-party products
 use the "Spark" name (names based on Spark are generally not allowed).
 Please also refer to our
-<a href="{{site.url}}trademarks.html">trademark policy summary</a>.
+<a href="{{site.baseurl}}/trademarks.html">trademark policy summary</a>.
 </p>
 
 <p class="question">How can I contribute to Spark?</p>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/graphx/index.md
----------------------------------------------------------------------
diff --git a/graphx/index.md b/graphx/index.md
index 8e7b54c..a3aa8d2 100644
--- a/graphx/index.md
+++ b/graphx/index.md
@@ -17,7 +17,7 @@ subproject: GraphX
       Seamlessly work with both graphs and collections.
     </p>
     <p>
-      GraphX unifies ETL, exploratory analysis, and iterative graph computation within a single system. You can <a href="{{site.url}}docs/latest/graphx-programming-guide.html#the-property-graph">view</a> the same data as both graphs and collections, <a href="{{site.url}}docs/latest/graphx-programming-guide.html#property-operators">transform</a> and <a href="{{site.url}}docs/latest/graphx-programming-guide.html#join-operators">join</a> graphs with RDDs efficiently, and write custom iterative graph algorithms using the <a href="{{site.url}}docs/latest/graphx-programming-guide.html#pregel-api">Pregel API</a>.
+      GraphX unifies ETL, exploratory analysis, and iterative graph computation within a single system. You can <a href="{{site.baseurl}}/docs/latest/graphx-programming-guide.html#the-property-graph">view</a> the same data as both graphs and collections, <a href="{{site.baseurl}}/docs/latest/graphx-programming-guide.html#property-operators">transform</a> and <a href="{{site.baseurl}}/docs/latest/graphx-programming-guide.html#join-operators">join</a> graphs with RDDs efficiently, and write custom iterative graph algorithms using the <a href="{{site.baseurl}}/docs/latest/graphx-programming-guide.html#pregel-api">Pregel API</a>.
     </p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
@@ -47,7 +47,7 @@ subproject: GraphX
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="width: 100%; max-width: 272px; display: inline-block; text-align: center; padding:0;">
-      <img src="{{site.url}}images/graphx-perf-comparison.png" style="width: 60%; max-width: 250px;">
+      <img src="{{site.baseurl}}/images/graphx-perf-comparison.png" style="width: 60%; max-width: 250px;">
       <div class="caption" style="min-width: 272px;">End-to-end PageRank performance (20 iterations, 3.7B edges)</div>
     </div>
   </div>
@@ -59,7 +59,7 @@ subproject: GraphX
     <p class="lead">
       Choose from a growing library of graph algorithms.
     </p>
-    <p>In addition to a <a href="{{site.url}}docs/latest/graphx-programming-guide.html#graph-operators">highly flexible API</a>, GraphX comes with a variety of graph algorithms, many of which were contributed by our users.</p>
+    <p>In addition to a <a href="{{site.baseurl}}/docs/latest/graphx-programming-guide.html#graph-operators">highly flexible API</a>, GraphX comes with a variety of graph algorithms, many of which were contributed by our users.</p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top">
     <ul class="list-narrow">
@@ -83,7 +83,7 @@ subproject: GraphX
     </p>
     <p>
       If you have questions about the library, ask on the
-      <a href="{{site.url}}community.html#mailing-lists">Spark mailing lists</a>.
+      <a href="{{site.baseurl}}/community.html#mailing-lists">Spark mailing lists</a>.
     </p>
     <p>
       GraphX is in the alpha stage and welcomes contributions. If you'd like to submit a change to GraphX,
@@ -98,10 +98,10 @@ subproject: GraphX
       To get started with GraphX:
     </p>
     <ul class="list-narrow">
-      <li><a href="{{site.url}}downloads.html">Download Spark</a>. GraphX is included as a module.</li>
-      <li>Read the <a href="{{site.url}}docs/latest/graphx-programming-guide.html">GraphX guide</a>, which includes
+      <li><a href="{{site.baseurl}}/downloads.html">Download Spark</a>. GraphX is included as a module.</li>
+      <li>Read the <a href="{{site.baseurl}}/docs/latest/graphx-programming-guide.html">GraphX guide</a>, which includes
       usage examples.</li>
-      <li>Learn how to <a href="{{site.url}}docs/latest/#launching-on-a-cluster">deploy</a> Spark on a cluster
+      <li>Learn how to <a href="{{site.baseurl}}/docs/latest/#launching-on-a-cluster">deploy</a> Spark on a cluster
         if you'd like to run in distributed mode. You can also run locally on a multicore machine
         without any setup.
       </li>
@@ -111,7 +111,7 @@ subproject: GraphX
 
 <div class="row">
   <div class="col-sm-12 col-center">
-    <a href="{{site.url}}downloads.html" class="btn btn-success btn-lg btn-multiline">
+    <a href="{{site.baseurl}}/downloads.html" class="btn btn-success btn-lg btn-multiline">
       Download Apache Spark<br/><span class="small">Includes GraphX</span>
     </a>
   </div>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/index.md
----------------------------------------------------------------------
diff --git a/index.md b/index.md
index 96ada5c..14185d2 100644
--- a/index.md
+++ b/index.md
@@ -30,7 +30,7 @@ navigation:
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="width: 100%; max-width: 272px; display: inline-block; text-align: center;">
-      <img src="{{site.url}}images/logistic-regression.png" style="width: 100%; max-width: 250px;">
+      <img src="{{site.baseurl}}/images/logistic-regression.png" style="width: 100%; max-width: 250px;">
       <div class="caption" style="min-width: 272px;">Logistic regression in Hadoop and Spark</div>
     </div>
   </div>
@@ -84,21 +84,21 @@ navigation:
 
     <p>
       Spark powers a stack of libraries including
-      <a href="{{site.url}}sql/">SQL and DataFrames</a>, <a href="{{site.url}}mllib/">MLlib</a> for machine learning,
-      <a href="{{site.url}}graphx/">GraphX</a>, and <a href="{{site.url}}streaming/">Spark Streaming</a>.
+      <a href="{{site.baseurl}}/sql/">SQL and DataFrames</a>, <a href="{{site.baseurl}}/mllib/">MLlib</a> for machine learning,
+      <a href="{{site.baseurl}}/graphx/">GraphX</a>, and <a href="{{site.baseurl}}/streaming/">Spark Streaming</a>.
       You can combine these libraries seamlessly in the same application.
     </p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
-    <img src="{{site.url}}images/spark-stack.png" style="margin-top: 15px; width: 100%; max-width: 296px;" usemap="#stack-map">
+    <img src="{{site.baseurl}}/images/spark-stack.png" style="margin-top: 15px; width: 100%; max-width: 296px;" usemap="#stack-map">
     <map name="stack-map">
-      <area shape="rect" coords="0,0,74,95" href="{{site.url}}sql/"
+      <area shape="rect" coords="0,0,74,95" href="{{site.baseurl}}/sql/"
             alt="Spark SQL" title="Spark SQL">
-      <area shape="rect" coords="74,0,150,95" href="{{site.url}}streaming/"
+      <area shape="rect" coords="74,0,150,95" href="{{site.baseurl}}/streaming/"
             alt="Spark Streaming" title="Spark Streaming">
-      <area shape="rect" coords="150,0,224,95" href="{{site.url}}mllib/"
+      <area shape="rect" coords="150,0,224,95" href="{{site.baseurl}}/mllib/"
             alt="MLlib (machine learning)" title="MLlib">
-      <area shape="rect" coords="225,0,300,95" href="{{site.url}}graphx/"
+      <area shape="rect" coords="225,0,300,95" href="{{site.baseurl}}/graphx/"
             alt="GraphX" title="GraphX">
     </map>
   </div>
@@ -113,13 +113,13 @@ navigation:
     </p>
 
     <p>
-      You can run Spark using its <a href="{{site.url}}docs/latest/spark-standalone.html">standalone cluster mode</a>, on <a href="{{site.url}}docs/latest/ec2-scripts.html">EC2</a>, on <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/index.html">Hadoop YARN</a>, or on <a href="http://mesos.apache.org">Apache Mesos</a>.
+      You can run Spark using its <a href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone cluster mode</a>, on <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html">EC2</a>, on <a href="http://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/index.html">Hadoop YARN</a>, or on <a href="http://mesos.apache.org">Apache Mesos</a>.
       Access data in <a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="http://cassandra.apache.org">Cassandra</a>, <a href="http://hbase.apache.org">HBase</a>,
       <a href="http://hive.apache.org">Hive</a>, <a href="http://tachyon-project.org">Tachyon</a>, and any Hadoop data source.
     </p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
-    <img src="{{site.url}}images/spark-runs-everywhere.png" style="width: 100%; max-width: 280px;">
+    <img src="{{site.baseurl}}/images/spark-runs-everywhere.png" style="width: 100%; max-width: 280px;">
   </div>
 </div>
 
@@ -139,7 +139,7 @@ navigation:
       There are many ways to reach the community:
     </p>
     <ul class="list-narrow">
-      <li>Use the <a href="{{site.url}}community.html#mailing-lists">mailing lists</a> to ask questions.</li>
+      <li>Use the <a href="{{site.baseurl}}/community.html#mailing-lists">mailing lists</a> to ask questions.</li>
       <li>In-person events include numerous <a href="http://www.meetup.com/topics/apache-spark/">meetup groups</a> and
       <a href="http://spark-summit.org/">Spark Summit</a>.</li>
       <li>We use <a href="https://issues.apache.org/jira/browse/SPARK">JIRA</a> for issue tracking.</li>
@@ -172,18 +172,18 @@ navigation:
 
     <p>Learning Spark is easy whether you come from a Java or Python background:</p>
     <ul class="list-narrow">
-      <li><a href="{{site.url}}downloads.html">Download</a> the latest release &mdash; you can run Spark locally on your laptop.</li>
-      <li>Read the <a href="{{site.url}}docs/latest/quick-start.html">quick start guide</a>.</li>
+      <li><a href="{{site.baseurl}}/downloads.html">Download</a> the latest release &mdash; you can run Spark locally on your laptop.</li>
+      <li>Read the <a href="{{site.baseurl}}/docs/latest/quick-start.html">quick start guide</a>.</li>
       <li>
         Spark Summit 2014 contained free <a href="http://spark-summit.org/2014/training">training videos and exercises</a>.
       </li>
-      <li>Learn how to <a href="{{site.url}}docs/latest/#launching-on-a-cluster">deploy</a> Spark on a cluster.</li>
+      <li>Learn how to <a href="{{site.baseurl}}/docs/latest/#launching-on-a-cluster">deploy</a> Spark on a cluster.</li>
     </ul>
   </div>
 </div>
 
 <div class="row">
   <div class="col-sm-12 col-center">
-    <a href="{{site.url}}downloads.html" class="btn btn-success btn-lg" style="width: 262px;">Download Apache Spark</a>
+    <a href="{{site.baseurl}}/downloads.html" class="btn btn-success btn-lg" style="width: 262px;">Download Apache Spark</a>
   </div>
 </div>

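[Editor's note on the `site.url` -> `site.baseurl` change above: in Jekyll 3.3, `site.url` is the host and `site.baseurl` is the subpath the site is served from, which is why every rewritten link becomes `{{site.baseurl}}/...` with an explicit leading slash. A minimal `_config.yml` sketch, with hypothetical values for illustration only:]

```yaml
# _config.yml -- hypothetical values, for illustration only.
# url is the scheme and host; baseurl is the subpath under that host.
url: "https://example.apache.org"   # no trailing slash
baseurl: ""                          # empty when serving from the site root;
                                     # e.g. "/spark" if served from a subpath
                                     # (leading slash, no trailing slash)
```

[Because `baseurl` by convention carries no trailing slash, templates must write `{{site.baseurl}}/downloads.html` rather than `{{site.url}}downloads.html`, exactly as the `+` lines in this patch do.]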
http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/mllib/index.md
----------------------------------------------------------------------
diff --git a/mllib/index.md b/mllib/index.md
index a013bbc..61e65a8 100644
--- a/mllib/index.md
+++ b/mllib/index.md
@@ -17,7 +17,7 @@ subproject: MLlib
       Usable in Java, Scala, Python, and R.
     </p>
     <p>
-      MLlib fits into <a href="{{site.url}}">Spark</a>'s
+      MLlib fits into <a href="{{site.baseurl}}/">Spark</a>'s
       APIs and interoperates with <a href="http://www.numpy.org">NumPy</a>
       in Python (as of Spark 0.9) and R libraries (as of Spark 1.5).
       You can use any Hadoop data source (e.g. HDFS, HBase, or local files), making it
@@ -53,7 +53,7 @@ subproject: MLlib
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="width: 100%; max-width: 272px; display: inline-block; text-align: center;">
-      <img src="{{site.url}}images/logistic-regression.png" style="width: 100%; max-width: 250px;">
+      <img src="{{site.baseurl}}/images/logistic-regression.png" style="width: 100%; max-width: 250px;">
       <div class="caption" style="min-width: 272px;">Logistic regression in Hadoop and Spark</div>
     </div>
   </div>
@@ -67,13 +67,13 @@ subproject: MLlib
     </p>
     <p>
       If you have a Hadoop 2 cluster, you can run Spark and MLlib without any pre-installation.
-      Otherwise, Spark is easy to run <a href="{{site.url}}docs/latest/spark-standalone.html">standalone</a>
-      or on <a href="{{site.url}}docs/latest/ec2-scripts.html">EC2</a> or <a href="http://mesos.apache.org">Mesos</a>.
+      Otherwise, Spark is easy to run <a href="{{site.baseurl}}/docs/latest/spark-standalone.html">standalone</a>
+      or on <a href="{{site.baseurl}}/docs/latest/ec2-scripts.html">EC2</a> or <a href="http://mesos.apache.org">Mesos</a>.
       You can read from <a href="http://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HdfsUserGuide.html">HDFS</a>, <a href="http://hbase.apache.org">HBase</a>, or any Hadoop data source.
     </p>
   </div>
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
-    <img src="{{site.url}}images/hadoop.jpg" style="width: 100%; max-width: 280px;">
+    <img src="{{site.baseurl}}/images/hadoop.jpg" style="width: 100%; max-width: 280px;">
   </div>
 </div>
 
@@ -99,7 +99,7 @@ subproject: MLlib
       <li>Distributed linear algebra: singular value decomposition (SVD), principal component analysis (PCA),...</li>
       <li>Statistics: summary statistics, hypothesis testing,...</li>
     </ul>
-    <p>Refer to the <a href="{{site.url}}docs/latest/mllib-guide.html">MLlib guide</a> for usage examples.</p>
+    <p>Refer to the <a href="{{site.baseurl}}/docs/latest/mllib-guide.html">MLlib guide</a> for usage examples.</p>
   </div>
 
   <div class="col-md-4 col-padded">
@@ -110,7 +110,7 @@ subproject: MLlib
     </p>
     <p>
       If you have questions about the library, ask on the
-      <a href="{{site.url}}community.html#mailing-lists">Spark mailing lists</a>.
+      <a href="{{site.baseurl}}/community.html#mailing-lists">Spark mailing lists</a>.
     </p>
     <p>
       MLlib is still a rapidly growing project and welcomes contributions. If you'd like to submit an algorithm to MLlib,
@@ -125,10 +125,10 @@ subproject: MLlib
       To get started with MLlib:
     </p>
     <ul class="list-narrow">
-      <li><a href="{{site.url}}downloads.html">Download Spark</a>. MLlib is included as a module.</li>
-      <li>Read the <a href="{{site.url}}docs/latest/mllib-guide.html">MLlib guide</a>, which includes
+      <li><a href="{{site.baseurl}}/downloads.html">Download Spark</a>. MLlib is included as a module.</li>
+      <li>Read the <a href="{{site.baseurl}}/docs/latest/mllib-guide.html">MLlib guide</a>, which includes
       various usage examples.</li>
-      <li>Learn how to <a href="{{site.url}}docs/latest/#launching-on-a-cluster">deploy</a> Spark on a cluster
+      <li>Learn how to <a href="{{site.baseurl}}/docs/latest/#launching-on-a-cluster">deploy</a> Spark on a cluster
         if you'd like to run in distributed mode. You can also run locally on a multicore machine
         without any setup.
       </li>
@@ -138,7 +138,7 @@ subproject: MLlib
 
 <div class="row">
   <div class="col-sm-12 col-center">
-    <a href="{{site.url}}downloads.html" class="btn btn-success btn-lg btn-multiline">
+    <a href="{{site.baseurl}}/downloads.html" class="btn btn-success btn-lg btn-multiline">
       Download Apache Spark<br/><span class="small">Includes MLlib</span>
     </a>
   </div>

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2012-10-15-spark-version-0-6-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2012-10-15-spark-version-0-6-0-released.md b/news/_posts/2012-10-15-spark-version-0-6-0-released.md
index 90c7636..1219996 100644
--- a/news/_posts/2012-10-15-spark-version-0-6-0-released.md
+++ b/news/_posts/2012-10-15-spark-version-0-6-0-released.md
@@ -10,4 +10,4 @@ published: true
 meta:
   _edit_last: '1'
 ---
-<a href="{{site.url}}releases/spark-release-0-6-0.html">Spark version 0.6.0</a> was released today, a major release that brings a wide range of performance improvements and new features, including a simpler standalone deploy mode and a Java API. Read more about it in the <a href="{{site.url}}releases/spark-release-0-6-0.html">release notes</a>.
+<a href="{{site.baseurl}}/releases/spark-release-0-6-0.html">Spark version 0.6.0</a> was released today, a major release that brings a wide range of performance improvements and new features, including a simpler standalone deploy mode and a Java API. Read more about it in the <a href="{{site.baseurl}}/releases/spark-release-0-6-0.html">release notes</a>.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2012-11-22-spark-0-6-1-and-0-5-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2012-11-22-spark-0-6-1-and-0-5-2-released.md b/news/_posts/2012-11-22-spark-0-6-1-and-0-5-2-released.md
index 21c952f..7f23ac9 100644
--- a/news/_posts/2012-11-22-spark-0-6-1-and-0-5-2-released.md
+++ b/news/_posts/2012-11-22-spark-0-6-1-and-0-5-2-released.md
@@ -10,4 +10,4 @@ published: true
 meta:
   _edit_last: '1'
 ---
-Today we've made available two maintenance releases for Spark: <a href="{{site.url}}releases/spark-release-0-6-1.html" title="Spark Release 0.6.1">0.6.1</a> and <a href="{{site.url}}releases/spark-release-0-5-2.html" title="Spark Release 0.5.2">0.5.2</a>. They both contain important bug fixes as well as some new features, such as the ability to build against Hadoop 2 distributions. We recommend that users update to the latest version for their branch; for new users, we recommend <a href="{{site.url}}releases/spark-release-0-6-1.html" title="Spark Release 0.6.1">0.6.1</a>.
+Today we've made available two maintenance releases for Spark: <a href="{{site.baseurl}}/releases/spark-release-0-6-1.html" title="Spark Release 0.6.1">0.6.1</a> and <a href="{{site.baseurl}}/releases/spark-release-0-5-2.html" title="Spark Release 0.5.2">0.5.2</a>. They both contain important bug fixes as well as some new features, such as the ability to build against Hadoop 2 distributions. We recommend that users update to the latest version for their branch; for new users, we recommend <a href="{{site.baseurl}}/releases/spark-release-0-6-1.html" title="Spark Release 0.6.1">0.6.1</a>.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2013-02-07-spark-0-6-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2013-02-07-spark-0-6-2-released.md b/news/_posts/2013-02-07-spark-0-6-2-released.md
index 7343045..fb08098 100644
--- a/news/_posts/2013-02-07-spark-0-6-2-released.md
+++ b/news/_posts/2013-02-07-spark-0-6-2-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We recently released <a href="{{site.url}}releases/spark-release-0-6-2.html" title="Spark Release 0.6.2">Spark 0.6.2</a>, a new version of Spark. This is a maintenance release that includes several bug fixes and usability improvements (see the <a href="{{site.url}}releases/spark-release-0-6-2.html" title="Spark Release 0.6.2">release notes</a>). We recommend that all users upgrade to this release.
+We recently released <a href="{{site.baseurl}}/releases/spark-release-0-6-2.html" title="Spark Release 0.6.2">Spark 0.6.2</a>, a new version of Spark. This is a maintenance release that includes several bug fixes and usability improvements (see the <a href="{{site.baseurl}}/releases/spark-release-0-6-2.html" title="Spark Release 0.6.2">release notes</a>). We recommend that all users upgrade to this release.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2013-02-27-spark-0-7-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2013-02-27-spark-0-7-0-released.md b/news/_posts/2013-02-27-spark-0-7-0-released.md
index 32f51d6..e5d4b29 100644
--- a/news/_posts/2013-02-27-spark-0-7-0-released.md
+++ b/news/_posts/2013-02-27-spark-0-7-0-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We're proud to announce the release of <a href="{{site.url}}releases/spark-release-0-7-0.html" title="Spark Release 0.7.0">Spark 0.7.0</a>, a new major version of Spark that adds several key features, including a <a href="{{site.url}}docs/latest/python-programming-guide.html">Python API</a> for Spark and an <a href="{{site.url}}docs/latest/streaming-programming-guide.html">alpha of Spark Streaming</a>. This release is the result of the largest group of contributors yet behind a Spark release -- 31 contributors from inside and outside Berkeley. Head over to the <a href="{{site.url}}releases/spark-release-0-7-0.html" title="Spark Release 0.7.0">release notes</a> to read more about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+We're proud to announce the release of <a href="{{site.baseurl}}/releases/spark-release-0-7-0.html" title="Spark Release 0.7.0">Spark 0.7.0</a>, a new major version of Spark that adds several key features, including a <a href="{{site.baseurl}}/docs/latest/python-programming-guide.html">Python API</a> for Spark and an <a href="{{site.baseurl}}/docs/latest/streaming-programming-guide.html">alpha of Spark Streaming</a>. This release is the result of the largest group of contributors yet behind a Spark release -- 31 contributors from inside and outside Berkeley. Head over to the <a href="{{site.baseurl}}/releases/spark-release-0-7-0.html" title="Spark Release 0.7.0">release notes</a> to read more about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2013-04-16-spark-screencasts-published.md
----------------------------------------------------------------------
diff --git a/news/_posts/2013-04-16-spark-screencasts-published.md b/news/_posts/2013-04-16-spark-screencasts-published.md
index 70fbfcc..4ee0365 100644
--- a/news/_posts/2013-04-16-spark-screencasts-published.md
+++ b/news/_posts/2013-04-16-spark-screencasts-published.md
@@ -11,10 +11,10 @@ meta:
   _edit_last: '2'
   _wpas_done_all: '1'
 ---
-We have released the first two screencasts in a series of short hands-on video training courses we will be publishing to help new users get up and running with Spark in minutes.
-
-The first Spark screencast is called <a href="{{site.url}}screencasts/1-first-steps-with-spark.html">First Steps With Spark</a> and walks you through downloading and building Spark, as well as using the Spark shell, all in less than 10 minutes!
-
-The second screencast is a 2 minute <a href="{{site.url}}screencasts/2-spark-documentation-overview.html">overview of the Spark documentation</a>.
-
+We have released the first two screencasts in a series of short hands-on video training courses we will be publishing to help new users get up and running with Spark in minutes.
+
+The first Spark screencast is called <a href="{{site.baseurl}}/screencasts/1-first-steps-with-spark.html">First Steps With Spark</a> and walks you through downloading and building Spark, as well as using the Spark shell, all in less than 10 minutes!
+
+The second screencast is a 2 minute <a href="{{site.baseurl}}/screencasts/2-spark-documentation-overview.html">overview of the Spark documentation</a>.
+
 We hope you find these screencasts useful.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2013-06-02-spark-0-7-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2013-06-02-spark-0-7-2-released.md b/news/_posts/2013-06-02-spark-0-7-2-released.md
index 4b6d184..cbd3da5 100644
--- a/news/_posts/2013-06-02-spark-0-7-2-released.md
+++ b/news/_posts/2013-06-02-spark-0-7-2-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We're happy to announce the release of <a href="{{site.url}}releases/spark-release-0-7-2.html" title="Spark Release 0.7.2">Spark 0.7.2</a>, a new maintenance release that includes several bug fixes and improvements, as well as new code examples and API features. We recommend that all users update to this release. Head over to the <a href="{{site.url}}releases/spark-release-0-7-2.html" title="Spark Release 0.7.2">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+We're happy to announce the release of <a href="{{site.baseurl}}/releases/spark-release-0-7-2.html" title="Spark Release 0.7.2">Spark 0.7.2</a>, a new maintenance release that includes several bug fixes and improvements, as well as new code examples and API features. We recommend that all users update to this release. Head over to the <a href="{{site.baseurl}}/releases/spark-release-0-7-2.html" title="Spark Release 0.7.2">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2013-07-16-spark-0-7-3-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2013-07-16-spark-0-7-3-released.md b/news/_posts/2013-07-16-spark-0-7-3-released.md
index 527221f..508ea41 100644
--- a/news/_posts/2013-07-16-spark-0-7-3-released.md
+++ b/news/_posts/2013-07-16-spark-0-7-3-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We've just posted <a href="{{site.url}}releases/spark-release-0-7-3.html" title="Spark Release 0.7.3">Spark Release 0.7.3</a>, a maintenance release that contains several fixes, including streaming API updates and new functionality for adding JARs to a <code>spark-shell</code> session. We recommend that all users update to this release. Visit the <a href="{{site.url}}releases/spark-release-0-7-3.html" title="Spark Release 0.7.3">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+We've just posted <a href="{{site.baseurl}}/releases/spark-release-0-7-3.html" title="Spark Release 0.7.3">Spark Release 0.7.3</a>, a maintenance release that contains several fixes, including streaming API updates and new functionality for adding JARs to a <code>spark-shell</code> session. We recommend that all users update to this release. Visit the <a href="{{site.baseurl}}/releases/spark-release-0-7-3.html" title="Spark Release 0.7.3">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2013-08-27-fourth-spark-screencast-published.md
----------------------------------------------------------------------
diff --git a/news/_posts/2013-08-27-fourth-spark-screencast-published.md b/news/_posts/2013-08-27-fourth-spark-screencast-published.md
index 2de789f..22dcbf7 100644
--- a/news/_posts/2013-08-27-fourth-spark-screencast-published.md
+++ b/news/_posts/2013-08-27-fourth-spark-screencast-published.md
@@ -11,7 +11,7 @@ meta:
   _edit_last: '2'
   _wpas_done_all: '1'
 ---
-We have released the next screencast, <a href="{{site.url}}screencasts/4-a-standalone-job-in-spark.html">A Standalone Job in Scala</a> that takes you beyond the Spark shell, helping you write your first standalone Spark job.
+We have released the next screencast, <a href="{{site.baseurl}}/screencasts/4-a-standalone-job-in-spark.html">A Standalone Job in Scala</a> that takes you beyond the Spark shell, helping you write your first standalone Spark job.
 
 This is the fourth in a series of short hands-on video training courses to help new users get up and running with Spark in minutes.
 

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2013-09-25-spark-0-8-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2013-09-25-spark-0-8-0-released.md b/news/_posts/2013-09-25-spark-0-8-0-released.md
index 208a5ce..05ecb7b 100644
--- a/news/_posts/2013-09-25-spark-0-8-0-released.md
+++ b/news/_posts/2013-09-25-spark-0-8-0-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We're proud to announce the release of <a href="{{site.url}}releases/spark-release-0-8-0.html" title="Spark Release 0.8.0">Apache Spark 0.8.0</a>. Spark 0.8.0 is a major release that includes many new capabilities and usability improvements. It’s also our first release under the Apache incubator. It is the largest Spark release yet, with contributions from 67 developers and 24 companies. Major new features include an expanded monitoring framework and UI, a machine learning library, and support for running Spark inside of YARN.
+We're proud to announce the release of <a href="{{site.baseurl}}/releases/spark-release-0-8-0.html" title="Spark Release 0.8.0">Apache Spark 0.8.0</a>. Spark 0.8.0 is a major release that includes many new capabilities and usability improvements. It’s also our first release under the Apache incubator. It is the largest Spark release yet, with contributions from 67 developers and 24 companies. Major new features include an expanded monitoring framework and UI, a machine learning library, and support for running Spark inside of YARN.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2013-12-19-spark-0-8-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2013-12-19-spark-0-8-1-released.md b/news/_posts/2013-12-19-spark-0-8-1-released.md
index dbb3620..ae53be1 100644
--- a/news/_posts/2013-12-19-spark-0-8-1-released.md
+++ b/news/_posts/2013-12-19-spark-0-8-1-released.md
@@ -11,4 +11,4 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We've just posted <a href="{{site.url}}releases/spark-release-0-8-1.html" title="Spark Release 0.8.1">Spark Release 0.8.1</a>, a maintenance and performance release for the Scala 2.9 version of Spark. 0.8.1 includes support for YARN 2.2, a high availability mode for the standalone scheduler, optimizations to the shuffle, and many other improvements. We recommend that all users update to this release. Visit the <a href="{{site.url}}releases/spark-release-0-8-1.html" title="Spark Release 0.8.1">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+We've just posted <a href="{{site.baseurl}}/releases/spark-release-0-8-1.html" title="Spark Release 0.8.1">Spark Release 0.8.1</a>, a maintenance and performance release for the Scala 2.9 version of Spark. 0.8.1 includes support for YARN 2.2, a high availability mode for the standalone scheduler, optimizations to the shuffle, and many other improvements. We recommend that all users update to this release. Visit the <a href="{{site.baseurl}}/releases/spark-release-0-8-1.html" title="Spark Release 0.8.1">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-02-02-spark-0-9-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-02-02-spark-0-9-0-released.md b/news/_posts/2014-02-02-spark-0-9-0-released.md
index 06dd67e..57fb7c7 100644
--- a/news/_posts/2014-02-02-spark-0-9-0-released.md
+++ b/news/_posts/2014-02-02-spark-0-9-0-released.md
@@ -11,11 +11,11 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-0-9-0.html" title="Spark Release 0.9.0">
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-0-9-0.html" title="Spark Release 0.9.0">
 Spark 0.9.0</a>! Spark 0.9.0 is a major release and Spark's largest release ever, with contributions from 83 developers. 
 This release expands Spark's standard libraries, introducing a new graph computation package (GraphX) and adding several new features to the machine learning and stream-processing packages. It also makes major improvements to the core engine,
 including external aggregations, a simplified H/A mode for long lived applications, and 
 hardened YARN support.
 
-Visit the <a href="{{site.url}}releases/spark-release-0-9-0.html" title="Spark Release 0.9.0">release notes</a> 
-to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-0-9-0.html" title="Spark Release 0.9.0">release notes</a> 
+to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-02-27-spark-becomes-tlp.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-02-27-spark-becomes-tlp.md b/news/_posts/2014-02-27-spark-becomes-tlp.md
index fd6e4f0..553c104 100644
--- a/news/_posts/2014-02-27-spark-becomes-tlp.md
+++ b/news/_posts/2014-02-27-spark-becomes-tlp.md
@@ -14,4 +14,4 @@ meta:
 
 The Apache Software Foundation <a href="https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces50">announced</a> today that Spark has graduated from the Apache Incubator to become a top-level Apache project, signifying that the project's community and products have been well-governed under the ASF's meritocratic process and principles. This is a major step for the community and we are very proud to share this news with users as we complete Spark's move to Apache. Read more about Spark's growth during the past year and from contributors and users in the ASF's <a href="https://blogs.apache.org/foundation/entry/the_apache_software_foundation_announces50">press release</a>.
 
-As part of this change, note that Spark's <a href="{{site.url}}community.html">mailing lists</a> have moved to <tt>@spark.apache.org</tt> addresses, although the old <tt>@spark.incubator.apache.org</tt> addresses also still work.
+As part of this change, note that Spark's <a href="{{site.baseurl}}/community.html">mailing lists</a> have moved to <tt>@spark.apache.org</tt> addresses, although the old <tt>@spark.incubator.apache.org</tt> addresses also still work.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-04-09-spark-0-9-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-04-09-spark-0-9-1-released.md b/news/_posts/2014-04-09-spark-0-9-1-released.md
index e81c27b..58393e4 100644
--- a/news/_posts/2014-04-09-spark-0-9-1-released.md
+++ b/news/_posts/2014-04-09-spark-0-9-1-released.md
@@ -12,10 +12,10 @@ meta:
   _wpas_done_all: '1'
 ---
 
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">
 Spark 0.9.1</a>! Apache Spark 0.9.1 is a maintenance release with bug fixes, performance improvements, better stability with YARN and 
 improved parity of the Scala and Python API. We recommend all 0.9.0 users upgrade to this stable release. 
 Contributions to this release came from 37 developers. 
 
-Visit the <a href="{{site.url}}releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">release notes</a> 
-to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-0-9-1.html" title="Spark Release 0.9.1">release notes</a> 
+to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-05-30-spark-1-0-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-05-30-spark-1-0-0-released.md b/news/_posts/2014-05-30-spark-1-0-0-released.md
index a7ea4e6..0e35bf0 100644
--- a/news/_posts/2014-05-30-spark-1-0-0-released.md
+++ b/news/_posts/2014-05-30-spark-1-0-0-released.md
@@ -11,7 +11,7 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-0-0.html" title="Spark Release 1.0.0">Spark 1.0.0</a>! Spark 1.0.0 is the first in the 1.X line of releases, providing API stability for Spark's core interfaces. It is Spark's largest release ever, with contributions from 117 developers. 
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-0-0.html" title="Spark Release 1.0.0">Spark 1.0.0</a>! Spark 1.0.0 is the first in the 1.X line of releases, providing API stability for Spark's core interfaces. It is Spark's largest release ever, with contributions from 117 developers. 
 This release expands Spark's standard libraries, introducing a new SQL package (Spark SQL) that lets users integrate SQL queries into existing Spark workflows. MLlib, Spark's machine learning library, is expanded with sparse vector support and several new algorithms. The GraphX and Streaming libraries also introduce new features and optimizations. Spark's core engine adds support for secured YARN clusters, a unified tool for submitting Spark applications, and several performance and stability improvements.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-0-0.html" title="Spark Release 1.0.0">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-0-0.html" title="Spark Release 1.0.0">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-07-11-spark-1-0-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-07-11-spark-1-0-1-released.md b/news/_posts/2014-07-11-spark-1-0-1-released.md
index f317f7a..16fd99a 100644
--- a/news/_posts/2014-07-11-spark-1-0-1-released.md
+++ b/news/_posts/2014-07-11-spark-1-0-1-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-0-1.html" title="Spark Release 1.0.1">Spark 1.0.1</a>! This release includes contributions from 70 developers. Spark 1.0.0 includes fixes across several areas of Spark, including the core API, PySpark, and MLlib. It also includes new features in Spark's (alpha) SQL library, including support for JSON data and performance and stability fixes.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-0-1.html" title="Spark Release 1.0.1">Spark 1.0.1</a>! This release includes contributions from 70 developers. Spark 1.0.1 includes fixes across several areas of Spark, including the core API, PySpark, and MLlib. It also includes new features in Spark's (alpha) SQL library, including support for JSON data and performance and stability fixes.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-0-1.html" title="Spark Release 1.0.1">release notes</a> to read about this release or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-0-1.html" title="Spark Release 1.0.1">release notes</a> to read about this release or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-07-23-spark-0-9-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-07-23-spark-0-9-2-released.md b/news/_posts/2014-07-23-spark-0-9-2-released.md
index 3553b54..82b628f 100644
--- a/news/_posts/2014-07-23-spark-0-9-2-released.md
+++ b/news/_posts/2014-07-23-spark-0-9-2-released.md
@@ -12,9 +12,9 @@ meta:
   _wpas_done_all: '1'
 ---
 
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">
 Spark 0.9.2</a>! Apache Spark 0.9.2 is a maintenance release with bug fixes. We recommend all 0.9.x users upgrade to this stable release. 
 Contributions to this release came from 28 developers. 
 
-Visit the <a href="{{site.url}}releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">release notes</a> 
-to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-0-9-2.html" title="Spark Release 0.9.2">release notes</a> 
+to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-08-05-spark-1-0-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-08-05-spark-1-0-2-released.md b/news/_posts/2014-08-05-spark-1-0-2-released.md
index e7fabde..75174a1 100644
--- a/news/_posts/2014-08-05-spark-1-0-2-released.md
+++ b/news/_posts/2014-08-05-spark-1-0-2-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-0-2.html" title="Spark Release 1.0.2">Spark 1.0.2</a>! This release includes contributions from 30 developers. Spark 1.0.2 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, and MLlib.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-0-2.html" title="Spark Release 1.0.2">Spark 1.0.2</a>! This release includes contributions from 30 developers. Spark 1.0.2 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, and MLlib.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-0-2.html" title="Spark Release 1.0.2">release notes</a> to read about this release or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-0-2.html" title="Spark Release 1.0.2">release notes</a> to read about this release or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-09-11-spark-1-1-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-09-11-spark-1-1-0-released.md b/news/_posts/2014-09-11-spark-1-1-0-released.md
index 1d73222..20326e6 100644
--- a/news/_posts/2014-09-11-spark-1-1-0-released.md
+++ b/news/_posts/2014-09-11-spark-1-1-0-released.md
@@ -11,8 +11,8 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 171 developers!
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 171 developers!
 
 This release brings operational and performance improvements in Spark core including a new implementation of the Spark shuffle designed for very large scale workloads. Spark 1.1 adds significant extensions to the newest Spark modules, MLlib and Spark SQL. Spark SQL introduces a JDBC server, byte code generation for fast expression evaluation, a public types API, JSON support, and other features and optimizations. MLlib introduces a new statistics library along with several new algorithms and optimizations. Spark 1.1 also builds out Spark’s Python support and adds new components to the Spark Streaming module. 
 
-Visit the <a href="{{site.url}}releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-11-26-spark-1-1-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-11-26-spark-1-1-1-released.md b/news/_posts/2014-11-26-spark-1-1-1-released.md
index df32f26..20fe700 100644
--- a/news/_posts/2014-11-26-spark-1-1-1-released.md
+++ b/news/_posts/2014-11-26-spark-1-1-1-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-1-1.html" title="Spark Release 1.1.1">Spark 1.1.1</a>! This is a maintenance release that includes contributions from 55 developers. Spark 1.1.1 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, SQL, GraphX, and MLlib.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-1-1.html" title="Spark Release 1.1.1">Spark 1.1.1</a>! This is a maintenance release that includes contributions from 55 developers. Spark 1.1.1 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, SQL, GraphX, and MLlib.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-1-1.html" title="Spark Release 1.1.1">release notes</a> to read about this release or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-1-1.html" title="Spark Release 1.1.1">release notes</a> to read about this release or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2014-12-18-spark-1-2-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2014-12-18-spark-1-2-0-released.md b/news/_posts/2014-12-18-spark-1-2-0-released.md
index 027686b..7334e39 100644
--- a/news/_posts/2014-12-18-spark-1-2-0-released.md
+++ b/news/_posts/2014-12-18-spark-1-2-0-released.md
@@ -11,8 +11,8 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-2-0.html" title="Spark Release 1.2.0">Spark 1.2.0</a>! Spark 1.2.0 is the third release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 172 developers and more than 1,000 commits!
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-2-0.html" title="Spark Release 1.2.0">Spark 1.2.0</a>! Spark 1.2.0 is the third release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 172 developers and more than 1,000 commits!
 
 This release brings operational and performance improvements in Spark core including a new network transport subsystem designed for very large shuffles. Spark SQL introduces an API for external data sources along with Hive 13 support, dynamic partitioning, and the fixed-precision decimal type. MLlib adds a new pipeline-oriented package (spark.ml) for composing multiple algorithms. Spark Streaming adds a Python API and a write ahead log for fault tolerance. Finally, GraphX has graduated from alpha and introduces a stable API.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-2-0.html" title="Spark Release 1.2.0">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-2-0.html" title="Spark Release 1.2.0">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-02-09-spark-1-2-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-02-09-spark-1-2-1-released.md b/news/_posts/2015-02-09-spark-1-2-1-released.md
index bccd311..0a752fb 100644
--- a/news/_posts/2015-02-09-spark-1-2-1-released.md
+++ b/news/_posts/2015-02-09-spark-1-2-1-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-2-1.html" title="Spark Release 1.2.1">Spark 1.2.1</a>! This is a maintenance release that includes contributions from 69 developers. Spark 1.2.1 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, SQL, GraphX, and MLlib.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-2-1.html" title="Spark Release 1.2.1">Spark 1.2.1</a>! This is a maintenance release that includes contributions from 69 developers. Spark 1.2.1 includes fixes across several areas of Spark, including the core API, Streaming, PySpark, SQL, GraphX, and MLlib.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-2-1.html" title="Spark Release 1.2.1">release notes</a> to read about this release or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-2-1.html" title="Spark Release 1.2.1">release notes</a> to read about this release or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-03-13-spark-1-3-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-03-13-spark-1-3-0-released.md b/news/_posts/2015-03-13-spark-1-3-0-released.md
index 2fae1f2..a30cb13 100644
--- a/news/_posts/2015-03-13-spark-1-3-0-released.md
+++ b/news/_posts/2015-03-13-spark-1-3-0-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-3-0.html" title="Spark Release 1.3.0">Spark 1.3.0</a>! Spark 1.3.0 is the third release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 174 developers and more than 1,000 commits!
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-3-0.html" title="Spark Release 1.3.0">Spark 1.3.0</a>! Spark 1.3.0 is the fourth release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 174 developers and more than 1,000 commits!
 
-Visit the <a href="{{site.url}}releases/spark-release-1-3-0.html" title="Spark Release 1.3.0">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-3-0.html" title="Spark Release 1.3.0">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-04-17-spark-1-2-2-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-04-17-spark-1-2-2-released.md b/news/_posts/2015-04-17-spark-1-2-2-released.md
index f0fa5a6..5d30e70 100644
--- a/news/_posts/2015-04-17-spark-1-2-2-released.md
+++ b/news/_posts/2015-04-17-spark-1-2-2-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="{{site.url}}releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers. 
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-2-2.html" title="Spark Release 1.2.2">Spark 1.2.2</a> and <a href="{{site.baseurl}}/releases/spark-release-1-3-1.html" title="Spark Release 1.3.1">Spark 1.3.1</a>! These are both maintenance releases that collectively feature the work of more than 90 developers. 
 
-To download either release, visit the <a href="{{site.url}}downloads.html">downloads</a> page.
+To download either release, visit the <a href="{{site.baseurl}}/downloads.html">downloads</a> page.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-05-15-one-month-to-spark-summit-2015.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-05-15-one-month-to-spark-summit-2015.md b/news/_posts/2015-05-15-one-month-to-spark-summit-2015.md
index 8c0f35a..974bd51 100644
--- a/news/_posts/2015-05-15-one-month-to-spark-summit-2015.md
+++ b/news/_posts/2015-05-15-one-month-to-spark-summit-2015.md
@@ -1,7 +1,6 @@
 ---
 layout: post
 title: One month to Spark Summit 2015 in San Francisco
-date: 2015-05-15 00:00:10
 categories:
 - News
 tags: []

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-05-15-spark-summit-europe.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-05-15-spark-summit-europe.md b/news/_posts/2015-05-15-spark-summit-europe.md
index a30bcad..67dcf0a 100644
--- a/news/_posts/2015-05-15-spark-summit-europe.md
+++ b/news/_posts/2015-05-15-spark-summit-europe.md
@@ -1,7 +1,6 @@
 ---
 layout: post
 title: Announcing Spark Summit Europe
-date: 2015-05-15 00:00:00
 categories:
 - News
 tags: []

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-06-11-spark-1-4-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-06-11-spark-1-4-0-released.md b/news/_posts/2015-06-11-spark-1-4-0-released.md
index 668fee8..bd35970 100644
--- a/news/_posts/2015-06-11-spark-1-4-0-released.md
+++ b/news/_posts/2015-06-11-spark-1-4-0-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-4-0.html" title="Spark Release 1.4.0">Spark 1.4.0</a>! Spark 1.4.0 is the fifth release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 210 developers and more than 1,000 commits!
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-4-0.html" title="Spark Release 1.4.0">Spark 1.4.0</a>! Spark 1.4.0 is the fifth release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 210 developers and more than 1,000 commits!
 
-Visit the <a href="{{site.url}}releases/spark-release-1-4-0.html" title="Spark Release 1.4.0">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-4-0.html" title="Spark Release 1.4.0">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-07-15-spark-1-4-1-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-07-15-spark-1-4-1-released.md b/news/_posts/2015-07-15-spark-1-4-1-released.md
index 1d2cddb..b1a46be 100644
--- a/news/_posts/2015-07-15-spark-1-4-1-released.md
+++ b/news/_posts/2015-07-15-spark-1-4-1-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-4-1.html" title="Spark Release 1.4.1">Spark 1.4.1</a>! This is a maintenance release that includes contributions from 85 developers. Spark 1.4.1 includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, Spark SQL, and MLlib.
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-4-1.html" title="Spark Release 1.4.1">Spark 1.4.1</a>! This is a maintenance release that includes contributions from 85 developers. Spark 1.4.1 includes fixes across several areas of Spark, including the DataFrame API, Spark Streaming, PySpark, Spark SQL, and MLlib.
 
-Visit the <a href="{{site.url}}releases/spark-release-1-4-1.html" title="Spark Release 1.4.1">release notes</a> to read about this release or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-4-1.html" title="Spark Release 1.4.1">release notes</a> to read about this release or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.

http://git-wip-us.apache.org/repos/asf/spark-website/blob/d82e3722/news/_posts/2015-09-09-spark-1-5-0-released.md
----------------------------------------------------------------------
diff --git a/news/_posts/2015-09-09-spark-1-5-0-released.md b/news/_posts/2015-09-09-spark-1-5-0-released.md
index 3f993f9..044b186 100644
--- a/news/_posts/2015-09-09-spark-1-5-0-released.md
+++ b/news/_posts/2015-09-09-spark-1-5-0-released.md
@@ -11,6 +11,6 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-5-0.html" title="Spark Release 1.5.0">Spark 1.5.0</a>! Spark 1.5.0 is the sixth release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 230 developers and more than 1,400 commits!
+We are happy to announce the availability of <a href="{{site.baseurl}}/releases/spark-release-1-5-0.html" title="Spark Release 1.5.0">Spark 1.5.0</a>! Spark 1.5.0 is the sixth release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 230 developers and more than 1,400 commits!
 
-Visit the <a href="{{site.url}}releases/spark-release-1-5-0.html" title="Spark Release 1.5.0">release notes</a> to read about the new features, or <a href="{{site.url}}downloads.html">download</a> the release today.
+Visit the <a href="{{site.baseurl}}/releases/spark-release-1-5-0.html" title="Spark Release 1.5.0">release notes</a> to read about the new features, or <a href="{{site.baseurl}}/downloads.html">download</a> the release today.
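Every hunk above applies the same mechanical rewrite: the Liquid expression `{{site.url}}` becomes `{{site.baseurl}}` followed by an explicit `/`, since `site.baseurl` carries no trailing slash under Jekyll 3.3. As a minimal sketch (the helper name `use_baseurl` is hypothetical, not part of this commit), the substitution is:

```python
def use_baseurl(text: str) -> str:
    """Rewrite links for Jekyll 3.3: '{{site.url}}page.html' becomes
    '{{site.baseurl}}/page.html'. The '/' is added explicitly because
    site.baseurl has no trailing slash."""
    return text.replace("{{site.url}}", "{{site.baseurl}}/")

# Example drawn from the hunks above:
old = '<a href="{{site.url}}downloads.html">download</a>'
print(use_baseurl(old))  # <a href="{{site.baseurl}}/downloads.html">download</a>
```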

