Posted to commits@spark.apache.org by pw...@apache.org on 2014/09/12 02:10:00 UTC

svn commit: r1624428 - in /spark: news/_posts/2014-09-11-spark-1-1-0-released.md releases/_posts/2014-09-11-spark-release-1-1-0.md site/news/index.html site/news/spark-1-1-0-released.html site/releases/spark-release-1-1-0.html

Author: pwendell
Date: Fri Sep 12 00:10:00 2014
New Revision: 1624428

URL: http://svn.apache.org/r1624428
Log:
More 1.1 typo fixes

Modified:
    spark/news/_posts/2014-09-11-spark-1-1-0-released.md
    spark/releases/_posts/2014-09-11-spark-release-1-1-0.md
    spark/site/news/index.html
    spark/site/news/spark-1-1-0-released.html
    spark/site/releases/spark-release-1-1-0.html

Modified: spark/news/_posts/2014-09-11-spark-1-1-0-released.md
URL: http://svn.apache.org/viewvc/spark/news/_posts/2014-09-11-spark-1-1-0-released.md?rev=1624428&r1=1624427&r2=1624428&view=diff
==============================================================================
--- spark/news/_posts/2014-09-11-spark-1-1-0-released.md (original)
+++ spark/news/_posts/2014-09-11-spark-1-1-0-released.md Fri Sep 12 00:10:00 2014
@@ -11,7 +11,7 @@ meta:
   _edit_last: '4'
   _wpas_done_all: '1'
 ---
-We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 173 developers!
+We are happy to announce the availability of <a href="{{site.url}}releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark's largest release ever, with contributions from 171 developers!
 
 This release brings operational and performance improvements in Spark core including a new implementation of the Spark shuffle designed for very large scale workloads. Spark 1.1 adds significant extensions to the newest Spark modules, MLlib and Spark SQL. Spark SQL introduces a JDBC server, byte code generation for fast expression evaluation, a public types API, JSON support, and other features and optimizations. MLlib introduces a new statistics library along with several new algorithms and optimizations. Spark 1.1 also builds out Spark’s Python support and adds new components to the Spark Streaming module.
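
 For a sense of how the new Spark SQL JSON support is used, here is a minimal sketch against the Spark 1.1 Scala API; the input path "people.json", the table name, and the query fields are illustrative placeholders rather than anything shipped with the release.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.SQLContext

    object JsonQueryExample {
      def main(args: Array[String]): Unit = {
        val sc = new SparkContext(new SparkConf().setAppName("json-example").setMaster("local[*]"))
        val sqlContext = new SQLContext(sc)

        // Load JSON records; the schema is inferred automatically.
        // "people.json" is a placeholder path.
        val people = sqlContext.jsonFile("people.json")

        // Register the result as a temporary table so it can be queried with SQL.
        people.registerTempTable("people")

        sqlContext.sql("SELECT name FROM people WHERE age >= 18").collect().foreach(println)

        sc.stop()
      }
    }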
 

Modified: spark/releases/_posts/2014-09-11-spark-release-1-1-0.md
URL: http://svn.apache.org/viewvc/spark/releases/_posts/2014-09-11-spark-release-1-1-0.md?rev=1624428&r1=1624427&r2=1624428&view=diff
==============================================================================
--- spark/releases/_posts/2014-09-11-spark-release-1-1-0.md (original)
+++ spark/releases/_posts/2014-09-11-spark-release-1-1-0.md Fri Sep 12 00:10:00 2014
@@ -11,7 +11,7 @@ meta:
   _wpas_done_all: '1'
 ---
 
-Spark 1.1.0 is the first minor release on the 1.X line. This release brings operational and performance improvements in Spark core along with significant extensions to Spark’s newest libraries: MLlib and Spark SQL. It also builds out Spark’s Python support and adds new components to the Spark Streaming module. Spark 1.1 represents the work of 173 contributors, the most to ever contribute to a Spark release!
+Spark 1.1.0 is the first minor release on the 1.X line. This release brings operational and performance improvements in Spark core along with significant extensions to Spark’s newest libraries: MLlib and Spark SQL. It also builds out Spark’s Python support and adds new components to the Spark Streaming module. Spark 1.1 represents the work of 171 contributors, the most to ever contribute to a Spark release!
 
 ### Performance and Usability Improvements
 Across the board, Spark 1.1 adds features for improved stability and performance, particularly for large-scale workloads. Spark now performs [disk spilling for skewed blocks](https://issues.apache.org/jira/browse/SPARK-1777) during cache operations, guarding against memory overflows if a single RDD partition is large. Disk spilling during aggregations, introduced in Spark 1.0, has been [ported to PySpark](https://issues.apache.org/jira/browse/SPARK-2538). This release introduces a [new shuffle implementation](https://issues.apache.org/jira/browse/SPARK-2045) optimized for very large scale shuffles. This “sort-based shuffle” will become the default in the next release, and is now available to users. For jobs with large numbers of reducers, we recommend turning this on. This release also adds several usability improvements for monitoring the performance of long-running or complex jobs. Among the changes are better [named accumulators](https://issues.apache.org/jira/browse/SPARK-2380) that display in Spark’s UI, [dynamic updating of metrics](https://issues.apache.org/jira/browse/SPARK-2099) for in-progress tasks, and [reporting of input metrics](https://issues.apache.org/jira/browse/SPARK-1683) for tasks that read input data.
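
 Turning on the sort-based shuffle described above is a single configuration change. A minimal sketch against the Spark 1.1 Scala API, with a hypothetical application name, input path, and partition count:

    import org.apache.spark.{SparkConf, SparkContext}

    // Switch from the default hash-based shuffle to the new sort-based shuffle.
    val conf = new SparkConf()
      .setAppName("sort-shuffle-demo")              // placeholder application name
      .set("spark.shuffle.manager", "sort")

    val sc = new SparkContext(conf)

    // A reduce-heavy word count with many reducers, the case where the
    // sort-based shuffle is recommended.
    val counts = sc.textFile("hdfs:///logs/*")      // placeholder input path
      .flatMap(_.split("\\s+"))
      .map(word => (word, 1L))
      .reduceByKey(_ + _, 2000)                     // large number of reduce partitions

    counts.saveAsTextFile("hdfs:///output/word-counts")   // placeholder output path

 The same setting can also be passed to spark-submit, for example --conf spark.shuffle.manager=sort.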
@@ -101,7 +101,6 @@ Spark 1.1.0 is backwards compatible with
  * Guancheng Chen -- doc fix
  * Guillaume Ballet -- build fix
  * GuoQiang Li -- bug fixes in Spark core and MLlib
- * Guoquiang Li -- bug fixes throughout Spark core
  * Guo Wei -- bug fix in Spark SQL
  * Haoyuan Li -- Tachyon fix
  * Hari Shreeharan -- Flume polling source for Spark Streaming
@@ -206,7 +205,6 @@ Spark 1.1.0 is backwards compatible with
  * Xi Lui -- UDF improvement in Spark SQL
  * Ximo Guanter Gonzalbez -- SQL DSL support for aggregations
  * Yadid Ayzenberg -- doc fixes
- * Yadong -- code clean-up
  * Yadong Qi -- code clean-up
  * Yanjie Gao -- Spark SQL enhancement
  * Yantangz Hai -- bug fix

Modified: spark/site/news/index.html
URL: http://svn.apache.org/viewvc/spark/site/news/index.html?rev=1624428&r1=1624427&r2=1624428&view=diff
==============================================================================
--- spark/site/news/index.html (original)
+++ spark/site/news/index.html Fri Sep 12 00:10:00 2014
@@ -169,7 +169,7 @@
       <h3 class="entry-title"><a href="/news/spark-1-1-0-released.html">Spark 1.1.0 released</a></h3>
       <div class="entry-date">September 11, 2014</div>
     </header>
-    <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 173 developers!</p>
+    <div class="entry-content"><p>We are happy to announce the availability of <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 171 developers!</p>
 
 </div>
   </article>

Modified: spark/site/news/spark-1-1-0-released.html
URL: http://svn.apache.org/viewvc/spark/site/news/spark-1-1-0-released.html?rev=1624428&r1=1624427&r2=1624428&view=diff
==============================================================================
--- spark/site/news/spark-1-1-0-released.html (original)
+++ spark/site/news/spark-1-1-0-released.html Fri Sep 12 00:10:00 2014
@@ -165,7 +165,7 @@
     <h2>Spark 1.1.0 released</h2>
 
 
-<p>We are happy to announce the availability of <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 173 developers!</p>
+<p>We are happy to announce the availability of <a href="/releases/spark-release-1-1-0.html" title="Spark Release 1.1.0">Spark 1.1.0</a>! Spark 1.1.0 is the second release on the API-compatible 1.X line. It is Spark&#8217;s largest release ever, with contributions from 171 developers!</p>
 
 <p>This release brings operational and performance improvements in Spark core including a new implementation of the Spark shuffle designed for very large scale workloads. Spark 1.1 adds significant extensions to the newest Spark modules, MLlib and Spark SQL. Spark SQL introduces a JDBC server, byte code generation for fast expression evaluation, a public types API, JSON support, and other features and optimizations. MLlib introduces a new statistics library along with several new algorithms and optimizations. Spark 1.1 also builds out Spark’s Python support and adds new components to the Spark Streaming module.</p>
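
 The new MLlib statistics library mentioned above can be exercised along these lines; this is a minimal sketch against the Spark 1.1 Scala API with a small made-up dataset of three-element vectors.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.stat.Statistics

    val sc = new SparkContext(new SparkConf().setAppName("stats-demo").setMaster("local[*]"))

    // Made-up observations; each vector is one row, each position one column.
    val observations = sc.parallelize(Seq(
      Vectors.dense(1.0, 10.0, 100.0),
      Vectors.dense(2.0, 20.0, 200.0),
      Vectors.dense(3.0, 30.0, 300.0)
    ))

    // Column-wise summary statistics: mean, variance, count of non-zeros, ...
    val summary = Statistics.colStats(observations)
    println(summary.mean)
    println(summary.variance)

    // Pearson correlation matrix between the columns.
    println(Statistics.corr(observations, "pearson"))

    sc.stop()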
 

Modified: spark/site/releases/spark-release-1-1-0.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-1-0.html?rev=1624428&r1=1624427&r2=1624428&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-1-0.html (original)
+++ spark/site/releases/spark-release-1-1-0.html Fri Sep 12 00:10:00 2014
@@ -165,7 +165,7 @@
     <h2>Spark Release 1.1.0</h2>
 
 
-<p>Spark 1.1.0 is the first minor release on the 1.X line. This release brings operational and performance improvements in Spark core along with significant extensions to Spark’s newest libraries: MLlib and Spark SQL. It also builds out Spark’s Python support and adds new components to the Spark Streaming module. Spark 1.1 represents the work of 173 contributors, the most to ever contribute to a Spark release!</p>
+<p>Spark 1.1.0 is the first minor release on the 1.X line. This release brings operational and performance improvements in Spark core along with significant extensions to Spark’s newest libraries: MLlib and Spark SQL. It also builds out Spark’s Python support and adds new components to the Spark Streaming module. Spark 1.1 represents the work of 171 contributors, the most to ever contribute to a Spark release!</p>
 
 <h3 id="performance-and-usability-improvements">Performance and Usability Improvements</h3>
 <p>Across the board, Spark 1.1 adds features for improved stability and performance, particularly for large-scale workloads. Spark now performs <a href="https://issues.apache.org/jira/browse/SPARK-1777">disk spilling for skewed blocks</a> during cache operations, guarding against memory overflows if a single RDD partition is large. Disk spilling during aggregations, introduced in Spark 1.0, has been <a href="https://issues.apache.org/jira/browse/SPARK-2538">ported to PySpark</a>. This release introduces a <a href="https://issues.apache.org/jira/browse/SPARK-2045">new shuffle implementation</a> optimized for very large scale shuffles. This “sort-based shuffle” will become the default in the next release, and is now available to users. For jobs with large numbers of reducers, we recommend turning this on. This release also adds several usability improvements for monitoring the performance of long-running or complex jobs. Among the changes are better <a href="https://issues.apache.org/jira/browse/SPARK-2380">named accumulators</a> that display in Spark’s UI, <a href="https://issues.apache.org/jira/browse/SPARK-2099">dynamic updating of metrics</a> for in-progress tasks, and <a href="https://issues.apache.org/jira/browse/SPARK-1683">reporting of input metrics</a> for tasks that read input data.</p>
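
 The named accumulators referenced above only need a name argument to show up in the web UI. A minimal sketch against the Spark 1.1 Scala API; the accumulator name, input path, and record format are hypothetical.

    import org.apache.spark.{SparkConf, SparkContext}

    val sc = new SparkContext(new SparkConf().setAppName("accumulator-demo"))

    // Passing a name makes the accumulator and its running value visible on the
    // stage pages of the Spark web UI.
    val badRecords = sc.accumulator(0L, "bad records")

    sc.textFile("hdfs:///data/events")              // placeholder input path
      .foreach { line =>
        // Count malformed lines; here "malformed" just means fewer than 3 fields.
        if (line.split(",").length < 3) badRecords += 1L
      }

    println("Bad records seen: " + badRecords.value)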
@@ -262,7 +262,6 @@
   <li>Guancheng Chen &#8211; doc fix</li>
   <li>Guillaume Ballet &#8211; build fix</li>
   <li>GuoQiang Li &#8211; bug fixes in Spark core and MLlib</li>
-  <li>Guoquiang Li &#8211; bug fixes throughout Spark core</li>
   <li>Guo Wei &#8211; bug fix in Spark SQL</li>
   <li>Haoyuan Li &#8211; Tachyon fix</li>
   <li>Hari Shreeharan &#8211; Flume polling source for Spark Streaming</li>
@@ -367,7 +366,6 @@
   <li>Xi Lui &#8211; UDF improvement in Spark SQL</li>
   <li>Ximo Guanter Gonzalbez &#8211; SQL DSL support for aggregations</li>
   <li>Yadid Ayzenberg &#8211; doc fixes</li>
-  <li>Yadong &#8211; code clean-up</li>
   <li>Yadong Qi &#8211; code clean-up</li>
   <li>Yanjie Gao &#8211; Spark SQL enhancement</li>
   <li>Yantangz Hai &#8211; bug fix</li>


