Posted to commits@spark.apache.org by ma...@apache.org on 2016/02/20 00:14:46 UTC

svn commit: r1731310 [2/2] - in /spark: ./ site/ site/news/ site/releases/

Modified: spark/site/releases/spark-release-1-2-0.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-2-0.html?rev=1731310&r1=1731309&r2=1731310&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-2-0.html (original)
+++ spark/site/releases/spark-release-1-2-0.html Fri Feb 19 23:14:46 2016
@@ -181,7 +181,7 @@
 <p>In 1.2 Spark core upgrades two major subsystems to improve the performance and stability of very large scale shuffles. The first is Spark’s communication manager used during bulk transfers, which upgrades to a <a href="https://issues.apache.org/jira/browse/SPARK-2468">netty-based implementation</a>. The second is Spark’s shuffle mechanism, which upgrades to the <a href="https://issues.apache.org/jira/browse/SPARK-3280">“sort based” shuffle initially released in Spark 1.1</a>. Spark also adds an <a href="https://issues.apache.org/jira/browse/SPARK-3174">elastic scaling mechanism</a> designed to improve cluster utilization during long-running ETL-style jobs. This is currently supported on YARN and will make its way to other cluster managers in future versions. Finally, Spark 1.2 adds support for Scala 2.11. For instructions on building for Scala 2.11 see the <a href="/docs/1.2.0/building-spark.html#building-for-scala-211">build documentation</a>.</p>
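
The elastic scaling mechanism is switched on purely through configuration. A minimal sketch in Scala, assuming Spark 1.2 on YARN with the external shuffle service running on each NodeManager (the application name and executor counts are illustrative):

    import org.apache.spark.SparkConf

    // Elastic scaling (SPARK-3174): executors are requested and released as
    // the workload changes. The external shuffle service keeps shuffle files
    // available after an executor is released.
    val conf = new SparkConf()
      .setAppName("etl-job")
      .set("spark.dynamicAllocation.enabled", "true")
      .set("spark.dynamicAllocation.minExecutors", "2")
      .set("spark.dynamicAllocation.maxExecutors", "50")
      .set("spark.shuffle.service.enabled", "true")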
 
 <h3 id="spark-streaming">Spark Streaming</h3>
-<p>This release includes two major feature additions to Spark’s streaming library: a Python API and a write ahead log for full driver H/A. The <a href="https://issues.apache.org/jira/browse/SPARK-2377">Python API</a> covers almost all the DStream transformations and output operations. Input sources based on text files and text over sockets are currently supported. Support for Kafka and Flume input streams in Python will be added in the next release. Second, Spark Streaming now features H/A driver support through a <a href="https://issues.apache.org/jira/browse/SPARK-3129">write ahead log (WAL)</a>. In Spark 1.1 and earlier, some buffered (received but not yet processed) data could be lost during driver restarts. To prevent this, Spark 1.2 adds an optional WAL, which buffers received data into a fault-tolerant file system (e.g. HDFS). See the <a href="/docs/1.2.0/streaming-programming-guide.html">streaming programming guide</a> for more details.</p>
+<p>This release includes two major feature additions to Spark’s streaming library: a Python API and a write ahead log for full driver H/A. The <a href="https://issues.apache.org/jira/browse/SPARK-2377">Python API</a> covers almost all the DStream transformations and output operations. Input sources based on text files and text over sockets are currently supported. Support for Kafka and Flume input streams in Python will be added in the next release. Second, Spark Streaming now features H/A driver support through a <a href="https://issues.apache.org/jira/browse/SPARK-3129">write ahead log (WAL)</a>. In Spark 1.1 and earlier, some buffered (received but not yet processed) data could be lost during driver restarts. To prevent this, Spark 1.2 adds an optional WAL, which buffers received data into a fault-tolerant file system (e.g. HDFS). See the <a href="/docs/1.2.0/streaming-programming-guide.html">streaming programming guide</a> for more details. </p>
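
The WAL is opt-in and builds on checkpointing. A minimal sketch, assuming Spark 1.2 and an HDFS checkpoint directory (host, port, and paths are illustrative):

    import org.apache.spark.SparkConf
    import org.apache.spark.streaming.{Seconds, StreamingContext}

    val conf = new SparkConf()
      .setAppName("wal-demo")
      .set("spark.streaming.receiver.writeAheadLog.enable", "true")

    def createContext(): StreamingContext = {
      val ssc = new StreamingContext(conf, Seconds(10))
      ssc.checkpoint("hdfs:///checkpoints/wal-demo") // received data is logged here too
      ssc.socketTextStream("stream-host", 9999).count().print()
      ssc
    }

    // After a driver restart, recover from the checkpoint instead of starting fresh.
    val ssc = StreamingContext.getOrCreate("hdfs:///checkpoints/wal-demo", createContext _)
    ssc.start()
    ssc.awaitTermination()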
 
 <h3 id="mllib">MLLib</h3>
 <p>Spark 1.2 previews a new set of machine learning APIs in a package called spark.ml that <a href="https://issues.apache.org/jira/browse/SPARK-3530">supports learning pipelines</a>, where multiple algorithms are run in sequence with varying parameters. This type of pipeline is common in practical machine learning deployments. The new ML package uses Spark’s SchemaRDD to represent <a href="https://issues.apache.org/jira/browse/SPARK-3573">ML datasets</a>, providing direct interoperability with Spark SQL. In addition to the new API, Spark 1.2 extends decision trees with two tree ensemble methods: <a href="https://issues.apache.org/jira/browse/SPARK-1545">random forests</a> and <a href="https://issues.apache.org/jira/browse/SPARK-1547">gradient-boosted trees</a>, among the most successful tree-based models for classification and regression. Finally, MLlib&#8217;s Python implementation receives a major update in 1.2 to simplify the process of adding Python APIs, along with better Python API coverage.</p>
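
A minimal sketch of the new pipeline API, assuming Spark 1.2 and a training SchemaRDD named training with "text" and "label" columns (the column names and parameters are illustrative):

    import org.apache.spark.ml.Pipeline
    import org.apache.spark.ml.classification.LogisticRegression
    import org.apache.spark.ml.feature.{HashingTF, Tokenizer}

    // Stages run in sequence: raw text -> tokens -> hashed term frequencies -> model.
    val tokenizer = new Tokenizer().setInputCol("text").setOutputCol("words")
    val hashingTF = new HashingTF().setInputCol("words").setOutputCol("features")
    val lr = new LogisticRegression().setMaxIter(10)

    val pipeline = new Pipeline().setStages(Array(tokenizer, hashingTF, lr))
    val model = pipeline.fit(training)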

Modified: spark/site/releases/spark-release-1-3-0.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-3-0.html?rev=1731310&r1=1731309&r2=1731310&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-3-0.html (original)
+++ spark/site/releases/spark-release-1-3-0.html Fri Feb 19 23:14:46 2016
@@ -178,7 +178,7 @@
 <p>To download Spark 1.3 visit the <a href="/downloads.html">downloads</a> page.</p>
 
 <h3 id="spark-core">Spark Core</h3>
-<p>Spark 1.3 sees a handful of usability improvements in the core engine. The core API now supports <a href="https://issues.apache.org/jira/browse/SPARK-5430">multi-level aggregation trees</a> to help speed up expensive reduce operations. <a href="https://issues.apache.org/jira/browse/SPARK-5063">Improved error reporting</a> has been added for certain gotcha operations. Spark&#8217;s Jetty dependency is <a href="https://issues.apache.org/jira/browse/SPARK-3996">now shaded</a> to help avoid conflicts with user programs. Spark now supports <a href="https://issues.apache.org/jira/browse/SPARK-3883">SSL encryption</a> for some communication endpoints. Finally, real-time <a href="https://issues.apache.org/jira/browse/SPARK-3428">GC metrics</a> and <a href="https://issues.apache.org/jira/browse/SPARK-4874">record counts</a> have been added to the UI.</p>
+<p>Spark 1.3 sees a handful of usability improvements in the core engine. The core API now supports <a href="https://issues.apache.org/jira/browse/SPARK-5430">multi-level aggregation trees</a> to help speed up expensive reduce operations. <a href="https://issues.apache.org/jira/browse/SPARK-5063">Improved error reporting</a> has been added for certain gotcha operations. Spark&#8217;s Jetty dependency is <a href="https://issues.apache.org/jira/browse/SPARK-3996">now shaded</a> to help avoid conflicts with user programs. Spark now supports <a href="https://issues.apache.org/jira/browse/SPARK-3883">SSL encryption</a> for some communication endpoints. Finally, real-time <a href="https://issues.apache.org/jira/browse/SPARK-3428">GC metrics</a> and <a href="https://issues.apache.org/jira/browse/SPARK-4874">record counts</a> have been added to the UI. </p>
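
The multi-level aggregation trees surface in the core API as treeReduce and treeAggregate. A minimal sketch, assuming Spark 1.3 and an RDD[Double] named xs:

    // With depth > 1, partial results are combined in a tree of intermediate
    // tasks rather than all at once on the driver, which helps when there are
    // many large partitions.
    val sum = xs.treeReduce(_ + _, depth = 3)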
 
 <h3 id="dataframe-api">DataFrame API</h3>
 <p>Spark 1.3 adds a new <a href="/docs/1.3.0/sql-programming-guide.html#dataframes">DataFrames API</a> that provides powerful and convenient operators when working with structured datasets. The DataFrame is an evolution of the base RDD API that includes named fields along with schema information. It’s easy to construct a DataFrame from sources such as Hive tables, JSON data, a JDBC database, or any implementation of Spark’s new data source API. DataFrames will become a common interchange format between Spark components and a convenient format for importing and exporting data to other systems. DataFrames are supported in Python, Scala, and Java.</p>
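
A minimal sketch of the DataFrame API, assuming Spark 1.3, a SQLContext named sqlContext, and a JSON file of records with "name" and "age" fields (the path is illustrative):

    // Each line of the file holds one JSON record; the schema is inferred.
    val df = sqlContext.jsonFile("hdfs:///data/users.json")

    df.filter(df("age") > 21)
      .groupBy("age")
      .count()
      .show()
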
@@ -190,7 +190,7 @@
 <p>In this release Spark MLlib introduces several new algorithms: latent Dirichlet allocation (LDA) for <a href="https://issues.apache.org/jira/browse/SPARK-1405">topic modeling</a>, <a href="https://issues.apache.org/jira/browse/SPARK-2309">multinomial logistic regression</a> for multiclass classification, <a href="https://issues.apache.org/jira/browse/SPARK-5012">Gaussian mixture model (GMM)</a> and <a href="https://issues.apache.org/jira/browse/SPARK-4259">power iteration clustering</a> for clustering, <a href="https://issues.apache.org/jira/browse/SPARK-4001">FP-growth</a> for frequent pattern mining, and <a href="https://issues.apache.org/jira/browse/SPARK-4409">block matrix abstraction</a> for distributed linear algebra. Initial support has been added for <a href="https://issues.apache.org/jira/browse/SPARK-4587">model import/export</a> in an exchangeable format, which will be expanded in future versions to cover more model types in Java/Python/Scala. The implementations of k-means and ALS receive updates (<a href="https://issues.apache.org/jira/browse/SPARK-3424">SPARK-3424</a>, <a href="https://issues.apache.org/jira/browse/SPARK-3541">SPARK-3541</a>) that lead to significant performance gains. PySpark now supports the <a href="https://issues.apache.org/jira/browse/SPARK-4586">ML pipeline API</a> added in Spark 1.2, as well as <a href="https://issues.apache.org/jira/browse/SPARK-5094">gradient-boosted trees</a> and <a href="https://issues.apache.org/jira/browse/SPARK-5012">Gaussian mixture models</a>. Finally, the ML pipeline API has been ported to support the new DataFrames abstraction.</p>
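
A minimal sketch of the new Gaussian mixture model, assuming Spark 1.3, a SparkContext named sc, and illustrative two-dimensional points:

    import org.apache.spark.mllib.clustering.GaussianMixture
    import org.apache.spark.mllib.linalg.Vectors

    val points = sc.parallelize(Seq(
      Vectors.dense(0.1, 0.2), Vectors.dense(0.2, 0.1),
      Vectors.dense(9.8, 9.9), Vectors.dense(9.9, 9.8)))

    // Fit a two-component mixture with EM and inspect the learned components.
    val model = new GaussianMixture().setK(2).run(points)
    model.gaussians.zipWithIndex.foreach { case (g, i) =>
      println(s"component $i: weight=${model.weights(i)} mu=${g.mu}")
    }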
 
 <h3 id="spark-streaming">Spark Streaming</h3>
-<p>Spark 1.3 introduces a new <a href="https://issues.apache.org/jira/browse/SPARK-4964"><em>direct</em> Kafka API</a> (<a href="http://spark.apache.org/docs/1.3.0/streaming-kafka-integration.html">docs</a>) which enables exactly-once delivery without the use of write ahead logs. It also adds a <a href="https://issues.apache.org/jira/browse/SPARK-5047">Python Kafka API</a> along with infrastructure for additional Python APIs in future releases. An online version of <a href="https://issues.apache.org/jira/browse/SPARK-4979">logistic regression</a> and the ability to read <a href="https://issues.apache.org/jira/browse/SPARK-4969">binary records</a> have also been added. For stateful operations, support has been added for loading of an <a href="https://issues.apache.org/jira/browse/SPARK-3660">initial state RDD</a>. Finally, the streaming programming guide has been updated to include information about SQL and DataFrame operations within streaming applications, and important clarifications to the fault-tolerance semantics.</p>
+<p>Spark 1.3 introduces a new <a href="https://issues.apache.org/jira/browse/SPARK-4964"><em>direct</em> Kafka API</a> (<a href="http://spark.apache.org/docs/1.3.0/streaming-kafka-integration.html">docs</a>) which enables exactly-once delivery without the use of write ahead logs. It also adds a <a href="https://issues.apache.org/jira/browse/SPARK-5047">Python Kafka API</a> along with infrastructure for additional Python APIs in future releases. An online version of <a href="https://issues.apache.org/jira/browse/SPARK-4979">logistic regression</a> and the ability to read <a href="https://issues.apache.org/jira/browse/SPARK-4969">binary records</a> have also been added. For stateful operations, support has been added for loading of an <a href="https://issues.apache.org/jira/browse/SPARK-3660">initial state RDD</a>. Finally, the streaming programming guide has been updated to include information about SQL and DataFrame operations within streaming applications, and important clarifications to the fault-tolerance semantics. </p>
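
A minimal sketch of the direct Kafka stream, assuming Spark 1.3 with the spark-streaming-kafka artifact on the classpath, a StreamingContext named ssc, and an illustrative broker address and topic name:

    import kafka.serializer.StringDecoder
    import org.apache.spark.streaming.kafka.KafkaUtils

    val kafkaParams = Map("metadata.broker.list" -> "broker-host:9092")

    // No receivers and no WAL: Spark tracks the Kafka offsets itself, which
    // is what makes exactly-once output semantics possible.
    val stream = KafkaUtils.createDirectStream[String, String, StringDecoder, StringDecoder](
      ssc, kafkaParams, Set("events"))
    stream.map(_._2).count().print()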
 
 <h3 id="graphx">GraphX</h3>
 <p>GraphX adds a handful of utility functions in this release, including conversion into a <a href="https://issues.apache.org/jira/browse/SPARK-4917">canonical edge graph</a>.</p>
@@ -206,7 +206,7 @@
 <ul>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6194">SPARK-6194</a>: A memory leak in PySPark&#8217;s <code>collect()</code>.</li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6222">SPARK-6222</a>: An issue with failure recovery in Spark Streaming.</li>
-  <li><a href="https://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>: Spark SQL can&#8217;t read Parquet data generated with Spark 1.1.</li>
+  <li><a href="https://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>: Spark SQL can&#8217;t read Parquet data generated with Spark 1.1. </li>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-6247">SPARK-6247</a>: Errors analyzing certain join types in Spark SQL.</li>
 </ul>
 

Modified: spark/site/releases/spark-release-1-3-1.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-3-1.html?rev=1731310&r1=1731309&r2=1731310&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-3-1.html (original)
+++ spark/site/releases/spark-release-1-3-1.html Fri Feb 19 23:14:46 2016
@@ -183,10 +183,10 @@
 <h4 id="spark-sql">Spark SQL</h4>
 <ul>
   <li>Unable to use reserved words in DDL (<a href="http://issues.apache.org/jira/browse/SPARK-6250">SPARK-6250</a>)</li>
-  <li>Parquet no longer caches metadata (<a href="http://issues.apache.org/jira/browse/SPARK-6575">SPARK-6575</a>)</li>
+  <li>Parquet no longer caches metadata (<a href="http://issues.apache.org/jira/browse/SPARK-6575">SPARK-6575</a>) </li>
   <li>Bug when joining two Parquet tables (<a href="http://issues.apache.org/jira/browse/SPARK-6851">SPARK-6851</a>)</li>
-  <li>Unable to read Parquet data generated by Spark 1.1.1 (<a href="http://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>)</li>
-  <li>Parquet data source may use wrong Hadoop FileSystem (<a href="http://issues.apache.org/jira/browse/SPARK-6330">SPARK-6330</a>)</li>
+  <li>Unable to read Parquet data generated by Spark 1.1.1 (<a href="http://issues.apache.org/jira/browse/SPARK-6315">SPARK-6315</a>) </li>
+  <li>Parquet data source may use wrong Hadoop FileSystem (<a href="http://issues.apache.org/jira/browse/SPARK-6330">SPARK-6330</a>) </li>
 </ul>
 
 <h4 id="spark-streaming">Spark Streaming</h4>

Modified: spark/site/releases/spark-release-1-4-0.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-4-0.html?rev=1731310&r1=1731309&r2=1731310&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-4-0.html (original)
+++ spark/site/releases/spark-release-1-4-0.html Fri Feb 19 23:14:46 2016
@@ -237,7 +237,7 @@ Python coverage. MLlib also adds several
 </ul>
 
 <h3 id="spark-streaming">Spark Streaming</h3>
-<p>Spark Streaming adds visual instrumentation graphs and significantly improved debugging information in the UI. It also enhances support for both Kafka and Kinesis.</p>
+<p>Spark Streaming adds visual instrumentation graphs and significantly improved debugging information in the UI. It also enhances support for both Kafka and Kinesis. </p>
 
 <ul>
   <li><a href="https://issues.apache.org/jira/browse/SPARK-7602">SPARK-7602</a>: Visualization and monitoring in the streaming UI including batch drill down (<a href="https://issues.apache.org/jira/browse/SPARK-6796">SPARK-6796</a>, <a href="https://issues.apache.org/jira/browse/SPARK-6862">SPARK-6862</a>)</li>
@@ -263,7 +263,7 @@ Python coverage. MLlib also adds several
 
 <h4 id="test-partners">Test Partners</h4>
 
-<p>Thanks to the following organizations, which helped benchmark or integration-test release candidates: <br /> Intel, Palantir, Cloudera, Mesosphere, Huawei, Shopify, Netflix, Yahoo, UC Berkeley and Databricks.</p>
+<p>Thanks to the following organizations, which helped benchmark or integration-test release candidates: <br /> Intel, Palantir, Cloudera, Mesosphere, Huawei, Shopify, Netflix, Yahoo, UC Berkeley and Databricks. </p>
 
 <h4 id="contributors">Contributors</h4>
 <ul>

Modified: spark/site/releases/spark-release-1-5-0.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-5-0.html?rev=1731310&r1=1731309&r2=1731310&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-5-0.html (original)
+++ spark/site/releases/spark-release-1-5-0.html Fri Feb 19 23:14:46 2016
@@ -178,25 +178,25 @@
 <p>You can consult JIRA for the <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?projectId=12315420&amp;version=12332078">detailed changes</a>. We have curated a list of high level changes here:</p>
 
 <ul id="markdown-toc">
-  <li><a href="#apis-rdd-dataframe-and-sql" id="markdown-toc-apis-rdd-dataframe-and-sql">APIs: RDD, DataFrame and SQL</a></li>
-  <li><a href="#backend-execution-dataframe-and-sql" id="markdown-toc-backend-execution-dataframe-and-sql">Backend Execution: DataFrame and SQL</a></li>
-  <li><a href="#integrations-data-sources-hive-hadoop-mesos-and-cluster-management" id="markdown-toc-integrations-data-sources-hive-hadoop-mesos-and-cluster-management">Integrations: Data Sources, Hive, Hadoop, Mesos and Cluster Management</a></li>
-  <li><a href="#r-language" id="markdown-toc-r-language">R Language</a></li>
-  <li><a href="#machine-learning-and-advanced-analytics" id="markdown-toc-machine-learning-and-advanced-analytics">Machine Learning and Advanced Analytics</a></li>
-  <li><a href="#spark-streaming" id="markdown-toc-spark-streaming">Spark Streaming</a></li>
-  <li><a href="#deprecations-removals-configs-and-behavior-changes" id="markdown-toc-deprecations-removals-configs-and-behavior-changes">Deprecations, Removals, Configs, and Behavior Changes</a>    <ul>
-      <li><a href="#spark-core" id="markdown-toc-spark-core">Spark Core</a></li>
-      <li><a href="#spark-sql--dataframes" id="markdown-toc-spark-sql--dataframes">Spark SQL &amp; DataFrames</a></li>
-      <li><a href="#spark-streaming-1" id="markdown-toc-spark-streaming-1">Spark Streaming</a></li>
-      <li><a href="#mllib" id="markdown-toc-mllib">MLlib</a></li>
+  <li><a href="#apis-rdd-dataframe-and-sql">APIs: RDD, DataFrame and SQL</a></li>
+  <li><a href="#backend-execution-dataframe-and-sql">Backend Execution: DataFrame and SQL</a></li>
+  <li><a href="#integrations-data-sources-hive-hadoop-mesos-and-cluster-management">Integrations: Data Sources, Hive, Hadoop, Mesos and Cluster Management</a></li>
+  <li><a href="#r-language">R Language</a></li>
+  <li><a href="#machine-learning-and-advanced-analytics">Machine Learning and Advanced Analytics</a></li>
+  <li><a href="#spark-streaming">Spark Streaming</a></li>
+  <li><a href="#deprecations-removals-configs-and-behavior-changes">Deprecations, Removals, Configs, and Behavior Changes</a>    <ul>
+      <li><a href="#spark-core">Spark Core</a></li>
+      <li><a href="#spark-sql--dataframes">Spark SQL &amp; DataFrames</a></li>
+      <li><a href="#spark-streaming-1">Spark Streaming</a></li>
+      <li><a href="#mllib">MLlib</a></li>
     </ul>
   </li>
-  <li><a href="#known-issues" id="markdown-toc-known-issues">Known Issues</a>    <ul>
-      <li><a href="#sqldataframe" id="markdown-toc-sqldataframe">SQL/DataFrame</a></li>
-      <li><a href="#streaming" id="markdown-toc-streaming">Streaming</a></li>
+  <li><a href="#known-issues">Known Issues</a>    <ul>
+      <li><a href="#sqldataframe">SQL/DataFrame</a></li>
+      <li><a href="#streaming">Streaming</a></li>
     </ul>
   </li>
-  <li><a href="#credits" id="markdown-toc-credits">Credits</a></li>
+  <li><a href="#credits">Credits</a></li>
 </ul>
 
 <h3 id="apis-rdd-dataframe-and-sql">APIs: RDD, DataFrame and SQL</h3>

Modified: spark/site/releases/spark-release-1-6-0.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-6-0.html?rev=1731310&r1=1731309&r2=1731310&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-6-0.html (original)
+++ spark/site/releases/spark-release-1-6-0.html Fri Feb 19 23:14:46 2016
@@ -178,13 +178,13 @@
 <p>You can consult JIRA for the <a href="https://issues.apache.org/jira/secure/ReleaseNote.jspa?version=12333083&amp;projectId=12315420">detailed changes</a>. We have curated a list of high level changes here:</p>
 
 <ul id="markdown-toc">
-  <li><a href="#spark-coresql" id="markdown-toc-spark-coresql">Spark Core/SQL</a></li>
-  <li><a href="#spark-streaming" id="markdown-toc-spark-streaming">Spark Streaming</a></li>
-  <li><a href="#mllib" id="markdown-toc-mllib">MLlib</a></li>
-  <li><a href="#deprecations" id="markdown-toc-deprecations">Deprecations</a></li>
-  <li><a href="#changes-of-behavior" id="markdown-toc-changes-of-behavior">Changes of behavior</a></li>
-  <li><a href="#known-issues" id="markdown-toc-known-issues">Known issues</a></li>
-  <li><a href="#credits" id="markdown-toc-credits">Credits</a></li>
+  <li><a href="#spark-coresql">Spark Core/SQL</a></li>
+  <li><a href="#spark-streaming">Spark Streaming</a></li>
+  <li><a href="#mllib">MLlib</a></li>
+  <li><a href="#deprecations">Deprecations</a></li>
+  <li><a href="#changes-of-behavior">Changes of behavior</a></li>
+  <li><a href="#known-issues">Known issues</a></li>
+  <li><a href="#credits">Credits</a></li>
 </ul>
 
 <h3 id="spark-coresql">Spark Core/SQL</h3>
@@ -207,7 +207,7 @@
     <ul>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-10000">SPARK-10000</a> <strong>Unified Memory Management</strong>  - Shared memory for execution and caching instead of exclusive division of the regions.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-11787">SPARK-11787</a> <strong>Parquet Performance</strong> - Improve Parquet scan performance when using flat schemas.</li>
-      <li><a href="https://issues.apache.org/jira/browse/SPARK-9241">SPARK-9241&#160;</a> <strong>Improved query planner for queries having distinct aggregations</strong> - Query plans of distinct aggregations are more robust when distinct columns have high cardinality.</li>
+      <li><a href="https://issues.apache.org/jira/browse/SPARK-9241">SPARK-9241&#160;</a> <strong>Improved query planner for queries having distinct aggregations</strong> - Query plans of distinct aggregations are more robust when distinct columns have high cardinality. </li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-9858">SPARK-9858&#160;</a> <strong>Adaptive query execution</strong> - Initial support for automatically selecting the number of reducers for joins and aggregations.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-10978">SPARK-10978</a> <strong>Avoiding double filters in Data Source API</strong> - When implementing a data source with filter pushdown, developers can now tell Spark SQL to avoid double evaluating a pushed-down filter.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-11111">SPARK-11111</a> <strong>Fast null-safe joins</strong> - Joins using null-safe equality (<code>&lt;=&gt;</code>) will now execute using SortMergeJoin instead of computing a cartisian product.</li>
@@ -220,7 +220,7 @@
 <h3 id="spark-streaming">Spark Streaming</h3>
 
 <ul>
-  <li><strong>API Updates</strong>
+  <li><strong>API Updates</strong> 
     <ul>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-2629">SPARK-2629&#160;</a> <strong>New improved state management</strong> - <code>mapWithState</code> - a DStream transformation for stateful stream processing, supercedes <code>updateStateByKey</code> in functionality and performance.</li>
       <li><a href="https://issues.apache.org/jira/browse/SPARK-11198">SPARK-11198</a> <strong>Kinesis record deaggregation</strong> - Kinesis streams have been upgraded to use KCL 1.4.0 and supports transparent deaggregation of KPL-aggregated records.</li>
@@ -231,7 +231,7 @@
   <li><strong>UI Improvements</strong>
     <ul>
       <li>Made failures visible in the streaming tab, in the timelines, batch list, and batch details page.</li>
-      <li>Made output operations visible in the streaming tab as progress bars.</li>
+      <li>Made output operations visible in the streaming tab as progress bars. </li>
     </ul>
   </li>
 </ul>


