Posted to commits@spark.apache.org by sr...@apache.org on 2015/05/08 15:59:52 UTC

svn commit: r1678354 - in /spark: ./ releases/_posts/ site/ site/releases/ sql/

Author: srowen
Date: Fri May  8 13:59:51 2015
New Revision: 1678354

URL: http://svn.apache.org/r1678354
Log:
Reapply my past changes, which had previously been applied only to the .html files, to the .md sources as well, and add the changes from the regenerated .html files.

Modified:
    spark/community.md
    spark/downloads.md
    spark/examples.md
    spark/faq.md
    spark/index.md
    spark/releases/_posts/2015-03-13-spark-release-1-3-0.md
    spark/site/community.html
    spark/site/downloads.html
    spark/site/examples.html
    spark/site/faq.html
    spark/site/index.html
    spark/site/releases/spark-release-1-3-0.html
    spark/sql/index.md

Modified: spark/community.md
URL: http://svn.apache.org/viewvc/spark/community.md?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/community.md (original)
+++ spark/community.md Fri May  8 13:59:51 2015
@@ -28,6 +28,8 @@ navigation:
   </li>
 </ul>
 
+<p>The StackOverflow tag <a href="http://stackoverflow.com/questions/tagged/apache-spark"><code>apache-spark</code></a> is an unofficial but active forum for Spark users' questions and answers.</p>
+
 <a name="events"></a>
 <h3>Events and Meetups</h3>
 

Modified: spark/downloads.md
URL: http://svn.apache.org/viewvc/spark/downloads.md?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/downloads.md (original)
+++ spark/downloads.md Fri May  8 13:59:51 2015
@@ -20,13 +20,13 @@ The latest release of Spark is Spark 1.3
 <a href="{{site.url}}releases/spark-release-1-3-1.html">(release notes)</a>
 <a href="https://git-wip-us.apache.org/repos/asf?p=spark.git;a=commit;h=3e8391327ba586eaf54447043bd526d919043a44">(git tag)</a><br/>
 
-1. Chose a Spark release:
+1. Choose a Spark release:
   <select id="sparkVersionSelect" onChange="javascript:onVersionSelect();"></select><br>
 
-2. Chose a package type:
+2. Choose a package type:
   <select id="sparkPackageSelect" onChange="javascript:onPackageSelect();"></select><br>
 
-3. Chose a download type:
+3. Choose a download type:
   <select id="sparkDownloadSelect" onChange="javascript:onDownloadSelect()"></select><br>
 
 4. Download Spark: <span id="spanDownloadLink"></span>

Modified: spark/examples.md
URL: http://svn.apache.org/viewvc/spark/examples.md?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/examples.md (original)
+++ spark/examples.md Fri May  8 13:59:51 2015
@@ -26,8 +26,8 @@ In this example, we search through the e
 <div class="tab-content">
   <div class="tab-pane tab-pane-python active">
     <div class="code code-tab">
-    file = spark.textFile(<span class="string">"hdfs://..."</span>)<br>
-    errors = file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br>
+    text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
+    errors = text_file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br />
     <span class="comment"># Count all the errors</span><br>
     errors.<span class="sparkop">count</span>()<br>
     <span class="comment"># Count errors mentioning MySQL</span><br>
@@ -38,8 +38,8 @@ In this example, we search through the e
   </div>
   <div class="tab-pane tab-pane-scala">
     <div class="code code-tab">
-    <span class="keyword">val</span> file = spark.textFile(<span class="string">"hdfs://..."</span>)<br>
-    <span class="keyword">val</span> errors = file.<span class="sparkop">filter</span>(<span class="closure">line =&gt; line.contains("ERROR")</span>)<br>
+    <span class="keyword">val</span> textFile = spark.textFile(<span class="string">"hdfs://..."</span>)<br>
+    <span class="keyword">val</span> errors = textFile.<span class="sparkop">filter</span>(<span class="closure">line =&gt; line.contains("ERROR")</span>)<br>
     <span class="comment">// Count all the errors</span><br>
     errors.<span class="sparkop">count</span>()<br>
     <span class="comment">// Count errors mentioning MySQL</span><br>
@@ -50,8 +50,8 @@ In this example, we search through the e
   </div>
   <div class="tab-pane tab-pane-java">
     <div class="code code-tab">
-    JavaRDD&lt;String&gt; file = spark.textFile(<span class="string">"hdfs://..."</span>);<br>
-    JavaRDD&lt;String&gt; errors = file.<span class="sparkop">filter</span>(<span class="closure">new Function&lt;String, Boolean&gt;() {<br>
+    JavaRDD&lt;String&gt; textFile = spark.textFile(<span class="string">"hdfs://..."</span>);<br>
+    JavaRDD&lt;String&gt; errors = textFile.<span class="sparkop">filter</span>(<span class="closure">new Function&lt;String, Boolean&gt;() {<br>
     &nbsp;&nbsp;public Boolean call(String s) { return s.contains("ERROR"); }<br>
     }</span>);<br>
     <span class="comment">// Count all the errors</span><br>
@@ -112,8 +112,8 @@ In this example, we search through the e
 <div class="tab-content">
   <div class="tab-pane tab-pane-python active">
     <div class="code code-tab">
-    file = spark.textFile(<span class="string">"hdfs://..."</span>)<br>
-    counts = file.<span class="sparkop">flatMap</span>(<span class="closure">lambda line: line.split(" ")</span>) \<br>
+    text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br>
+    counts = text_file.<span class="sparkop">flatMap</span>(<span class="closure">lambda line: line.split(" ")</span>) \<br>
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">map</span>(<span class="closure">lambda word: (word, 1)</span>) \<br>
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">reduceByKey</span>(<span class="closure">lambda a, b: a + b</span>)<br>
     counts.<span class="sparkop">saveAsTextFile</span>(<span class="string">"hdfs://..."</span>)
@@ -121,8 +121,8 @@ In this example, we search through the e
   </div>
   <div class="tab-pane tab-pane-scala">
     <div class="code code-tab">
-    <span class="keyword">val</span> file = spark.textFile(<span class="string">"hdfs://..."</span>)<br>
-    <span class="keyword">val</span> counts = file.<span class="sparkop">flatMap</span>(<span class="closure">line =&gt; line.split(" ")</span>)<br>
+    <span class="keyword">val</span> textFile = spark.textFile(<span class="string">"hdfs://..."</span>)<br>
+    <span class="keyword">val</span> counts = textFile.<span class="sparkop">flatMap</span>(<span class="closure">line =&gt; line.split(" ")</span>)<br>
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">map</span>(<span class="closure">word =&gt; (word, 1)</span>)<br>
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">reduceByKey</span>(<span class="closure">_ + _</span>)<br>
     counts.<span class="sparkop">saveAsTextFile</span>(<span class="string">"hdfs://..."</span>)
@@ -130,8 +130,8 @@ In this example, we search through the e
   </div>
   <div class="tab-pane tab-pane-java">
     <div class="code code-tab">
-    JavaRDD&lt;String&gt; file = spark.textFile(<span class="string">"hdfs://..."</span>);<br>
-    JavaRDD&lt;String&gt; words = file.<span class="sparkop">flatMap</span>(<span class="closure">new FlatMapFunction&lt;String, String&gt;() {<br>
+    JavaRDD&lt;String&gt; textFile = spark.textFile(<span class="string">"hdfs://..."</span>);<br>
+    JavaRDD&lt;String&gt; words = textFile.<span class="sparkop">flatMap</span>(<span class="closure">new FlatMapFunction&lt;String, String&gt;() {<br>
     &nbsp;&nbsp;public Iterable&lt;String&gt; call(String s) { return Arrays.asList(s.split(" ")); }<br>
     }</span>);<br>
     JavaPairRDD&lt;String, Integer&gt; pairs = words.<span class="sparkop">mapToPair</span>(<span class="closure">new PairFunction&lt;String, String, Integer&gt;() {<br>

Modified: spark/faq.md
URL: http://svn.apache.org/viewvc/spark/faq.md?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/faq.md (original)
+++ spark/faq.md Fri May  8 13:59:51 2015
@@ -53,8 +53,8 @@ Spark is a fast and general processing e
 <p class="answer">Starting in version 0.8, Spark is under the <a href="http://www.apache.org/licenses/LICENSE-2.0.html">Apache 2.0 license</a>. Previous versions used the <a href="https://github.com/mesos/spark/blob/branch-0.7/LICENSE">BSD license</a>.</p>
 
 <p class="question">How can I contribute to Spark?</p>
-<p class="answer">Contact the <a href="{{site.url}}community.html">mailing list</a> or send us a pull request on <a href="https://github.com/apache/spark">GitHub</a> (instructions <a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">here</a>).  We're glad to hear about your experience using Spark and to accept patches.</p>
-<p>If you would like to report an issue, post it to the <a href="https://issues.apache.org/jira/browse/SPARK">Spark issue tracker</a>.</p>
+
+<p class="answer">See the <a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark wiki</a> for more information.</p>
 
 <p class="question">Where can I get more help?</p>
 <p class="answer">Please post on the <a href="http://apache-spark-user-list.1001560.n3.nabble.com">Spark Users</a> mailing list.  We'll be glad to help!</p>

Modified: spark/index.md
URL: http://svn.apache.org/viewvc/spark/index.md?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/index.md (original)
+++ spark/index.md Fri May  8 13:59:51 2015
@@ -53,9 +53,9 @@ navigation:
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="text-align: left; display: inline-block;">
       <div class="code">
-        file = spark.textFile(<span class="string">"hdfs://..."</span>)<br/>
+        text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br/>
         &nbsp;<br/>
-        file.<span class="sparkop">flatMap</span>(<span class="closure">lambda line: line.split()</span>)<br/>
+        text_file.<span class="sparkop">flatMap</span>(<span class="closure">lambda&nbsp;line:&nbsp;line.split()</span>)<br/>
         &nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">map</span>(<span class="closure">lambda word: (word, 1)</span>)<br/>
         &nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">reduceByKey</span>(<span class="closure">lambda a, b: a+b</span>)
       </div>
@@ -63,9 +63,9 @@ navigation:
     </div>
     <!--
     <div class="code" style="margin-top: 20px; text-align: left; display: inline-block;">
-      file = spark.textFile(<span class="string">"hdfs://..."</span>)<br/>
+      text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br/>
       &nbsp;<br/>
-      file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br/>
+      text_file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br/>
       &nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">count</span>()
     </div>
     -->
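The home-page snippet gets the same `file` -> `text_file` rename. As plain code, the word-count chain shown in this hunk is roughly the following (a sketch under the same assumption that `spark` is a SparkContext; the final `saveAsTextFile` call comes from the examples.md version above, not the home page):

    text_file = spark.textFile("hdfs://...")
    counts = text_file.flatMap(lambda line: line.split()) \
                      .map(lambda word: (word, 1)) \
                      .reduceByKey(lambda a, b: a + b)
    counts.saveAsTextFile("hdfs://...")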

Modified: spark/releases/_posts/2015-03-13-spark-release-1-3-0.md
URL: http://svn.apache.org/viewvc/spark/releases/_posts/2015-03-13-spark-release-1-3-0.md?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/releases/_posts/2015-03-13-spark-release-1-3-0.md (original)
+++ spark/releases/_posts/2015-03-13-spark-release-1-3-0.md Fri May  8 13:59:51 2015
@@ -36,7 +36,7 @@ GraphX adds a handful of utility functio
 ## Upgrading to Spark 1.3
 Spark 1.3 is binary compatible with Spark 1.X releases, so no code changes are necessary. This excludes API’s marked explicitly as unstable.
 
-As part of stabilizing the Spark SQL API, the `SchemaRDD` class has been extended renamed to `DataFrame`. Spark SQL's [migration guide](http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#migration-guide) describes the upgrade process in detail. Spark SQL also now requires that column identifiers which use reserved words (such as "string" or "table") be escaped using backticks.
+As part of stabilizing the Spark SQL API, the `SchemaRDD` class has been renamed to `DataFrame`. Spark SQL's [migration guide](http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#migration-guide) describes the upgrade process in detail. Spark SQL also now requires that column identifiers which use reserved words (such as "string" or "table") be escaped using backticks.
 
 ### Known Issues
 This release has few known issues which will be addressed in Spark 1.3.1:
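The corrected sentence describes two 1.3 changes that can be sketched in a few lines of Python (hypothetical file path and table name; assumes a SQLContext named `sqlContext`, as in the 1.3 docs):

    # APIs that previously returned SchemaRDD now return DataFrame
    df = sqlContext.jsonFile("hdfs://...")
    df.registerTempTable("logs")  # "logs" is a hypothetical table name
    # Column identifiers that are reserved words must be backtick-escaped
    sqlContext.sql("SELECT `table`, `string` FROM logs")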

Modified: spark/site/community.html
URL: http://svn.apache.org/viewvc/spark/site/community.html?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/site/community.html (original)
+++ spark/site/community.html Fri May  8 13:59:51 2015
@@ -188,6 +188,8 @@
   </li>
 </ul>
 
+<p>The StackOverflow tag <a href="http://stackoverflow.com/questions/tagged/apache-spark"><code>apache-spark</code></a> is an unofficial but active forum for Spark users' questions and answers.</p>
+
 <p><a name="events"></a></p>
 <h3>Events and Meetups</h3>
 

Modified: spark/site/downloads.html
URL: http://svn.apache.org/viewvc/spark/site/downloads.html?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/site/downloads.html (original)
+++ spark/site/downloads.html Fri May  8 13:59:51 2015
@@ -182,15 +182,15 @@ $(document).ready(function() {
 
 <ol>
   <li>
-    <p>Chose a Spark release:
+    <p>Choose a Spark release:
   <select id="sparkVersionSelect" onchange="javascript:onVersionSelect();"></select><br /></p>
   </li>
   <li>
-    <p>Chose a package type:
+    <p>Choose a package type:
   <select id="sparkPackageSelect" onchange="javascript:onPackageSelect();"></select><br /></p>
   </li>
   <li>
-    <p>Chose a download type:
+    <p>Choose a download type:
   <select id="sparkDownloadSelect" onchange="javascript:onDownloadSelect()"></select><br /></p>
   </li>
   <li>

Modified: spark/site/examples.html
URL: http://svn.apache.org/viewvc/spark/site/examples.html?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/site/examples.html (original)
+++ spark/site/examples.html Fri May  8 13:59:51 2015
@@ -187,8 +187,8 @@ previous ones, and <em>actions</em>, whi
 <div class="tab-content">
   <div class="tab-pane tab-pane-python active">
     <div class="code code-tab">
-    file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
-    errors = file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br />
+    text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
+    errors = text_file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br />
     <span class="comment"># Count all the errors</span><br />
     errors.<span class="sparkop">count</span>()<br />
     <span class="comment"># Count errors mentioning MySQL</span><br />
@@ -199,8 +199,8 @@ previous ones, and <em>actions</em>, whi
   </div>
   <div class="tab-pane tab-pane-scala">
     <div class="code code-tab">
-    <span class="keyword">val</span> file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
-    <span class="keyword">val</span> errors = file.<span class="sparkop">filter</span>(<span class="closure">line =&gt; line.contains("ERROR")</span>)<br />
+    <span class="keyword">val</span> textFile = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
+    <span class="keyword">val</span> errors = textFile.<span class="sparkop">filter</span>(<span class="closure">line =&gt; line.contains("ERROR")</span>)<br />
     <span class="comment">// Count all the errors</span><br />
     errors.<span class="sparkop">count</span>()<br />
     <span class="comment">// Count errors mentioning MySQL</span><br />
@@ -211,8 +211,8 @@ previous ones, and <em>actions</em>, whi
   </div>
   <div class="tab-pane tab-pane-java">
     <div class="code code-tab">
-    JavaRDD&lt;String&gt; file = spark.textFile(<span class="string">"hdfs://..."</span>);<br />
-    JavaRDD&lt;String&gt; errors = file.<span class="sparkop">filter</span>(<span class="closure">new Function&lt;String, Boolean&gt;() {<br />
+    JavaRDD&lt;String&gt; textFile = spark.textFile(<span class="string">"hdfs://..."</span>);<br />
+    JavaRDD&lt;String&gt; errors = textFile.<span class="sparkop">filter</span>(<span class="closure">new Function&lt;String, Boolean&gt;() {<br />
     &nbsp;&nbsp;public Boolean call(String s) { return s.contains("ERROR"); }<br />
     }</span>);<br />
     <span class="comment">// Count all the errors</span><br />
@@ -272,8 +272,8 @@ previous ones, and <em>actions</em>, whi
 <div class="tab-content">
   <div class="tab-pane tab-pane-python active">
     <div class="code code-tab">
-    file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
-    counts = file.<span class="sparkop">flatMap</span>(<span class="closure">lambda line: line.split(" ")</span>) \<br />
+    text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
+    counts = text_file.<span class="sparkop">flatMap</span>(<span class="closure">lambda line: line.split(" ")</span>) \<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">map</span>(<span class="closure">lambda word: (word, 1)</span>) \<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">reduceByKey</span>(<span class="closure">lambda a, b: a + b</span>)<br />
     counts.<span class="sparkop">saveAsTextFile</span>(<span class="string">"hdfs://..."</span>)
@@ -281,8 +281,8 @@ previous ones, and <em>actions</em>, whi
   </div>
   <div class="tab-pane tab-pane-scala">
     <div class="code code-tab">
-    <span class="keyword">val</span> file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
-    <span class="keyword">val</span> counts = file.<span class="sparkop">flatMap</span>(<span class="closure">line =&gt; line.split(" ")</span>)<br />
+    <span class="keyword">val</span> textFile = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
+    <span class="keyword">val</span> counts = textFile.<span class="sparkop">flatMap</span>(<span class="closure">line =&gt; line.split(" ")</span>)<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">map</span>(<span class="closure">word =&gt; (word, 1)</span>)<br />
     &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">reduceByKey</span>(<span class="closure">_ + _</span>)<br />
     counts.<span class="sparkop">saveAsTextFile</span>(<span class="string">"hdfs://..."</span>)
@@ -290,8 +290,8 @@ previous ones, and <em>actions</em>, whi
   </div>
   <div class="tab-pane tab-pane-java">
     <div class="code code-tab">
-    JavaRDD&lt;String&gt; file = spark.textFile(<span class="string">"hdfs://..."</span>);<br />
-    JavaRDD&lt;String&gt; words = file.<span class="sparkop">flatMap</span>(<span class="closure">new FlatMapFunction&lt;String, String&gt;() {<br />
+    JavaRDD&lt;String&gt; textFile = spark.textFile(<span class="string">"hdfs://..."</span>);<br />
+    JavaRDD&lt;String&gt; words = textFile.<span class="sparkop">flatMap</span>(<span class="closure">new FlatMapFunction&lt;String, String&gt;() {<br />
     &nbsp;&nbsp;public Iterable&lt;String&gt; call(String s) { return Arrays.asList(s.split(" ")); }<br />
     }</span>);<br />
     JavaPairRDD&lt;String, Integer&gt; pairs = words.<span class="sparkop">mapToPair</span>(<span class="closure">new PairFunction&lt;String, String, Integer&gt;() {<br />

Modified: spark/site/faq.html
URL: http://svn.apache.org/viewvc/spark/site/faq.html?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/site/faq.html (original)
+++ spark/site/faq.html Fri May  8 13:59:51 2015
@@ -213,6 +213,7 @@ Spark is a fast and general processing e
 <p class="answer">Starting in version 0.8, Spark is under the <a href="http://www.apache.org/licenses/LICENSE-2.0.html">Apache 2.0 license</a>. Previous versions used the <a href="https://github.com/mesos/spark/blob/branch-0.7/LICENSE">BSD license</a>.</p>
 
 <p class="question">How can I contribute to Spark?</p>
+
 <p class="answer">See the <a href="https://cwiki.apache.org/confluence/display/SPARK/Contributing+to+Spark">Contributing to Spark wiki</a> for more information.</p>
 
 <p class="question">Where can I get more help?</p>

Modified: spark/site/index.html
URL: http://svn.apache.org/viewvc/spark/site/index.html?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/site/index.html (original)
+++ spark/site/index.html Fri May  8 13:59:51 2015
@@ -212,9 +212,9 @@
   <div class="col-md-5 col-sm-5 col-padded-top col-center">
     <div style="text-align: left; display: inline-block;">
       <div class="code">
-        file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
+        text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br />
         &nbsp;<br />
-        file.<span class="sparkop">flatMap</span>(<span class="closure">lambda line: line.split()</span>)<br />
+        text_file.<span class="sparkop">flatMap</span>(<span class="closure">lambda&nbsp;line:&nbsp;line.split()</span>)<br />
         &nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">map</span>(<span class="closure">lambda word: (word, 1)</span>)<br />
         &nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">reduceByKey</span>(<span class="closure">lambda a, b: a+b</span>)
       </div>
@@ -222,9 +222,9 @@
     </div>
     <!--
     <div class="code" style="margin-top: 20px; text-align: left; display: inline-block;">
-      file = spark.textFile(<span class="string">"hdfs://..."</span>)<br/>
+      text_file = spark.textFile(<span class="string">"hdfs://..."</span>)<br/>
       &nbsp;<br/>
-      file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br/>
+      text_file.<span class="sparkop">filter</span>(<span class="closure">lambda line: "ERROR" in line</span>)<br/>
       &nbsp;&nbsp;&nbsp;&nbsp;.<span class="sparkop">count</span>()
     </div>
     -->

Modified: spark/site/releases/spark-release-1-3-0.html
URL: http://svn.apache.org/viewvc/spark/site/releases/spark-release-1-3-0.html?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/site/releases/spark-release-1-3-0.html (original)
+++ spark/site/releases/spark-release-1-3-0.html Fri May  8 13:59:51 2015
@@ -195,7 +195,7 @@
 <h2 id="upgrading-to-spark-13">Upgrading to Spark 1.3</h2>
 <p>Spark 1.3 is binary compatible with Spark 1.X releases, so no code changes are necessary. This excludes API’s marked explicitly as unstable.</p>
 
-<p>As part of stabilizing the Spark SQL API, the <code>SchemaRDD</code> class has been extended renamed to <code>DataFrame</code>. Spark SQL&#8217;s <a href="http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#migration-guide">migration guide</a> describes the upgrade process in detail. Spark SQL also now requires that column identifiers which use reserved words (such as &#8220;string&#8221; or &#8220;table&#8221;) be escaped using backticks.</p>
+<p>As part of stabilizing the Spark SQL API, the <code>SchemaRDD</code> class has been renamed to <code>DataFrame</code>. Spark SQL&#8217;s <a href="http://spark.apache.org/docs/1.3.0/sql-programming-guide.html#migration-guide">migration guide</a> describes the upgrade process in detail. Spark SQL also now requires that column identifiers which use reserved words (such as &#8220;string&#8221; or &#8220;table&#8221;) be escaped using backticks.</p>
 
 <h3 id="known-issues">Known Issues</h3>
 <p>This release has few known issues which will be addressed in Spark 1.3.1:</p>

Modified: spark/sql/index.md
URL: http://svn.apache.org/viewvc/spark/sql/index.md?rev=1678354&r1=1678353&r2=1678354&view=diff
==============================================================================
--- spark/sql/index.md (original)
+++ spark/sql/index.md Fri May  8 13:59:51 2015
@@ -16,7 +16,7 @@ subproject: SQL
   <div class="col-md-7 col-sm-7">
     <h2>Integrated</h2>
     <p class="lead">
-	  Seemlessly mix SQL queries with Spark programs.
+	  Seamlessly mix SQL queries with Spark programs.
     </p>
     <p>
 	  Spark SQL lets you query structured data as a distributed dataset (RDD) in Spark, with integrated APIs in Python, Scala and Java. 
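The surrounding page copy describes mixing SQL queries with ordinary Spark programs; in the 1.3-era Python API that looks roughly like this (a sketch with hypothetical names; assumes a SQLContext `sqlContext` and a registered table `people` with `name` and `age` columns):

    # A SQL query returns a DataFrame whose rows can be used like an RDD
    adults = sqlContext.sql("SELECT name FROM people WHERE age >= 18")
    names = adults.map(lambda row: row.name)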


