Posted to commits@spark.apache.org by pw...@apache.org on 2013/09/10 21:40:42 UTC
[28/50] git commit: Fix some review comments
Fix some review comments
Project: http://git-wip-us.apache.org/repos/asf/incubator-spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/incubator-spark/commit/b4588549
Tree: http://git-wip-us.apache.org/repos/asf/incubator-spark/tree/b4588549
Diff: http://git-wip-us.apache.org/repos/asf/incubator-spark/diff/b4588549
Branch: refs/heads/branch-0.8
Commit: b458854977c437e85fd89056e5d40383c8fa962e
Parents: 170b386
Author: Matei Zaharia <ma...@eecs.berkeley.edu>
Authored: Sun Sep 8 21:25:49 2013 -0700
Committer: Matei Zaharia <ma...@eecs.berkeley.edu>
Committed: Sun Sep 8 21:25:49 2013 -0700
----------------------------------------------------------------------
docs/cluster-overview.md | 2 +-
docs/quick-start.md | 2 +-
2 files changed, 2 insertions(+), 2 deletions(-)
----------------------------------------------------------------------
http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/b4588549/docs/cluster-overview.md
----------------------------------------------------------------------
diff --git a/docs/cluster-overview.md b/docs/cluster-overview.md
index cf6b48c..7025c23 100644
--- a/docs/cluster-overview.md
+++ b/docs/cluster-overview.md
@@ -80,7 +80,7 @@ The following table summarizes terms you'll see used to refer to cluster concept
<tbody>
<tr>
<td>Application</td>
- <td>Any user program invoking Spark</td>
+ <td>User program built on Spark. Consists of a <em>driver program</em> and <em>executors</em> on the cluster.</td>
</tr>
<tr>
<td>Driver program</td>
http://git-wip-us.apache.org/repos/asf/incubator-spark/blob/b4588549/docs/quick-start.md
----------------------------------------------------------------------
diff --git a/docs/quick-start.md b/docs/quick-start.md
index 1b069ce..8f782db 100644
--- a/docs/quick-start.md
+++ b/docs/quick-start.md
@@ -36,7 +36,7 @@ scala> textFile.count() // Number of items in this RDD
res0: Long = 74
scala> textFile.first() // First item in this RDD
-res1: String = Welcome to the Spark documentation!
+res1: String = # Apache Spark
{% endhighlight %}
Now let's use a transformation. We will use the [`filter`](scala-programming-guide.html#transformations) transformation to return a new RDD with a subset of the items in the file.
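As a rough illustration of what `filter` does, here is a minimal sketch using a plain Scala collection instead of an RDD (the sample lines are hypothetical; in the quick-start guide itself the RDD comes from `sc.textFile("README.md")` and the full call is `textFile.filter(line => line.contains("Spark"))`):

```scala
// Hypothetical input lines standing in for the contents of README.md.
val lines = Seq("# Apache Spark", "Spark is a fast engine", "## Building")

// filter returns a NEW collection holding only the items for which the
// predicate is true, mirroring how RDD.filter returns a new RDD and
// leaves the original untouched.
val sparkLines = lines.filter(line => line.contains("Spark"))

println(sparkLines)  // List(# Apache Spark, Spark is a fast engine)
```

The same predicate-based semantics apply on a real RDD; the difference is that on an RDD the transformation is lazy and is only evaluated when an action such as `count()` or `first()` is invoked.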