Posted to commits@spark.apache.org by jo...@apache.org on 2014/12/01 02:18:59 UTC

spark git commit: [SPARK-4656][Doc] Typo in Programming Guide markdown

Repository: spark
Updated Branches:
  refs/heads/master aea7a9976 -> a217ec5fd


[SPARK-4656][Doc] Typo in Programming Guide markdown

Fixes a grammatical error (a duplicated word) in the Programming Guide document.

Author: lewuathe <le...@me.com>

Closes #3412 from Lewuathe/typo-programming-guide and squashes the following commits:

a3e2f00 [lewuathe] Typo in Programming Guide markdown


Project: http://git-wip-us.apache.org/repos/asf/spark/repo
Commit: http://git-wip-us.apache.org/repos/asf/spark/commit/a217ec5f
Tree: http://git-wip-us.apache.org/repos/asf/spark/tree/a217ec5f
Diff: http://git-wip-us.apache.org/repos/asf/spark/diff/a217ec5f

Branch: refs/heads/master
Commit: a217ec5fd5cd7addc69e538d6ec6dd64956cc8ed
Parents: aea7a99
Author: lewuathe <le...@me.com>
Authored: Sun Nov 30 17:18:50 2014 -0800
Committer: Josh Rosen <jo...@databricks.com>
Committed: Sun Nov 30 17:18:50 2014 -0800

----------------------------------------------------------------------
 docs/programming-guide.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)
----------------------------------------------------------------------


http://git-wip-us.apache.org/repos/asf/spark/blob/a217ec5f/docs/programming-guide.md
----------------------------------------------------------------------
diff --git a/docs/programming-guide.md b/docs/programming-guide.md
index c60de6e..7a16ee8 100644
--- a/docs/programming-guide.md
+++ b/docs/programming-guide.md
@@ -1177,7 +1177,7 @@ Accumulators are variables that are only "added" to through an associative opera
 therefore be efficiently supported in parallel. They can be used to implement counters (as in
 MapReduce) or sums. Spark natively supports accumulators of numeric types, and programmers
 can add support for new types. If accumulators are created with a name, they will be
-displayed in Spark's UI. This can can be useful for understanding the progress of 
+displayed in Spark's UI. This can be useful for understanding the progress of 
 running stages (NOTE: this is not yet supported in Python).
 
 An accumulator is created from an initial value `v` by calling `SparkContext.accumulator(v)`. Tasks
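
For context, the paragraph touched by this patch describes Spark's accumulator API. The following is a minimal, self-contained sketch (not part of this commit) of that usage, written against Spark's 1.x Scala API; the application name, the sample input, and the accumulator name "errorCount" are illustrative only.

    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.SparkContext._  // implicit AccumulatorParam instances in Spark 1.x

    object AccumulatorExample {
      def main(args: Array[String]): Unit = {
        val conf = new SparkConf().setAppName("AccumulatorExample").setMaster("local[*]")
        val sc = new SparkContext(conf)

        // Creating the accumulator with a name makes it visible in Spark's UI,
        // which is what the corrected sentence refers to.
        val errorCount = sc.accumulator(0, "errorCount")

        val lines = sc.parallelize(Seq("ok", "ERROR: disk full", "ok", "ERROR: timeout"))
        lines.foreach { line =>
          if (line.startsWith("ERROR")) errorCount += 1  // tasks can only "add" to it
        }

        // Only the driver program reads the merged value.
        println(s"error lines: ${errorCount.value}")
        sc.stop()
      }
    }

As the guide notes, tasks running on the cluster may only add to the accumulator; its merged value is read back on the driver via its value method.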

