Posted to commits@crunch.apache.org by jw...@apache.org on 2014/06/03 17:43:27 UTC

svn commit: r1599619 - /crunch/site/trunk/content/user-guide.mdtext

Author: jwills
Date: Tue Jun  3 15:43:27 2014
New Revision: 1599619

URL: http://svn.apache.org/r1599619
Log:
CRUNCH-365: Typo fixes for the Crunch user guide. Contributed by Tom Wheeler.

Modified:
    crunch/site/trunk/content/user-guide.mdtext

Modified: crunch/site/trunk/content/user-guide.mdtext
URL: http://svn.apache.org/viewvc/crunch/site/trunk/content/user-guide.mdtext?rev=1599619&r1=1599618&r2=1599619&view=diff
==============================================================================
--- crunch/site/trunk/content/user-guide.mdtext (original)
+++ crunch/site/trunk/content/user-guide.mdtext Tue Jun  3 15:43:27 2014
@@ -344,8 +344,8 @@ framework won't kill it,
 * `setStatus(String status)` and `getStatus` for setting and retrieving task status information, and
 * `getTaskAttemptID()` for accessing the current `TaskAttemptID` information.
 
-DoFns also have a number of helper methods for working with [Hadoop Counters](http://codingwiththomas.blogspot.com/2011/04/controlling-hadoop-job-recursion.html), all named `increment`. Counters are an incredibly useful way of keeping track of the state of long running data pipelines and detecting any exceptional conditions that
-occur during processing, and they are supported in both the MapReduce-based and in-memory Crunch pipeline contexts. You can retrive the value of the Counters
+DoFns also have a number of helper methods for working with [Hadoop Counters](http://codingwiththomas.blogspot.com/2011/04/controlling-hadoop-job-recursion.html), all named `increment`. Counters are an incredibly useful way of keeping track of the state of long-running data pipelines and detecting any exceptional conditions that
+occur during processing, and they are supported in both the MapReduce-based and in-memory Crunch pipeline contexts. You can retrieve the value of the Counters
 in your client code at the end of a MapReduce pipeline by getting them from the [StageResult](apidocs/0.9.0/org/apache/crunch/PipelineResult.StageResult.html)
 objects returned by Crunch at the end of a run.
 
@@ -355,7 +355,7 @@ objects returned by Crunch at the end of
 * `increment(Enum<?> counterName, long value)` increments the value of the given counter by the given value.
 
 (Note that there was a change in the Counters API from Hadoop 1.0 to Hadoop 2.0, and thus we do not recommend that you work with the
-Counter classes directly in yoru Crunch pipelines (the two `getCounter` methods that were defined in DoFn are both deprecated) so that you will not be
+Counter classes directly in your Crunch pipelines (the two `getCounter` methods that were defined in DoFn are both deprecated) so that you will not be
 required to recompile your job jars when you move from a Hadoop 1.0 cluster to a Hadoop 2.0 cluster.)
 
 <a name="doplan"></a>
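
As a rough sketch of how the `increment` helpers and the `StageResult` objects described above fit together (this is illustrative, not text from the guide): the class name `CleanupFn`, the counter group and name strings, and the `getStageResults()`, `getCounterValue(String, String)`, and `getStageName()` calls are assumptions based on the 0.9.0 API docs linked above.

<pre>
  // Classes come from org.apache.crunch (DoFn, Emitter, PipelineResult).
  // Illustrative DoFn; the class name and counter group/name are assumptions.
  public static class CleanupFn extends DoFn&lt;String, String&gt; {
    @Override
    public void process(String input, Emitter&lt;String&gt; emitter) {
      if (input == null || input.isEmpty()) {
        // Track the exceptional condition with a Counter instead of failing the task.
        increment("Cleanup", "MALFORMED_RECORDS");
        return;
      }
      emitter.emit(input.trim());
    }
  }

  // Client side, after the pipeline has run ("pipeline" is your Pipeline instance):
  PipelineResult result = pipeline.done();
  for (PipelineResult.StageResult stage : result.getStageResults()) {
    long malformed = stage.getCounterValue("Cleanup", "MALFORMED_RECORDS");
    System.out.println(stage.getStageName() + ": " + malformed + " malformed records");
  }
</pre>

Because the counter is addressed by plain group and name strings rather than Counter objects, a DoFn written this way does not need to be recompiled when moving from a Hadoop 1.0 cluster to a Hadoop 2.0 cluster, which is the point of the deprecation note above.
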
@@ -461,7 +461,7 @@ call on a PCollection will be a PTable i
 can be used to kick off a shuffle on the cluster.
 
 <pre>
-  public static class InidicatorFn&lt;T&gt; extends MapFn&lt;T, Pair&lt;T, Boolean&gt;&gt; {
+  public static class IndicatorFn&lt;T&gt; extends MapFn&lt;T, Pair&lt;T, Boolean&gt;&gt; {
     public Pair&lt;T, Boolean&gt; map(T input) { ... }
   }
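 
     // Rough usage sketch (not part of the guide text above): assumes an existing
     // PCollection&lt;String&gt; named "words" and the Avro-based PTypes from the
     // Avros factory class; the variable names here are illustrative.
     PTable&lt;String, Boolean&gt; indicators = words.parallelDo(
         new IndicatorFn&lt;String&gt;(),
         Avros.tableOf(Avros.strings(), Avros.booleans()));
 
     // Passing a PTableType to parallelDo is what makes the result a PTable rather
     // than a PCollection, so the groupByKey call below kicks off the shuffle.
     PGroupedTable&lt;String, Boolean&gt; grouped = indicators.groupByKey();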
 
@@ -1097,7 +1097,7 @@ Crunch APIs have a number of utilities f
 more advanced patterns like secondary sorts.
 
 <a name="stdsort"></a>
-#### Standard and Reveserse Sorting
+#### Standard and Reverse Sorting
 
 The [Sort](apidocs/0.9.0/org/apache/crunch/lib/Sort.html) API methods contain utility functions
 for sorting the contents of PCollections and PTables whose contents implement the `Comparable`