Posted to common-commits@hadoop.apache.org by Apache Wiki <wi...@apache.org> on 2006/05/30 09:34:37 UTC

[Lucene-hadoop Wiki] Update of "HadoopMapReduce" by TeppoKurki

Dear Wiki user,

You have subscribed to a wiki page or wiki category on "Lucene-hadoop Wiki" for change notification.

The following page has been changed by TeppoKurki:
http://wiki.apache.org/lucene-hadoop/HadoopMapReduce

------------------------------------------------------------------------------
  file (''append phase''). The file is then merge-sorted so that the key-value pairs for
  a given key are contiguous (''sort phase''). This makes the actual reduce operation simple: the file is
  read sequentially and the values are passed to the reduce method
- with an iterator reading the input input file until the next key
+ with an iterator reading the input file until the next key
  is encountered. See [http://svn.apache.org/viewcvs.cgi/lucene/hadoop/trunk/src/java/org/apache/hadoop/mapred/ReduceTask.java?view=markup ReduceTask] for details (a simplified sketch of this grouping pass appears after the diff).
  
  In the end the output will consist of one output file per Reduce
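
To make the grouping concrete, here is a minimal Java sketch of the pass
described above: because the merge-sorted file keeps all values for a given
key contiguous, a single sequential read can hand each key's values to the
reduce method. This is not Hadoop's actual ReduceTask code; KeyValue,
Reducer, and runReduce are simplified stand-ins for illustration, and the
real code streams values lazily from the file rather than buffering them.

    import java.util.ArrayList;
    import java.util.Iterator;
    import java.util.List;

    class KeyValue {
      final String key, value;
      KeyValue(String key, String value) { this.key = key; this.value = value; }
    }

    interface Reducer {
      void reduce(String key, Iterator<String> values);
    }

    class GroupingPass {
      // One sequential pass over merge-sorted records: collect values while
      // the key stays the same, and call reduce whenever the key changes.
      static void runReduce(Iterator<KeyValue> sorted, Reducer reducer) {
        String currentKey = null;
        List<String> values = new ArrayList<>();
        while (sorted.hasNext()) {
          KeyValue kv = sorted.next();
          if (currentKey != null && !kv.key.equals(currentKey)) {
            reducer.reduce(currentKey, values.iterator());
            values = new ArrayList<>();
          }
          currentKey = kv.key;
          values.add(kv.value);
        }
        // Flush the final key group once the input is exhausted.
        if (currentKey != null) {
          reducer.reduce(currentKey, values.iterator());
        }
      }
    }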