Posted to mapreduce-user@hadoop.apache.org by John Lilley <jo...@redpoint.net> on 2013/05/22 16:58:01 UTC

RE: Shuffle phase replication factor

I was reading the elephant book, trying to understand which process actually serves up the HTTP transfer on the mapper side. Is it each map task? Or is there some persistent task on each worker that serves up the mapper output for all map tasks?
Thanks,
John
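In Hadoop 1.x (the version current at the time of this thread), it is not each map task: the TaskTracker runs a single embedded Jetty HTTP server per worker, and that one persistent process serves the output of every completed map task on its node. Below is a minimal sketch for inspecting the settings that govern that server, assuming Hadoop 1.x property names and stock defaults; the class name is invented for illustration.

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical helper: prints the settings behind the TaskTracker's
    // shuffle-serving HTTP endpoint (Hadoop 1.x property names assumed).
    public class ShuffleHttpSettings {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Address/port the TaskTracker's embedded Jetty server binds to;
            // this single process serves map output over HTTP for all map
            // tasks that ran on the node.
            System.out.println(conf.get("mapred.task.tracker.http.address",
                                        "0.0.0.0:50060"));
            // Worker threads Jetty uses to serve shuffle requests; often
            // raised on large clusters with many reducers.
            System.out.println(conf.getInt("tasktracker.http.threads", 40));
        }
    }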

From: Kai Voigt [mailto:k@123.org]
Sent: Tuesday, May 21, 2013 12:59 PM
To: user@hadoop.apache.org
Subject: Re: Shuffle phase replication factor

The map output doesn't get written to HDFS. The map task writes its output to its local disk; the reduce tasks then pull the data over HTTP for further processing.
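To make the distinction concrete: the intermediate files live under mapred.local.dir on each worker's local disks, while replication only ever applies to real HDFS files, such as the job's final output. A small sketch, assuming Hadoop 1.x property names (the class name is invented for illustration):

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical helper: contrasts where intermediate map output lives
    // with the settings that apply to real HDFS files.
    public class IntermediateVsHdfs {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // Spill files and merged map output are written under these
            // local directories; they never touch HDFS, so no replication
            // factor applies to them.
            System.out.println("map output dirs: " + conf.get("mapred.local.dir"));
            // Replication only matters for HDFS files, e.g. the final
            // output the reducers write.
            System.out.println("HDFS replication: " + conf.getInt("dfs.replication", 3));
            // Optional: compress map output to shrink what the reducers
            // pull over HTTP during the shuffle.
            conf.setBoolean("mapred.compress.map.output", true);
        }
    }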

On 21.05.2013 at 19:57, John Lilley <jo...@redpoint.net> wrote:


When MapReduce enters "shuffle" to partition the tuples, I am assuming that it writes intermediate data to HDFS.  What replication factor is used for those temporary files?
john


--
Kai Voigt
k@123.org
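One footnote to the original question: the only temporary files MapReduce itself stages in HDFS are the job submission artifacts (the job JAR, split metadata, and configuration), and those have their own replication setting, deliberately high so that many task nodes can fetch them. A sketch, again assuming Hadoop 1.x property names:

    import org.apache.hadoop.conf.Configuration;

    // Hypothetical helper: shows the replication used for job submission
    // files, the one kind of temporary file MapReduce does put in HDFS.
    public class SubmitReplication {
        public static void main(String[] args) {
            Configuration conf = new Configuration();
            // job.jar, job.xml and the splits are staged in HDFS at this
            // replication (default 10) so tasks can read them without
            // hammering a handful of datanodes.
            System.out.println(conf.getInt("mapred.submit.replication", 10));
        }
    }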