Posted to common-user@hadoop.apache.org by sudha sadhasivam <su...@yahoo.com> on 2009/10/22 05:38:53 UTC

Re: Do I need some distributed computing algorithms if I want to dig deep into the source code of Hadoop?

To understand the implementation, you need to reverse engineer it and look into the classes and their functionality.
To understand the design, the MapReduce and HDFS design papers are sufficient.
G Sudha Sadasivam

--- On Thu, 10/22/09, Amandeep Khurana <am...@gmail.com> wrote:


From: Amandeep Khurana <am...@gmail.com>
Subject: Re: Do I need some distributed computing algorithms if I want to dig deep into the source code of Hadoop?
To: common-user@hadoop.apache.org
Date: Thursday, October 22, 2009, 1:36 AM


You do need to have read the GFS and MapReduce papers. It'll make
understanding the design easier. But apart from that, nothing
really...
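For readers new to the papers mentioned above: the core idea of the MapReduce model can be sketched in a few lines. The following is a plain Python illustration of the map, shuffle, and reduce phases using the classic word-count example; it is not Hadoop's actual Java API, just a toy model of the dataflow the MapReduce paper describes.

```python
from collections import defaultdict

def map_phase(doc):
    # Emit (word, 1) pairs, analogous to a Hadoop Mapper.
    return [(word, 1) for word in doc.split()]

def shuffle(pairs):
    # Group values by key, analogous to the framework's shuffle/sort step.
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    # Sum the counts for one key, analogous to a Hadoop Reducer.
    return key, sum(values)

def mapreduce(docs):
    pairs = [kv for doc in docs for kv in map_phase(doc)]
    return dict(reduce_phase(k, vs) for k, vs in shuffle(pairs).items())

print(mapreduce(["to be or not to be"]))  # {'to': 2, 'be': 2, 'or': 1, 'not': 1}
```

In real Hadoop, the shuffle step is performed by the framework between the map and reduce tasks, which is much of what the source code you'd be reading actually implements.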

On 10/21/09, Jeff Zhang <zj...@gmail.com> wrote:
> Hi all,
>
> These days, I have begun looking into the Hadoop source code. I want to know whether
> I need some distributed computing algorithms if I want to dig deep into the source
> code of Hadoop.
>
>
> Thank you.
>
>
> Jeff zhang
>


-- 


Amandeep Khurana
Computer Science Graduate Student
University of California, Santa Cruz