Posted to general@hadoop.apache.org by Ravi Phulari <rp...@yahoo-inc.com> on 2010/03/06 08:50:11 UTC

Re: Fault Tolerance in core Hadoop and HDFS

The HDFS Architecture document discusses fault tolerance in more detail: http://hadoop.apache.org/common/docs/current/hdfs_design.html
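For context, the main fault-tolerance mechanism HDFS itself provides is block
replication: each block is stored on several DataNodes, and the NameNode
re-replicates blocks when a node fails. As a minimal sketch (assuming a
reachable cluster and the standard FileSystem API; the file path is
hypothetical), you can adjust a file's replication factor programmatically:

    // Minimal sketch: asking HDFS to keep more copies of a file's blocks.
    // Assumes core-site.xml/hdfs-site.xml are on the classpath.
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class SetReplication {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            Path file = new Path("/user/example/data.txt"); // hypothetical path
            // Schedules re-replication of every block of this file to 3 copies.
            boolean scheduled = fs.setReplication(file, (short) 3);
            System.out.println("Replication change scheduled: " + scheduled);
            fs.close();
        }
    }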


Removing core-dev@ & common-dev@ from the list.
Please post general questions to general@ or common-user@.

-
Ravi


On 3/5/10 11:13 AM, "iprafulla" <ip...@yahoo.com> wrote:



Hi Everyone,

I am researching the fault-tolerance aspects of HDFS and the core Hadoop system.

I went through the slides on the Hadoop homepage, which were a good starting
point, and googled some papers that were helpful.

I am familiar with the Hadoop system and am now looking for the detailed
architecture and implementation of the system so that I can dig into the
topic. Any suggestions are most welcome (anywhere except in the code itself,
of course :)).

Thank you.

prafulla





Re: Fault Tolerance in core Hadoop and HDFS

Posted by Zooko O'Whielacronx <zo...@gmail.com>.
You might also be interested in hadoop-lafs
(http://code.google.com/p/hadoop-lafs ), which lets Hadoop use
Tahoe-LAFS (http://tahoe-lafs.org ) instead of HDFS. Tahoe-LAFS is a
fault-tolerant distributed data store that uses erasure coding (cf.
https://issues.apache.org/jira/browse/HDFS-503 ) and SHA-256-based
integrity checks.
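
As a rough sketch of the integrity-check side (plain Java, not actual
Tahoe-LAFS code): record a SHA-256 digest when a block is written, then
recompute and compare it on read to detect silent corruption:

    // Illustrative only: per-block SHA-256 verification in the spirit of
    // what Tahoe-LAFS does; not the Tahoe-LAFS implementation itself.
    import java.io.IOException;
    import java.nio.charset.StandardCharsets;
    import java.security.MessageDigest;

    public class IntegrityCheck {
        static byte[] sha256(byte[] data) throws Exception {
            return MessageDigest.getInstance("SHA-256").digest(data);
        }

        public static void main(String[] args) throws Exception {
            byte[] block = "some stored block".getBytes(StandardCharsets.UTF_8);
            byte[] expected = sha256(block); // recorded at write time

            // Later, on read: recompute the digest and compare.
            if (!MessageDigest.isEqual(expected, sha256(block))) {
                throw new IOException("block failed SHA-256 integrity check");
            }
            System.out.println("block verified OK");
        }
    }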

Regards,

Zooko