Posted to hdfs-user@hadoop.apache.org by Brahma Reddy Battula <br...@huawei.com> on 2012/04/30 09:07:47 UTC

checksum error

Hi


I started a Hadoop cluster with one NameNode and one DataNode and wrote a single file with replication factor one.

I then edited the written file's block directly on the DataNode's disk, where the block physically lives, in order to provoke a checksum error.
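
For anyone wanting to reproduce this, a minimal Java sketch of the setup might look like the following. The HDFS path and the on-disk block path are made-up placeholders; the real block file sits under dfs.data.dir on the DataNode and its name depends on the block ID:

    import java.io.RandomAccessFile;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class WriteAndCorrupt {
        public static void main(String[] args) throws Exception {
            FileSystem fs = FileSystem.get(new Configuration());

            // Write a small file with replication factor 1.
            Path file = new Path("/tmp/checksum-test");
            FSDataOutputStream out = fs.create(file, (short) 1);
            out.writeBytes("hello hdfs, hello checksums");
            out.close();

            // Simulate the manual edit: overwrite a few bytes of the block
            // file directly on the DataNode's local disk. The path below is
            // a hypothetical example.
            RandomAccessFile blk = new RandomAccessFile(
                "/data/dfs/data/current/blk_1234567890", "rw");
            blk.writeBytes("XXXX"); // bytes no longer match the stored checksum
            blk.close();
        }
    }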

When I then tried to read the file, the read failed with "could not obtain block" because the block was corrupt, and the DataNode logs showed a checksum error.
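
Continuing the sketch above (same fs and file, extra imports omitted), this is roughly what the client sees at this point; BlockMissingException is the client-side form of "could not obtain block", while the checksum mismatch itself is logged on the DataNode:

    byte[] buf = new byte[1024];
    FSDataInputStream in = fs.open(file);
    try {
        in.read(buf); // the DataNode verifies checksums as it streams the block
    } catch (org.apache.hadoop.hdfs.BlockMissingException e) {
        // All replicas (here, the single one) failed verification.
        System.err.println("Could not obtain block: " + e.getMessage());
    } finally {
        in.close();
    }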

After that I reverted everything I had edited in the block.



When I tried the read again, I could read the file with the FsShell commands even though fsck still reported the block as corrupt. However, the readFully() API threw an EOFException.
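
One plausible explanation, not confirmed in this thread, is a length mismatch: if the edit changed the block file's size and the revert did not restore it exactly (an editor adding or stripping a trailing newline, say), the per-chunk checksums can pass again while the block is shorter than the length the NameNode recorded. A streaming read such as hadoop fs -cat simply stops at end of stream, whereas readFully() insists on the full requested length and throws EOFException. A sketch of the difference, again reusing fs and file from above:

    FileStatus st = fs.getFileStatus(file);
    byte[] whole = new byte[(int) st.getLen()]; // length as the NameNode records it

    FSDataInputStream in = fs.open(file);
    try {
        // Positioned read of the entire recorded length; throws EOFException
        // if the stream ends before the buffer is filled.
        in.readFully(0, whole);
    } catch (java.io.EOFException e) {
        System.err.println("readFully hit EOF: " + e.getMessage());
    }

    // Streaming read, roughly what fs -cat does: copies until end of stream.
    in.seek(0);
    org.apache.hadoop.io.IOUtils.copyBytes(in, System.out, 4096, true);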

After the revert, the DataNode logs no longer showed any checksum errors.


Could someone explain the expected behavior once the block is reverted to its original contents?




Thanks And Regards

Brahma Reddy

RE: checksum error

Posted by Brahma Reddy Battula <br...@huawei.com>.
Thanks for your response.

Do you mean I should set the replication factor to 0 while reading the file, or while writing it?

In any case, a write will fail with a replication factor of 0, since the minimum replication is one by default.
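
For reference, the floor referred to here is the dfs.replication.min property (renamed dfs.namenode.replication.min in later releases), which defaults to 1. A sketch of reading it:

    Configuration conf = new Configuration();
    // Minimum number of replicas a write must reach before it can succeed.
    int minReplicas = conf.getInt("dfs.replication.min", 1);
    System.out.println("minimum replication = " + minReplicas);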

As for reads, I still get the EOFException even with the replication factor at 0.

My doubt is that the block really is corrupt (I checked with fsck, and the NameNode UI also shows the block as corrupt).
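
fsck is normally run from the shell (hadoop fsck /path -files -blocks); a rough programmatic equivalent, assuming the DFSck tool class, could be:

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.hdfs.tools.DFSck;
    import org.apache.hadoop.util.ToolRunner;

    public class CheckFile {
        public static void main(String[] args) throws Exception {
            // Runs the same check as `hadoop fsck /tmp/checksum-test`.
            int rc = ToolRunner.run(new DFSck(new Configuration()),
                                    new String[] { "/tmp/checksum-test" });
            System.exit(rc);
        }
    }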

After reverting whatever changes I made, reads succeed through the FsShell commands (cat, text and get); only readFully() throws the EOFException.


Please correct me if I am wrong.

Thanks And Regards

Brahma Reddy



Re: checksum error

Posted by Srikanth <sx...@rit.edu>.
Hi, 

When you set the replication factor to 1, it tries to find another DataNode to store a replica of what is stored on the only DataNode you have.

Try using replication factor=0. 
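
If one did want to lower the replication of an existing file, the call would be FileSystem.setReplication; a sketch (note that, as the reply above points out, the NameNode rejects values below dfs.replication.min, so 0 is expected to fail):

    FileSystem fs = FileSystem.get(new Configuration());
    // Expected to fail (an IOException or a false return, depending on the
    // version), since 0 is below dfs.replication.min (default 1).
    boolean accepted = fs.setReplication(new Path("/tmp/checksum-test"), (short) 0);
    System.out.println("setReplication accepted: " + accepted);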


Srikanth Kommineni 
