Posted to common-dev@hadoop.apache.org by Stefan Groschupf <sg...@media-style.com> on 2006/03/27 02:48:17 UTC

Re: "lost" NDFS blocks following network reorg

Hi hadoop developers,

I moved this discussion to the hadoop developer list since it is
probably a more appropriate place for this problem than the nutch users
mailing list.

I spent some time reading the code and found some interesting things.

The local name of the data node is  machineName + ":" + tmpPort.  So
it can change if the port is blocked or the machine name changes.
Maybe we should create the datanode name only once and write it to
the data folder so it can be read back later on.(?)
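
A rough sketch of how that name is effectively derived today (the
variable names and the port value are illustrative, not the exact
fields in DataNode):

   String machineName = java.net.InetAddress.getLocalHost().getHostName();
   int tmpPort = 7000;  // illustrative: whatever port the datanode ends up binding
   String localName = machineName + ":" + tmpPort;
   // e.g. "node1:7000" -- a renamed host or a different port yields a
   // different localName, so the name node sees a "new" datanode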

This local name is used to send block reports to the name node.
FSNamesystem#processReport(Block newReport[], UTF8 dataNodeLocalName)
processes such a report.
In the first line of this method the DatanodeInfo is looked up by the
datanode's localName. The datanode is already in this map since a
heartbeat is sent before a block report.
So:
   DatanodeInfo node = (DatanodeInfo) datanodeMap.get(name);  // no problem, but just an 'empty' container
   ...
   Block oldReport[] = node.getBlocks();  // will return null since no blocks are yet associated with this node

Since oldReport is null, all code is skipped until line 901. But that
code only adds the blocks to the node container.
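
The part after line 901 that "adds the blocks to the node container"
is roughly this (illustrative only; the helper name is my assumption,
not the literal code):

   // with oldReport empty there is nothing to diff, so every block from
   // the fresh report is simply attached to this (apparently new) node
   for (int i = 0; i < newReport.length; i++) {
       node.addBlock(newReport[i]);
   }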

At line 924 a section of code begins that collects all obsolete
blocks. First of all, I am wondering why we iterate through all blocks
here; this could be expensive, and it would be enough to iterate over
only the blocks reported by this datanode, wouldn't it?
Whether a block is still valid is tested by FSDirectory#isValidBlock,
which checks whether the block is in activeBlocks.
The problem I see now is that the only method that adds blocks to
activeBlocks is unprotectedAddFile(UTF8 name, Block blocks[]). But
here the name node's local name, which may have changed, is also
involved.
This method is also used to load the state of a stopped or crashed
name node.
So in case you stop the dfs and change host names, a set of blocks
will be marked as obsolete and deleted.
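
The validity check itself boils down to a set membership test; a
simplified sketch (the method name follows FSDirectory, but the body
is illustrative):

   // FSDirectory, simplified: activeBlocks is only filled by
   // unprotectedAddFile(), i.e. when the saved namespace is replayed
   public boolean isValidBlock(Block b) {
       synchronized (activeBlocks) {
           return activeBlocks.contains(b);
       }
   }
   // any block that fails this test during the report sweep is collected
   // as obsolete and the datanode is told to delete it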

Writing a test case for this behavior is very difficult since it
involves a change of the machine name.

Does my observation make sense, or have I overlooked a detail and the
problem Ken describes is caused by something else?
In any case I suggest making the data node name persistent, so that if
the port or host name changes the name node will not treat the same
datanode as a new one.
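
A minimal sketch of what I mean by persisting the name (the file name
and the helper class are hypothetical; nothing like this exists in the
code yet):

   import java.io.*;

   // Hypothetical helper: build the datanode name once, keep it in the
   // data folder, and reuse it on every restart regardless of the
   // current host name or port.
   public class PersistentDatanodeName {
     public static String getOrCreate(File dataDir, String generatedName)
         throws IOException {
       File f = new File(dataDir, "datanode.name");   // hypothetical file
       if (f.exists()) {
         BufferedReader in = new BufferedReader(new FileReader(f));
         try { return in.readLine(); } finally { in.close(); }
       }
       PrintWriter out = new PrintWriter(new FileWriter(f));
       try { out.println(generatedName); } finally { out.close(); }
       return generatedName;
     }
   }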

Stefan





Am 26.03.2006 um 23:11 schrieb Doug Cutting:

> Ken Krugler wrote:
>> Anyway, curious if anybody has insights here. We've done a fair  
>> amount of poking around, to no avail. I don't think there's any  
>> way to get the blocks back, as they definitely seem to be gone,  
>> and file recovery on Linux seems pretty iffy. I'm mostly  
>> interested in figuring out if this is a known issue ("Of course  
>> you can't change the server names and expect it to work"), or  
>> whether it's a symptom of lurking NDFS bugs.
>
> It's hard to tell, after the fact, whether stuff like this is pilot  
> error or a bug.  Others have reported similar things, so it's  
> either a bug or it's too easy to make pilot errors.  So something  
> needs to change.  But what?
>
> We need to start testing stuff like this systematically.  A  
> reproducible test case would make this much easier to diagnose.
>
> I'm sorry I can't be more helpful.  I'm sorry you lost data.
>
> Doug
>

---------------------------------------------
blog: http://www.find23.org
company: http://www.media-style.com



Datanode / namenode UUIDs (Re: "lost" NDFS blocks following network reorg)

Posted by Andrzej Bialecki <ab...@getopt.org>.
Stefan Groschupf wrote:
> Hi hadoop developers,

Hi,

My comments below. Generally speaking, I think you are right - datanodes 
should be initialized with a UUID, created once and persisted across IP 
and hostname changes, and this UUID should be used to identify 
datanodes/namenodes. I think the "format" command should also be 
implemented for datanodes, to create their UUID when starting for the 
first time - and later on this UUID should be retrieved from a local 
file somewhere in the data dir.
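
Something along these lines is what I have in mind - a sketch only,
none of these names exist in the code, the file location is an
assumption, and it relies on Java 5's java.util.UUID:

   import java.io.*;
   import java.util.UUID;

   // Hypothetical "format" step for a datanode: generate a UUID the
   // first time and persist it in the data dir; later startups read it
   // back instead of deriving an identity from hostname + port.
   public class DatanodeStorage {
     public static String formatOrLoad(File dataDir) throws IOException {
       File idFile = new File(dataDir, "storage.id");
       if (idFile.exists()) {
         BufferedReader in = new BufferedReader(new FileReader(idFile));
         try { return in.readLine(); } finally { in.close(); }
       }
       String uuid = UUID.randomUUID().toString();
       Writer out = new FileWriter(idFile);
       try { out.write(uuid + "\n"); } finally { out.close(); }
       return uuid;
     }
   }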

> The local name of the data node is  machineName + ":" + tmpPort.  So 
> it can change if the port is blocked or the machine name changes.
> Maybe we should create the datanode name only once and write it to 
> the data folder so it can be read back later on.(?)
>
> This local name is used to send block reports to the name node. 
> FSNamesystem#processReport(Block newReport[], UTF8 dataNodeLocalName) 
> processes such a report.
> In the first line of this method the DatanodeInfo is looked up by the 
> datanode's localName. The datanode is already in this map since a 
> heartbeat is sent before a block report.
> So:
>   DatanodeInfo node = (DatanodeInfo) datanodeMap.get(name);  // no problem, but just an 'empty' container
> ...
>   Block oldReport[] = node.getBlocks();  // will return null since no blocks are yet associated with this node
>
> Since oldReport is null, all code is skipped until line 901. But that 
> code only adds the blocks to the node container.

Umm.. I don't follow. The lines 901-905 will add these blocks from the 
newReport, because newPos == 0.

>
> At line 924 a section of code begins that collects all obsolete 
> blocks. First of all, I am wondering why we iterate through all 
> blocks here; this could be expensive, and it would be enough to 
> iterate over only the blocks reported by this datanode, wouldn't it?
> Whether a block is still valid is tested by FSDirectory#isValidBlock, 
> which checks whether the block is in activeBlocks.
> The problem I see now is that the only method that adds blocks to 
> activeBlocks is unprotectedAddFile(UTF8 name, Block blocks[]). But 
> here the name node's local name, which may have changed, is also 
> involved.
> This method is also used to load the state of a stopped or crashed 
> name node.
> So in case you stop the dfs and change host names, a set of blocks 
> will be marked as obsolete and deleted.

I'm not 100% sure if this part is correct, but it makes me nervous, too, 
to involve such ephemeral things as IP/hostname in handling data that 
persists across IP/hostname changes...

-- 
Best regards,
Andrzej Bialecki     <><
 ___. ___ ___ ___ _ _   __________________________________
[__ || __|__/|__||\/|  Information Retrieval, Semantic Web
___|||__||  \|  ||  |  Embedded Unix, System Integration
http://www.sigram.com  Contact: info at sigram dot com