Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2006/04/11 21:01:19 UTC

[jira] Commented: (HADOOP-124) don't permit two datanodes to run from same dfs.data.dir

    [ http://issues.apache.org/jira/browse/HADOOP-124?page=comments#action_12374091 ] 

Doug Cutting commented on HADOOP-124:
-------------------------------------

I just talked with Owen, and we came up with the following plan:

(0) store a node id in each dfs.data.dir;

(1) pass the node id to the name node in all DataNodeProtocol calls;

(2) the namenode tracks datanodes by <id, host:port> pairs, talking to only one id from a given host:port at a time. Requests from an unrecognized <id, host:port> pair return a status that causes the datanode to exit and *not* restart itself.

(3) add a hello() method to DataNodeProtocol, called only at datanode startup. This erases any existing entries for that datanode id, replacing them with the new entry.

Thus, starting a second datanode causes any existing datanode registered under the same id to be forgotten; the old datanode then exits the next time it contacts the namenode.
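To make the moving parts concrete, here is a rough Java sketch of the datanode-side id persistence (step 0) and the namenode-side bookkeeping (steps 2 and 3). This is an illustration of the plan only, not the actual DataNodeProtocol code: the "node.id" file name, the class and method names, the use of a random UUID as the id, and the modern-Java idioms are all assumptions made for the example.

    import java.io.*;
    import java.util.*;

    public class NodeIdSketch {

        // (0) Read a persistent node id from dfs.data.dir, creating one on first start.
        static String getOrCreateNodeId(File dataDir) throws IOException {
            File idFile = new File(dataDir, "node.id");   // hypothetical file name
            if (idFile.exists()) {
                try (BufferedReader r = new BufferedReader(new FileReader(idFile))) {
                    String line = r.readLine();
                    if (line != null) {
                        return line.trim();
                    }
                }
            }
            String id = UUID.randomUUID().toString();     // any stable, unique id scheme would do
            try (Writer w = new FileWriter(idFile)) {
                w.write(id);
            }
            return id;
        }

        // (2) + (3) Namenode-side bookkeeping: at most one id per host:port.
        static class DataNodeRegistry {
            // host:port -> node id currently allowed to speak for that address
            private final Map<String, String> active = new HashMap<String, String>();

            // hello(): called once at datanode startup; displaces any previous entry for this id.
            synchronized void hello(String nodeId, String hostPort) {
                active.values().remove(nodeId);   // forget a stale registration of this id, if any
                active.put(hostPort, nodeId);     // (re)register at the current address
            }

            // Checked on every other DataNodeProtocol call; false means "exit and do not restart".
            synchronized boolean isKnown(String nodeId, String hostPort) {
                return nodeId.equals(active.get(hostPort));
            }
        }
    }

With something along these lines, a second datanode started from the same dfs.data.dir reads the same id, its hello() displaces the old registration, and the old process fails the isKnown() check on its next request, so it exits instead of both nodes reporting the same blocks.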

> don't permit two datanodes to run from same dfs.data.dir
> --------------------------------------------------------
>
>          Key: HADOOP-124
>          URL: http://issues.apache.org/jira/browse/HADOOP-124
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.2
>  Environment: ~30 node cluster
>     Reporter: Bryan Pendleton
>     Priority: Critical
>
> DFS files are still rotting.
> I suspect there's a problem with block accounting or with detecting identical hosts in the namenode. I have 30 physical nodes, with various numbers of local disks, meaning that my current "bin/hadoop dfs -report" shows 80 nodes after a full restart. However, when I discovered the problem (which resulted in losing about 500 GB worth of temporary data because of missing blocks in some of the larger chunks), -report showed 96 nodes. I suspect that extra datanodes were somehow running against the same paths, that the namenode counted those as replicated instances, and that when the blocks then appeared over-replicated, one of the datanodes was told to delete its local copy, leading to the block actually being lost.
> I will debug it more the next time the situation arises. This is at least the 5th time I've had a large amount of file data "rot" in DFS since January.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira