Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2006/05/01 22:20:47 UTC

[jira] Commented: (HADOOP-94) disallow more than one datanode running on one computer sharing the same data directory

    [ http://issues.apache.org/jira/browse/HADOOP-94?page=comments#action_12377277 ] 

Doug Cutting commented on HADOOP-94:
------------------------------------

I agree that we should put some sort of a lock file in the data directory, but I don't think we should move the existing pid file, since that is managed by the generic daemon start/stop code.  Rather we could use the nio file locking code to create an exclusive lock on a file.  This will automatically be unlocked by the kernel if/when the jvm exits.
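The NIO approach Doug describes can be sketched as follows. This is a minimal illustration, not Hadoop's actual implementation: the class name `DataDirLock` and the lock-file name `in_use.lock` are assumptions for the example. The key property is that a `java.nio.channels.FileLock` is held by the process, so the kernel releases it automatically when the JVM exits, even on a crash.

```java
import java.io.File;
import java.io.IOException;
import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

public class DataDirLock {
    // Hypothetical lock-file name; not specified in the issue.
    private static final String LOCK_FILE = "in_use.lock";

    private RandomAccessFile file;
    private FileLock lock;

    /**
     * Attempt an exclusive lock on <dataDir>/in_use.lock.
     * Returns true if acquired; false if another datanode
     * (or this JVM) already holds it.
     */
    public boolean tryLock(File dataDir) throws IOException {
        file = new RandomAccessFile(new File(dataDir, LOCK_FILE), "rw");
        try {
            // Non-blocking: null means another process holds the lock.
            lock = file.getChannel().tryLock();
        } catch (OverlappingFileLockException e) {
            lock = null; // this JVM already holds a lock on the file
        }
        if (lock == null) {
            file.close();
            return false;
        }
        return true; // released by the kernel when the JVM exits
    }

    public static void main(String[] args) throws IOException {
        File dir = new File(args.length > 0 ? args[0] : ".");
        System.out.println(new DataDirLock().tryLock(dir)
                ? "data directory locked"
                : "data directory already in use");
    }
}
```

A second datanode started against the same data directory would find `tryLock` returning `null` (or throwing `OverlappingFileLockException` within the same JVM) and could refuse to start, regardless of which conf dir it was launched from.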

> disallow more than one datanode running on one computer sharing the same data directory
> ----------------------------------------------------------------------------------------
>
>          Key: HADOOP-94
>          URL: http://issues.apache.org/jira/browse/HADOOP-94
>      Project: Hadoop
>         Type: Bug

>   Components: dfs
>     Versions: 0.2
>     Reporter: Hairong Kuang
>      Fix For: 0.3

>
> Currently dfs disallows more than one datanode from running on the same computer if they are started using the same hadoop conf dir. However, this does not prevent more than one datanode from starting, each using a different conf dir (strictly speaking, a different pid file). If every machine has two such datanodes running, the namenode will be kept busy deleting and replicating blocks, which may eventually lead to block loss.
> Suggested solution: put the pid file in the data directory and make its location non-configurable.

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira