Posted to common-dev@hadoop.apache.org by "Sameer Paranjpye (JIRA)" <ji...@apache.org> on 2006/03/24 22:22:35 UTC

[jira] Assigned: (HADOOP-19) Datanode corruption

     [ http://issues.apache.org/jira/browse/HADOOP-19?page=all ]

Sameer Paranjpye reassigned HADOOP-19:
--------------------------------------

    Assign To: Doug Cutting

> Datanode corruption
> -------------------
>
>          Key: HADOOP-19
>          URL: http://issues.apache.org/jira/browse/HADOOP-19
>      Project: Hadoop
>         Type: Bug
>   Components: dfs
>     Versions: 0.1
>     Reporter: Rod Taylor
>     Assignee: Doug Cutting
>     Priority: Critical
>      Fix For: 0.1

>
> Our admins accidentally started a second Nutch datanode pointing to the same directories as one already running (on the same machine), which in turn caused the entire contents of the datanode to disappear.
> This happened because the lock was based on the username (since fixed in our start scripts) and the two instances were started as different users.
> The ndfs.name.dir and ndfs.data.dir directories were both completely devoid of content, though they had held about 150GB not long before.
> I think the solution is improved interlocking within the data directory itself (a lock file held with flock or something similar).
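
For illustration only, here is a minimal sketch of the flock-style interlock suggested above, using Java NIO's FileLock. The in_use.lock file name and the DataDirLock class are illustrative assumptions, not the fix that was actually committed for HADOOP-19. The OS-level lock keys on the directory itself rather than the username, so a second daemon is refused regardless of which user starts it:

    import java.io.File;
    import java.io.RandomAccessFile;
    import java.nio.channels.FileLock;
    import java.nio.channels.OverlappingFileLockException;

    public class DataDirLock {
        // Try to take an exclusive OS-level lock on <dataDir>/in_use.lock
        // (both names are hypothetical). Returns the lock if acquired,
        // or null if another process or thread already holds it.
        public static FileLock tryLock(File dataDir) throws Exception {
            RandomAccessFile raf =
                new RandomAccessFile(new File(dataDir, "in_use.lock"), "rw");
            try {
                FileLock lock = raf.getChannel().tryLock();
                if (lock == null) {
                    raf.close(); // held by another process
                }
                return lock;
            } catch (OverlappingFileLockException e) {
                raf.close();     // held elsewhere in this same JVM
                return null;
            }
        }

        public static void main(String[] args) throws Exception {
            FileLock lock = tryLock(new File(args[0]));
            if (lock == null) {
                System.err.println("Data directory already in use; refusing to start.");
                System.exit(1);
            }
            // Hold the lock for the daemon's lifetime; the OS releases it
            // when the process exits, even after a crash, so no stale-lock
            // cleanup is needed.
            System.out.println("Lock acquired; safe to start the datanode.");
        }
    }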

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira