Posted to common-dev@hadoop.apache.org by "Doug Cutting (JIRA)" <ji...@apache.org> on 2006/05/27 00:42:31 UTC

[jira] Resolved: (HADOOP-163) If a DFS datanode cannot write onto its file system, it should tell the name node not to assign new blocks to it.

     [ http://issues.apache.org/jira/browse/HADOOP-163?page=all ]
     
Doug Cutting resolved HADOOP-163:
---------------------------------

    Resolution: Fixed

This looks great!  I just committed it.  Thanks, Hairong!

> If a DFS datanode cannot write onto its file system, it should tell the name node not to assign new blocks to it.
> -----------------------------------------------------------------------------------------------------------------
>
>          Key: HADOOP-163
>          URL: http://issues.apache.org/jira/browse/HADOOP-163
>      Project: Hadoop
>         Type: Bug

>   Components: dfs
>     Versions: 0.2
>     Reporter: Runping Qi
>     Assignee: Hairong Kuang
>      Fix For: 0.3
>  Attachments: disk.patch
>
> I observed that sometimes, if a file system of a data node is not mounted properly, it may not be writable. In that case, any data writes to it will fail. The name node should stop assigning new blocks to that data node, and the web page should show that the node is in an abnormal state.
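
For context, the mechanism behind a fix like this is straightforward: the datanode probes its storage directory with a small test write and, when the probe fails, reports itself as unwritable so the name node can take it out of block placement. Below is a minimal Java sketch of that check; the class and method names (DiskChecker, isWritable) are assumptions for illustration, not the contents of the attached disk.patch, and the name node reporting is only indicated in comments.

    import java.io.File;
    import java.io.IOException;

    public class DiskChecker {

        // Probe a storage directory by creating and removing a temporary
        // file; an unmounted or read-only file system fails this check.
        public static boolean isWritable(File dir) {
            try {
                File probe = File.createTempFile("probe", ".tmp", dir);
                probe.delete();
                return true;
            } catch (IOException e) {
                return false;
            }
        }

        public static void main(String[] args) {
            File dataDir = new File(args.length > 0 ? args[0] : "/tmp");
            if (!isWritable(dataDir)) {
                // In the real datanode this state would be reported to the
                // name node (e.g. via a heartbeat), which would then stop
                // assigning new blocks and flag the node on the status page.
                System.err.println(dataDir + " is not writable");
            }
        }
    }

A datanode would run such a check periodically or before accepting each new block, rather than only at startup, since a mount can disappear while the process is running.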

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira