Posted to common-dev@hadoop.apache.org by "Hairong Kuang (JIRA)" <ji...@apache.org> on 2007/05/02 20:25:15 UTC

[jira] Updated: (HADOOP-1200) Datanode should periodically do a disk check

     [ https://issues.apache.org/jira/browse/HADOOP-1200?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang updated HADOOP-1200:
----------------------------------

    Attachment: diskCheck.patch

This patch checks whether the disk is read-only whenever an IOException occurs.
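(For readers skimming the archive: a minimal sketch of the idea, not the attached diskCheck.patch itself. The class, field, and method names below are illustrative, not Hadoop's actual API.)

    // Illustrative sketch only; assumes a datanode-style class that knows its data directory.
    import java.io.File;
    import java.io.IOException;

    class DiskCheckSketch {
        private final File dataDir;

        DiskCheckSketch(File dataDir) {
            this.dataDir = dataDir;
        }

        /** Invoked from IOException handlers: if the volume has gone read-only,
         *  the datanode should stop offering service for it. */
        void checkDiskError() throws IOException {
            // canWrite() returns false once the underlying volume is remounted read-only.
            if (!dataDir.canWrite()) {
                throw new IOException("Data directory is not writable: " + dataDir);
            }
        }
    }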

> Datanode should periodically do a disk check
> --------------------------------------------
>
>                 Key: HADOOP-1200
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1200
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.12.2
>            Reporter: Hairong Kuang
>         Assigned To: Hairong Kuang
>            Priority: Blocker
>             Fix For: 0.13.0
>
>         Attachments: diskCheck.patch
>
>
> HADOOP-1170 removed the disk checking feature, but this feature is needed for maintaining a large cluster. I agree that checking the disk on every I/O is too costly. A nicer approach is to have a thread that periodically does a disk check; the datanode then automatically decommissions itself when any error occurs.
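A rough sketch of the periodic checker described in the quoted issue, again with illustrative names and an assumed check interval rather than Hadoop's real implementation:

    // Hypothetical sketch of the periodic disk-check thread; not Hadoop's actual code.
    import java.io.File;

    class PeriodicDiskChecker implements Runnable {
        private static final long CHECK_INTERVAL_MS = 60 * 1000L; // assumed interval

        private final File dataDir;
        private volatile boolean running = true;

        PeriodicDiskChecker(File dataDir) {
            this.dataDir = dataDir;
        }

        @Override
        public void run() {
            while (running) {
                // A failed existence/read/write probe means the disk is bad;
                // the datanode would then take itself out of service.
                if (!dataDir.exists() || !dataDir.canRead() || !dataDir.canWrite()) {
                    decommissionSelf();
                    return;
                }
                try {
                    Thread.sleep(CHECK_INTERVAL_MS);
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        }

        void shutdown() { running = false; }

        private void decommissionSelf() {
            // Placeholder: a real datanode would notify the namenode and stop
            // serving blocks from the failed volume.
            System.err.println("Disk check failed for " + dataDir + "; taking node out of service.");
        }
    }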

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.