Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2008/10/22 00:08:44 UTC

[jira] Updated: (HADOOP-4480) data node process should not die if one dir goes bad

     [ https://issues.apache.org/jira/browse/HADOOP-4480?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

dhruba borthakur updated HADOOP-4480:
-------------------------------------

    Component/s:     (was: fs)
                 dfs

> data node process should not die if one dir goes bad
> ----------------------------------------------------
>
>                 Key: HADOOP-4480
>                 URL: https://issues.apache.org/jira/browse/HADOOP-4480
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.18.1
>            Reporter: Allen Wittenauer
>
> When multiple directories are configured for the data node process to use to store blocks, it currently exits when one of them is not writable. Instead, it should either ignore that directory completely or attempt to continue reading from it, marking it unusable only if reads fail.
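
For illustration, a minimal sketch in plain Java of the tolerant behavior the report asks for. The names here (checkDataDirs, DataDirChecker) are hypothetical and not the actual DataNode internals; the point is that a bad directory is dropped from the working set, and the process aborts only when no usable directory remains.

import java.io.File;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class DataDirChecker {

    // Returns the subset of configured storage directories that are usable,
    // instead of exiting on the first bad one. Only fails when every
    // configured directory is unusable.
    static List<File> checkDataDirs(List<File> configuredDirs) throws IOException {
        List<File> usable = new ArrayList<File>();
        for (File dir : configuredDirs) {
            if (dir.isDirectory() && dir.canRead() && dir.canWrite()) {
                usable.add(dir);  // keep serving blocks from this directory
            } else {
                // mark the directory unusable and keep going
                System.err.println("Ignoring bad data dir: " + dir);
            }
        }
        if (usable.isEmpty()) {
            throw new IOException("All configured data directories are unusable");
        }
        return usable;
    }
}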

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.