Posted to common-dev@hadoop.apache.org by "lohit vijayarenu (JIRA)" <ji...@apache.org> on 2008/05/06 20:42:57 UTC
[jira] Updated: (HADOOP-2065) Replication policy for corrupted block
[ https://issues.apache.org/jira/browse/HADOOP-2065?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
lohit vijayarenu updated HADOOP-2065:
-------------------------------------
Attachment: HADOOP-2065-3.patch
Attaching the patch against trunk.
> Replication policy for corrupted block
> ---------------------------------------
>
> Key: HADOOP-2065
> URL: https://issues.apache.org/jira/browse/HADOOP-2065
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Affects Versions: 0.14.1
> Reporter: Koji Noguchi
> Assignee: lohit vijayarenu
> Fix For: 0.18.0
>
> Attachments: HADOOP-2065-2.patch, HADOOP-2065-3.patch, HADOOP-2065.patch
>
>
> Thanks to HADOOP-1955, even if one of the replicas is corrupted, the block should get replicated from a good replica relatively quickly.
> Created this ticket to continue the discussion from http://issues.apache.org/jira/browse/HADOOP-1955#action_12531162.
> bq. 2. Delete corrupted source replica
> bq. 3. If all replicas are corrupt, stop replication.
> For (2), it would be nice if the namenode could delete the corrupted replica when there is a good replica on another node.
> For (3), I would prefer that the namenode still replicate the block. (An illustrative sketch of both behaviors follows after this quoted description.)
> Before 0.14, if the file was corrupted, users were still able to pull the data and decide whether they wanted to delete those files (HADOOP-2063).
> In 0.14 and later, we cannot/don't replicate these blocks, so they eventually get lost.
> To make matters worse, if the corrupted file is accessed, all corrupted replicas except one are deleted, and the block then stays at a replication factor of 1 forever.
>
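Below is a minimal, illustrative sketch of the behavior proposed in points (2) and (3) above: delete a corrupt replica only when a good replica exists on another node, and keep replicating even when every replica is corrupt so the data is not silently lost. The class, type, and method names (CorruptReplicaPolicySketch, Replica, handleCorruptReplicas) are hypothetical stand-ins for this discussion; this is not the actual namenode (FSNamesystem) code or the attached patch.

// Illustrative sketch only: simplified, hypothetical types standing in for the
// namenode's block/replica bookkeeping.
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class CorruptReplicaPolicySketch {

    // Hypothetical replica record: which datanode holds it and whether it has
    // been reported corrupt (e.g. after a client checksum failure).
    static class Replica {
        final String datanode;
        final boolean corrupt;
        Replica(String datanode, boolean corrupt) {
            this.datanode = datanode;
            this.corrupt = corrupt;
        }
    }

    // Decide what to do with one block's replicas, following the proposals in
    // this ticket:
    //  (2) delete a corrupt replica only if a good replica exists elsewhere;
    //  (3) if every replica is corrupt, do not delete; still replicate the
    //      block so users can pull the data and decide whether to delete the file.
    static void handleCorruptReplicas(List<Replica> replicas,
                                      List<String> toInvalidate,
                                      List<String> replicationSources) {
        List<Replica> good = new ArrayList<>();
        List<Replica> corrupt = new ArrayList<>();
        for (Replica r : replicas) {
            (r.corrupt ? corrupt : good).add(r);
        }

        if (!good.isEmpty()) {
            // At least one good copy: re-replicate from the good copies and
            // schedule the corrupt copies for deletion.
            for (Replica r : good) {
                replicationSources.add(r.datanode);
            }
            for (Replica r : corrupt) {
                toInvalidate.add(r.datanode);
            }
        } else {
            // All replicas corrupt: keep them and replicate anyway, so the
            // (possibly partially readable) data is not lost.
            for (Replica r : corrupt) {
                replicationSources.add(r.datanode);
            }
        }
    }

    public static void main(String[] args) {
        List<Replica> replicas = Arrays.asList(
            new Replica("dn1", true),
            new Replica("dn2", false),
            new Replica("dn3", true));
        List<String> toInvalidate = new ArrayList<>();
        List<String> sources = new ArrayList<>();
        handleCorruptReplicas(replicas, toInvalidate, sources);
        System.out.println("replicate from: " + sources);      // [dn2]
        System.out.println("invalidate on:  " + toInvalidate); // [dn1, dn3]
    }
}

Running the main method with one good replica (dn2) and two corrupt ones (dn1, dn3) picks dn2 as the replication source and marks dn1 and dn3 for invalidation; with no good replicas, the block would still be replicated from a corrupt copy rather than dropped.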
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.