Posted to hdfs-dev@hadoop.apache.org by "Hairong Kuang (JIRA)" <ji...@apache.org> on 2010/08/19 18:58:18 UTC
[jira] Resolved: (HDFS-86) Corrupted blocks get deleted but not replicated
[ https://issues.apache.org/jira/browse/HDFS-86?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Hairong Kuang resolved HDFS-86.
-------------------------------
Resolution: Invalid
> Corrupted blocks get deleted but not replicated
> -----------------------------------------------
>
> Key: HDFS-86
> URL: https://issues.apache.org/jira/browse/HDFS-86
> Project: Hadoop HDFS
> Issue Type: Bug
> Reporter: Hairong Kuang
> Assignee: Hairong Kuang
> Attachments: blockInvalidate.patch
>
>
> When I test the patch to HADOOP-1345 on a two-node DFS cluster, I see that DFS correctly deletes the corrupted replica and successfully retries reading from the other, correct replica, but the block does not get replicated. The block remains with only one replica until the next block report comes in.
> In my test case, since the DFS cluster has only two datanodes, the replication target is the same node as the block invalidation target. After poking through the logs, I found that the namenode sent the replication request before the block invalidation request.
> This happens because the namenode does not handle block invalidation correctly. In FSNamesystem.invalidateBlock, it first puts the invalidate request in a queue and then immediately removes the replica from its state, which triggers choosing a replication target for the block. When requests are sent back to the target datanode in reply to a heartbeat message, replication requests have higher priority than invalidate requests.
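> As a rough illustration, here is a minimal, self-contained sketch of that ordering. The class, queues, and counters below are made-up stand-ins for illustration only, not the actual FSNamesystem code:
> {code:java}
> import java.util.ArrayDeque;
> import java.util.Queue;
>
> // Models the ordering described above: the invalidate is queued first,
> // but removing the replica from namenode state right away triggers a
> // replication request that the heartbeat reply will deliver first.
> class InvalidateOrderingSketch {
>     static final int TARGET_REPLICATION = 2;
>     static final Queue<String> invalidates = new ArrayDeque<>();
>     static final Queue<String> replications = new ArrayDeque<>();
>     static int liveReplicas = 2; // two datanodes, one replica each
>
>     static void invalidateBlock(String block, String datanode) {
>         // Step 1: queue the invalidate for the next heartbeat reply.
>         invalidates.add("INVALIDATE " + block + " on " + datanode);
>         // Step 2: remove the replica from namenode state immediately.
>         // The replica count now looks low, so a replication target is
>         // chosen at once; with only two nodes it is the same datanode.
>         liveReplicas--;
>         if (liveReplicas < TARGET_REPLICATION) {
>             replications.add("REPLICATE " + block + " to " + datanode);
>         }
>     }
>
>     public static void main(String[] args) {
>         invalidateBlock("blk_0001", "datanode-2");
>         // Heartbeat reply: replication requests drain before invalidates,
>         // reproducing the request order observed in the logs.
>         while (!replications.isEmpty()) System.out.println(replications.poll());
>         while (!invalidates.isEmpty()) System.out.println(invalidates.poll());
>     }
> }
> {code}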
> This problem could be solved if the namenode removed an invalidated replica from its state only after the invalidate request has been sent to the datanode.
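> A sketch of that reordering, replacing invalidateBlock in the example above and adding a hypothetical callback (again illustrative only, not the attached blockInvalidate.patch):
> {code:java}
>     // Only queue the invalidate here; leave the replica in namenode
>     // state so no replication target is chosen yet.
>     static void invalidateBlock(String block, String datanode) {
>         invalidates.add("INVALIDATE " + block + " on " + datanode);
>     }
>
>     // Invoked once the heartbeat reply carrying the invalidate has
>     // gone out to the datanode.
>     static void onInvalidateSent(String block, String datanode) {
>         // Remove the replica only now; any replication request
>         // triggered from this point cannot be delivered ahead of the
>         // invalidate the datanode has already received.
>         liveReplicas--;
>         if (liveReplicas < TARGET_REPLICATION) {
>             replications.add("REPLICATE " + block + " to " + datanode);
>         }
>     }
> {code}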
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.