Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2008/05/14 09:11:55 UTC
[jira] Resolved: (HADOOP-1497) Possibility of duplicate blockids if
dead-datanodes come back up after corresponding files were deleted
[ https://issues.apache.org/jira/browse/HADOOP-1497?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
dhruba borthakur resolved HADOOP-1497.
--------------------------------------
Resolution: Duplicate
Fix Version/s: 0.18.0
This is fixed as part of HADOOP-2656.
> Possibility of duplicate blockids if dead-datanodes come back up after corresponding files were deleted
> -------------------------------------------------------------------------------------------------------
>
> Key: HADOOP-1497
> URL: https://issues.apache.org/jira/browse/HADOOP-1497
> Project: Hadoop Core
> Issue Type: Bug
> Components: dfs
> Reporter: dhruba borthakur
> Assignee: dhruba borthakur
> Fix For: 0.18.0
>
>
> Suppose a datanode D has a block B that belongs to file F. Suppose the datanode D dies and the namenode replicates those blocks to other datanodes. Now, suppose the user deletes file F. The namenode removes all the blocks that belonged to file F. Now, suppose a new file F1 is created and the namenode generates the same blockid B for this new file F1.
> Suppose the old datanode D comes back to life. Now there is a stale copy of block B on datanode D that the namenode would treat as a valid replica, even though its contents do not belong to file F1.
> This case can possibly be detected by the client (using CRC checks), but does HDFS need to handle this scenario better?
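For readers new to this area, the following is a minimal, hypothetical Java sketch (a toy model only, not the actual HDFS or HADOOP-2656 code; all class and field names are illustrative) of why a bare blockid is ambiguous in the scenario quoted above, and how pairing each block with a generation stamp, the general direction taken in HADOOP-2656, lets the namenode distinguish the stale replica reported by D from the new block B that belongs to F1.

    // Toy model illustrating the duplicate-blockid problem described in HADOOP-1497.
    // A block identified by id alone cannot be told apart from a reused id;
    // adding a generation stamp to the key makes the stale replica distinguishable.
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Objects;

    public class DuplicateBlockIdSketch {

        // Hypothetical block key: blockid plus a generation stamp.
        static final class BlockKey {
            final long id;
            final long genStamp;
            BlockKey(long id, long genStamp) { this.id = id; this.genStamp = genStamp; }
            @Override public boolean equals(Object o) {
                if (!(o instanceof BlockKey)) return false;
                BlockKey b = (BlockKey) o;
                return id == b.id && genStamp == b.genStamp;
            }
            @Override public int hashCode() { return Objects.hash(id, genStamp); }
            @Override public String toString() { return "blk_" + id + "_" + genStamp; }
        }

        public static void main(String[] args) {
            // Namenode's view: block -> file it belongs to.
            Map<BlockKey, String> blockMap = new HashMap<>();

            // 1. File F gets block B (id 42) with generation stamp 1000.
            BlockKey bForF = new BlockKey(42, 1000);
            blockMap.put(bForF, "F");

            // 2. Datanode D (holding blk_42_1000) dies; later, file F is deleted.
            blockMap.remove(bForF);

            // 3. A new file F1 is created and the same id 42 is reused,
            //    but with a newer generation stamp.
            BlockKey bForF1 = new BlockKey(42, 2000);
            blockMap.put(bForF1, "F1");

            // 4. D comes back to life and reports its old replica blk_42_1000.
            BlockKey reported = new BlockKey(42, 1000);

            // With only the id, the report looks like a replica of F1's block.
            boolean idCollision = reported.id == bForF1.id;
            // With the generation stamp, the stale replica does not match and can be discarded.
            boolean accepted = blockMap.containsKey(reported);

            System.out.println("id alone collides with F1's block: " + idCollision); // true
            System.out.println("stale replica matches namenode state: " + accepted); // false
        }
    }

Compiling and running this prints true for the raw id comparison (the collision the issue describes) and false for the stamped lookup, which is the property that lets the namenode reject the stale replica instead of counting it as a valid copy of F1's block.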
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.