Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2007/03/20 18:22:32 UTC
[jira] Updated: (HADOOP-1135) A block report processing may incorrectly cause the namenode to delete blocks
[ https://issues.apache.org/jira/browse/HADOOP-1135?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
dhruba borthakur updated HADOOP-1135:
-------------------------------------
Summary: A block report processing may incorrectly cause the namenode to delete blocks (was: A block report processing may incorrect cause the namenode to delete blocks )
> A block report processing may incorrectly cause the namenode to delete blocks
> ------------------------------------------------------------------------------
>
> Key: HADOOP-1135
> URL: https://issues.apache.org/jira/browse/HADOOP-1135
> Project: Hadoop
> Issue Type: Bug
> Components: dfs
> Reporter: dhruba borthakur
> Assigned To: dhruba borthakur
>
> When a block report arrives at the namenode, the namenode goes through all the blocks on that datanode. If a block is not valid, it is marked for deletion. The blocks-to-be-deleted are sent to the datanode in the response to the next heartbeat RPC, and the namenode sends only 100 blocks-to-be-deleted at a time; this cap was introduced as part of HADOOP-994. The bug is that once the number of blocks-to-be-deleted exceeds 100, the namenode marks all the remaining blocks in the block report for deletion, even if they are valid.
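
For illustration, here is a minimal sketch in Java of the failure mode described above. All names (processReportBuggy, processReportFixed, isValid, MAX_DELETE_PER_HEARTBEAT) are made up for this sketch; this is not the actual FSNamesystem code.

// Sketch only: contrasts a buggy and an intended handling of the
// 100-blocks-per-heartbeat cap described in the issue.
import java.util.ArrayList;
import java.util.List;

class BlockReportSketch {
    static final int MAX_DELETE_PER_HEARTBEAT = 100;   // cap introduced by HADOOP-994

    // Buggy shape: the per-heartbeat cap is checked inside the loop that
    // classifies blocks, so once 100 deletions have been queued every
    // remaining reported block is queued for deletion, valid or not.
    static List<Long> processReportBuggy(List<Long> report) {
        List<Long> toDelete = new ArrayList<>();
        for (long blockId : report) {
            if (toDelete.size() >= MAX_DELETE_PER_HEARTBEAT || !isValid(blockId)) {
                toDelete.add(blockId);
            }
        }
        return toDelete;
    }

    // Intended shape: validity alone decides what gets deleted; the cap only
    // limits how many block ids go out in a single heartbeat response, and
    // the remainder stays queued for later heartbeats.
    static List<Long> processReportFixed(List<Long> report, List<Long> pendingDeletes) {
        for (long blockId : report) {
            if (!isValid(blockId)) {
                pendingDeletes.add(blockId);
            }
        }
        int n = Math.min(pendingDeletes.size(), MAX_DELETE_PER_HEARTBEAT);
        List<Long> batch = new ArrayList<>(pendingDeletes.subList(0, n));
        pendingDeletes.subList(0, n).clear();   // keep the rest for the next heartbeat
        return batch;
    }

    // Stand-in for the namenode's lookup of the block in its blocks map.
    static boolean isValid(long blockId) {
        return true;
    }
}

In the buggy shape the cap leaks into the validity decision; in the intended shape it only throttles how many deletions are sent per heartbeat.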
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.