Posted to hdfs-dev@hadoop.apache.org by "Hairong Kuang (JIRA)" <ji...@apache.org> on 2010/11/02 01:48:26 UTC

[jira] Resolved: (HDFS-37) An invalidated block should be removed from the blockMap

     [ https://issues.apache.org/jira/browse/HDFS-37?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Hairong Kuang resolved HDFS-37.
-------------------------------

    Resolution: Not A Problem

> An invalidated block should be removed from the blockMap
> --------------------------------------------------------
>
>                 Key: HDFS-37
>                 URL: https://issues.apache.org/jira/browse/HDFS-37
>             Project: Hadoop HDFS
>          Issue Type: Bug
>            Reporter: Hairong Kuang
>            Assignee: Hairong Kuang
>
> Currently, when a namenode schedules the deletion of an over-replicated block, the replica to be deleted is not removed from the block map immediately. Instead it is removed only when the next block report comes in. This causes three problems: 
> 1. getBlockLocations may return locations that do not contain the block;
> 2. Over-replication due to unsuccessful deletion cannot be detected, as described in HADOOP-4477;
> 3. The number of blocks shown on the dfs Web UI is not updated on a source node after a large number of blocks have been moved from the source node to a target node, for example, when running the balancer.
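
For illustration only, a minimal toy sketch of the behavior the issue asks for: removing a replica from the block map at the moment its deletion is scheduled, rather than waiting for the next block report. All class and method names below are hypothetical and do not correspond to the actual HDFS code.

```java
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

// Toy model: blockMap maps a block ID to the set of datanodes holding a
// replica. Without eager removal, getBlockLocations would keep returning a
// scheduled-for-deletion replica until the next block report (problem 1 in
// the issue description).
class ToyBlockMap {
    private final Map<Long, Set<String>> blockMap = new HashMap<>();

    void addReplica(long blockId, String datanode) {
        blockMap.computeIfAbsent(blockId, k -> new HashSet<>()).add(datanode);
    }

    // Proposed behavior: drop the replica from the map as soon as its
    // deletion is scheduled, instead of waiting for the next block report.
    void invalidateReplica(long blockId, String datanode) {
        Set<String> nodes = blockMap.get(blockId);
        if (nodes != null) {
            nodes.remove(datanode);
            if (nodes.isEmpty()) {
                blockMap.remove(blockId);
            }
        }
    }

    List<String> getBlockLocations(long blockId) {
        return new ArrayList<>(
            blockMap.getOrDefault(blockId, Collections.emptySet()));
    }

    public static void main(String[] args) {
        ToyBlockMap map = new ToyBlockMap();
        map.addReplica(1L, "dn1");
        map.addReplica(1L, "dn2");
        map.addReplica(1L, "dn3"); // over-replicated if target replication is 2
        map.invalidateReplica(1L, "dn3");
        // dn3 no longer appears among the returned locations.
        System.out.println(map.getBlockLocations(1L).contains("dn3"));
    }
}
```

With eager removal, the stale location disappears from getBlockLocations immediately, and a re-appearing replica in a later block report can be recognized as over-replication.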

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.