Posted to common-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2007/06/02 07:56:15 UTC

[jira] Commented: (HADOOP-1300) deletion of excess replicas does not take into account 'rack-locality'

    [ https://issues.apache.org/jira/browse/HADOOP-1300?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#action_12500919 ] 

dhruba borthakur commented on HADOOP-1300:
------------------------------------------

The goal when deleting excess replicas should be to maximize the number of unique racks on which replicas remain.

For example, if a block has 5 replicas on racks R1, R1, R2, R2, R3 and the target replication factor is 3, then we can delete one replica from rack R1 and another from rack R2. This leaves one replica on each of racks R1, R2, and R3.

If a rack holds multiple replicas and we must delete one of them, we select the replica on the datanode with the least available disk space.
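
A minimal sketch of this selection policy in Java. The Replica class and chooseExcess method below are hypothetical illustrations for this discussion, not the actual namenode types; the sketch assumes each replica carries its rack name and the free disk space of its hosting datanode:

import java.util.ArrayList;
import java.util.Comparator;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.PriorityQueue;

// Hypothetical stand-in for a replica location; not the real DFS type.
class Replica {
    final String rack;      // rack of the hosting datanode
    final long freeSpace;   // available disk space on the hosting datanode, in bytes

    Replica(String rack, long freeSpace) {
        this.rack = rack;
        this.freeSpace = freeSpace;
    }
}

class ExcessReplicaChooser {
    // Pick replicas to delete so that the remaining replicas cover as many
    // distinct racks as possible; within a rack, delete from the datanode
    // with the least available disk space first.
    static List<Replica> chooseExcess(List<Replica> replicas, int targetReplication) {
        // Group replicas by rack; each rack's queue orders replicas by
        // ascending free space, so the poorest datanode is polled first.
        Map<String, PriorityQueue<Replica>> byRack = new HashMap<>();
        for (Replica r : replicas) {
            byRack.computeIfAbsent(r.rack, k ->
                new PriorityQueue<Replica>(
                    Comparator.comparingLong((Replica x) -> x.freeSpace))).add(r);
        }
        List<Replica> toDelete = new ArrayList<>();
        int excess = replicas.size() - targetReplication;
        while (excess-- > 0) {
            // Deleting from the rack that currently holds the most replicas
            // never reduces the number of distinct racks until every rack
            // is down to a single replica.
            PriorityQueue<Replica> fullest = null;
            for (PriorityQueue<Replica> q : byRack.values()) {
                if (fullest == null || q.size() > fullest.size()) {
                    fullest = q;
                }
            }
            toDelete.add(fullest.poll()); // least free space on that rack goes first
        }
        return toDelete;
    }
}

With the 5-replica example above (R1, R1, R2, R2, R3) and a target of 3, this sketch deletes one replica each from R1 and R2, leaving one replica per rack.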



> deletion of excess replicas does not take into account 'rack-locality'
> ----------------------------------------------------------------------
>
>                 Key: HADOOP-1300
>                 URL: https://issues.apache.org/jira/browse/HADOOP-1300
>             Project: Hadoop
>          Issue Type: Bug
>          Components: dfs
>            Reporter: Koji Noguchi
>
> One rack went down today, resulting in one missing block/file.
> Looking at the log, I saw that this block was originally over-replicated:
> 3 replicas on one rack and 1 replica on another.
> The namenode decided to delete the latter, leaving 3 replicas on the same rack.
> It would be nice if the deletion were also rack-aware.
