Posted to common-dev@hadoop.apache.org by "Hairong Kuang (JIRA)" <ji...@apache.org> on 2008/07/03 00:22:45 UTC

[jira] Commented: (HADOOP-3685) Unbalanced replication target

    [ https://issues.apache.org/jira/browse/HADOOP-3685?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=12610073#action_12610073 ] 

Hairong Kuang commented on HADOOP-3685:
---------------------------------------

This bug was introduced by HADOOP-2559. The change there works for choosing targets for a new block, but does not work when re-replicating an under-replicated block.
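
To make the failure mode concrete, here is a minimal sketch of the positional assumption behind the chooseLocalRack(results.get(1), ...) call quoted below. The class and method names are hypothetical stand-ins for illustration, not the actual ReplicationTargetChooser code:

{noformat}
import java.util.Arrays;
import java.util.List;

// Hypothetical sketch, not the real ReplicationTargetChooser.
public class ChooseTargetSketch {

    // Stand-in for the branch that calls chooseLocalRack(results.get(1), ...):
    // returns the rack the next replica would be placed on, given the racks
    // of the nodes already in results.
    static String targetRack(List<String> resultRacks) {
        return resultRacks.get(1); // assumes this entry is the writer's off-rack replica
    }

    public static void main(String[] args) {
        // New block, writer on rackA: results is built up in a fixed order
        // (results[0] = the writer's local node, results[1] = a remote-rack
        // node), so the third replica lands on the second replica's remote
        // rack, exactly as HADOOP-2559 intends.
        System.out.println(targetRack(Arrays.asList("rackA", "rackB"))); // rackB

        // Re-replication with the source datanode (the "writer") on rackA:
        // results is pre-filled with the block's existing replicas in no
        // guaranteed order, so the chosen rack is only sometimes the
        // source's local rack.
        System.out.println(targetRack(Arrays.asList("rackB", "rackA"))); // rackA: intra-rack copy
        System.out.println(targetRack(Arrays.asList("rackA", "rackB"))); // rackB: block must cross racks
    }
}
{noformat}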

> Unbalanced replication target 
> ------------------------------
>
>                 Key: HADOOP-3685
>                 URL: https://issues.apache.org/jira/browse/HADOOP-3685
>             Project: Hadoop Core
>          Issue Type: Bug
>          Components: dfs
>    Affects Versions: 0.17.0
>            Reporter: Koji Noguchi
>            Priority: Critical
>
> In HADOOP-3633, the namenode was assigning some datanodes to receive hundreds of blocks in a short period, which caused datanodes to run out of memory (threads).
> Most of them came from remote racks.
> Looking at the code, 
> {noformat}
>     chooseLocalRack(results.get(1), excludedNodes, blocksize,
>                     maxNodesPerRack, results);
> {noformat}
> was sometimes not choosing the local rack of the writer (source).
> As a result, when a datanode went down, other datanodes on the same rack received a large number of blocks from remote racks.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.