Posted to common-dev@hadoop.apache.org by "Sameer Paranjpye (JIRA)" <ji...@apache.org> on 2006/05/01 22:22:47 UTC

[jira] Updated: (HADOOP-109) Blocks are not replicated when...

     [ http://issues.apache.org/jira/browse/HADOOP-109?page=all ]

Sameer Paranjpye updated HADOOP-109:
------------------------------------

      Version: 0.1.0
    Assign To: Konstantin Shvachko

> Blocks are not replicated when...
> ---------------------------------
>
>          Key: HADOOP-109
>          URL: http://issues.apache.org/jira/browse/HADOOP-109
>      Project: Hadoop
>         Type: Bug

>   Components: dfs
>     Versions: 0.1.0
>     Reporter: Konstantin Shvachko
>     Assignee: Konstantin Shvachko
>      Fix For: 0.3

>
> When a block is under-replicated, the namenode places it in the
> FSNamesystem.neededReplications list.
> When a datanode D1 sends a getBlockwork() request to the namenode, the namenode
> selects another node D2 (which it believes is up and running) to store the new
> replica of the under-replicated block.
> The namenode then removes the block from the neededReplications list, places it in
> the pendingReplications list, and asks D1 to replicate the block to D2.
> If D2 is in fact down, the replication fails and is never retried, because
> the block is no longer in the neededReplications list but in the pendingReplications
> list, which the namenode never checks.
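The bookkeeping described above can be sketched as follows. This is a minimal, hypothetical model of the two replication lists, not the actual FSNamesystem code from Hadoop 0.1; the class and method names are illustrative. It also shows the kind of fix the report implies: a periodic check that moves timed-out entries from pendingReplications back to neededReplications so replication can be retried.

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Illustrative model of the namenode's replication lists (not Hadoop code).
public class ReplicationTracker {
    private final Set<String> neededReplications = new HashSet<>();
    // block id -> time (ms) at which the replication request was issued
    private final Map<String, Long> pendingReplications = new HashMap<>();
    private final long timeoutMs;

    public ReplicationTracker(long timeoutMs) {
        this.timeoutMs = timeoutMs;
    }

    // A block is detected as under-replicated.
    public void blockUnderReplicated(String block) {
        neededReplications.add(block);
    }

    // The namenode picks a target D2 and asks source D1 to copy the block:
    // the block leaves neededReplications and enters pendingReplications.
    public void replicationScheduled(String block, long now) {
        neededReplications.remove(block);
        pendingReplications.put(block, now);
    }

    // D1 (or the target) confirms the new replica exists.
    public void replicationCompleted(String block) {
        pendingReplications.remove(block);
    }

    // The piece HADOOP-109 says is missing: periodically re-examine
    // pendingReplications and requeue entries that never completed.
    public void checkPendingReplications(long now) {
        pendingReplications.entrySet().removeIf(e -> {
            if (now - e.getValue() >= timeoutMs) {
                neededReplications.add(e.getKey());
                return true;
            }
            return false;
        });
    }

    public boolean isNeeded(String block)  { return neededReplications.contains(block); }
    public boolean isPending(String block) { return pendingReplications.containsKey(block); }

    public static void main(String[] args) {
        ReplicationTracker t = new ReplicationTracker(1000);
        t.blockUnderReplicated("blk_1");
        t.replicationScheduled("blk_1", 0); // D1 asked to copy to D2
        // D2 is down: no completion ever arrives. Without the pending
        // check, the block would sit in pendingReplications forever.
        t.checkPendingReplications(2000);   // timeout has elapsed
        System.out.println(t.isNeeded("blk_1"));  // true: retry is possible
        System.out.println(t.isPending("blk_1")); // false
    }
}
```

Without checkPendingReplications(), the sequence in main() reproduces the bug exactly: blk_1 ends up in pendingReplications and is never eligible for scheduling again.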

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see:
   http://www.atlassian.com/software/jira