Posted to common-commits@hadoop.apache.org by cu...@apache.org on 2007/03/08 00:34:26 UTC
svn commit: r515843 - in /lucene/hadoop/trunk: CHANGES.txt src/java/org/apache/hadoop/dfs/FSNamesystem.java
Author: cutting
Date: Wed Mar 7 15:34:25 2007
New Revision: 515843
URL: http://svn.apache.org/viewvc?view=rev&rev=515843
Log:
HADOOP-1083. Fix so that when a cluster restarts with a missing datanode, its blocks are replicated.
Modified:
lucene/hadoop/trunk/CHANGES.txt
lucene/hadoop/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java
Modified: lucene/hadoop/trunk/CHANGES.txt
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/CHANGES.txt?view=diff&rev=515843&r1=515842&r2=515843
==============================================================================
--- lucene/hadoop/trunk/CHANGES.txt (original)
+++ lucene/hadoop/trunk/CHANGES.txt Wed Mar 7 15:34:25 2007
@@ -21,6 +21,10 @@
5. HADOOP-1077. Fix a race condition fetching map outputs that could
hang reduces. (Devaraj Das via cutting)
+ 6. HADOOP-1083. Fix so that when a cluster restarts with a missing
+ datanode, its blocks are replicated. (Hairong Kuang via cutting)
+
+
Release 0.12.0 - 2007-03-02
1. HADOOP-975. Separate stdout and stderr from tasks.
Modified: lucene/hadoop/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java
URL: http://svn.apache.org/viewvc/lucene/hadoop/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java?view=diff&rev=515843&r1=515842&r2=515843
==============================================================================
--- lucene/hadoop/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java (original)
+++ lucene/hadoop/trunk/src/java/org/apache/hadoop/dfs/FSNamesystem.java Wed Mar 7 15:34:25 2007
@@ -2115,7 +2115,8 @@
return block;
// filter out containingNodes that are marked for decommission.
- int numCurrentReplica = countContainingNodes(containingNodes);
+ int numCurrentReplica = countContainingNodes(containingNodes)
+ + pendingReplications.getNumReplicas(block);
// check whether safe replication is reached for the block
// only if it is a part of a files
@@ -2123,8 +2124,8 @@
// handle underReplication/overReplication
short fileReplication = fileINode.getReplication();
- if(neededReplications.contains(block)) {
- neededReplications.update(block, curReplicaDelta, 0);
+ if(numCurrentReplica < fileReplication) {
+ neededReplications.update(block, curReplicaDelta, 0);
}
proccessOverReplicatedBlock( block, fileReplication );
return block;
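The logic of this change can be sketched as follows. This is an illustrative reconstruction, not the actual FSNamesystem code: `needsReplication`, `liveReplicas`, and `pendingReplicas` are hypothetical names standing in for `countContainingNodes(containingNodes)`, `pendingReplications.getNumReplicas(block)`, and the surrounding namenode state.

```java
// Hypothetical sketch of the HADOOP-1083 under-replication check.
// Before the fix, a block was queued for re-replication only if it was
// already present in neededReplications, which missed blocks whose
// datanode disappeared across a cluster restart. After the fix, the live
// replica count plus in-flight (pending) replications is compared
// directly against the file's target replication factor.
public class ReplicationCheck {

    static boolean needsReplication(int liveReplicas, int pendingReplicas,
                                    short fileReplication) {
        // Mirrors: numCurrentReplica = countContainingNodes(...)
        //                              + pendingReplications.getNumReplicas(block);
        int numCurrentReplica = liveReplicas + pendingReplicas;
        // Mirrors: if (numCurrentReplica < fileReplication) ...
        return numCurrentReplica < fileReplication;
    }

    public static void main(String[] args) {
        // 2 live replicas, none pending, target 3: under-replicated.
        System.out.println(needsReplication(2, 0, (short) 3));
        // 2 live replicas, 1 already being copied: nothing more to schedule.
        System.out.println(needsReplication(2, 1, (short) 3));
    }
}
```

Counting pending replications alongside live ones is what prevents the namenode from scheduling duplicate copies while still catching blocks that lost a replica before the restart.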