Posted to common-dev@hadoop.apache.org by "qianyu (JIRA)" <ji...@apache.org> on 2009/06/19 03:29:07 UTC
[jira] Created: (HADOOP-6085) Why does the open method in class DFSClient
compare old LocatedBlocks and new LocatedBlocks?
Why does the open method in class DFSClient compare old LocatedBlocks and new LocatedBlocks?
---------------------------------------------------------------------------------------------
Key: HADOOP-6085
URL: https://issues.apache.org/jira/browse/HADOOP-6085
Project: Hadoop Core
Issue Type: Wish
Components: dfs
Affects Versions: 0.19.0
Reporter: qianyu
Fix For: 0.19.2
This is in the package org.apache.hadoop.hdfs, in DFSClient.openInfo():
    if (locatedBlocks != null) {
      Iterator<LocatedBlock> oldIter = locatedBlocks.getLocatedBlocks().iterator();
      Iterator<LocatedBlock> newIter = newInfo.getLocatedBlocks().iterator();
      while (oldIter.hasNext() && newIter.hasNext()) {
        if (! oldIter.next().getBlock().equals(newIter.next().getBlock())) {
          throw new IOException("Blocklist for " + src + " has changed!");
        }
      }
    }
Why do we need to compare the old LocatedBlocks with the new LocatedBlocks, and in what case can they differ?
Why not just set "this.locatedBlocks = newInfo" directly?
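One plausible reading: openInfo() can be invoked again on an already-open file to refresh block locations, and if the file was deleted and recreated under the same path in the meantime, the fresh block list would describe entirely different data. Silently assigning "this.locatedBlocks = newInfo" would let an in-progress read continue against the wrong blocks. A minimal sketch of the check, using plain long block IDs in place of LocatedBlock objects (a hypothetical simplification, not the real HDFS types):

```java
import java.io.IOException;
import java.util.Arrays;
import java.util.Iterator;
import java.util.List;

public class BlockListCheck {

    // Compare the cached block list with a freshly fetched one, pairwise.
    // Any mismatch in the common prefix means the file at `src` was
    // replaced between the two lookups; extra trailing blocks in the new
    // list (a file that merely grew) pass the check, as in DFSClient.
    static void verifyUnchanged(String src, List<Long> oldBlocks, List<Long> newBlocks)
            throws IOException {
        Iterator<Long> oldIter = oldBlocks.iterator();
        Iterator<Long> newIter = newBlocks.iterator();
        while (oldIter.hasNext() && newIter.hasNext()) {
            if (!oldIter.next().equals(newIter.next())) {
                throw new IOException("Blocklist for " + src + " has changed!");
            }
        }
    }

    public static void main(String[] args) throws IOException {
        // Same prefix, one new block appended: the file grew, no error.
        verifyUnchanged("/a", Arrays.asList(1L, 2L), Arrays.asList(1L, 2L, 3L));

        // First block differs: the path now points at different data.
        boolean threw = false;
        try {
            verifyUnchanged("/a", Arrays.asList(1L, 2L), Arrays.asList(9L, 2L));
        } catch (IOException e) {
            threw = true;
        }
        System.out.println(threw);
    }
}
```

Note that the check only walks the common prefix of the two lists, so a file that gained blocks since the last lookup is still accepted; only a change to an already-known block is treated as fatal.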
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.