Posted to common-dev@hadoop.apache.org by "Konstantin Shvachko (JIRA)" <ji...@apache.org> on 2009/01/16 23:27:59 UTC
[jira] Created: (HADOOP-5074) Consistency of different replicas of the same block is not checked.
Consistency of different replicas of the same block is not checked.
-------------------------------------------------------------------
Key: HADOOP-5074
URL: https://issues.apache.org/jira/browse/HADOOP-5074
Project: Hadoop Core
Issue Type: Bug
Components: dfs
Affects Versions: 0.14.0
Reporter: Konstantin Shvachko
HDFS currently detects corrupted replicas by verifying that a replica's contents match the checksums stored in the block meta-file. This is done independently for each replica of the block, on the data-node that holds it. But we do not check that the replicas are identical across data-nodes as long as they have the same size.
Such divergence is uncommon, but it can happen as a result of a software bug or operator mismanagement, in which case different clients will read different data from the same file.
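The missing cross-replica check could be sketched roughly as follows. This is only an illustration, not Hadoop code: the class and method names are hypothetical, and a single whole-replica CRC32 stands in for Hadoop's actual per-chunk CRC32 checksums from the block meta-file.

```java
import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.zip.CRC32;

public class ReplicaConsistencyCheck {

    // Compute a CRC32 over one replica's on-disk contents.
    // (Real HDFS keeps per-chunk checksums in the block meta-file;
    // a single whole-file CRC keeps this sketch short.)
    static long checksumOf(Path replica) throws IOException {
        CRC32 crc = new CRC32();
        byte[] buf = new byte[8192];
        try (InputStream in = Files.newInputStream(replica)) {
            int n;
            while ((n = in.read(buf)) != -1) {
                crc.update(buf, 0, n);
            }
        }
        return crc.getValue();
    }

    // Replicas of the same block are consistent only if every copy
    // yields the same checksum; equal size alone is not sufficient,
    // which is exactly the gap this issue describes.
    static boolean replicasConsistent(Path... replicas) throws IOException {
        if (replicas.length == 0) {
            return true;
        }
        long expected = checksumOf(replicas[0]);
        for (Path p : replicas) {
            if (checksumOf(p) != expected) {
                return false;
            }
        }
        return true;
    }
}
```

A name-node-driven implementation would instead compare checksums reported by each data-node, rather than reading replica files directly as above.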
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.