Posted to hdfs-dev@hadoop.apache.org by "Gruust (JIRA)" <ji...@apache.org> on 2017/10/12 20:48:00 UTC
[jira] [Created] (HDFS-12649) handling of corrupt blocks not suitable for commodity hardware
Gruust created HDFS-12649:
-----------------------------
Summary: handling of corrupt blocks not suitable for commodity hardware
Key: HDFS-12649
URL: https://issues.apache.org/jira/browse/HDFS-12649
Project: Hadoop HDFS
Issue Type: Improvement
Components: namenode
Affects Versions: 2.8.1
Reporter: Gruust
Priority: Minor
Hadoop's documentation tells me it is suitable for commodity hardware, in the sense that hardware failures are expected to happen frequently. However, there is currently no automatic handling of corrupted blocks, which seems contradictory to me.
See: https://stackoverflow.com/questions/19205057/how-to-fix-corrupt-hdfs-files
This is problematic even for data integrity, as the redundancy is not kept at the desired level without manual intervention. If a block is corrupted, I would at least expect the namenode to force the creation of an additional good replica to keep up the redundancy level.
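For reference, the manual intervention discussed above (and in the linked Stack Overflow thread) typically looks like the following sketch; the file path is a placeholder, and this assumes the hdfs CLI of a running cluster:

```shell
# List the files the namenode currently considers to have corrupt blocks.
hdfs fsck / -list-corruptfileblocks

# Inspect one affected file in detail: its blocks and their replica locations.
hdfs fsck /path/to/affected/file -files -blocks -locations

# Last resort when no healthy replica remains: delete the corrupted files so
# that fsck reports a healthy filesystem again. This loses the data.
hdfs fsck / -delete
```

The point of the report is that none of this runs automatically: an operator has to notice the corruption and choose between deleting or moving the affected files.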
--
This message was sent by Atlassian JIRA
(v6.4.14#64029)
---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org