Posted to hdfs-dev@hadoop.apache.org by "dhruba borthakur (JIRA)" <ji...@apache.org> on 2010/12/16 01:07:01 UTC
[jira] Created: (HDFS-1539) prevent data loss when a cluster suffers a power loss
prevent data loss when a cluster suffers a power loss
-----------------------------------------------------
Key: HDFS-1539
URL: https://issues.apache.org/jira/browse/HDFS-1539
Project: Hadoop HDFS
Issue Type: Improvement
Components: data-node, hdfs client, name-node
Reporter: dhruba borthakur
We have seen an instance where an external outage caused many datanodes to reboot at around the same time. This resulted in many corrupted blocks, all of them recently written: the current HDFS Datanode implementation does not sync the data of a block file to stable storage when the block is closed.
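As a rough sketch of the failure mode (plain java.io code, not HDFS internals; the file name is made up for illustration): bytes written to a block file can still be sitting in the OS page cache when the block is finalized, so a power loss shortly afterwards can leave the file short or corrupt even though every write call "succeeded". An explicit sync pushes the data to disk before the block is declared finalized.

    import java.io.FileOutputStream;

    public class SyncOnCloseSketch {
        public static void main(String[] args) throws Exception {
            // Hypothetical block file name, for illustration only.
            try (FileOutputStream out = new FileOutputStream("blk_12345.data")) {
                out.write(new byte[]{1, 2, 3, 4});
                // Without this call the data may live only in the page cache;
                // a power loss after close() can then corrupt the block.
                out.getFD().sync();
            }
        }
    }

The items below are possible ways to close this gap at the configuration, API, and stream level.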
1. Have a cluster-wide config setting that causes the datanode to sync a block file when the block is finalized.
2. Introduce a new parameter to FileSystem.create() that triggers this behaviour, i.e. causes the datanode to sync a block file when it is finalized.
3. Implement FSDataOutputStream.hsync() to cause all data written to the specified file to be written to stable storage.
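A minimal sketch of how a client might use item 3, assuming hsync() lands on FSDataOutputStream with roughly the semantics described above (the method name here simply follows the proposal):

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class DurableWriteExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);
            try (FSDataOutputStream out = fs.create(new Path("/tmp/durable-log"))) {
                out.writeBytes("record-1\n");
                // Proposed hsync(): block until the bytes written so far are on
                // stable storage on every datanode in the pipeline, not merely
                // visible to other readers.
                out.hsync();
            }
        }
    }

For item 1, the cluster-wide switch could be a hdfs-site.xml key along the lines of dfs.datanode.synconclose=true (name illustrative), enabling sync-on-finalize for every block without any client-side changes.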
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.