Posted to hdfs-dev@hadoop.apache.org by "Hairong Kuang (JIRA)" <ji...@apache.org> on 2009/09/02 00:11:33 UTC
[jira] Created: (HDFS-583) DataNode should enforce a max block size
DataNode should enforce a max block size
----------------------------------------
Key: HDFS-583
URL: https://issues.apache.org/jira/browse/HDFS-583
Project: Hadoop HDFS
Issue Type: Improvement
Components: data-node
Reporter: Hairong Kuang
When the DataNode creates a replica, it should enforce a max block size, so clients can't go crazy. One way of enforcing this is to make BlockWritesStreams be filter streams that check the block size.
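The filter-stream idea could look roughly like the following minimal sketch. This is illustrative only, not actual Hadoop code: the class name, constructor, and limit parameter are assumptions, standing in for whatever the real write path would wrap.

```java
import java.io.FilterOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch (not an actual Hadoop class): a filter stream that
// rejects any write that would push the replica past a configured max size.
public class MaxBlockSizeOutputStream extends FilterOutputStream {
    private final long maxBlockSize; // assumed limit, e.g. the configured block size
    private long bytesWritten;       // bytes accepted so far

    public MaxBlockSizeOutputStream(OutputStream out, long maxBlockSize) {
        super(out);
        this.maxBlockSize = maxBlockSize;
    }

    // Throw before writing if the additional bytes would exceed the limit.
    private void checkSize(long additional) throws IOException {
        if (bytesWritten + additional > maxBlockSize) {
            throw new IOException("Block size limit of " + maxBlockSize
                + " bytes exceeded");
        }
    }

    @Override
    public void write(int b) throws IOException {
        checkSize(1);
        out.write(b);
        bytesWritten++;
    }

    @Override
    public void write(byte[] b, int off, int len) throws IOException {
        checkSize(len);
        out.write(b, off, len);
        bytesWritten += len;
    }
}
```

Because the check happens in the stream itself, every code path that writes replica data through the wrapper is covered, and a misbehaving client gets an IOException instead of an unbounded block on disk.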
--
This message is automatically generated by JIRA.
You can reply to this email to add a comment to the issue online.