Posted to hdfs-dev@hadoop.apache.org by "Allen Wittenauer (JIRA)" <ji...@apache.org> on 2014/07/24 00:46:41 UTC

[jira] [Resolved] (HDFS-583) HDFS should enforce a max block size

     [ https://issues.apache.org/jira/browse/HDFS-583?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Allen Wittenauer resolved HDFS-583.
-----------------------------------

    Resolution: Won't Fix

I'm going to close this as Won't Fix.  At some point, block sizes were inadvertently limited to 2GB.  That limit was later raised (to some other value which escapes me at the moment, but it might be 4GB).

In practice, users tend not to mess with large block sizes unless they have a very specific reason... especially when one considers that disk quotas are also in play.  
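
For context, the block size is chosen by the client at file-creation time, which is why an unenforced maximum matters at all. Below is a minimal, illustrative sketch of a client requesting a custom block size through the standard FileSystem.create overload; the path, block size, and replication values are made up for illustration and are not from this issue.

    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.FSDataOutputStream;
    import org.apache.hadoop.fs.FileSystem;
    import org.apache.hadoop.fs.Path;

    public class BlockSizeExample {
        public static void main(String[] args) throws Exception {
            Configuration conf = new Configuration();
            FileSystem fs = FileSystem.get(conf);

            // The client, not the NameNode, picks the block size at create
            // time; without a server-side cap it could ask for an
            // arbitrarily large value.
            long blockSize = 1024L * 1024 * 1024; // 1 GB, for illustration

            FSDataOutputStream out = fs.create(
                    new Path("/tmp/blocksize-demo"),          // hypothetical path
                    true,                                      // overwrite
                    conf.getInt("io.file.buffer.size", 4096),  // buffer size
                    (short) 3,                                 // replication
                    blockSize);
            out.close();
        }
    }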

> HDFS should enforce a max block size
> ------------------------------------
>
>                 Key: HDFS-583
>                 URL: https://issues.apache.org/jira/browse/HDFS-583
>             Project: Hadoop HDFS
>          Issue Type: Improvement
>          Components: namenode
>            Reporter: Hairong Kuang
>
> When a DataNode creates a replica, it should enforce a max block size, so clients can't go crazy. One way of enforcing this is to make BlockWritesStreams filter streams that check the block size.
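
To show the filter-stream idea the description proposes, here is a minimal, hypothetical sketch of a stream wrapper that refuses to let a single replica grow past a configured maximum. The class and field names are invented for illustration; the actual DataNode write path (and the BlockWritesStreams class it mentions) is more involved.

    import java.io.FilterOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    // Illustrative sketch only: wraps the underlying block output stream
    // and rejects writes that would push the replica past maxBlockSize.
    public class BoundedBlockOutputStream extends FilterOutputStream {
        private final long maxBlockSize;
        private long written;

        public BoundedBlockOutputStream(OutputStream out, long maxBlockSize) {
            super(out);
            this.maxBlockSize = maxBlockSize;
        }

        private void checkLimit(long extra) throws IOException {
            if (written + extra > maxBlockSize) {
                throw new IOException("Block would exceed max block size of "
                        + maxBlockSize + " bytes");
            }
        }

        @Override
        public void write(int b) throws IOException {
            checkLimit(1);
            out.write(b);
            written++;
        }

        @Override
        public void write(byte[] b, int off, int len) throws IOException {
            checkLimit(len);
            out.write(b, off, len);
            written += len;
        }
    }

Wrapping the block stream this way keeps the check in one place, so every write path that goes through the wrapper is covered without touching the callers.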



--
This message was sent by Atlassian JIRA
(v6.2#6252)