Posted to hdfs-dev@hadoop.apache.org by "Aaron T. Myers (JIRA)" <ji...@apache.org> on 2013/11/15 03:15:21 UTC

[jira] [Created] (HDFS-5517) Lower the default maximum number of blocks per file

Aaron T. Myers created HDFS-5517:
------------------------------------

             Summary: Lower the default maximum number of blocks per file
                 Key: HDFS-5517
                 URL: https://issues.apache.org/jira/browse/HDFS-5517
             Project: Hadoop HDFS
          Issue Type: Bug
          Components: namenode
    Affects Versions: 2.2.0
            Reporter: Aaron T. Myers
            Assignee: Aaron T. Myers


We introduced a configurable maximum number of blocks per file in HDFS-4305, but we set the default to 1MM (1,048,576). In practice this limit is so high that it is never hit, whereas we know that an individual file with tens of thousands of blocks can cause problems for the NameNode. We should lower the default value, in my opinion to 10k.
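
For reference, the limit in question is controlled by the dfs.namenode.fs-limits.max-blocks-per-file property introduced in HDFS-4305. A minimal hdfs-site.xml sketch of what setting the proposed value explicitly would look like (property name and 1,048,576 default per HDFS-4305; the 10000 value is the suggestion above, not a shipped default):

    <property>
      <!-- Maximum number of blocks a single file may have; the NameNode
           refuses to allocate further blocks for a file past this count.
           The default from HDFS-4305 is 1048576 (1MM); 10000 is the
           value proposed in this issue. -->
      <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
      <value>10000</value>
    </property>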


