Posted to hdfs-dev@hadoop.apache.org by "Andrew Wang (JIRA)" <ji...@apache.org> on 2016/11/29 18:50:58 UTC

[jira] [Reopened] (HDFS-5517) Lower the default maximum number of blocks per file

     [ https://issues.apache.org/jira/browse/HDFS-5517?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Andrew Wang reopened HDFS-5517:
-------------------------------

Sorry about the fuss, reverted for now.

> Lower the default maximum number of blocks per file
> ---------------------------------------------------
>
>                 Key: HDFS-5517
>                 URL: https://issues.apache.org/jira/browse/HDFS-5517
>             Project: Hadoop HDFS
>          Issue Type: Bug
>          Components: namenode
>    Affects Versions: 2.2.0
>            Reporter: Aaron T. Myers
>            Assignee: Aaron T. Myers
>              Labels: BB2015-05-TBR
>         Attachments: HDFS-5517.002.patch, HDFS-5517.patch
>
>
> We introduced the maximum number of blocks per file in HDFS-4305, but we set the default to 1 million. In practice this limit is so high that it is never hit, whereas we know that an individual file with tens of thousands of blocks can cause problems for the NameNode. We should lower the default value, in my opinion to 10k.
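
For reference, an operator can already enforce a lower cap without waiting for a default change by overriding the limit in hdfs-site.xml. A minimal sketch, assuming the dfs.namenode.fs-limits.max-blocks-per-file key introduced by HDFS-4305, with the 10k value proposed in this issue (a NameNode restart is required for the change to take effect):

    <property>
      <name>dfs.namenode.fs-limits.max-blocks-per-file</name>
      <!-- Proposed lower cap per HDFS-5517; the shipped default is on the
           order of 1 million blocks per file -->
      <value>10000</value>
    </property>

With a setting like this in place, the NameNode should reject a block allocation that would push a single file past 10,000 blocks instead of letting the file's block list grow unbounded.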



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org