Posted to hdfs-dev@hadoop.apache.org by "Kihwal Lee (JIRA)" <ji...@apache.org> on 2014/03/11 17:37:42 UTC

[jira] [Created] (HDFS-6088) Add configurable maximum block count for datanode

Kihwal Lee created HDFS-6088:
--------------------------------

             Summary: Add configurable maximum block count for datanode
                 Key: HDFS-6088
                 URL: https://issues.apache.org/jira/browse/HDFS-6088
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: Kihwal Lee


Currently, datanode resources are protected by the free space check and the balancer. But a datanode can still run out of memory simply by storing too many blocks: each replica it holds carries in-memory metadata, so heap usage grows with the block count regardless of the data volume. If blocks are small, the datanode will appear to have plenty of disk space to accept more of them.
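As a rough illustration (the per-replica heap cost is an assumption that varies by release and JVM): at a few hundred bytes of in-memory metadata per replica, ten million small blocks would tie up gigabytes of datanode heap while consuming comparatively little disk.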

I propose adding a configurable maximum block count to the datanode. Since datanodes can have different heap configurations, it makes sense to enforce this at the datanode level rather than have the namenode enforce it.
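A minimal sketch of what such a check might look like (the property name dfs.datanode.max.block.count, the class, and its methods are hypothetical illustrations, not existing Hadoop APIs):

    import java.io.IOException;
    import java.util.concurrent.atomic.AtomicLong;
    import org.apache.hadoop.conf.Configuration;

    /**
     * Sketch of a datanode-level cap on the number of stored replicas.
     * The configuration key and this class are illustrative only.
     */
    public class BlockCountLimiter {
      // Hypothetical property; not an existing Hadoop configuration key.
      public static final String MAX_BLOCKS_KEY = "dfs.datanode.max.block.count";
      public static final long MAX_BLOCKS_DEFAULT = 0;  // 0 = unlimited

      private final long maxBlocks;
      private final AtomicLong blockCount = new AtomicLong();

      public BlockCountLimiter(Configuration conf) {
        this.maxBlocks = conf.getLong(MAX_BLOCKS_KEY, MAX_BLOCKS_DEFAULT);
      }

      /** Called before accepting a new replica; throws if the cap is reached. */
      public void checkAndIncrement() throws IOException {
        long current = blockCount.incrementAndGet();
        if (maxBlocks > 0 && current > maxBlocks) {
          blockCount.decrementAndGet();
          throw new IOException("Datanode block count " + current
              + " would exceed configured maximum " + maxBlocks);
        }
      }

      /** Called when a replica is deleted or invalidated. */
      public void decrement() {
        blockCount.decrementAndGet();
      }
    }

When the cap is reached, the datanode could reject new replicas with an IOException so the writer fails over to another node, much as it does when a volume is out of space; replicas already stored would be unaffected.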



--
This message was sent by Atlassian JIRA
(v6.2#6252)