Posted to common-dev@hadoop.apache.org by "Wei-Chiu Chuang (JIRA)" <ji...@apache.org> on 2019/07/23 14:35:00 UTC

[jira] [Created] (HADOOP-16452) Increase ipc.maximum.data.length default from 64MB to 128MB

Wei-Chiu Chuang created HADOOP-16452:
----------------------------------------

             Summary: Increase ipc.maximum.data.length default from 64MB to 128MB
                 Key: HADOOP-16452
                 URL: https://issues.apache.org/jira/browse/HADOOP-16452
             Project: Hadoop Common
          Issue Type: Improvement
          Components: ipc
    Affects Versions: 2.6.0
            Reporter: Wei-Chiu Chuang


Reason for bumping the default:
Denser DataNodes are increasingly common; it is no longer unusual to find a DataNode holding more than 7 million blocks.

With that many blocks, a full block report message can exceed the 64 MB limit defined by ipc.maximum.data.length. Oversized block reports are rejected by the NameNode, resulting in missing blocks in HDFS. We had to double this configuration value to work around the issue.

We are seeing an increasing number of these cases. I think it's time to revisit some of these default values as hardware density evolves.
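For anyone hitting this before a new default ships, the workaround described above can be applied explicitly in core-site.xml. A minimal sketch, with the proposed 128 MB limit expressed in bytes (the NameNode must be restarted for the change to take effect):

```xml
<!-- core-site.xml: raise the maximum IPC message size from the
     64 MB default (67108864 bytes) to 128 MB (134217728 bytes)
     so that large block reports are accepted. -->
<property>
  <name>ipc.maximum.data.length</name>
  <value>134217728</value>
</property>
```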



--
This message was sent by Atlassian JIRA
(v7.6.14#76016)
