Posted to hdfs-dev@hadoop.apache.org by "ConfX (Jira)" <ji...@apache.org> on 2023/07/18 15:49:00 UTC

[jira] [Created] (HDFS-17096) Out of memory exception when ipc.server.read.threadpool.size is mistakenly set to a large value

ConfX created HDFS-17096:
----------------------------

             Summary: Out of memory exception when ipc.server.read.threadpool.size is mistakenly set to a large value
                 Key: HDFS-17096
                 URL: https://issues.apache.org/jira/browse/HDFS-17096
             Project: Hadoop HDFS
          Issue Type: Bug
            Reporter: ConfX
         Attachments: reproduce.sh

h2. What happened:

When {{ipc.server.read.threadpool.size}} is set to a large value, the IPC {{Server}} throws an out-of-memory error because the value is used without any validation.
*There is no checking or error-handling logic for this parameter at all.*
h2. Buggy code:

In {{org/apache/hadoop/ipc/Server.java}}
{noformat}
  protected Server(...) {
      // readThreads gets its value from ipc.server.read.threadpool.size
      this.readThreads = conf.getInt(
          CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_KEY,
          CommonConfigurationKeys.IPC_SERVER_RPC_READ_THREADS_DEFAULT);
  }

  Listener(int port) throws IOException {
      ...
      readers = new Reader[readThreads];  // <<--- OOM happens here when readThreads is too large
      for (int i = 0; i < readThreads; i++) {
          ...
      }
  }
{noformat}
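For scale, here is a back-of-envelope sketch (not from the report) of what the {{new Reader[readThreads]}} allocation alone costs: the array needs one object reference per slot, so with the value used in this report (1535931645) it is several GiB before a single {{Reader}} is even constructed. The per-reference sizes below are assumptions about typical JVM layouts.

```java
// Back-of-envelope footprint of `new Reader[readThreads]` itself, before any
// Reader objects are created. Assumed sizes: 4 bytes per reference with
// compressed oops, 8 bytes without.
public class ArrayFootprint {
    public static void main(String[] args) {
        long n = 1_535_931_645L;  // value from this report
        double gibCompressed = n * 4 / (1024.0 * 1024 * 1024);
        double gibPlain = n * 8 / (1024.0 * 1024 * 1024);
        System.out.printf("references alone: %.1f GiB (compressed oops) / %.1f GiB (plain)%n",
                gibCompressed, gibPlain);
    }
}
```

That is roughly 5.7 GiB or 11.4 GiB of heap for the empty array, which already exceeds a typical NameNode test heap, matching the {{OutOfMemoryError: Java heap space}} below.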
h2. StackTrace:
{noformat}
java.lang.OutOfMemoryError: Java heap space
        at org.apache.hadoop.ipc.Server$Listener.<init>(Server.java:1240)
        at org.apache.hadoop.ipc.Server.<init>(Server.java:3127)
        at org.apache.hadoop.ipc.RPC$Server.<init>(RPC.java:1062)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2$Server.<init>(ProtobufRpcEngine2.java:468)
        at org.apache.hadoop.ipc.ProtobufRpcEngine2.getServer(ProtobufRpcEngine2.java:371)
        at org.apache.hadoop.ipc.RPC$Builder.build(RPC.java:853)
        at org.apache.hadoop.hdfs.server.namenode.NameNodeRpcServer.<init>(NameNodeRpcServer.java:466)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createRpcServer(NameNode.java:865)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:771)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:1020)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:995)
        at org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1769)
{noformat}
h2. Reproduce:

 # Set {{ipc.server.read.threadpool.size}} to a large value, e.g., 1535931645.
 # Run a simple test that exercises this parameter, e.g. {{org.apache.hadoop.hdfs.server.namenode.metrics.TestNameNodeMetrics#testExcessBlocks}}.

For an easy reproduction, run the reproduce.sh in the attachment.
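A minimal sketch of the kind of bounds check the report is asking for. This is not Hadoop's actual fix; the cap, class name, and helper are hypothetical, chosen only to illustrate rejecting an absurd value before the array allocation.

```java
// Hypothetical validation sketch (NOT the actual Hadoop code): reject an
// out-of-range reader count before `new Reader[readThreads]` is attempted.
public class ReadThreadsCheck {
    // Assumed upper bound; Hadoop defines no such limit for this key today.
    static final int MAX_READ_THREADS = 1024;

    static int validateReadThreads(int configured) {
        if (configured <= 0 || configured > MAX_READ_THREADS) {
            throw new IllegalArgumentException(
                "ipc.server.read.threadpool.size must be in [1, "
                + MAX_READ_THREADS + "], got " + configured);
        }
        return configured;
    }

    public static void main(String[] args) {
        System.out.println(validateReadThreads(128));   // accepted
        try {
            validateReadThreads(1535931645);            // value from this report
        } catch (IllegalArgumentException e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

With a check like this, the misconfiguration fails fast with a clear message naming the offending key, instead of surfacing later as an {{OutOfMemoryError}} deep inside {{Listener}} construction.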




--
This message was sent by Atlassian Jira
(v8.20.10#820010)

---------------------------------------------------------------------
To unsubscribe, e-mail: hdfs-dev-unsubscribe@hadoop.apache.org
For additional commands, e-mail: hdfs-dev-help@hadoop.apache.org