Posted to common-dev@hadoop.apache.org by "Raghu Angadi (JIRA)" <ji...@apache.org> on 2006/11/08 21:58:56 UTC

[jira] Updated: (HADOOP-637) ipc.Server has memory leak -- serious issue for namenode server

     [ http://issues.apache.org/jira/browse/HADOOP-637?page=all ]

Raghu Angadi updated HADOOP-637:
--------------------------------

    Attachment: directbuffers.patch


directbuffers.patch is attached. It fixes both issues mentioned in the second comment above.


> ipc.Server has memory leak -- serious issue for namenode server
> ---------------------------------------------------------------
>
>                 Key: HADOOP-637
>                 URL: http://issues.apache.org/jira/browse/HADOOP-637
>             Project: Hadoop
>          Issue Type: Bug
>          Components: ipc
>    Affects Versions: 0.7.1
>            Reporter: Christian Kunz
>         Assigned To: Raghu Angadi
>         Attachments: directbuffers.patch
>
>
> In my environment (running a lot of batch processes, each of which reads, creates, and deletes a lot of files in dfs) the namenode server can run out of memory rather quickly (in a few hours on a 150 node cluster). The netbeans profiler shows an increasing number of direct byte buffers not garbage collected. The documentation on java.nio.ByteBuffer indicates that their allocation might (and obviously does) happen outside the normal gc-collected heap, and that direct byte buffers should therefore be used only for large, long-lived objects.
> ipc.Server seems to use a 4KB direct byte buffer for every connection and, worse, for every RPC call. If I replace the latter with non-direct byte buffers, the memory footprint of the namenode server increases only slowly, but even then it is just a matter of time (since I started it 24 hours ago, it has leaked about 300-400MB). If the performance gained by using direct buffers is a requirement, I would suggest using a static pool.
> Although my environment abuses the namenode server in an unusual manner, I would imagine that the memory footprint of the namenode server creeps up slowly everywhere.
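
The static-pool idea suggested in the report could be sketched roughly as below. This is a hypothetical illustration, not code from the attached patch or from ipc.Server; the class name, pool cap, and method names are assumptions. The point is that each 4KB direct buffer is allocated once and recycled, rather than allocated per RPC call and left for the GC (which reclaims the small heap-side wrapper promptly but frees the off-heap memory only lazily).

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ConcurrentLinkedQueue;

// Hypothetical sketch of a static direct-buffer pool. Direct buffers
// live outside the GC-managed heap, so reusing a bounded set avoids
// the per-call allocation that leaks in practice.
public class DirectBufferPool {
    private static final int BUFFER_SIZE = 4 * 1024; // 4KB, as used in ipc.Server
    private static final int MAX_POOLED = 64;        // assumed cap on idle buffers
    private static final ConcurrentLinkedQueue<ByteBuffer> pool =
            new ConcurrentLinkedQueue<ByteBuffer>();

    // Hand out a pooled buffer, or allocate a fresh one if the pool is empty.
    public static ByteBuffer acquire() {
        ByteBuffer buf = pool.poll();
        if (buf == null) {
            buf = ByteBuffer.allocateDirect(BUFFER_SIZE);
        }
        buf.clear();
        return buf;
    }

    // Return a buffer for reuse; discard it if the pool is already full.
    public static void release(ByteBuffer buf) {
        if (pool.size() < MAX_POOLED) {
            pool.offer(buf);
        }
    }

    public static void main(String[] args) {
        ByteBuffer a = acquire();
        release(a);
        ByteBuffer b = acquire();
        // The same direct buffer is handed back out instead of re-allocated.
        System.out.println(a == b); // true
    }
}
```

A real pool in a multithreaded RPC server would also need to ensure buffers are released on every code path (including error paths), otherwise the pool degenerates back into per-call allocation.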

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators: http://issues.apache.org/jira/secure/Administrators.jspa
-
For more information on JIRA, see: http://www.atlassian.com/software/jira