Posted to common-issues@hadoop.apache.org by "Suresh Srinivas (JIRA)" <ji...@apache.org> on 2009/12/22 00:17:18 UTC

[jira] Updated: (HADOOP-6460) Namenode runs out of memory due to memory leak in ipc Server

     [ https://issues.apache.org/jira/browse/HADOOP-6460?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Suresh Srinivas updated HADOOP-6460:
------------------------------------

    Attachment: hadoop-6460.patch

The attached patch:
# Introduces a shrinkable byte array stream, used by the handler to serialize responses. The underlying byte array in the stream is discarded once it exceeds the maximum size (1 MB); a minimal sketch of the idea follows below.
# Adds a log message that prints the RPC call whose response size exceeded the maximum.
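For illustration only, here is a minimal sketch of the shrinkable-stream idea, assuming a subclass of java.io.ByteArrayOutputStream. The class, method and constant names below are made up and are not the ones in hadoop-6460.patch; only the 1 MB cap is taken from the description above.

{code}
import java.io.ByteArrayOutputStream;

/**
 * Illustrative sketch, not the patch code: a response buffer a handler can
 * reuse across calls, which drops its backing array once it has grown past
 * a maximum size, so that one oversized response does not pin a large
 * byte[] for the life of the handler thread.
 */
public class ShrinkableByteArrayOutputStream extends ByteArrayOutputStream {

  private static final int INITIAL_SIZE = 10 * 1024;         // assumed initial capacity
  private static final int MAX_RETAINED_SIZE = 1024 * 1024;  // 1 MB cap from the description

  public ShrinkableByteArrayOutputStream() {
    super(INITIAL_SIZE);
  }

  /**
   * Call after each response has been sent: resets the logical size for
   * reuse and, if the protected backing array grew past the cap, replaces
   * it with a small one so the large array becomes garbage-collectable.
   */
  public void discardIfOversized() {
    reset();                          // count back to 0 for the next call
    if (buf.length > MAX_RETAINED_SIZE) {
      buf = new byte[INITIAL_SIZE];   // drop the oversized backing array
    }
  }
}
{code}

The same size check is a natural place for the log mentioned in point 2, e.g. a warning that names the RPC call and its response size whenever the cap was exceeded.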

> Namenode runs out of memory due to memory leak in ipc Server
> ------------------------------------------------------------
>
>                 Key: HADOOP-6460
>                 URL: https://issues.apache.org/jira/browse/HADOOP-6460
>             Project: Hadoop Common
>          Issue Type: Bug
>    Affects Versions: 0.20.1, 0.21.0, 0.22.0
>            Reporter: Suresh Srinivas
>            Assignee: Suresh Srinivas
>            Priority: Blocker
>             Fix For: 0.20.2, 0.21.0, 0.22.0
>
>         Attachments: hadoop-6460.patch
>
>
> Namenode heap usage grows disproportionately to the number of objects it supports (files, directories and blocks). Based on heap dump analysis, this is due to large growth in ByteArrayOutputStream allocated in o.a.h.ipc.Server.Handler.run().
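
The retention behavior behind that growth can be reproduced outside Hadoop with a small, self-contained demo (plain JDK code, not the actual o.a.h.ipc.Server.Handler): reset() clears the logical size of a ByteArrayOutputStream but keeps the grown backing array, so a handler thread that reuses one stream retains a buffer sized to its largest response ever.

{code}
import java.io.ByteArrayOutputStream;

// Self-contained demo (plain JDK, not Hadoop code) of the retention:
// reset() zeroes the count of a ByteArrayOutputStream but keeps the
// grown backing array, so one oversized write inflates the buffer for
// the lifetime of the stream (and of the handler thread that owns it).
public class BufferGrowthDemo {

  // Subclass only to expose the length of the protected backing array.
  static class InspectableStream extends ByteArrayOutputStream {
    InspectableStream(int size) { super(size); }
    int backingArrayLength() { return buf.length; }
  }

  public static void main(String[] args) throws Exception {
    InspectableStream out = new InspectableStream(10 * 1024);
    out.write(new byte[8 * 1024 * 1024]);  // one oversized "response"
    out.reset();                           // logical size is 0 again ...
    // ... but the backing array is still at least 8 MB:
    System.out.println("size() = " + out.size()
        + ", retained buffer = " + out.backingArrayLength() + " bytes");
  }
}
{code}

This is exactly what the shrinkable stream sketched above avoids by discarding the backing array once it exceeds the 1 MB cap.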

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.