Posted to server-dev@james.apache.org by "Stefano Bagnara (JIRA)" <se...@james.apache.org> on 2007/02/06 11:54:06 UTC

[jira] Resolved: (JAMES-592) OOM caused by unbounded cache in InetAddress (was James leaks memory slowly)

     [ https://issues.apache.org/jira/browse/JAMES-592?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Stefano Bagnara resolved JAMES-592.
-----------------------------------

       Resolution: Fixed
    Fix Version/s:     (was: Next Minor)
                       (was: Next Major)
         Assignee: Stefano Bagnara  (was: Noel J. Bergman)

James now uses a default 300-second expiration for the positive DNS results cache.
The expiration is tunable via the system property -Dnetworkaddress.cache.ttl.
Setting it to -1 reverts to the JVM's default "cache forever" behaviour.
Setting it to 0 disables caching entirely.
Applied to trunk 4 days ago and backported to v2.3 (for 2.3.1) now.
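
For readers wondering how a property-driven TTL like this can be wired up, here
is a minimal sketch. It assumes the fix copies the -Dnetworkaddress.cache.ttl
system property into the JDK security property of the same name, which is what
InetAddress consults for its positive DNS cache; the class and method names
(DnsCacheConfig, applyDnsCacheTtl) are illustrative, not the actual JAMES-592
patch.

    import java.security.Security;

    // Illustrative sketch only; not the actual JAMES-592 code.
    public final class DnsCacheConfig {

        private static final String TTL_PROPERTY = "networkaddress.cache.ttl";

        // Default positive-cache expiration in seconds, per the fix above.
        private static final String DEFAULT_TTL_SECONDS = "300";

        public static void applyDnsCacheTtl() {
            // -D on the command line sets a *system* property, but the JDK
            // reads networkaddress.cache.ttl as a *security* property, so
            // copy the value across before any lookups get cached.
            // -1 means "cache forever" (the old JVM default), 0 disables
            // caching, and any positive value is a TTL in seconds.
            String ttl = System.getProperty(TTL_PROPERTY, DEFAULT_TTL_SECONDS);
            Security.setProperty(TTL_PROPERTY, ttl);
        }
    }

With something like this in place, starting the server with, say,
java -Dnetworkaddress.cache.ttl=60 ... bounds positive DNS entries to 60
seconds, while -Dnetworkaddress.cache.ttl=-1 restores the old unbounded
behaviour that led to this OOM.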

> OOM caused by unbounded cache in InetAddress (was James leaks memory slowly)
> ----------------------------------------------------------------------------
>
>                 Key: JAMES-592
>                 URL: https://issues.apache.org/jira/browse/JAMES-592
>             Project: James
>          Issue Type: Bug
>    Affects Versions: 2.2.0, 2.3.0
>            Reporter: Norman Maurer
>         Assigned To: Stefano Bagnara
>            Priority: Critical
>             Fix For: 2.3.1-dev
>
>
> Noel wrote on list:
> I do not know where in the application it is happening, but after running
> JAMES non-stop since Fri Aug 11 03:29:57 EDT 2006, this morning the JVM
> started to throw OutOfMemoryError exceptions, such as:
> 21/08/06 08:39:47 WARN  mailstore: Exception retrieving mail:
> java.lang.RuntimeException: Exception caught while retrieving an object,
> cause: java.lang.OutOfMemoryError, so we're deleting it.
> That did not recover, so it wasn't just due to a transient large allocation
> (which I limit, anyway); there is definitely something leaking, albeit
> slowly.  Keep in mind that the store was one of the victims, but not
> necessarily the cause.
> The JVM process size had steadily grown from a somewhat stable 114MB to
> 130MB last night.  I did not look at it this morning before restarting the
> server.
>         --- Noel

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.


---------------------------------------------------------------------
To unsubscribe, e-mail: server-dev-unsubscribe@james.apache.org
For additional commands, e-mail: server-dev-help@james.apache.org