Posted to issues@archiva.apache.org by "Alix Lourme (JIRA)" <ji...@codehaus.org> on 2013/11/07 17:51:53 UTC

[jira] (MRM-1785) Little memory leak detected

     [ https://jira.codehaus.org/browse/MRM-1785?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Alix Lourme updated MRM-1785:
-----------------------------

    Attachment: 20131105-091005-ProblemSuspect-2-2-CommonPathToTheAccumulationPoint.png
                20131105-091005-ProblemSuspect-2-1-description.png
                20131105-091005-ProblemSuspect-1-4-AccumulatedObjectsByClass.png
                20131105-091005-ProblemSuspect-1-3-AccumulatedObjects.png
                20131105-091005-ProblemSuspect-1-2-ShortestPaths.png
                20131105-091005-ProblemSuspect-1-1-description.png
                20131105-091005-dump.png
    
> Little memory leak detected
> ---------------------------
>
>                 Key: MRM-1785
>                 URL: https://jira.codehaus.org/browse/MRM-1785
>             Project: Archiva
>          Issue Type: Bug
>          Components: Problem Reporting
>    Affects Versions: 1.4-M4
>         Environment: Linux SLES 11 x86_64
>            Reporter: Alix Lourme
>            Priority: Critical
>         Attachments: 20131105-091005-dump.png, 20131105-091005-ProblemSuspect-1-1-description.png, 20131105-091005-ProblemSuspect-1-2-ShortestPaths.png, 20131105-091005-ProblemSuspect-1-3-AccumulatedObjects.png, 20131105-091005-ProblemSuspect-1-4-AccumulatedObjectsByClass.png, 20131105-091005-ProblemSuspect-2-1-description.png, 20131105-091005-ProblemSuspect-2-2-CommonPathToTheAccumulationPoint.png, GC-HeapUsage-AfterProblem.png, GC-HeapUsage-BeforeProblem.png, GC-HeapUsage-OneWeek.png, GC-InvocationCountOneWeek.png
>
>
> Perhaps a duplicate of MRM-1741 (but no activity for 6 months => opened a new one)
> ----
> We are using Archiva 1.4-M4 at our company, and we have found a memory usage problem in this version.
> It does not seem to be a "big problem", because there is _not directly_ an OutOfMemoryError; the Jetty server becomes very slow before that. Symptom seen from the CI platform (for example):
> {quote}
> Server returned HTTP response code: 502 for URL: http://[url]/repository/[repoName]/[groupId]/[artifactId]/[version]/maven-metadata.xml
> {quote}
> ----
> +Information about volumes+:
> Requests per day (wc -l request-XXX.log): *430000* (average)
> Company repositories (_Total File Count_ from Stats in the Repositories menu):
> * company-releases  : 574771
> * company-snapshots : 118905
> * proxied-releases  : 232626
> * proxied-snapshots : 2136
> * extra-libs : 9232
> * commercial-libs : 4587
> * *Total: ~950000*
> +Note+: The "Skip Packed Index creation" option is activated on each repository.
> ----
> +Analysis+:
> Heap usage grows over the week:
> !GC-HeapUsage-OneWeek.png!
> GC invocations increase when memory runs short:
> !GC-InvocationCountOneWeek.png!
> Before the problem, we can see the impact (difficulty reclaiming memory):
> !GC-HeapUsage-BeforeProblem.png!
> After an application restart, typical usage is less than 2 GB:
> !GC-HeapUsage-AfterProblem.png!
> => So the assumption is a small memory leak.
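> For reference, this heap trend can also be cross-checked from inside the JVM with the standard MemoryMXBean; the small probe below is only an illustration, not part of Archiva:
> {code:java}
> import java.lang.management.ManagementFactory;
> import java.lang.management.MemoryMXBean;
> import java.lang.management.MemoryUsage;
>
> // Logs heap usage after a suggested GC; run periodically to see whether
> // the "used after full GC" baseline keeps climbing, which indicates a leak.
> public class HeapUsageProbe {
>     public static void main(String[] args) {
>         MemoryMXBean memory = ManagementFactory.getMemoryMXBean();
>         memory.gc();                                   // hint, same effect as System.gc()
>         MemoryUsage heap = memory.getHeapMemoryUsage();
>         System.out.println("Heap used after GC: "
>                 + (heap.getUsed() / (1024 * 1024)) + " MB of "
>                 + (heap.getMax() / (1024 * 1024)) + " MB");
>     }
> }
> {code}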
> ----
> A solution could be to reduce cache times or to use SoftReference/WeakReference.
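> A minimal sketch of what such a SoftReference-backed cache could look like (the class and method names here are illustrative, not Archiva's actual cache API):
> {code:java}
> import java.lang.ref.SoftReference;
> import java.util.concurrent.ConcurrentHashMap;
> import java.util.concurrent.ConcurrentMap;
>
> // Hypothetical cache whose values the GC may reclaim under memory pressure,
> // so cached entries cannot pin the heap the way strong references do.
> public class SoftValueCache<K, V> {
>     private final ConcurrentMap<K, SoftReference<V>> map =
>             new ConcurrentHashMap<K, SoftReference<V>>();
>
>     public void put(K key, V value) {
>         map.put(key, new SoftReference<V>(value));
>     }
>
>     public V get(K key) {
>         SoftReference<V> ref = map.get(key);
>         V value = (ref != null) ? ref.get() : null;
>         if (value == null) {
>             map.remove(key);   // drop entries whose referent was collected
>         }
>         return value;
>     }
> }
> {code}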
> Today I have no more information about the problem; the restart was urgent, and analyzing a 4 GB heap dump is a little difficult.
> I will take heap dumps next week to give more detail about memory usage.
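> For the record, a heap dump equivalent to jmap's can also be triggered programmatically through the HotSpot diagnostic MXBean (sketch; the output path is only an example):
> {code:java}
> import com.sun.management.HotSpotDiagnosticMXBean;
> import java.lang.management.ManagementFactory;
>
> // Writes an .hprof heap dump, comparable to "jmap -dump:live,format=b,file=...".
> public class HeapDumper {
>     public static void main(String[] args) throws Exception {
>         HotSpotDiagnosticMXBean diagnostic = ManagementFactory.newPlatformMXBeanProxy(
>                 ManagementFactory.getPlatformMBeanServer(),
>                 "com.sun.management:type=HotSpotDiagnostic",
>                 HotSpotDiagnosticMXBean.class);
>         diagnostic.dumpHeap("/tmp/archiva-heap.hprof", true);  // true = live objects only
>     }
> }
> {code}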

--
This message is automatically generated by JIRA.
If you think it was sent incorrectly, please contact your JIRA administrators
For more information on JIRA, see: http://www.atlassian.com/software/jira