Posted to issues@cxf.apache.org by "Andriy Redko (Jira)" <ji...@apache.org> on 2020/09/13 19:11:00 UTC

[jira] [Commented] (CXF-7710) ClientImpl is memory-leak prone

    [ https://issues.apache.org/jira/browse/CXF-7710?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=17195107#comment-17195107 ] 

Andriy Redko commented on CXF-7710:
-----------------------------------

[~dumi_p] Sorry, it took me a while to get back to this issue. As far as I understand, the issue is not solved for the general case; however, there are hooks in place to manually clean up the request/response contexts (relevant to your case of pooled objects):

```
obj.getResponseContext().clear();
obj.getRequestContext().clear();
```
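
For a JAX-WS proxy, the Client carrying these contexts can be obtained with ClientProxy.getClient(...). Here is a minimal sketch of clearing both contexts after each call; HelloService, port, and the pool are hypothetical stand-ins for your pooled client objects:

```
import org.apache.cxf.endpoint.Client;
import org.apache.cxf.frontend.ClientProxy;

// 'pool' and HelloService are hypothetical; any pooled JAX-WS proxy works.
HelloService port = pool.borrow();
try {
    port.sayHello("world");
} finally {
    // Clear the per-thread request/response contexts so the entry for
    // this pooled thread does not keep stale state alive.
    Client client = ClientProxy.getClient(port);
    client.getRequestContext().clear();
    client.getResponseContext().clear();
    pool.release(port);
}
```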

> ClientImpl is memory-leak prone
> -------------------------------
>
>                 Key: CXF-7710
>                 URL: https://issues.apache.org/jira/browse/CXF-7710
>             Project: CXF
>          Issue Type: Bug
>            Reporter: Facundo Velazquez
>            Priority: Critical
>         Attachments: heapdump_Leak_Suspects.zip, image-2020-07-02-11-06-23-711.png, leak capture.png
>
>
> In the Mule ESB we are seeing a memory leak caused by non-released objects in org.apache.cxf.endpoint.ClientImpl.
> After some research, I could see that the requestContext and the responseContext have a thread-local implementation. As our code calls the client from different threads, lots of entries will be put in the requestContext map under high load. Take into account that we clean each requestContext value (which is an EchoContext object), but an entry per thread is kept alive in the requestContext map (with an empty EchoContext map). You'll be able to see in the attached files that this is causing a memory leak.
> In my tests trying to reproduce the issue, I even obtained a fatal OutOfMemoryError.
> Looking at the code, I've seen that the request context is a WeakHashMap; however, the keys are threads. I suppose the purpose of this implementation is that entries can be removed by the garbage collector when necessary. However, if the threads are pooled (which is our case), strong references will be pointing to them, and they will never be collected.
> I suppose an easy solution could be to use the thread names as keys instead of the thread objects directly. If this approach is taken, consider using the String constructor to wrap the literal name to ensure its garbage collection (since interned string literals are another well-known issue: https://stackoverflow.com/questions/14494875/weakreference-string-didnt-garbage-collected-how).
> Another solution, which entails more changes, would be to use a Guava Cache with an expiration time.
> If the first approach is implemented, could you provide a way to clean the requestContext programmatically? That way, we wouldn't have to depend on the garbage collection process.
> Thank you very much.
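
As the quoted report explains, a WeakHashMap keyed by Thread cannot release entries while a pool keeps its worker threads strongly reachable. A minimal standalone sketch (not CXF code) that reproduces the mechanism:

```
import java.util.Collections;
import java.util.Map;
import java.util.WeakHashMap;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

// Demonstrates why pooled threads defeat a Thread-keyed WeakHashMap:
// the pool's strong references keep the keys (and thus the entries) alive.
public class PooledThreadLeakDemo {
    static final Map<Thread, Object> CONTEXTS =
            Collections.synchronizedMap(new WeakHashMap<>());

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        for (int i = 0; i < 1_000; i++) {
            // Each task leaves a per-thread entry behind, mimicking the
            // per-thread request context in ClientImpl.
            pool.submit(() -> CONTEXTS.put(Thread.currentThread(), new Object()));
        }
        TimeUnit.SECONDS.sleep(1);      // let the tasks finish
        System.gc();
        TimeUnit.MILLISECONDS.sleep(100);
        // The 4 pooled worker threads are still strongly reachable, so
        // their entries survive GC; expect "entries alive: 4".
        System.out.println("entries alive: " + CONTEXTS.size());
        pool.shutdown();
    }
}
```

The entries would only disappear once the pool shuts down and its worker threads become unreachable, which for a long-lived application pool effectively never happens.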
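The Guava-based alternative mentioned in the report could look roughly like the following; CacheBuilder and expireAfterAccess are standard Guava APIs, but the class and the thread-name keying are only an illustration, not CXF's actual implementation:

```
import com.google.common.cache.Cache;
import com.google.common.cache.CacheBuilder;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;

// Sketch of the time-based-expiry alternative: per-thread request
// contexts in a Guava Cache rather than a WeakHashMap keyed by Thread.
class ExpiringContexts {
    private final Cache<String, Map<String, Object>> contexts =
            CacheBuilder.newBuilder()
                    .expireAfterAccess(5, TimeUnit.MINUTES) // drop idle entries
                    .build();

    Map<String, Object> forCurrentThread() throws ExecutionException {
        // Key by thread name, per the report's first suggestion; entries
        // for idle or retired threads expire instead of lingering forever.
        return contexts.get(Thread.currentThread().getName(), ConcurrentHashMap::new);
    }
}
```

Note that keying by thread name assumes names are unique within the pool, which is typically true for the default Executors thread factories but is not guaranteed in general.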



--
This message was sent by Atlassian Jira
(v8.3.4#803005)