Posted to dev@jackrabbit.apache.org by Vikas Bhatia <vi...@gmail.com> on 2007/10/23 15:58:15 UTC

memory leak issues in jackrabbit

Hi

Environment: Jackrabbit 1.3.1, Tomcat, MySQL (data plus blobs)
Use case: Jackrabbit is used as a BIG dumping ground for data that is
continuously fed into it in large volumes

We have been using Jackrabbit and have noticed a memory leak that
appears to be in the cache management.

We filed a bug in JIRA at
https://issues.apache.org/jira/browse/JCR-1037, and were told that
unless we can reproduce the bug in a very basic setup, nothing can be
done about it. That is exactly where the problem lies: we are not
using basic node types, we have extended node types, most of the data
is versioned, and there is a background auditing thread based on the
observation manager that is always running. Producing a simple,
reproducible test case is therefore hard. We tried providing a video
to demonstrate the problem, but that did not help.
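
(For context, the auditing thread uses the standard JCR observation API
and keeps a dedicated session open for its whole lifetime. The sketch
below only illustrates the registration; the repository reference,
credentials, path and event mask are placeholders, not our real code or
node types.)

    import javax.jcr.*;
    import javax.jcr.observation.*;

    // a session that stays open for as long as the listener is registered
    Session auditSession = repository.login(
            new SimpleCredentials("audit", "secret".toCharArray()));
    ObservationManager om =
            auditSession.getWorkspace().getObservationManager();
    om.addEventListener(new EventListener() {
        public void onEvent(EventIterator events) {
            while (events.hasNext()) {
                try {
                    // write an audit record for every change
                    System.out.println("changed: " + events.nextEvent().getPath());
                } catch (RepositoryException e) {
                    // audit failures are only logged, never rethrown
                }
            }
        }
    }, Event.NODE_ADDED | Event.NODE_REMOVED | Event.PROPERTY_CHANGED,
       "/", true, null, null, false);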

This remains an outstanding problem for us and we can consistently
reproduce it with our code.

So the questions are:
1. Has anyone else experienced these problems? I see outstanding
issues in JIRA regarding cache and memory leaks that have either been
closed as irreproducible or gone inactive. The fact is that this bug
has not been acknowledged, and we wonder why we seem to be the only
people hitting it; I don't think our use case is any different from
what Jackrabbit was designed to handle.

2. Is there a way to just turn caching off? If not, I see merit in
being able to do so.

3. If we were to start creating a fix for this issue, where should we
start? We are willing to work with the dev team to create a fix that
could help the community.

I would appreciate it if we could get a conversation started, because
this is a show-stopper bug for us.

Thanks.

Vikas.

Re: memory leak issues in jackrabbit

Posted by Marcel Reutegger <ma...@gmx.net>.
If you are able to easily reproduce the memory issue, you might want to add 
the option -XX:+HeapDumpOnOutOfMemoryError to your JVM. This will create a heap 
dump when the JVM runs into an OutOfMemoryError (works with the Sun JVM >= 1.4.2_12, 
 >= 1.5.0_7, or 1.6).
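
For example, with Jackrabbit deployed in Tomcat, you could append the flag
to whatever variable your startup scripts use for JVM options (CATALINA_OPTS
here is only an assumption about your setup):

    CATALINA_OPTS="$CATALINA_OPTS -XX:+HeapDumpOnOutOfMemoryError"

The resulting .hprof file can then be loaded into a heap analyzer to see
which objects are holding on to the memory.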

regards
  marcel


Re: memory leak issues in jackrabbit

Posted by Thomas Mueller <th...@gmail.com>.
Hi,

I think the problem is related to many inactive sessions. I can't say
whether the sessions are not closed, or whether they are closed but the
objects are not released. In any case, it is currently hard to tell what
actually happened without a test case (as you said) or a good
understanding of what the application does. The video did help, but part
of the source code was not shown.
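
(To illustrate the first case, sessions that are opened but never closed:
the usual pattern is to log out every session in a finally block. A minimal
sketch, assuming the Repository reference comes from wherever your
application obtains it:)

    import javax.jcr.*;

    Session session = repository.login(
            new SimpleCredentials("user", "password".toCharArray()));
    try {
        session.getRootNode().addNode("imported-item");
        session.save();
    } finally {
        // without this, the session and its item caches stay referenced
        session.logout();
    }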

I understand that posting the application is not an option for you, and
I am not suggesting that. It may also be hard to create a simple test
case (although that would really help).

There may be another solution: could you add the jcrlog wrapper between
your application and Jackrabbit? This wrapper logs all JCR API calls and
would therefore help show where sessions are opened and closed, and
where objects are created and released.

The jcrlog wrapper is available at
http://svn.apache.org/repos/asf/jackrabbit/trunk/contrib/jcrlog
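
(Conceptually it is a decorator around the JCR interfaces. The fragment
below is only a hand-written illustration of that idea, not the actual
jcrlog API; see the contrib directory above for the real thing.)

    import javax.jcr.Session;

    // illustration: wrap a Session so that logout() calls show up in a log,
    // which lets you match them against the corresponding login() calls
    class LoggingSession /* implements Session, delegating every method */ {
        private final Session delegate;
        LoggingSession(Session delegate) { this.delegate = delegate; }
        public void logout() {
            System.out.println("logout: " + delegate.getUserID());
            delegate.logout();
        }
        // ... the remaining Session methods delegate in the same way
    }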

I hope this problem can be resolved. Please tell me if you need more help.

Thanks,
Thomas



