Posted to users@jackrabbit.apache.org by François Cassistat <f...@maya-systems.com> on 2010/07/21 17:54:41 UTC

java.lang.OutOfMemoryError: Java heap space on Node.remove()

Hi.

I wrote a procedure that removes a node, and it gets an OutOfMemoryError with -Xmx128m:
Exception in thread "main" java.lang.OutOfMemoryError: Java heap space
	at org.apache.commons.collections.map.AbstractHashedMap.ensureCapacity(AbstractHashedMap.java:611)
	at org.apache.commons.collections.map.AbstractHashedMap.checkCapacity(AbstractHashedMap.java:591)
	at org.apache.commons.collections.map.AbstractHashedMap.addMapping(AbstractHashedMap.java:496)
	at org.apache.commons.collections.map.AbstractHashedMap.put(AbstractHashedMap.java:284)
	at org.apache.commons.collections.map.AbstractReferenceMap.put(AbstractReferenceMap.java:256)
	at org.apache.jackrabbit.core.state.ItemStateMap.put(ItemStateMap.java:74)
	at org.apache.jackrabbit.core.state.ItemStateReferenceCache.cache(ItemStateReferenceCache.java:122)
	at org.apache.jackrabbit.core.state.LocalItemStateManager.getPropertyState(LocalItemStateManager.java:136)
	at org.apache.jackrabbit.core.state.LocalItemStateManager.getItemState(LocalItemStateManager.java:174)
	at org.apache.jackrabbit.core.state.XAItemStateManager.getItemState(XAItemStateManager.java:260)
	at org.apache.jackrabbit.core.state.SessionItemStateManager.getItemState(SessionItemStateManager.java:200)
	at org.apache.jackrabbit.core.ItemManager.getItemData(ItemManager.java:390)
	at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:336)
	at org.apache.jackrabbit.core.ItemManager.getItem(ItemManager.java:615)
	at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:650)
	at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:636)
	at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:636)
	at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:636)
	at org.apache.jackrabbit.core.NodeImpl.onRemove(NodeImpl.java:636)
	at org.apache.jackrabbit.core.NodeImpl.removeChildNode(NodeImpl.java:586)
	at org.apache.jackrabbit.core.ItemImpl.internalRemove(ItemImpl.java:887)
	at org.apache.jackrabbit.core.ItemImpl.remove(ItemImpl.java:959)
	at com.myapplication.whatever...

This node may be huge (for example, all the data stored for a user). In my case, the node contains a hierarchy of 40,000+ nodes.

Also note that since my application uses concurrency and I use a TransientRepository, there is always another session open.

So I think it has to do with the cache/replica system that keeps changes in memory until session.save().

All I can think of is a recursive algorithm that saves after each node removal. But I would prefer to keep my data consistent for other users at all times. Any ideas?
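The recursive idea would look roughly like the following generic sketch. This is hypothetical code, not Jackrabbit API: the `children`, `remove`, and `save` callbacks stand in for JCR's `Node.getNodes()`, `Item.remove()`, and `Session.save()`, and the batch size is illustrative.

```java
import java.util.*;
import java.util.function.*;

// Generic sketch of "remove recursively, saving every few removals".
// Saving periodically bounds the number of transient item states the
// session has to hold in memory at once.
class BatchDeleter {
    static <N> void deleteSubtree(N root,
                                  Function<N, List<N>> children,
                                  Consumer<N> remove,
                                  Runnable save,
                                  int batchSize) {
        int[] pending = {0}; // removals since the last save
        deleteRec(root, children, remove, save, batchSize, pending);
        save.run(); // flush the final partial batch
    }

    private static <N> void deleteRec(N node,
                                      Function<N, List<N>> children,
                                      Consumer<N> remove,
                                      Runnable save,
                                      int batchSize,
                                      int[] pending) {
        // Delete children first, so remove() is only ever called on a leaf.
        for (N child : new ArrayList<>(children.apply(node))) {
            deleteRec(child, children, remove, save, batchSize, pending);
        }
        remove.accept(node);
        if (++pending[0] >= batchSize) {
            save.run();
            pending[0] = 0;
        }
    }
}
```

The catch is exactly the consistency concern above: each intermediate save makes the partially deleted subtree visible to the other open sessions.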


Frank


Re: java.lang.OutOfMemoryError: Java heap space on Node.remove()

Posted by François Cassistat <f...@maya-systems.com>.
Great, exactly what I need! Thanks!


F


On 2010-07-21, at 12:00 PM, Michael Dürig wrote:

> 
> This is a known limitation. As a workaround you could move your node to a temporary location (which is fast), and then delete it recursively there (possibly in the background).
> 
> Michael
> 
> On 21.7.10 17:54, François Cassistat wrote:
>> [...]


Re: java.lang.OutOfMemoryError: Java heap space on Node.remove()

Posted by Michael Dürig <mi...@day.com>.
This is a known limitation. As a workaround you could move your node to
a temporary location (which is fast), and then delete it recursively
there (possibly in the background).
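For illustration, here is a toy sketch of the idea. The map-based tree and the names `moveToTrash`/`purge` are hypothetical stand-ins for the repository; the real code would use `javax.jcr.Session.move()` plus `Session.save()` for the first step, and delete the subtree later.

```java
import java.util.*;

// Toy stand-in for the repository: a map from node path to child paths.
class TrashDemo {
    final Map<String, List<String>> children = new HashMap<>();

    void addChild(String parent, String child) {
        children.computeIfAbsent(parent, k -> new ArrayList<>()).add(child);
    }

    // The "move": detach the subtree root from its parent and re-attach it
    // under /trash. Only two child lists are touched, never the subtree
    // itself, which is why the move stays cheap however large the subtree is.
    void moveToTrash(String parent, String node) {
        children.get(parent).remove(node);
        addChild("/trash", node);
    }

    // Later, e.g. in a background thread: delete the subtree bottom-up.
    // Returns the number of nodes removed.
    int purge(String node) {
        int removed = 0;
        for (String child : children.getOrDefault(node, List.of())) {
            removed += purge(child);
        }
        children.remove(node);
        return removed + 1;
    }
}
```

After the move is saved, other sessions no longer see the node at its original path, so the slow deletion can proceed in the background without exposing a half-deleted hierarchy.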

Michael

On 21.7.10 17:54, François Cassistat wrote:
> [...]