Posted to dev@jackrabbit.apache.org by Shane Preater <sh...@googlemail.com> on 2007/02/01 08:59:26 UTC

Session handling problem

Hi all,
I am getting an intermittent problem with Jackrabbit sessions.

Basically everything seems fine but every now and again when trying to
acquire a session the system seems to lock up.

Are there any known issues with either:
1) Sharing sessions using commons-pooling?

2) Doing workspace-scoped operations (clone etc.) while other people are
performing session-scoped operations like saving nodes? These will probably
not both be affecting the same node (I cannot confirm this, but based on
the workflow our users perform it should not be the case).

Any help would be great.

Thanks,
Shane.

Re: Session handling problem

Posted by Shane Preater <sh...@googlemail.com>.
It would appear that the call to login is blocking, but we have only
experienced this in our live environment, which I don't have direct access
to, so I am relying on second-hand information. For the same reason,
getting a thread dump is more difficult.

Leave this with me and I will try and put some more logging into the system.
Also I will see if the services team can grab me a thread dump.

Although I wonder if the limitation you linked to could be my problem. I
will do some more investigation and update you once I have a bit more
information.

Thanks for taking the time to give me some more ideas,

Shane.

On 01/02/07, Stefan Guggisberg <st...@gmail.com> wrote:
>
> hi shane
>
> On 2/1/07, Shane Preater <sh...@googlemail.com> wrote:
> > Hi all,
> > I am getting an intermittent problem with Jackrabbit sessions.
> >
> > Basically everything seems fine but every now and again when trying to
> > acquire a session the system seems to lock up.
>
> what do you mean by 'lock up'? does the Repository.login call block?
> a deadlock? anyway, a thread dump would help in analyzing the issue...
>
> >
> > Are there any known issues with either:
> > 1) Sharing sessions using commons-pooling?
> >
> > 2) Doing workspace-scoped operations (clone etc.) while other people are
> > performing session-scoped operations like saving nodes? These will
> > probably not both be affecting the same node (I cannot confirm this, but
> > based on the workflow our users perform it should not be the case).
> >
>
> there's a known limitation/issue:
> calls to the persistence layer are effectively serialized in order to
> ensure data consistency, e.g. large workspace-scoped operations might
> affect performance of other concurrent workspace- or session-scoped save
> operations.
>
> for more details see http://issues.apache.org/jira/browse/JCR-314
>
> cheers
> stefan
>
> > Any help would be great.
> >
> > Thanks,
> > Shane.
> >
> >
>

Re: Session handling problem

Posted by Stefan Guggisberg <st...@gmail.com>.
hi shane

On 2/1/07, Shane Preater <sh...@googlemail.com> wrote:
> Hi all,
> I am getting an intermittent problem with Jackrabbit sessions.
>
> Basically everything seems fine but every now and again when trying to
> acquire a session the system seems to lock up.

what do you mean by 'lock up'? does the Repository.login call block?
a deadlock? anyway, a thread dump would help in analyzing the issue...
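for the record, capturing a thread dump from a running JVM goes roughly
like this (a sketch; jstack and jps ship with the Sun JDK, and 12345 is a
made-up pid):

```shell
# List running JVM process ids and their main classes
jps -l

# Dump all thread stacks for the chosen pid into a file
jstack 12345 > threads.txt

# Alternative that works on older JVMs: send SIGQUIT; the dump is
# written to the process's stdout/console log
kill -QUIT 12345
```

a login that never returns should then show up as a thread parked or
blocked somewhere under Repository.login in the dump.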

>
> Are there any known issues with either:
> 1) Sharing sessions using commons-pooling?
>
> 2) Doing workspace-scoped operations (clone etc.) while other people are
> performing session-scoped operations like saving nodes? These will probably
> not both be affecting the same node (I cannot confirm this, but based on
> the workflow our users perform it should not be the case).
>

there's a known limitation/issue:
calls to the persistence layer are effectively serialized in order to
ensure data consistency, e.g. large workspace-scoped operations might
affect performance of other concurrent workspace- or session-scoped save
operations.

for more details see http://issues.apache.org/jira/browse/JCR-314
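to illustrate what "effectively serialized" means (this is a minimal
stand-in using a plain lock, not jackrabbit code): with a single lock in
front of the store, every writer queues behind whoever currently holds it,
so one long-running workspace operation delays all concurrent saves.

```java
import java.util.concurrent.locks.ReentrantLock;

// Minimal stand-in: a single lock guarding the persistence layer means
// concurrent "save" calls are executed one at a time, never in parallel.
public class SerializedStore {
    private final ReentrantLock storeLock = new ReentrantLock();
    private int writes = 0;

    public void save() {
        storeLock.lock();      // every writer queues here
        try {
            writes++;          // stand-in for the actual persistence work
        } finally {
            storeLock.unlock();
        }
    }

    public int getWrites() {
        return writes;
    }

    public static void main(String[] args) throws InterruptedException {
        final SerializedStore store = new SerializedStore();
        Thread[] threads = new Thread[4];
        for (int i = 0; i < threads.length; i++) {
            threads[i] = new Thread(() -> {
                for (int j = 0; j < 1000; j++) {
                    store.save();
                }
            });
            threads[i].start();
        }
        for (Thread t : threads) {
            t.join();
        }
        // no increments are lost because every write held the lock
        System.out.println(store.getWrites()); // prints 4000
    }
}
```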

cheers
stefan

> Any help would be great.
>
> Thanks,
> Shane.
>
>