Posted to dev@jackrabbit.apache.org by smjain <ja...@gmail.com> on 2010/07/28 11:25:08 UTC

Concurrent Write issues with Jackrabbit

Hi All,
We see a serious write lock contention issue when we are trying to write to
Jackrabbit through different JCR sessions.

We see a huge number of threads waiting on a write lock when a
session.save() is done.

All threads are writing to different parts of the repository, but as per the
SharedItemStateManager design there is just one write lock for the entire
workspace.

We tried fine-grained locks, but that doesn't help either, as it only
optimizes reads and we still have a single write lock.
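
[Editorial note: for reference, a minimal sketch of the pattern that produces
the contention (node names and credentials are hypothetical; it assumes a
local jackrabbit-core TransientRepository). Every thread uses its own session
and writes to its own subtree, yet each session.save() still has to acquire
the one workspace-wide write lock:]

import javax.jcr.Node;
import javax.jcr.Repository;
import javax.jcr.Session;
import javax.jcr.SimpleCredentials;
import org.apache.jackrabbit.core.TransientRepository;

public class ConcurrentWriteDemo {

    public static void main(String[] args) throws Exception {
        final Repository repository = new TransientRepository();
        Thread[] writers = new Thread[100];
        for (int i = 0; i < writers.length; i++) {
            final int id = i;
            writers[i] = new Thread(new Runnable() {
                public void run() {
                    try {
                        // each thread gets its own session and its own subtree
                        Session session = repository.login(
                                new SimpleCredentials("admin", "admin".toCharArray()));
                        try {
                            Node branch = session.getRootNode().addNode("branch-" + id);
                            branch.setProperty("payload", "value-" + id);
                            // save() still funnels through the single
                            // workspace write lock in SharedItemStateManager
                            session.save();
                        } finally {
                            session.logout();
                        }
                    } catch (Exception e) {
                        e.printStackTrace();
                    }
                }
            });
            writers[i].start();
        }
        for (Thread t : writers) {
            t.join();
        }
    }
}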

Thoughts/Suggestions?

Regards
Shashank

Re: Concurrent Write issues with Jackrabbit

Posted by shashank Jain <ja...@gmail.com>.
Sure..
But with 100 concurrent threads, I guess we should not need big
hardware anyway.

Also, if we cluster the repository, I guess we still deal with a global
lock across nodes..

So I am not sure how much we gain.

Thanks
Shashank

On Wed, Jul 28, 2010 at 9:06 PM, Thomas Müller <th...@day.com> wrote:
> Hi,
>
>> What I gather here is that writes will be
>> serialized due to a single write lock.
>
> For scalability, you also need scalable hardware. Just using multiple
> threads will not improve performance if all the data is then stored on
> the same disk.
>
> Regards,
> Thomas
>

Re: Concurrent Write issues with Jackrabbit

Posted by Thomas Müller <th...@day.com>.
Hi,

> What I gather here is that writes will be
> serialized due to a single write lock.

For scalability, you also need scalable hardware. Just using multiple
threads will not improve performance if all the data is then stored on
the same disk.

Regards,
Thomas

Re: Concurrent Write issues with Jackrabbit

Posted by shashank Jain <ja...@gmail.com>.
Sure..

We are already going through them. I just wanted to check whether there
is some kind of configuration that allows parallel writes in Jackrabbit
when we write to different parts of the repository. What I gather here is
that writes will be serialized due to a single write lock.

Thanks a lot
Shashank


On Wed, Jul 28, 2010 at 8:37 PM, Thomas Müller <th...@day.com> wrote:
> Hi,
>
> Do you use Day CRX / CQ? If yes, I suggest using the Day support.
>
> Regards,
> Thomas
>

Re: Concurrent Write issues with Jackrabbit

Posted by Thomas Müller <th...@day.com>.
Hi,

Do you use Day CRX / CQ? If yes, I suggest using the Day support.

Regards,
Thomas

Re: Concurrent Write issues with Jackrabbit

Posted by shashank Jain <ja...@gmail.com>.
Hi,

The problem is performance. That is based on the concurrent writes we
are doing.

Looking at the design of Jackrabbit, as far as I can make out, the write
lock is acquired at the SharedItemStateManager level, which sits above
the PersistenceManager.

We tried changing to a database persistence manager, but that did not
scale either; we continued to see almost the same results as with the
file persistence manager.

We see write times of about 3.5 to 7 seconds per workflow creation
through the Day workflow REST APIs. This is when we run 100 concurrent
users.

I think it should be a few milliseconds per creation.
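
[Editorial note: a rough back-of-the-envelope check, assuming writes fully
serialize behind the single lock: 100 concurrent requests each finishing in
about 3.5 to 7 seconds corresponds to roughly 3.5 s / 100 = 35 ms up to
7 s / 100 = 70 ms of lock-held time per save. The individual writes are
therefore reasonably fast; the seconds of latency come from queueing behind
the workspace write lock.]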

Thanks

Shashank

On Wed, Jul 28, 2010 at 8:11 PM, Thomas Müller <th...@day.com> wrote:
> Hi,
>
> Are you sure the problem is "concurrency" and not "performance"? Are
> you sure that the persistence manager you use supports a higher write
> throughput? What persistence manager do you use, what write throughput
> do you see, and what do you need?
>
> Regards,
> Thomas
>

Re: Concurrent Write issues with Jackrabbit

Posted by Thomas Müller <th...@day.com>.
Hi,

Are you sure the problem is "concurrency" and not "performance"? Are
you sure that the persistence manager you use supports a higher write
throughput? What persistence manager do you use, what write throughput
do you see, and what do you need?

Regards,
Thomas

Re: Concurrent Write issues with Jackrabbit

Posted by smjain <ja...@gmail.com>.
Thanks Marcel,

We cannot batch the writes in a single session, as that would lead to
thread-safety issues.

Each request is handled in a different session; sharing a session across
requests might corrupt the repository.
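
[Editorial note: one way to reconcile Marcel's batching suggestion with
per-request sessions, sketched here purely as an illustration (the class and
its names are hypothetical, not something Jackrabbit provides), is to funnel
all writes through a single dedicated thread that is the sole owner of the
write session; request threads only enqueue work and never touch the session
themselves:]

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import javax.jcr.Session;

class SingleWriter implements Runnable {

    interface WriteTask {
        void apply(Session session) throws Exception;
    }

    private final Session session; // used by the writer thread only
    private final BlockingQueue<WriteTask> queue =
            new LinkedBlockingQueue<WriteTask>();

    SingleWriter(Session session) {
        this.session = session;
    }

    // called from request threads; they never see the session itself
    void submit(WriteTask task) {
        queue.add(task);
    }

    public void run() {
        try {
            while (!Thread.currentThread().isInterrupted()) {
                // block until at least one task arrives ...
                queue.take().apply(session);
                // ... then drain whatever queued up in the meantime
                WriteTask next;
                while ((next = queue.poll()) != null) {
                    next.apply(session);
                }
                // one save() per batch instead of one per request
                session.save();
            }
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}

[Whether the extra queueing latency is acceptable depends on the workload;
the point is only that batching does not require sharing a session between
request threads.]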

Thanks
Shashank

Re: Concurrent Write issues with Jackrabbit

Posted by Marcel Reutegger <ma...@day.com>.
hi,

Jackrabbit currently does indeed serialize writes at the persistence
level. As a workaround you could batch multiple writes into a single
session save, though I'm not sure if that's an option in your case.
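
[Editorial note: for illustration, a minimal sketch of that workaround
(node names are hypothetical; repository stands for an already obtained
javax.jcr.Repository). All the changes are built up as transient state in
one session, so the write lock is taken once per batch instead of once per
item:]

Session session = repository.login(
        new SimpleCredentials("admin", "admin".toCharArray()));
try {
    Node root = session.getRootNode();
    for (int i = 0; i < 100; i++) {
        // accumulate transient changes without saving each one
        Node item = root.addNode("item-" + i);
        item.setProperty("payload", "value-" + i);
    }
    // a single save() means a single write-lock acquisition
    session.save();
} finally {
    session.logout();
}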

regards
 marcel

On Wed, Jul 28, 2010 at 11:25, smjain <ja...@gmail.com> wrote:
>
> Hi All,
> We see a serious write lock contention issue when we are trying to write
> to Jackrabbit through different JCR sessions.
>
> We see a huge number of threads waiting on a write lock when a
> session.save() is done.
>
> All threads are writing to different parts of the repository, but as per
> the SharedItemStateManager design there is just one write lock for the
> entire workspace.
>
> We tried fine-grained locks, but that doesn't help either, as it only
> optimizes reads and we still have a single write lock.
>
> Thoughts/Suggestions?
>
> Regards
> Shashank