Posted to derby-user@db.apache.org by Tosurinchi <to...@yahoo.com> on 2007/04/20 16:26:09 UTC

Concurrency and locks in multi-user application

Hi guys, 
I have a general question after getting a little lost in the documentation.
Please bear with me, since I'm not a database guru :) 
I embed derby-10.2.2.0 in my application, which basically works like this:
there is a single thread that inserts/deletes data in a single table, and
there are many other threads (user sessions) that read from the same table.
The table can get quite large, and users read the data through scrollable
result sets that I keep open per user session, so it's a fairly long
conversation, roughly like the sketch below.
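(The table name EVENTS, the database URL, and the particular scroll calls
below are just stand-ins to make the pattern concrete.)

    // Rough sketch of one reader session; the single writer thread meanwhile
    // keeps inserting into / deleting from the same EVENTS table.
    java.sql.Connection conn =
        java.sql.DriverManager.getConnection("jdbc:derby:myDB");
    java.sql.Statement stmt = conn.createStatement(
        java.sql.ResultSet.TYPE_SCROLL_INSENSITIVE,
        java.sql.ResultSet.CONCUR_READ_ONLY);
    java.sql.ResultSet rs = stmt.executeQuery("SELECT * FROM EVENTS");

    // The session scrolls back and forth as the user pages through the data,
    // and the result set stays open between requests for the whole session.
    rs.absolute(100);
    rs.relative(-10);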
I think this is a pretty typical application, and it works fine with the other
databases I've tried so far: PostgreSQL, HSQLDB, Oracle, MySQL. Lately I came
across Derby, and I think it's so cool and easily embeddable into my system,
but I am getting a little lost, not being a super expert in database
internals. What happens is that reads and writes work perfectly as long as
they run separately. As soon as there is at least one user session trying to
scroll through the data, the inserts stop working: I get lock timeouts and
other scary things... My question is: why does this same scenario work out of
the box with the other databases, and how do I make Derby work as well? I
really want to support it in my application. Any help or hint is greatly
appreciated.
Thanks in advance.


Re: Concurrency and locks in multi-user application

Posted by Stanley Bradbury <St...@gmail.com>.
To narrow things down, turn on lock tracing (deadlockTrace) and lock 
monitoring (monitor) and examine the additional information written to 
the derby.log file.  If the blocking is happening at the row level I 
would not expect the behavior you describe.  If, for some reason, a 
table lock is being held, you need to determine why.  The trace printed 
to the Derby log file will show the lock types.

(monitor) 
http://db.apache.org/derby/docs/10.2/tuning/rtunproper98166.html  and
(deadlockTrace)  
http://db.apache.org/derby/docs/10.2/tuning/rtunproper23835.html
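
The full property names are derby.locks.monitor and derby.locks.deadlockTrace.
The simplest way I know of to turn them on in an embedded setup is a 
derby.properties file in the derby.system.home directory, or JVM system 
properties set before the engine boots.  A minimal sketch, assuming an 
embedded database named "myDB" (the name and URL are mine):

    // Enable lock monitoring and deadlock tracing before the engine boots.
    // The same two properties can equally go into derby.properties.
    public class LockTraceBoot {
        public static void main(String[] args) throws Exception {
            // Log lock waits that time out to derby.log
            System.setProperty("derby.locks.monitor", "true");
            // Also write traces of the threads involved in the contention
            System.setProperty("derby.locks.deadlockTrace", "true");

            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            java.sql.Connection conn = java.sql.DriverManager
                    .getConnection("jdbc:derby:myDB;create=true");
            // ... run the normal read/write workload here ...
            conn.close();
            // then examine derby.log for the lock trace output
        }
    }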

If the same SQL statements are involved in most of the blocking, analyze 
those statements to be sure indexes are being used:
http://db.apache.org/derby/docs/10.2/tuning/ttundepth33391.html
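
One way to do that is with the runtime-statistics procedures.  A rough 
sketch ("myDB", MYTABLE and the WHERE clause are placeholders for the real 
database and the statement that shows up in the blocking):

    // Sketch: dump the query plan of a suspect statement.
    public class ShowPlan {
        public static void main(String[] args) throws Exception {
            Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
            java.sql.Connection conn =
                java.sql.DriverManager.getConnection("jdbc:derby:myDB");
            java.sql.Statement s = conn.createStatement();
            s.execute("CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)");

            java.sql.ResultSet rs =
                s.executeQuery("SELECT * FROM MYTABLE WHERE ID = 42");
            while (rs.next()) { /* drain the rows */ }
            rs.close();  // the plan is collected when the statement completes

            rs = s.executeQuery("VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS()");
            if (rs.next()) {
                // look for a table scan vs. an index scan in the plan text
                System.out.println(rs.getString(1));
            }
            rs.close();
            s.execute("CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(0)");
            s.close();
            conn.close();
        }
    }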

Also look at the 'Locking and performance' section of the tuning guide, 
particularly the section on lock escalation if table-level locks are 
being held:
http://db.apache.org/derby/docs/10.2/tuning/ctunoptimz42065.html
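
If the traces do show escalation to a table lock, the threshold can be 
raised via the derby.locks.escalationThreshold property (default 5000).  A 
sketch, setting it as a database-wide property (20000 is only an example 
value, and conn is an open connection to the database):

    // Raise the threshold at which Derby tries to escalate from
    // row locks to a table lock (the default is 5000).
    java.sql.CallableStatement cs = conn.prepareCall(
        "CALL SYSCS_UTIL.SYSCS_SET_DATABASE_PROPERTY(?, ?)");
    cs.setString(1, "derby.locks.escalationThreshold");
    cs.setString(2, "20000");  // example value only
    cs.execute();
    cs.close();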

HTH
