Posted to derby-user@db.apache.org by markvarvil <mv...@us.ibm.com> on 2012/10/05 00:29:40 UTC

LOCK_TABLE not cleared (Derby 10.5)

Using Derby 10.5, and running in network server mode, we have encountered an
issue where the lock_table contained locks, yet these locks are not being
deleted. Even after the system sat idle overnight, the lock entries
persisted in the lock_table the next morning and caused the system to
function improperly.

We are using the default values for lock and deadlock timeouts. Our
application is using JPA and Optimistic Locking.

What would cause the locks to NOT be cleared and/or released (even after a
lengthy timeout period)?

http://old.nabble.com/file/p34515902/derbylocks.png 




-- 
View this message in context: http://old.nabble.com/LOCK_TABLE-not-cleared-%28Derby-10.5%29-tp34515902p34515902.html
Sent from the Apache Derby Users mailing list archive at Nabble.com.

Re: LOCK_TABLE not cleared (Derby 10.5)

Posted by Katherine Marsden <km...@sbcglobal.net>.
On 10/8/2012 9:37 AM, Katherine Marsden wrote:
> On 10/4/2012 3:29 PM, markvarvil wrote:
>> Using Derby *10.5*, and running in *network server mode*, we have 
>> encountered an issue where the lock_table contained locks, yet these 
>> locks are not being deleted. Even after the system sat idle 
>> overnight, the lock entries persisted in the lock_table the next 
>> morning and caused the system to function improperly. We are using 
>> the default values for lock and deadlock timeouts. Our application is 
>> using JPA and Optimistic Locking. What would cause the locks to NOT 
>> be cleared and/or released (even after a lengthy timeout period)?
>
> Hi Mark,
>
> Likely there are transactions that have not been committed or rolled 
> back that are holding these locks.
> The lock timeout is a setting that determines how long a new 
> transaction will wait for existing locks to be released before the 
> new transaction itself times out. It does not affect how long the 
> original transaction will hold the locks, which is until the 
> transaction is committed or rolled back.
>
> One thing to do is to make sure that all transactions are committed 
> or rolled back, especially in exception circumstances. A frequent 
> case I have seen when transactions and connections are left open is 
> that transactions are not rolled back before attempting to close a 
> connection; then the exception thrown on close is caught and ignored. 
> A global approach to this particular issue might be to replace 
> Connection.close() calls with a method that always rolls back before 
> closing the connection.
>
>
I neglected to mention that a good tool to see the current session info 
is NetworkServerControl runtimeinfo. Also, if you set 
derby.stream.error.logSeverityLevel=0 and 
derby.language.logStatementText=true and run your application to the 
point of the orphaned transactions, you can get more context. Grep the 
log for "ERROR 25001". Specifically, this is:

ERROR 25001: errorCode 20000: Cannot close a connection while a 
transaction is still active.
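For reference, a minimal sketch of enabling that logging: the two properties above could go into a derby.properties file in the system directory of the network server (or be passed as -D system properties at startup; the file location is your choice), and runtimeinfo is run as "java org.apache.derby.drda.NetworkServerControl runtimeinfo" with derbynet.jar on the classpath.

```
# derby.properties -- verbose, for debugging only
# Log all message severities and the text of every statement:
derby.stream.error.logSeverityLevel=0
derby.language.logStatementText=true
```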



Re: LOCK_TABLE not cleared (Derby 10.5)

Posted by Katherine Marsden <km...@sbcglobal.net>.
On 10/4/2012 3:29 PM, markvarvil wrote:
> Using Derby *10.5*, and running in *network server mode*, we have 
> encountered an issue where the lock_table contained locks, yet these 
> locks are not being deleted. Even after the system sat idle overnight, 
> the lock entries persisted in the lock_table the next morning and 
> caused the system to function improperly. We are using the default 
> values for lock and deadlock timeouts. Our application is using JPA 
> and Optimistic Locking. What would cause the locks to NOT be cleared 
> and/or released (even after a lengthy timeout period)?

Hi Mark,

Likely there are transactions that have not been committed or rolled 
back that are holding these locks.
The lock timeout is a setting that determines how long a new 
transaction will wait for existing locks to be released before the new 
transaction itself times out. It does not affect how long the original 
transaction will hold the locks, which is until the transaction is 
committed or rolled back.
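(For context, the defaults in play here are the Derby properties below, both in seconds; raising them only changes how long a *waiting* transaction blocks before timing out, never how long the original transaction keeps its locks. A derby.properties sketch with the default values, shown for illustration:)

```
# derby.properties -- lock-related defaults (illustrative)
derby.locks.waitTimeout=60      # seconds a transaction waits for a lock
derby.locks.deadlockTimeout=20  # seconds waited before deadlock checking
```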

One thing to do is to make sure that all transactions are committed or 
rolled back, especially in exception circumstances. A frequent case I 
have seen when transactions and connections are left open is that 
transactions are not rolled back before attempting to close a 
connection; then the exception thrown on close is caught and ignored. 
A global approach to this particular issue might be to replace 
Connection.close() calls with a method that always rolls back before 
closing the connection.
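Such a roll-back-before-close wrapper could look like the minimal sketch below. The helper itself uses only standard java.sql calls; the dynamic-proxy "connection" in main() is purely an illustrative stand-in for a real Derby connection, used here to show the call order the helper produces.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.List;

public class SafeClose {

    // Roll back any in-flight transaction before closing, so the close
    // never fails with "Cannot close a connection while a transaction
    // is still active" (ERROR 25001) and locks are always released.
    static void rollbackAndClose(Connection conn) {
        if (conn == null) return;
        try {
            if (!conn.isClosed() && !conn.getAutoCommit()) {
                conn.rollback();  // releases locks held by the open transaction
            }
        } catch (SQLException e) {
            System.err.println("rollback failed: " + e.getMessage());
        }
        try {
            conn.close();
        } catch (SQLException e) {
            System.err.println("close failed: " + e.getMessage());
        }
    }

    // Demonstration with a proxy stand-in that records which Connection
    // methods get called, in order.
    public static void main(String[] argv) {
        List<String> calls = new ArrayList<>();
        InvocationHandler h = (proxy, method, a) -> {
            calls.add(method.getName());
            switch (method.getName()) {
                case "isClosed":      return false;  // pretend still open
                case "getAutoCommit": return false;  // pretend in a transaction
                default:              return null;   // rollback/close are void
            }
        };
        Connection fake = (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] { Connection.class }, h);
        rollbackAndClose(fake);
        System.out.println(calls);  // [isClosed, getAutoCommit, rollback, close]
    }
}
```

Replacing direct Connection.close() calls with this helper makes the rollback unconditional, so an exception path that forgot to roll back can no longer leave an orphaned transaction holding locks.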


Best

Kathey


