Posted to derby-dev@db.apache.org by "Mike Matrigali (JIRA)" <de...@db.apache.org> on 2004/12/10 00:05:11 UTC

[jira] Commented: (DERBY-94) Lock not being released properly, possibly related to occurrence of lock escalation

     [ http://nagoya.apache.org/jira/browse/DERBY-94?page=comments#action_56477 ]
     
Mike Matrigali commented on DERBY-94:
-------------------------------------

Here is how I have debugged these kinds of issues in the past:

1) get a lot of disk space
2) run the repro in a SANE build
3) enable the debug option which prints a stack trace with each lock request to derby.log (see code in opensource/java/engine/org/apache/derby/impl/services/locks/SinglePool.java and Constants.java).  This will produce a very large derby.log file, depending
on how many locks are requested in the test case.
4) Find the lock that is left over after the transaction commits, then search backward
in the log to find the code which requested the lock.
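Step 4 can be automated with a small backward scan of the captured log. This is a sketch only: the trace line format used here ("Lock request: <lockname>" followed by stack frames) is an assumption for illustration, not Derby's actual trace output.

```java
import java.util.Arrays;
import java.util.List;

// Sketch: find the most recent request for a given lock in a lock-trace log.
// The "Lock request: <lockname>" line format is assumed for illustration.
public class LockTraceScanner {

    /** Returns the index of the last line requesting lockName, or -1 if absent. */
    public static int lastRequestLine(List<String> logLines, String lockName) {
        // Walk backward so we find the most recent request first.
        for (int i = logLines.size() - 1; i >= 0; i--) {
            String line = logLines.get(i);
            if (line.startsWith("Lock request:") && line.contains(lockName)) {
                return i;
            }
        }
        return -1;
    }

    public static void main(String[] args) {
        List<String> log = Arrays.asList(
            "Lock request: Row(1,7)",
            "  at org.apache.derby.impl.store...",
            "Lock request: Table(T1)",
            "  at org.apache.derby.impl.store...");
        System.out.println(lastRequestLine(log, "Table(T1)")); // prints 2
    }
}
```

From the matched line, the stack-trace lines that follow it identify the code path that requested the leftover lock.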

Past bugs like this have happened when a lock was not put on the transaction list and then,
for some reason, was never explicitly removed.  In this case the escalation is probably the
cause.

In the normal case, locks are put on the transaction list, and all locks on the transaction list are removed at end of transaction.  I can't remember the
last time I saw problems in this area.

Locks which need to be released before end of transaction (i.e. under read committed)
are put on temporary lists which are keyed by the open conglomerate,
and then we either unlock them explicitly or we unlock all the ones on the list when the conglomerate
is closed at the end.  Probably something is going wrong in this area.

/mikem

> Lock not being released properly, possibly related to occurrence of lock escalation
> ----------------------------------------------------------------------------------
>
>          Key: DERBY-94
>          URL: http://nagoya.apache.org/jira/browse/DERBY-94
>      Project: Derby
>         Type: Bug
>   Components: Store
>     Versions: 10.0.2.1
>  Environment: all
>     Reporter: Sunitha Kambhampati
>     Assignee: Suresh Thalamati
>  Attachments: Derby94Test.java, Derby94Test_Output, derby.log
>
> In the following scenario: 
> <code snippet>
>     String sel = "select * from t1 FOR UPDATE of i2";
>     PreparedStatement ps1 = conn.prepareStatement(sel);
>     int val = 300;
>     ps1.setMaxRows(val);
>     ResultSet rs = ps1.executeQuery();
>     String ins = "Update t1 set i2=? WHERE CURRENT OF " + rs.getCursorName();
>     PreparedStatement ps2 = conn.prepareStatement(ins);
>     ps2.setInt(1, iteration);
>     while (rs.next()) {
>         ps2.executeUpdate();
>     }
>     // print lock table information
>     System.out.println("Lock Table before commit transaction");
>     printLockTable(conn);
>     conn.commit();
> <end code snippet>
> Running the above transaction twice causes a lock timeout the second time.
> It seems that locks are not being released properly on the table even after the transaction commits and the connection is closed. This condition seems to happen only when lock escalation to a table lock occurs: by increasing the lock escalation threshold to prevent escalation, so that only row-level locking is used, the locks are released properly.
> I printed out the lock information and see a U row-level lock on the table, and also a table-level lock as a result of lock escalation. After commit, with the result set closed, the U row-level lock is not released. Thus, in the second iteration of the test, the unreleased U row lock causes a lock timeout. In the second iteration, the lock table shows the previous U row lock with a null transaction id, which is not right.
> The transactions are running at the default isolation level ( read committed).
> By default, the lock escalation threshold is set to 5000.
> http://incubator.apache.org/derby/manuals/tuning/perf80.html#IDX547
> I will be attaching the program for reproduction.  To reproduce the problem with fewer rows in the table, please run the program with the following derby properties set:
> derby.locks.deadlockTrace=true
> derby.locks.escalationThreshold=110
>  
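
The two reproduction properties above can also be supplied as JVM system properties before the Derby engine boots (equivalently, they can go in a derby.properties file in the system directory). A minimal sketch:

```java
// Sketch: setting the reproduction properties programmatically before
// the Derby engine boots. Equivalent to placing the same keys in
// derby.properties in the Derby system directory.
public class ReproProps {
    public static void main(String[] args) {
        System.setProperty("derby.locks.deadlockTrace", "true");
        System.setProperty("derby.locks.escalationThreshold", "110");
        // The engine must read these at boot, so set them before
        // loading the embedded driver.
        System.out.println(System.getProperty("derby.locks.escalationThreshold"));
    }
}
```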

-- 
This message is automatically generated by JIRA.
-
If you think it was sent incorrectly contact one of the administrators:
   http://nagoya.apache.org/jira/secure/Administrators.jspa
-
If you want more information on JIRA, or have a bug to report see:
   http://www.atlassian.com/software/jira