Posted to derby-dev@db.apache.org by "Mike Matrigali (JIRA)" <ji...@apache.org> on 2008/12/03 22:49:45 UTC

[jira] Updated: (DERBY-2991) Index split deadlock

     [ https://issues.apache.org/jira/browse/DERBY-2991?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Mike Matrigali updated DERBY-2991:
----------------------------------


I am concerned about the performance of always saving the full key.  This is going to cause both cpu and memory problems (garbage collection).  As originally designed, it was expected that this path would be rare, so no optimization was ever done; the save-by-id path was the one that was optimized.
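
To make the cost concrete, here is a rough sketch (hypothetical classes, not Derby's actual code) of the difference between saving a position by id and saving it by full key:

    // Hypothetical sketch only; the names and fields are made up to show
    // the allocation difference, they are not Derby's real classes.

    // Saving by record id: one small, fixed-size object per saved position.
    final class SavedPositionById {
        final long pageNumber;   // page the index row lives on
        final int recordId;      // record id within that page
        SavedPositionById(long pageNumber, int recordId) {
            this.pageNumber = pageNumber;
            this.recordId = recordId;
        }
    }

    // Saving by full key: every column of the index row must be copied, so
    // each saved position allocates an array plus one object per column.
    // That per-position copying is the cpu/garbage-collection cost I worry about.
    final class SavedPositionByKey {
        final Object[] keyColumns;
        SavedPositionByKey(Object[] currentIndexRow) {
            keyColumns = new Object[currentIndexRow.length];
            for (int i = 0; i < currentIndexRow.length; i++) {
                // placeholder for cloning the column value; a real copy would
                // duplicate the underlying data so it survives page changes
                keyColumns[i] = currentIndexRow[i];
            }
        }
    }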

Did you ever consider fixing this by only changing the deadlock case, i.e. always giving up the scan position lock when one has to wait on a lock?  That case should be rare compared with the very common case of a group scan, which happens by default for any btree scan returning more than 16 rows for one query.
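
Roughly what I have in mind (a sketch only; none of these types or methods are existing Derby APIs, they just illustrate the control flow):

    // Hypothetical sketch: only pay for saving/repositioning by key when the
    // scan actually has to block on a row lock; the common no-wait path keeps
    // the cheap save-by-id behaviour.
    interface RowLock {
        boolean tryLock();                  // grant without waiting, or return false
        void lockAndWait() throws InterruptedException;
    }

    interface BTreeScanPosition {
        void saveByKey();                   // the expensive full-key copy
        void releaseLatch();                // give up the saved-position lock/latch
        boolean repositionFromSavedKey();   // re-find the row after the wait
    }

    final class WaitOnlyRepositioning {
        static boolean lockRow(RowLock rowLock, BTreeScanPosition scan)
                throws InterruptedException {
            if (rowLock.tryLock()) {
                return true;                // common case: lock granted, keep position as-is
            }
            scan.saveByKey();               // rare case: remember where we were...
            scan.releaseLatch();            // ...and release before blocking
            rowLock.lockAndWait();          // safe to wait now; the splitter can proceed
            return scan.repositionFromSavedKey();
        }
    }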

Sorry for the late comment; I was out on vacation when you started on this.

> Index split deadlock
> --------------------
>
>                 Key: DERBY-2991
>                 URL: https://issues.apache.org/jira/browse/DERBY-2991
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.2.2.0, 10.3.1.4
>         Environment: Windows XP, Java 6
>            Reporter: Bogdan Calmac
>            Assignee: Knut Anders Hatlen
>         Attachments: d2991-preview-1a.diff, d2991-preview-1a.stat, d2991-preview-1b.diff, d2991-preview-1b.stat, d2991-preview-1c.diff, d2991-preview-1c.stat, derby.log, InsertSelectDeadlock.java, Repro2991.java, stacktraces_during_deadlock.txt
>
>
> After doing some research on the mailing list, it appears that the index split deadlock is a known behaviour, so I will describe the theoretical problem first and then follow with the details of my test case.
> If you have concurrent select and insert transactions on the same table, the observed locking behaviour is as follows:
>  - the select transaction acquires an S lock on the root block of the index and then waits for an S lock on some uncommitted row of the insert transaction
>  - the insert transaction acquires X locks on the inserted records and if it needs to do an index split creates a sub-transaction that tries to acquire an X lock on the root block of the index
> In summary: INDEX LOCK followed by ROW LOCK + ROW LOCK followed by INDEX LOCK = deadlock
> For my project this is an important issue (lack of concurrency after being forced to use table-level locking), and I would like to contribute to the project and fix it (if possible). I was wondering if someone who knows the code could give me a few pointers on the implications of this issue:
>  - Is this a limitation of the top-down algorithm used?
>  - Would fixing it require using a bottom-up algorithm for better concurrency (which is certainly non-trivial)?
>  - Trying to break the circular locking above, I would first question why the select transaction needs to acquire (and hold) a lock on the root block of the index. Would it be possible to ensure the consistency of the select without locking the index?
> -----
> The attached test (InsertSelectDeadlock.java) tries to simulate a typical data collection application; it consists of:
>  - an insert thread that inserts records in batches
>  - a select thread that 'processes' the records inserted by the other thread: 'select * from table where id > ?'
> The derby log provides details about the deadlock trace, and stacktraces_during_deadlock.txt shows that the insert thread is doing an index split.
> The test was run on 10.2.2.0 and 10.3.1.4 with identical behaviour.
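> (Illustrative only: a minimal JDBC sketch of the two threads, assuming an existing database "testdb" and a table like T(ID INT PRIMARY KEY, DATA VARCHAR(100)); the real repro is the attached InsertSelectDeadlock.java. The comments mark where the locks from the summary above are taken.)
>
>     import java.sql.Connection;
>     import java.sql.DriverManager;
>     import java.sql.PreparedStatement;
>     import java.sql.ResultSet;
>
>     public class IndexSplitDeadlockSketch {
>         public static void main(String[] args) throws Exception {
>             // Load the embedded driver (not needed with JDBC 4 auto-loading).
>             Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
>             Thread inserter = new Thread(new Runnable() {
>                 public void run() {
>                     try {
>                         Connection c = DriverManager.getConnection("jdbc:derby:testdb");
>                         c.setAutoCommit(false);
>                         PreparedStatement ins = c.prepareStatement("INSERT INTO T VALUES (?, ?)");
>                         for (int id = 0; ; id++) {
>                             ins.setInt(1, id);
>                             ins.setString(2, "row " + id);
>                             // X locks on the inserted rows; an index split takes an
>                             // X lock on the root block in a sub-transaction
>                             ins.executeUpdate();
>                             if (id % 100 == 99) {
>                                 c.commit();
>                             }
>                         }
>                     } catch (Exception e) {
>                         e.printStackTrace();
>                     }
>                 }
>             });
>             Thread selector = new Thread(new Runnable() {
>                 public void run() {
>                     try {
>                         Connection c = DriverManager.getConnection("jdbc:derby:testdb");
>                         c.setAutoCommit(false);
>                         PreparedStatement sel = c.prepareStatement("SELECT * FROM T WHERE ID > ?");
>                         int lastSeen = -1;
>                         while (true) {
>                             sel.setInt(1, lastSeen);
>                             // S lock on the root block of the index for the scan,
>                             // then waits for S locks on the uncommitted rows
>                             ResultSet rs = sel.executeQuery();
>                             while (rs.next()) {
>                                 lastSeen = rs.getInt(1);
>                             }
>                             rs.close();
>                             c.commit();
>                         }
>                     } catch (Exception e) {
>                         e.printStackTrace();
>                     }
>                 }
>             });
>             inserter.start();
>             selector.start();
>             inserter.join();
>         }
>     }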
> Thanks,
> Bogdan Calmac.

-- 
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.