Posted to derby-dev@db.apache.org by "Mike Matrigali (JIRA)" <ji...@apache.org> on 2014/05/01 00:09:19 UTC

[jira] [Commented] (DERBY-6554) Too much contention followed by assert failure when accessing sequence in transaction that created it

    [ https://issues.apache.org/jira/browse/DERBY-6554?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13986153#comment-13986153 ] 

Mike Matrigali commented on DERBY-6554:
---------------------------------------

Rick, I have copied below from DERBY-5443; I think we are back to discussing the same problem that was marked fixed. Could you update this description if necessary to reflect what sequencers do in trunk now? I have lost track.
I like option 1 the best, but I'm not sure anyone with skills in the lock manager would be interested. I don't like option 2 anymore and think option 3 may be the easiest solution, but we need to understand what the current implementation of sequencers does now.

Description of problem from around 03/Oct/11 14:47 in DERBY-5443:

Currently the sequence updater tries to do the system catalog update as part of the user thread, but in a nested user transaction. When this works,
all is well: the nested user transaction is immediately committed, and the throughput of all threads depending on allocating sequences is
optimized.

In order to be able to commit the nested writable transaction independently, the lock manager must treat the parent and nested transactions as two
independent transactions, so locks held by the parent will block the child. In effect, any child lock request blocked by the parent is a deadlock,
but the lock manager does not understand this relationship and thus will only time out rather than recognize the implicit deadlock.
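
This self-deadlock can be modeled outside of Derby. The sketch below is illustrative only (not Derby code): the parent and nested transactions are represented as two separate threads, since the lock manager treats them as independent owners. With no wait-for edge from parent back to child, no deadlock detector could fire; the child's only way out is a timeout.

```java
import java.util.concurrent.TimeUnit;
import java.util.concurrent.atomic.AtomicBoolean;
import java.util.concurrent.locks.ReentrantLock;

public class SelfDeadlockModel {
    // Returns whether the "nested transaction" got the catalog row lock
    // before its timeout expired.
    public static boolean nestedTxnGetsLock(long timeoutMs) throws InterruptedException {
        ReentrantLock catalogRowLock = new ReentrantLock();
        catalogRowLock.lock(); // the parent transaction holds the SYSSEQUENCES row lock

        AtomicBoolean acquired = new AtomicBoolean(false);
        // The nested transaction is an independent lock owner (modeled here as
        // a separate thread), so the parent's lock blocks it. From the lock
        // manager's point of view there is no wait-for cycle, so deadlock
        // detection never fires; the request can only time out.
        Thread nested = new Thread(() -> {
            try {
                if (catalogRowLock.tryLock(timeoutMs, TimeUnit.MILLISECONDS)) {
                    acquired.set(true);
                    catalogRowLock.unlock();
                }
            } catch (InterruptedException ignored) {
            }
        });
        nested.start();
        nested.join();
        catalogRowLock.unlock(); // the parent commits only after the child gave up
        return acquired.get();
    }
}
```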

Only 2 cases come to mind of the parent blocking the child in this manner for sequences:
1) ddl like create done in a transaction, followed by inserts into the table requiring a sequence update.
2) users doing jdbc data dictionary lookups in a multistatement transaction, resulting in holding locks on the system catalog rows, and subsequently
doing inserts into the table requiring sequence updates.

The sequence updater currently never waits for a lock in the nested transaction and assumes any blocked lock is this parent deadlock case. It
then falls back to doing the update in the user transaction, and the system catalog lock then remains until the user transaction commits, which can
hold hostage all other inserts into the table. This is ok in the above 2 cases, as there is no other choice: the user transaction is already
holding the system hostage.

The problem is the case where it was not a deadlock, but just another thread trying to do the sequence update. In that case the thread should
not be falling back to taking the locks in the user transaction.
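
The current behavior boils down to a no-wait acquire with a pessimistic fallback. A minimal sketch of that decision (again illustrative, not Derby source; the lock table is modeled as a plain set of held row keys):

```java
import java.util.HashSet;
import java.util.Set;

public class SequenceUpdaterSketch {
    // Rows currently locked by some other owner (e.g. the parent transaction).
    static final Set<String> heldLocks = new HashSet<>();

    // Models the parent transaction taking a catalog row lock.
    static void parentLocks(String rowKey) {
        heldLocks.add(rowKey);
    }

    // Returns "nested" if the catalog update could run in the nested
    // transaction, "user" if it fell back to the user transaction.
    static String updateCatalogRow(String rowKey) {
        if (heldLocks.add(rowKey)) {   // no-wait acquire: the row was free
            // ... update SYSSEQUENCES in the nested transaction ...
            heldLocks.remove(rowKey);  // the immediate nested commit releases the lock
            return "nested";
        }
        // Blocked: the updater assumes the parent self-deadlock case and falls
        // back to the user transaction, which then holds the catalog lock
        // until the user commits.
        return "user";
    }
}
```

The bug discussed here is that the `"user"` branch is also taken when the blocker was merely another thread updating the same sequence, not the parent.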

I am not sure of the best way to address this project, but here are some ideas:
1) enhance the lock manager to recognize the deadlock, and then change the code to do an immediate deadlock check for internal
nested transactions, no matter what the system default is. Then the code could go ahead and use the system wait timeout on this lock
and only fall over to using the user transaction on deadlock (or maybe even throw a new "self deadlock" error that would only be possible for
internal transactions).
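
The key idea behind option 1 can be sketched as a wait-for graph check. If the lock manager knew about the implicit edge "the parent cannot commit until its nested child finishes", an immediate deadlock check would find a cycle of length two. All names below are illustrative, not Derby's actual lock manager API:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

public class WaitForGraph {
    // waiter -> set of owners it is blocked behind
    private final Map<String, Set<String>> waitsFor = new HashMap<>();

    public void addEdge(String waiter, String holder) {
        waitsFor.computeIfAbsent(waiter, k -> new HashSet<>()).add(holder);
    }

    // Immediate deadlock check run when a transaction blocks: DFS from the
    // blocked transaction, looking for a path back to itself.
    public boolean isDeadlocked(String txn) {
        return dfs(txn, txn, new HashSet<>());
    }

    private boolean dfs(String start, String current, Set<String> seen) {
        for (String next : waitsFor.getOrDefault(current, Set.of())) {
            if (next.equals(start)) return true;
            if (seen.add(next) && dfs(start, next, seen)) return true;
        }
        return false;
    }
}
```

With only the ordinary lock edge (nested waits on parent) there is no cycle; adding the parent-depends-on-child edge is what makes the self-deadlock detectable.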

2) somehow execute the internal system catalog update as part of a whole different transaction in the system. This would need a separate context,
sort of like the background daemon threads. Then no self deadlock is possible, and it could just go ahead and wait. The downside is that the
code to "wait" for a new sequence becomes more complicated, as it has to wait for an event from another thread. But it seems like it could be
designed with locks/synchronization blocks somehow.
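
Option 2's wait-for-another-thread shape is what `java.util.concurrent` futures provide directly. A hedged sketch, with the catalog work reduced to a counter bump (the real work would be the SYSSEQUENCES update and commit in the dedicated context):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.atomic.AtomicLong;

public class SequenceUpdaterService {
    // Dedicated updater thread, standing in for the separate transaction
    // context: it owns its own locks, so no parent/child self-deadlock exists
    // and it can safely just wait on the catalog row lock.
    private final ExecutorService updater = Executors.newSingleThreadExecutor();
    private final AtomicLong currentValue = new AtomicLong();

    // The caller (the user transaction) submits the request and blocks until
    // the updater thread is done -- the extra waiting complexity the comment
    // above mentions.
    public long allocateChunk(long chunkSize) throws Exception {
        Future<Long> done = updater.submit(() -> {
            // ... acquire catalog row lock, update SYSSEQUENCES, commit ...
            return currentValue.addAndGet(chunkSize);
        });
        return done.get();
    }

    public void shutdown() {
        updater.shutdown();
    }
}
```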

3) maybe add another lock synchronization that would only involve threads updating the sequences. First an updater would request the
sequence updater lock (with a key specific to the table and a new lock type), and it could just wait on it; it should never be held by the parent
transaction. Then it would still need the catalog row lock to do the update. I think with proper ordering this would ensure that blocking on
the catalog row lock could only happen in the self deadlock case.
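
Option 3's lock ordering can be sketched as follows, under the stated assumption that the parent transaction can hold the catalog row lock but never the new updater lock. Concurrent updaters queue safely on the updater lock, so a subsequent no-wait failure on the row lock can only mean the parent self-deadlock. This is an illustrative model, not Derby code:

```java
import java.util.concurrent.locks.ReentrantLock;

public class OrderedSequenceLocks {
    final ReentrantLock updaterLock = new ReentrantLock();    // new per-table lock type
    final ReentrantLock catalogRowLock = new ReentrantLock(); // existing catalog row lock

    // Returns true if the catalog row could be updated in the nested
    // transaction; false means fall back to the user transaction.
    public boolean tryUpdate() throws InterruptedException {
        // Safe to wait here: only sequence updaters take this lock,
        // never a parent transaction.
        updaterLock.lockInterruptibly();
        try {
            // Still no-wait here: any blocker must be our own parent.
            if (!catalogRowLock.tryLock()) {
                return false; // self-deadlock case
            }
            try {
                // ... update SYSSEQUENCES, commit the nested transaction ...
                return true;
            } finally {
                catalogRowLock.unlock();
            }
        } finally {
            updaterLock.unlock();
        }
    }
}
```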

Overall, this problem becomes less important as the size of the sequence chunk is tuned properly for the application, and ultimately it would be best if derby
autotuned the chunk size.

> Too much contention followed by assert failure when accessing sequence in transaction that created it
> -----------------------------------------------------------------------------------------------------
>
>                 Key: DERBY-6554
>                 URL: https://issues.apache.org/jira/browse/DERBY-6554
>             Project: Derby
>          Issue Type: Bug
>          Components: SQL
>    Affects Versions: 10.9.1.0, 10.11.0.0, 10.10.2.0
>            Reporter: Knut Anders Hatlen
>         Attachments: D6554.java, derby-6554-01-aa-useCreationTransaction.diff, derby-6554-01-ab-useCreationTransaction.diff, derby-6554-01-ac-useCreationTransaction.diff
>
>
> {noformat}
> ij version 10.11
> ij> connect 'jdbc:derby:memory:db;create=true' as c1;
> ij> autocommit off;
> ij> create sequence seq;
> 0 rows inserted/updated/deleted
> ij> values next value for seq;
> 1          
> -----------
> ERROR X0Y84: Too much contention on sequence SEQ. This is probably caused by an uncommitted scan of the SYS.SYSSEQUENCES catalog. Do not query this catalog directly. Instead, use the SYSCS_UTIL.SYSCS_PEEK_AT_SEQUENCE function to view the current value of a sequence generator.
> ij> rollback;
> ERROR 08003: No current connection.
> ij> connect 'jdbc:derby:memory:db' as c2;
> ij(C2)> autocommit off;
> ij(C2)> create sequence seq;
> 0 rows inserted/updated/deleted
> ij(C2)> values next value for seq;
> 1          
> -----------
> ERROR 38000: The exception 'org.apache.derby.shared.common.sanity.AssertFailure: ASSERT FAILED Identity being changed on a live cacheable. Old uuidString = 0ddd00a9-0145-98ba-79df-000007d88b08' was thrown while evaluating an expression.
> ERROR XJ001: Java exception: 'ASSERT FAILED Identity being changed on a live cacheable. Old uuidString = 0ddd00a9-0145-98ba-79df-000007d88b08: org.apache.derby.shared.common.sanity.AssertFailure'.
> {noformat}



--
This message was sent by Atlassian JIRA
(v6.2#6252)