Posted to derby-dev@db.apache.org by "Dag H. Wanvik (JIRA)" <ji...@apache.org> on 2013/04/02 06:19:16 UTC

[jira] [Commented] (DERBY-6137) update/delete statement on table with trigger fails randomly with ERROR XSTA2

    [ https://issues.apache.org/jira/browse/DERBY-6137?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13619501#comment-13619501 ] 

Dag H. Wanvik commented on DERBY-6137:
--------------------------------------

From the stack trace it looks like Derby is trying to create a nested updatable transaction *twice*: first for compiling the trigger's stored prepared statement, then, as a part of that effort, when the data dictionary is clearing out sequence caches as it tries to save unused but allocated sequences.

----------------------------------------------------------------
Booting Derby version The Apache Software Foundation - Apache Derby - 10.9.1.0 - (1344872): 
:

ERROR XSTA2: A transaction was already active, when attempt was made to make another transaction active.
	at org.apache.derby.iapi.error.StandardException.newException
	at org.apache.derby.impl.store.raw.xact.XactFactory.pushTransactionContext
	at org.apache.derby.impl.store.raw.xact.XactFactory.startCommonTransaction
	at org.apache.derby.impl.store.raw.xact.XactFactory.startNestedUpdateUserTransaction
	at org.apache.derby.impl.store.raw.RawStore.startNestedUpdateUserTransaction
	at org.apache.derby.impl.store.access.RAMTransaction.startNestedUserTransaction
**>	at org.apache.derby.impl.sql.catalog.SequenceUpdater.updateCurrentValueOnDisk
	at org.apache.derby.impl.sql.catalog.SequenceUpdater.clean
	at org.apache.derby.impl.sql.catalog.SequenceUpdater.clearIdentity
	at org.apache.derby.impl.services.cache.ConcurrentCache.removeEntry
	at org.apache.derby.impl.services.cache.ConcurrentCache.ageOut
	at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.clearSequenceCaches
	at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.clearCaches
	at org.apache.derby.impl.sql.catalog.DataDictionaryImpl.startWriting
	at org.apache.derby.iapi.sql.dictionary.SPSDescriptor.compileStatement
	at org.apache.derby.iapi.sql.dictionary.SPSDescriptor.prepareAndRelease
**>	at org.apache.derby.iapi.sql.dictionary.SPSDescriptor.getPreparedStatement   #line 733 (nested transaction started at #706)
	at org.apache.derby.iapi.sql.dictionary.SPSDescriptor.getPreparedStatement
	at org.apache.derby.impl.sql.execute.GenericTriggerExecutor.executeSPS
	at org.apache.derby.impl.sql.execute.RowTriggerExecutor.fireTrigger
	at org.apache.derby.impl.sql.execute.TriggerEventActivator.notifyEvent
	at org.apache.derby.impl.sql.execute.DeleteResultSet.fireAfterTriggers
	at org.apache.derby.impl.sql.execute.DeleteResultSet.open
	at org.apache.derby.impl.sql.GenericPreparedStatement.executeStmt
	at org.apache.derby.impl.sql.GenericPreparedStatement.execute
	at org.apache.derby.impl.jdbc.EmbedStatement.executeStatement
	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeStatement
	at org.apache.derby.impl.jdbc.EmbedPreparedStatement.executeUpdate
:

The transaction machinery doesn't allow two nested update transactions,
cf. the test in XactFactory.pushTransactionContext:

   :
   if (cm.getContext(contextName) != null) {
       throw StandardException.newException(...);
   }

contextName here is AccessFactoryGlobals.NESTED_UPDATE_USER_TRANS in both cases above.
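
To make the collision concrete, here is a minimal, self-contained sketch of that invariant and of the two call sites tripping over it. The class and method names below are simplified stand-ins, not Derby's real internals; only the "one nested update transaction per context name" rule is taken from the check quoted above.

    // Simplified sketch only: these names are stand-ins for Derby's
    // XactFactory/ContextManager machinery, not the real API.
    import java.util.HashMap;
    import java.util.Map;

    class ContextManager {
        private final Map<String, Object> contexts = new HashMap<String, Object>();

        Object getContext(String contextName) {
            return contexts.get(contextName);
        }

        // Mirrors the check quoted above: only one transaction context may
        // be registered under a given name at a time.
        void pushTransactionContext(String contextName, Object xact) {
            if (getContext(contextName) != null) {
                throw new IllegalStateException(
                    "XSTA2: a transaction was already active under " + contextName);
            }
            contexts.put(contextName, xact);
        }
    }

    public class NestedUpdateCollision {
        // Stand-in value for AccessFactoryGlobals.NESTED_UPDATE_USER_TRANS.
        static final String NESTED_UPDATE_USER_TRANS = "NestedUpdateUserTransaction";

        public static void main(String[] args) {
            ContextManager cm = new ContextManager();

            // 1) SPSDescriptor.getPreparedStatement starts a nested update
            //    transaction to recompile the trigger's stored statement.
            cm.pushTransactionContext(NESTED_UPDATE_USER_TRANS, new Object());

            // 2) While compiling, DataDictionaryImpl.startWriting clears the
            //    sequence caches, and SequenceUpdater.updateCurrentValueOnDisk
            //    asks for a second nested update transaction on the same
            //    context manager -> the check above throws XSTA2.
            cm.pushTransactionContext(NESTED_UPDATE_USER_TRANS, new Object());
        }
    }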

This looks like a clear bug to me. Thanks for finding this, Andrew!

It looks as if the code to clean out the cache values to disk was introduced as part of "DERBY-5398: Use a transient transaction to flush unused sequence values back to disk during orderly engine shutdown."

In this case it looks like the cache is cleaned not only at shutdown but also when DataDictionaryImpl#startWriting happens (line 1314).

Note: the call to get a nested updatable user transaction in SPSDescriptor.getPreparedStatement was modified as part of a fix to

   DERBY-5494 Same value returned by successive calls to a sequence generator flanking an unorderly shutdown.
   DERBY-5780 identity column performance has degredated 

but I don't think that's relevant, since it only added explicit synching to disk.
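
For reference, here is a minimal embedded-JDBC sketch of the scenario described in the report below. The table/sequence/trigger DDL is the reporter's (column types are my guesses); the in-memory connection URL, seed data and insert/delete loop are only illustrative, and since the failure is reported as random, the loop is not a guaranteed repro.

    // Sketch of the reported scenario, not a guaranteed repro. DDL is taken
    // from the description below; column types, URL, data and loop are guesses.
    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.PreparedStatement;
    import java.sql.Statement;

    public class Derby6137Scenario {
        public static void main(String[] args) throws Exception {
            Connection c = DriverManager.getConnection(
                    "jdbc:derby:memory:d6137;create=true");
            c.setAutoCommit(false);

            Statement s = c.createStatement();
            s.execute("CREATE TABLE EXCHANGE_RATES (CUR_FROM CHAR(3), "
                    + "AMNT_FROM DECIMAL(15,2), CUR_TO CHAR(3), "
                    + "AMNT_TO DECIMAL(15,2), AMNT_RATE DECIMAL(15,6))");
            s.execute("CREATE TABLE EXCHANGE_RATE_HISTORY (ID_HISTORY BIGINT, "
                    + "CUR_FROM CHAR(3), AMNT_FROM DECIMAL(15,2), CUR_TO CHAR(3), "
                    + "AMNT_TO DECIMAL(15,2), AMNT_RATE DECIMAL(15,6), "
                    + "DAT_CREATION TIMESTAMP)");
            s.execute("CREATE SEQUENCE HIST_SEQ START WITH 10000001 INCREMENT BY 1 "
                    + "MINVALUE 10000001 NO MAXVALUE NO CYCLE");
            s.execute("CREATE TRIGGER TRG_EXCHANGE_RATES_HIST_DEL "
                    + "AFTER DELETE ON EXCHANGE_RATES "
                    + "REFERENCING OLD ROW AS OLD FOR EACH ROW "
                    + "INSERT INTO EXCHANGE_RATE_HISTORY (ID_HISTORY, CUR_FROM, "
                    + "AMNT_FROM, CUR_TO, AMNT_TO, AMNT_RATE, DAT_CREATION) "
                    + "VALUES ((NEXT VALUE FOR HIST_SEQ), OLD.CUR_FROM, OLD.AMNT_FROM, "
                    + "OLD.CUR_TO, OLD.AMNT_TO, OLD.AMNT_RATE, CURRENT_TIMESTAMP)");
            s.close();
            c.commit();

            // Each DELETE fires the row trigger, which pulls NEXT VALUE FOR
            // HIST_SEQ; the reporter sees ERROR XSTA2 at random on the delete.
            PreparedStatement ins = c.prepareStatement(
                    "INSERT INTO EXCHANGE_RATES VALUES ('EUR', ?, 'USD', ?, 1.25)");
            PreparedStatement del = c.prepareStatement(
                    "DELETE FROM EXCHANGE_RATES WHERE CUR_FROM = 'EUR'");
            for (int i = 0; i < 10000; i++) {
                ins.setInt(1, i);
                ins.setInt(2, i);
                ins.executeUpdate();
                del.executeUpdate();   // intermittent ERROR XSTA2 here
                c.commit();
            }
            c.close();
        }
    }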

                
> update/delete statement on table with trigger fails randomly with ERROR XSTA2
> -----------------------------------------------------------------------------
>
>                 Key: DERBY-6137
>                 URL: https://issues.apache.org/jira/browse/DERBY-6137
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.9.1.0, 10.11.0.0
>         Environment: $ java -version
> java version "1.6.0_43"
> Java(TM) SE Runtime Environment (build 1.6.0_43-b01)
> Java HotSpot(TM) 64-Bit Server VM (build 20.14-b01, mixed mode)
> Linux x86_64
>            Reporter: Andrew Clemons
>         Attachments: derby_db.log
>
>
> I have an AFTER DELETE trigger on an exchange rate table which inserts data into a history table. It uses a sequence for the key in the history table.
> Simplified it looks like this:
> CREATE TRIGGER TRG_EXCHANGE_RATES_HIST_DEL
> AFTER DELETE ON EXCHANGE_RATES
> REFERENCING OLD ROW AS OLD
> FOR EACH ROW
>  INSERT INTO EXCHANGE_RATE_HISTORY (ID_HISTORY, CUR_FROM, AMNT_FROM, CUR_TO, AMNT_TO, AMNT_RATE, DAT_CREATION)
>    VALUES (( NEXT VALUE for HIST_SEQ ), OLD.CUR_FROM, OLD.AMNT_FROM, OLD.CUR_TO, OLD.AMNT_TO, OLD.AMNT_RATE, CURRENT_TIMESTAMP);
> The sequence is defined as:
> create sequence HIST_SEQ
> increment by 1
> start with 10000001
>  no maxvalue
>  minvalue 10000001
> no cycle;
> Randomly when deleting data the statement will fail with:
> ERROR XSTA2: A transaction was already active, when attempt was made to make another transaction active.
> I will attach the full stack trace. It looks like the sequence cache needs to be cleared which causes a nested user transaction to start. But my delete statement is already running as part of a transaction (non XA - spring managed - hibernate).
> We do not have this exception when testing with derby 10.8.2.2.
> I get the same exception after building and running the latest trunk version (revision 1463340, Bundle-Version: 10.11.0000000.1463340)
> To rule out the sequence on the history table, I switched it to use a generated column (GENERATED ALWAYS AS IDENTITY (START WITH 10000001, INCREMENT BY 1)) but I still randomly get the exception.
> All tables in the application use sequences for their primary keys (through Hibernate's SequenceHiLoGenerator) so it seems to be possibly related to that.
