Posted to derby-user@db.apache.org by Geoff hendrey <ge...@yahoo.com> on 2008/01/10 06:59:28 UTC
why are table locks being made?
I execute 10 DELETE operations. Each DELETE is executed from its own thread and deletes a different row from the same table. I get the same behavior with both the embedded and network drivers, on Derby 10.3.14.
I am seeing deadlocks. Here is the deadlock dump:
A lock could not be obtained due to a deadlock, cycle of locks and waiters is:
Lock : TABLE, GEOFF__BLOG__USER_MESSAGES, Tablelock
Waiting XID : {7250, IX} , BLOG, DELETE FROM GEOFF__BLOG__USER WHERE "PK"=822
Granted XID : {7249, IX}
Lock : TABLE, GEOFF__BLOG__USER, Tablelock
Waiting XID : {7249, X} , BLOG, DELETE FROM GEOFF__BLOG__USER WHERE "PK"=830
Granted XID : {7249, IX} , {7250, IX}
. The selected victim is XID : 7250.
My question is: why are TABLE locks being issued? Why are any locks at all being issued? I have set the transaction isolation level to READ_UNCOMMITTED, and my JDBC connections have autocommit set to false. I am not issuing a LOCK TABLE or anything else that should cause the Tablelock.
Also, since I never execute any delete against "GEOFF__BLOG__USER_MESSAGES", I am confused as to why this table appears in the dump. You can see that the delete statements both delete a different row from GEOFF__BLOG__USER, *not* from GEOFF__BLOG__USER_MESSAGES. Is it possible that there is some bug that causes Derby to get confused about the long table names, one of which starts with the other?
Any suggestions?
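(Aside, not from the original post: when investigating a deadlock like the one above, Derby can log every deadlock and dump the full set of held locks when one occurs. These are standard Derby lock-diagnostic properties, set in derby.properties or as system properties:)

```
# derby.properties -- lock diagnostics
derby.locks.monitor=true         # log each deadlock to derby.log
derby.locks.deadlockTrace=true   # also dump all locks held at deadlock time
```

With deadlockTrace enabled, the derby.log output shows every lock in the cycle, which makes it easier to see why GEOFF__BLOG__USER_MESSAGES is involved.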
Re: why are table locks being made?
Posted by Oystein Grovlen - Sun Norway <Oy...@Sun.COM>.
Some things to consider:
* Isolation level is only relevant when locking for reads. For
  insert/update/delete, locking is needed for the duration of the
  transaction in order to guarantee the recoverability of the
  database.
* If your table has a referential constraint to other tables, a
  delete may cause cascading deletes in referred tables. (Ref.
  http://db.apache.org/derby/docs/10.3/ref/rrefsqlj13590.html)
* The optimizer may choose to use table locking instead of row
  locking if it thinks that will be more efficient. If a table scan
  is used, table locking will certainly be used. If the table is
  very small, the optimizer may choose to use a table scan instead of
  an index lookup. (E.g., if all records fit in one page, a table
  scan will only access one page, while an index lookup will access
  two pages.) See
  http://db.apache.org/derby/docs/10.3/tuning/ctunoptimzoverride.html
  for advice on how to force a query to use an index lookup.
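(Aside, not from the reply: the optimizer-override page linked above describes Derby's --DERBY-PROPERTIES comment syntax for pinning a plan to an index. A minimal sketch, where the index name IDX_USER_PK is hypothetical — substitute the actual index backing the "PK" column:)

```sql
-- Force the plan through a specific index; the override comment must
-- end the line it appears on.
SELECT * FROM GEOFF__BLOG__USER --DERBY-PROPERTIES index=IDX_USER_PK
WHERE "PK" = 822
```

If the forced plan uses an index lookup, Derby will take row locks rather than a table lock for that access path.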
Hope this clears up a few things. Please do not hesitate to ask more
questions if you are still not able to figure out how to solve your
problem. Posting your DDL and the query plans will also make it
easier to pinpoint the problem. (See
http://wiki.apache.org/db-derby/PerformanceDiagnosisTips for advice on
how to generate query plans.)
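(Aside, not from the reply: even after tuning, concurrent writers can still deadlock occasionally, in which case Derby rolls back one victim transaction and raises an SQLException with SQLState 40001. A common remedy, sketched here with a simulated operation instead of a real Derby connection, is to catch that state and retry:)

```java
import java.sql.SQLException;

public class DeadlockRetry {

    /** A unit of JDBC work that may fail with an SQLException. */
    interface SqlOp {
        void run() throws SQLException;
    }

    /**
     * Runs op, retrying whenever Derby chooses it as the deadlock
     * victim (SQLState 40001 -- the transaction has already been
     * rolled back, so it is safe to re-execute). Returns the number
     * of attempts that were needed.
     */
    static int runWithRetry(SqlOp op, int maxRetries) throws SQLException {
        int attempts = 0;
        while (true) {
            attempts++;
            try {
                op.run();
                return attempts;
            } catch (SQLException e) {
                if (!"40001".equals(e.getSQLState()) || attempts > maxRetries) {
                    throw e; // not a deadlock, or out of retries
                }
                // Deadlock victim: fall through and retry.
            }
        }
    }

    public static void main(String[] args) throws SQLException {
        // Simulated DELETE: loses the deadlock twice, then succeeds.
        int[] calls = {0};
        int attempts = runWithRetry(() -> {
            if (++calls[0] < 3) {
                throw new SQLException("deadlock victim", "40001");
            }
        }, 5);
        System.out.println("succeeded after " + attempts + " attempts");
        // -> succeeded after 3 attempts
    }
}
```

In the real application each thread would open its own Connection, run the DELETE, and commit inside op; the retry wrapper stays the same.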
--
Øystein