Posted to derby-dev@db.apache.org by "Kathey Marsden (JIRA)" <ji...@apache.org> on 2009/03/02 19:48:56 UTC
[jira] Updated: (DERBY-4055) Space may not be reclaimed if row locks are not available after three retries
[ https://issues.apache.org/jira/browse/DERBY-4055?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]
Kathey Marsden updated DERBY-4055:
----------------------------------
Derby Categories: [High Value Fix]
> Space may not be reclaimed if row locks are not available after three retries
> -------------------------------------------------------------------------------
>
> Key: DERBY-4055
> URL: https://issues.apache.org/jira/browse/DERBY-4055
> Project: Derby
> Issue Type: Bug
> Components: Store
> Affects Versions: 10.1.3.1, 10.2.2.0, 10.3.3.0, 10.4.2.0, 10.5.0.0
> Reporter: Kathey Marsden
> Attachments: derby.log.T_RawStoreFactoryWithAssert
>
>
> When multiple threads concurrently update the CLOB column of the same row, the freed space will not be reclaimed. The offending code is in ReclaimSpaceHelper:
> RecordHandle headRecord = work.getHeadRowHandle();
>
> if (!container_rlock.lockRecordForWrite(
>         tran, headRecord, false /* not insert */, false /* nowait */))
> {
>     // cannot get the row lock, retry
>     tran.abort();
>
>     if (work.incrAttempts() < 3)
>     {
>         return Serviceable.REQUEUE;
>     }
>     else
>     {
>         // If code gets here, the space will be lost forever, and
>         // can only be reclaimed by a full offline compress of the
>         // table/index.
>         if (SanityManager.DEBUG)
>         {
>             if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>             {
>                 SanityManager.DEBUG(
>                     DaemonService.DaemonTrace,
>                     " gave up after 3 tries to get row lock " + work);
>             }
>         }
>         return Serviceable.DONE;
>     }
> }
> If we cannot get the row lock after three tries, we give up and the space is lost. The reproduction for this issue is in the test store.ClobReclamationTest.xtestMultiThreadUpdateSingleRow(); a standalone sketch of the same scenario follows.
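> A minimal standalone sketch of that scenario, assuming an embedded Derby
> database and a made-up table CLOBTAB (the real regression test lives in
> store.ClobReclamationTest): two threads repeatedly overwrite the CLOB
> column of the same row, so the post-commit reclaim worker keeps losing
> the race for the row lock.
>
> import java.sql.*;
>
> // Hypothetical reproduction sketch, not the actual regression test.
> public class ClobUpdateRace extends Thread {
>     static final String URL = "jdbc:derby:reclaimdb;create=true";
>
>     public void run() {
>         try {
>             Connection c = DriverManager.getConnection(URL);
>             PreparedStatement ps = c.prepareStatement(
>                 "UPDATE CLOBTAB SET C = ? WHERE ID = 1");
>             char[] big = new char[33000];
>             java.util.Arrays.fill(big, 'x');
>             for (int i = 0; i < 100; i++) {
>                 // Each autocommitted update orphans the old CLOB pages,
>                 // queueing post-commit reclaim work for this row.
>                 ps.setString(1, new String(big));
>                 ps.executeUpdate();
>             }
>             c.close();
>         } catch (SQLException e) {
>             e.printStackTrace();
>         }
>     }
>
>     public static void main(String[] args) throws Exception {
>         Class.forName("org.apache.derby.jdbc.EmbeddedDriver");
>         Connection setup = DriverManager.getConnection(URL);
>         Statement s = setup.createStatement();
>         s.executeUpdate("CREATE TABLE CLOBTAB (ID INT PRIMARY KEY, C CLOB)");
>         s.executeUpdate("INSERT INTO CLOBTAB VALUES (1, 'seed')");
>         setup.close();
>
>         Thread t1 = new ClobUpdateRace();
>         Thread t2 = new ClobUpdateRace();
>         t1.start(); t2.start();
>         t1.join(); t2.join();
>     }
> }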
> This issue originally also referenced the code below, along with some attempts to obtain a reproduction for that failure, but that work has moved to DERBY-4054. Please see DERBY-4054 for work on the container lock issue.
> ContainerHandle containerHdl =
>     openContainerNW(tran, container_rlock, work.getContainerId());
>
> if (containerHdl == null)
> {
>     tran.abort();
>
>     if (SanityManager.DEBUG)
>     {
>         if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>         {
>             SanityManager.DEBUG(
>                 DaemonService.DaemonTrace, " aborted " + work +
>                 " because container is locked or dropped");
>         }
>     }
>
>     if (work.incrAttempts() < 3) // retry this several times
>     {
>         return Serviceable.REQUEUE;
>     }
>     else
>     {
>         // If code gets here, the space will be lost forever, and
>         // can only be reclaimed by a full offline compress of the
>         // table/index.
>         if (SanityManager.DEBUG)
>         {
>             if (SanityManager.DEBUG_ON(DaemonService.DaemonTrace))
>             {
>                 SanityManager.DEBUG(
>                     DaemonService.DaemonTrace,
>                     " gave up after 3 tries to get container lock " + work);
>             }
>         }
>         return Serviceable.DONE;
>     }
> }
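> For what it's worth, the "full offline compress" that the comments mention
> is exposed as the SYSCS_UTIL.SYSCS_COMPRESS_TABLE system procedure, so the
> orphaned space is recoverable by hand. A sketch of calling it from JDBC,
> assuming an open Connection conn and the illustrative schema/table names
> from above:
>
> // Compress the table to reclaim space the post-commit worker gave up on.
> // A non-zero third argument selects sequential mode (slower, but it
> // needs less temporary disk space).
> CallableStatement cs = conn.prepareCall(
>     "CALL SYSCS_UTIL.SYSCS_COMPRESS_TABLE(?, ?, ?)");
> cs.setString(1, "APP");        // schema name (illustrative)
> cs.setString(2, "CLOBTAB");    // table name (illustrative)
> cs.setShort(3, (short) 1);     // non-zero = sequential compress
> cs.execute();
> cs.close();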
--
This message is automatically generated by JIRA.
-
You can reply to this email to add a comment to the issue online.