Posted to derby-dev@db.apache.org by "Kristian Waagan (JIRA)" <ji...@apache.org> on 2011/08/11 11:38:27 UTC

[jira] [Issue Comment Edited] (DERBY-5367) Stale data retrieved when using new collation=TERRITORY_BASED:PRIMARY feature

    [ https://issues.apache.org/jira/browse/DERBY-5367?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=13083033#comment-13083033 ] 

Kristian Waagan edited comment on DERBY-5367 at 8/11/11 9:37 AM:
-----------------------------------------------------------------

Turns out the problem here is the insert into the index (see BTreeController.doIns).
There is optimization code in there to undelete a deleted row in a unique index if the "same" row is inserted again before the row has been purged. This assumes that the fields that are part of the key are equal, but this assumption may be broken when using collations.

The insert code is smart enough to update the row location of the index row, but it doesn't understand that the value stored in the base row has changed.
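
To illustrate (a standalone sketch, not Derby code): a collator with PRIMARY strength - which is what collation=TERRITORY_BASED:PRIMARY boils down to - treats 'Test' and 'test' as duplicates even though the stored values differ:

import java.text.Collator;
import java.util.Locale;

// Standalone illustration of the broken assumption; this is not Derby code.
public class PrimaryStrengthDemo {
    public static void main(String[] args) {
        Collator collator = Collator.getInstance(Locale.US);
        collator.setStrength(Collator.PRIMARY);

        String oldKey = "Test";   // value left behind by the deleted index row
        String newKey = "test";   // value being inserted

        // The index sees a duplicate key...
        System.out.println(collator.compare(oldKey, newKey) == 0);   // true
        // ...but the stored values are not actually identical.
        System.out.println(oldKey.equals(newKey));                   // false
    }
}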

I'm not very familiar with this code, so I'd like to get some feedback from more knowledgeable people.
Two possible solutions come to mind:
 a) Deoptimize insert - always insert a new index row.
 b) Make conglomerate code aware of collations and take appropriate actions after comparing the new and the old value(s)

I don't know what consequences the proposed solutions would have.
A quick and dirty fix (where I updated the key field of the reused index row) made the repro pass.
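
For illustration, option b) might look roughly like the toy model below. The class and method names are made up for this comment and do not match the actual conglomerate/BTreeController code; it only shows that reusing a deleted index row has to refresh the key value as well, not just the row location:

import java.text.Collator;
import java.util.Locale;

// Toy model only -- invented names, not Derby's internal API.
public class UndeleteSketch {
    static class IndexRow {
        String key;          // the indexed column value
        long rowLocation;    // pointer into the base table
        boolean deleted;
        IndexRow(String key, long rowLocation) {
            this.key = key;
            this.rowLocation = rowLocation;
        }
    }

    public static void main(String[] args) {
        Collator collator = Collator.getInstance(Locale.US);
        collator.setStrength(Collator.PRIMARY);  // TERRITORY_BASED:PRIMARY behaviour

        IndexRow slot = new IndexRow("Test", 1L);
        slot.deleted = true;                     // marked deleted, not yet purged

        String newKey = "test";
        long newRowLocation = 2L;
        if (slot.deleted && collator.compare(slot.key, newKey) == 0) {
            slot.rowLocation = newRowLocation;   // what the code already does
            slot.key = newKey;                   // the extra step option b) needs
            slot.deleted = false;
        }
        System.out.println(slot.key);            // 'test' only with the extra step
    }
}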

> Stale data retrieved when using new collation=TERRITORY_BASED:PRIMARY feature
> -----------------------------------------------------------------------------
>
>                 Key: DERBY-5367
>                 URL: https://issues.apache.org/jira/browse/DERBY-5367
>             Project: Derby
>          Issue Type: Bug
>          Components: Store
>    Affects Versions: 10.8.1.2, 10.9.0.0
>         Environment: Mac OS X, Windows
>            Reporter: Brett Wooldridge
>            Priority: Critical
>
> Our product recently upgraded to version 10.8.1.2 in order to take advantage of the new 'case-insensitive' mode offered by Derby in the form of the "collation=TERRITORY_BASED:PRIMARY" connection parameter.
> Unfortunately, we have run into an issue whereby stale data appears to be retrieved from an index, even though the data in the table itself has changed.
> You can see this issue in the IJ session below.  The database in question was created using this Java parameter when invoking IJ:
> -Dij.database=jdbc:derby:test;create=true;collation=TERRITORY_BASED:PRIMARY
> Here is the IJ session:
> CONNECTION0* - 	jdbc:derby:test
> * = current connection
> ij> CREATE TABLE tag (
>     tag_id INTEGER GENERATED BY DEFAULT AS IDENTITY NOT NULL,
>     tag VARCHAR(255) NOT NULL,
>     CONSTRAINT tag_pk PRIMARY KEY (tag_id),
>     CONSTRAINT tag_tag_unique UNIQUE (tag)
> );
> 0 rows inserted/updated/deleted
> ij> -- first insert a value 'Test', note the upper-case 'T' in 'Test'
> ij> INSERT INTO tag (tag) VALUES ('Test');
> 1 row inserted/updated/deleted
> ij> SELECT * FROM tag;
> TAG_ID     |TAG                                                                                                                             
> --------------------------------------------------------------------------------------------------------------------------------------------
> 1          |Test                                                                                                                            
> 1 row selected
> ij> -- Now delete the row
> ij> DELETE FROM tag WHERE tag='Test';
> 1 row inserted/updated/deleted
> ij> -- You could run another SELECT here to verify the row is gone (it is).
> ij> -- Now insert a new value 'test', note the lower-case 't' in 'test'
> ij> INSERT INTO tag (tag) VALUES ('test');
> 1 row inserted/updated/deleted
> ij> -- Now verify that the table contains only the lower-case version: 'test'
> ij> SELECT * FROM tag;
> TAG_ID     |TAG                                                                                                                             
> --------------------------------------------------------------------------------------------------------------------------------------------
> 2          |test                                                                                                                            
> 1 row selected
> ij> -- Now, here is the bug.
> ij> SELECT tag FROM tag;
> TAG                                                                                                                             
> --------------------------------------------------------------------------------------------------------------------------------
> Test                                                                                                                            
> 1 row selected
> ij> 
> Note that the last SELECT names the 'tag' column explicitly.  When we 'SELECT *', Derby performs a table scan and the result is correct.  However, when we 'SELECT tag', Derby appears to use the index created for the 'tag_tag_unique' unique constraint.  As an optimization, Derby, like many databases, will read values directly from the index when the index covers all requested columns.
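> For reference, a minimal JDBC sketch (connection URL and names taken from this report; an illustration, not our production code) that prints Derby's runtime statistics, which should confirm that the covering query is answered from the TAG_TAG_UNIQUE index rather than the base table:
> import java.sql.*;
>
> public class CheckAccessPath {
>     public static void main(String[] args) throws SQLException {
>         // The database was already created with collation=TERRITORY_BASED:PRIMARY.
>         Connection conn = DriverManager.getConnection("jdbc:derby:test");
>         Statement s = conn.createStatement();
>         s.execute("CALL SYSCS_UTIL.SYSCS_SET_RUNTIMESTATISTICS(1)");
>
>         ResultSet rs = s.executeQuery("SELECT tag FROM tag");
>         while (rs.next()) { System.out.println(rs.getString(1)); }
>         rs.close();
>
>         // Plan text for the statement above; it names the index (or table) scanned.
>         ResultSet plan = s.executeQuery("VALUES SYSCS_UTIL.SYSCS_GET_RUNTIMESTATISTICS()");
>         while (plan.next()) { System.out.println(plan.getString(1)); }
>         conn.close();
>     }
> }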
> The bigger question is, why doesn't the DELETE action cause the entry in the tag_tag_unique index to be deleted?  Is this a further optimization?  If so, it is imperative that the index at least be updated when the new value is inserted.
> This is a rather severe bug for us, as it causes stale data to be returned.
