Posted to ojb-user@db.apache.org by Danilo Tommasina <dt...@risksys.com> on 2003/06/04 15:48:38 UTC

Cache inconsistency using deleteByQuery with PersistenceBrokerImpl

Hello,

I noticed an odd behaviour when using *broker.deleteByQuery*. The issue seems to be known (see the developer mailing list, msg 652, "[VOTE] deleteByQuery leaves Cache in an inconsistent state"), however no information about it is available in the javadoc yet, nor does a solution seem to exist.
Take a look at this code:

        broker = PersistenceBrokerFactory.defaultPersistenceBroker();
        //Insert entries
        try {
            broker.beginTransaction();
            UserAttrs ua;
            //Columns:        userid, attrName, attrValue
            //Primary Key:       x  ,    x
            ua= new UserAttrs( "id1", "attr1", "test1" );
            broker.store( ua );
            ua= new UserAttrs( "id1", "attr2", "test2" );
            broker.store( ua );
            broker.commitTransaction();
        } catch (Throwable t) {
            broker.abortTransaction();
            t.printStackTrace();
        }

        //Delete all entries with userID = "id1"
        try {
            UserAttrs ua= new UserAttrs();
            ua.setUserid( "id1" );
            Query q = new QueryByCriteria(ua);
            broker.beginTransaction();
            broker.deleteByQuery( q );
            broker.commitTransaction();
        } catch (Throwable t) {
            broker.abortTransaction();
            t.printStackTrace();
        }

        //Re-Insert entries
        try {
            broker.beginTransaction();
            UserAttrs ua;
            //Columns:        userid, attrName, attrValue
            //Primary Key:       x  ,    x
            ua= new UserAttrs( "id1", "attr1", "test1" );
            broker.store( ua );
            ua= new UserAttrs( "id1", "attr2", "test2" );
            broker.store( ua );
            broker.commitTransaction();
        } catch (Throwable t) {
            broker.abortTransaction();
            t.printStackTrace();
        }

On the first execution this generates the following SQL:

SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1'  AND ATTR_NAME = 'attr1'
INSERT INTO USER_ATTRS (USERID,ATTR_NAME,ATTR_VALUE) VALUES ('id1','attr1','test1')
SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1'  AND ATTR_NAME = 'attr2'
INSERT INTO USER_ATTRS (USERID,ATTR_NAME,ATTR_VALUE) VALUES ('id1','attr2','test2')
-> commit

SELECT A0.ATTR_NAME,A0.USERID,A0.ATTR_VALUE FROM USER_ATTRS A0 WHERE A0.USERID =  'id1'
DELETE FROM USER_ATTRS WHERE USERID =  'id1'
-> commit

SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1'  AND ATTR_NAME = 'attr1'
UPDATE USER_ATTRS SET ATTR_VALUE='test1' WHERE USERID = 'id1'  AND ATTR_NAME = 'attr1'
SELECT ATTR_NAME,USERID,ATTR_VALUE FROM USER_ATTRS WHERE USERID = 'id1'  AND ATTR_NAME = 'attr2'
UPDATE USER_ATTRS SET ATTR_VALUE='test2' WHERE USERID = 'id1'  AND ATTR_NAME = 'attr2'
-> commit

The UPDATE statements in the third block will have no effect on the database; this is, from my point of view, a rare but potentially dangerous BUG!!!

There is a simple workaround until the code is fixed: simply call broker.clearCache() after the deleteByQuery transaction has been committed.
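Just to illustrate, the clearCache() workaround looks something like this (same UserAttrs example as above):

        UserAttrs example = new UserAttrs();
        example.setUserid( "id1" );
        Query q = new QueryByCriteria( example );
        try {
            broker.beginTransaction();
            broker.deleteByQuery( q );
            broker.commitTransaction();
        } catch (Throwable t) {
            broker.abortTransaction();
            t.printStackTrace();
        }
        //Workaround: throw away ALL cached objects so later store() calls don't see stale entries
        broker.clearCache();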
However, this is a performance killer if you call deleteByQuery very often.
I adopted the following solution, but since I am an OJB newbie I'd like to know whether you see a better one that does not require re-implementing the ObjectCacheImpl class.
I extended PersistenceBrokerImpl with a new class, overrode the deleteByQuery method, and declared the new class in OJB.properties via the PersistenceBrokerClass property.
Here is the code:

public class SafeDeleteByQueryPBImpl extends PersistenceBrokerImpl {
    protected SafeDeleteByQueryPBImpl() {
        super();
    }
    public SafeDeleteByQueryPBImpl(PBKey key, PersistenceBrokerFactoryIF pbf) {
        super( key, pbf );
    }

    /**
     * Bug workaround:
     * clears all objects matching the query from the cache before delegating to PersistenceBrokerImpl.deleteByQuery(query).
     * @see org.apache.ojb.broker.PersistenceBroker#deleteByQuery(Query)
     */
    public void deleteByQuery(Query query) throws PersistenceBrokerException {
        //Clear cached objects
        Iterator it= super.getIteratorByQuery( query );  //List all objects affected by the query
        while ( it.hasNext() ) {
            super.objectCache.remove( new Identity( it.next(), this ) );    //Remove matching objects from cache
        }
        //Delegate deleteByQuery to super class
        super.deleteByQuery( query );
    }
}
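For reference, the corresponding OJB.properties entry looks roughly like this (the package name is just a placeholder for wherever you put the class):

PersistenceBrokerClass=com.mycompany.ojb.SafeDeleteByQueryPBImpl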

Calling the method causes an extra SELECT statement to be executed and all affected objects to be loaded into memory; however, this should still be faster than executing single deletes or clearing the whole cache each time.
Is there a better solution to that?
Thanks and sorry for the long message
 Danilo Tommasina

Re: Cache inconsistency using deleteByQuery with PersistenceBrokerImpl

Posted by Danilo Tommasina <dt...@risksys.com>.
hi thomas,

uhm, if I understand you correctly, this is exactly what I am doing with my SafeDeleteByQueryPBImpl class (see my first mail).
It should be possible to add a conditional block that checks whether the cache is active.
If it is active, fetch the list of affected objects through getIteratorByQuery and evict them from the cache; otherwise just skip that step.
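A minimal sketch of that conditional inside my override (I am assuming the 'empty cache' case can be detected from the configured ObjectCache implementation; the ObjectCacheEmptyImpl class name below is an assumption, adjust it to whatever your OJB version uses for the no-op cache):

    public void deleteByQuery(Query query) throws PersistenceBrokerException {
        //Only pay for the extra SELECT if a real cache is configured
        if ( !(super.objectCache instanceof ObjectCacheEmptyImpl) ) {
            Iterator it = super.getIteratorByQuery( query );
            while ( it.hasNext() ) {
                super.objectCache.remove( new Identity( it.next(), this ) );
            }
        }
        super.deleteByQuery( query );
    }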

danilo

> Hi,
>
> I'm not convinced that this is really a necessary feature.
> why not ask the user to
> 1. load the list of all matching objects with getCollectionByQuery(q)
> 2. iterate over the collection and remove all elements from the cache
> 3. call broker.deleteByQuery(q) ?
>
> It won't be difficult to implement, but would violate the micro kernel
> approach:  it would expose an additional kernel method that could be
> easily replaced with user calls...
>
> If we want to implement it, I see a problem with SQL delete statements, as
> they circumvent all OJB mechanisms;
> it would not be so easy to apply steps 1 and 2 in that case...
>
> my 2c
> Thomas


Re: Cache inconsistency using deleteByQuery with PersistenceBrokerImpl

Posted by Danilo Tommasina <dt...@risksys.com>.
hi armin, hi jakob,

uhm, I have a little question: if the user is using an empty cache, the problem should not occur, since the generated SQL is then completely independent of previous statements, right?
Or am I wrong? In that case a synchronizeCache flag would be unnecessary, since OJB 'knows' whether it is using a cache or not...
It should also be possible to extend the ObjectCache interface with a method like clearTable( tablename ) that clears all cached objects belonging to one table.
deleteByQuery could then clear only the tables it affects; that is still not optimal, but better than clearing the whole cache.
This would, however, require re-implementing ObjectCacheImpl.
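As a rough sketch, the extension could look like this (clearTable is purely hypothetical and does not exist in OJB; a cache implementing it would have to keep track of which identities belong to which table):

    //Hypothetical extension - NOT part of OJB
    public interface TableAwareObjectCache extends ObjectCache {
        /** Evicts every cached object that is mapped to the given table. */
        void clearTable( String tableName );
    }

deleteByQuery could then look up the table name of the query's target class and call clearTable instead of selecting all matching objects first.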
Depending on the circumstances, the "SELECT + DELETE" approach may still be faster than the clearTable approach.

danilo

> hi armin,
>
> sounds good.  would you like to sync the cache by an ordinary query or
> are you thinking about something special for this task ?
>
> jakob
>
> Armin Waibel wrote:
> >Hi Danilo,
> >
> >you are right cache was not synchronized.
> >
> >But some user don't use the cache (using 'empty cache'),
> >so maybe we need both possibilities:
> >* with cache synchronization - safe but less
> >performant
> >* without cache synchronization - performant
> >
> >public void deleteByQuery(Query query)
> >does cache synchronization by default
> >
> >public void deleteByQuery(Query query, boolean synchronizeCache)
> >choose what you want
> >
> >What do you think?
> >
> >regards,
> >Armin


Re: Cache inconsistency using deleteByQuery with PersistenceBrokerImpl

Posted by Thomas Mahler <th...@web.de>.
Hi,

I'm not convinced that this is really a necessary feature.
why not ask the user to
1. load the list of all matching objects with getCollectionByQuery(q)
2. iterate over the collection and remove all elements from the cache
3. call broker.deleteByQuery(q) ?
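In code, that user-side sequence would look roughly like this (sketch only, reusing the UserAttrs example from the first mail and assuming PersistenceBroker.removeFromCache(Object) is available in your OJB version):

        UserAttrs example = new UserAttrs();
        example.setUserid( "id1" );
        Query q = new QueryByCriteria( example );

        broker.beginTransaction();
        //1. load all matching objects
        Collection matches = broker.getCollectionByQuery( q );
        //2. remove each of them from the cache
        for (Iterator it = matches.iterator(); it.hasNext(); ) {
            broker.removeFromCache( it.next() );
        }
        //3. delete them with a single statement
        broker.deleteByQuery( q );
        broker.commitTransaction();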

It won't be difficult to implement, but would violate the micro kernel 
approach:  it would expose an additional kernel method that could be 
easily replaced with user calls...

If we want to implement it, I see a problem with SQL delete statements, as
they circumvent all OJB mechanisms;
it would not be so easy to apply steps 1 and 2 in that case...

my 2c
Thomas


Re: Cache inconsistency using deleteByQuery with PersistenceBrokerImpl

Posted by Jakob Braeuchi <jb...@gmx.ch>.
hi armin,

sounds good.  would you like to sync the cache by an ordinary query or 
are you thinking about something special for this task ?

jakob

Armin Waibel wrote:

>Hi Danilo,
>
>you are right cache was not synchronized.
>
>But some user don't use the cache (using 'empty cache'),
>so maybe we need both possibilities:
>* with cache synchronization - safe but less
>performant
>* without cache synchronization - performant
>
>public void deleteByQuery(Query query)
>does cache synchronization by default
>
>public void deleteByQuery(Query query, boolean synchronizeCache)
>choose what you want
>
>What do you think?
>
>regards,
>Armin


Re: Cache inconsistency using deleteByQuery with PersistenceBrokerImpl

Posted by Armin Waibel <ar...@code-au-lait.de>.
Hi Danilo,

you are right, the cache is not synchronized.

But some users don't use the cache at all (they use the 'empty cache'),
so maybe we need both possibilities:
* with cache synchronization - safe but less performant
* without cache synchronization - more performant

public void deleteByQuery(Query query)
does cache synchronization by default

public void deleteByQuery(Query query, boolean synchronizeCache)
choose what you want
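To make the proposal concrete, here is a rough sketch (not committed code) of how the two methods could relate; the eviction loop reuses the pattern from Danilo's workaround:

    public void deleteByQuery(Query query) throws PersistenceBrokerException
    {
        //default: safe behaviour
        deleteByQuery(query, true);
    }

    public void deleteByQuery(Query query, boolean synchronizeCache) throws PersistenceBrokerException
    {
        if (synchronizeCache)
        {
            //evict every object the query is about to delete
            Iterator it = getIteratorByQuery(query);
            while (it.hasNext())
            {
                objectCache.remove(new Identity(it.next(), this));
            }
        }
        // ... the existing delete-by-query SQL execution follows here
    }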

What do you think?

regards,
Armin


Re: Cache inconsistency using deleteByQuery with PersistenceBrokerImpl

Posted by Danilo Tommasina <dt...@risksys.com>.
hi jakob,

The UPDATE statements in the third transaction will have no effect on the database, because after the DELETE statement no rows with userid="id1" exist any more.
The statements are generated as UPDATEs instead of INSERTs because OJB still sees the objects in its cache.
This is quite dangerous, because you won't see any error and the data is NOT written to the database.
I know it is quite rare (and surely not optimal) to delete an entry and then re-insert it, but it can happen :)
Clearing the cache correctly after the DELETE statements makes OJB generate INSERT statements again.
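Conceptually, the failure mode boils down to something like this (this is NOT OJB's actual store() code, only an illustration of the decision):

    //After deleteByQuery the rows are gone from the database, but the stale
    //cache entries still answer the lookup, so store() believes the objects
    //already exist and issues UPDATEs instead of INSERTs.
    boolean believedToExist = ( objectCache.lookup( new Identity( ua, broker ) ) != null );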

There is a little mistake in my previous mail: the SELECT statement in the second transaction is only generated when using my PersistenceBroker implementation. :(

danilo

> hi danilo,
>
> imo loading all object to delete them from the cache is the only way,
> because the cache does not know which objects will be deleted by the query.
>
> >> The UPDATE statements in the third block will have no effect on the database; this is, from my point of view, a rare but potentially dangerous BUG!!!
>
> could you please be more specific about this one ?
>
>
> jakob


Re: Cache inconsistency using deleteByQuery with PersistenceBrokerImpl

Posted by Jakob Braeuchi <jb...@gmx.ch>.
hi danilo,

imo loading all objects just to delete them from the cache is the only way,
because the cache does not know which objects will be deleted by the query.

>> The UPDATE statements in the third block will have no effect on the database; this is, from my point of view, a rare but potentially dangerous BUG!!!

could you please be more specific about this one ?


jakob
