Posted to users@openjpa.apache.org by PaulCB <pa...@smilecoms.com> on 2008/12/01 08:55:38 UTC

Re: Dirty Data Under Concurrency


Hi Milosz,

Yes, I'm using InnoDB. I'll try the select for update (I didn't know how
to get JPA to do that ;-)). My current strategy does work, as the update
can only be done by thread 2 once thread 1 has committed:

> > 1) EJB finds the ACCOUNT row for the given account id using a simple find
> > 2) Updates a field on the ACCOUNT row to get a lock on the row and does em.persist and em.flush

Once steps 1 and 2 above are done by thread 2, it reads the data in the
balance table and sees the updates committed by thread 1. With REPEATABLE
READ, the read showed the data as it was prior to thread 1's commit,
while READ COMMITTED shows the new data.

Paul

-----Original Message-----
From: MiƂosz Tylenda (via Nabble) <ml-user
+63810-1061096733@n2.nabble.com>
Reply-to: Post 1594665 on Nabble <ml-node
+1594665-1599610017@n2.nabble.com>
To: PaulCB <pa...@smilecoms.com>
Subject: Re: Dirty Data Under Concurrency
Date: Sun, 30 Nov 2008 00:46:11 -0800 (PST)


Good that it now works. However, it seems odd to me that this particular
change was what helped. Do you use InnoDB tables? In MySQL, READ COMMITTED
is a lower isolation level than REPEATABLE READ, so I am wondering how
lowering the isolation level could improve consistency.

In this case I would try SELECT FOR UPDATE (EntityManager.lock in JPA
terms) - that would make the second thread wait until the first has done
its job, so it can execute find + calculate + persist without losing
anything.
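To illustrate why the blocking matters, here is a minimal plain-Java sketch of the SELECT FOR UPDATE semantics - no JPA or database involved, and the class, field, and method names (RowLockSketch, balance, creditAccount) are made up for the example. A ReentrantLock stands in for the row lock: the second thread cannot read the balance until the first has "committed", so neither update is lost.

```java
import java.util.concurrent.locks.ReentrantLock;

public class RowLockSketch {
    static int balance = 100;                                 // stands in for the BALANCE row
    static final ReentrantLock rowLock = new ReentrantLock(); // stands in for the row lock

    static void creditAccount(int amount) throws InterruptedException {
        rowLock.lock();                      // like SELECT ... FOR UPDATE: blocks until the other "tx" commits
        try {
            int current = balance;           // find
            int updated = current + amount;  // calculate
            Thread.sleep(50);                // simulate work inside the transaction
            balance = updated;               // persist
        } finally {
            rowLock.unlock();                // commit releases the lock
        }
    }

    public static void main(String[] args) throws Exception {
        Thread t1 = new Thread(() -> { try { creditAccount(10); } catch (InterruptedException e) {} });
        Thread t2 = new Thread(() -> { try { creditAccount(5); } catch (InterruptedException e) {} });
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(balance);         // both credits applied: 115
    }
}
```

Without the lock, both threads could read 100 and one credit would be overwritten; with it, the result is deterministic regardless of which thread wins the race.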


-- 
View this message in context: http://n2.nabble.com/Dirty-Data-Under-Concurrency-tp1592091p1597782.html
Sent from the OpenJPA Users mailing list archive at Nabble.com.