Posted to sandesha-dev@ws.apache.org by Matthew Lovett <ML...@uk.ibm.com> on 2006/11/30 15:27:00 UTC
Thinking some more about the in-memory locking
Hi all,
Following on from the locks that we introduced, and the several patches
that Andy has come up with to avoid deadlock, I'm a little concerned about
Sandesha in a real production environment. We could hope that we have got
the locking strategy right, or we could put in deadlock detection.
Additionally we could restructure the code to make locking simpler and
more reliable.
Here are some ideas I have along those lines. I'd be interested to see
which ones seem like the best direction to go in.
a) Put in deadlock detection.
This is easy enough to do. All we need to do is add an 'otherTransaction'
member to each in-memory transaction object. If you are blocked by another
transaction then you set the 'otherTransaction' member to point at it.
Effectively we get a singly-linked list of transactions. When you are about
to wait on a lock, check whether there is a loop in the 'otherTransaction'
chain; if there is, we can be sure that there is a deadlock.
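To make the idea concrete, here is a rough sketch of that cycle check. The Transaction class and field names are just illustrative, not the actual Sandesha types:

```java
import java.util.HashSet;
import java.util.Set;

public class DeadlockSketch {

    // Stand-in for the in-memory transaction object described above.
    static class Transaction {
        Transaction otherTransaction; // the transaction we are blocked behind
    }

    // Walk the 'otherTransaction' chain from the transaction that is about
    // to wait. If we ever revisit a transaction, the chain has a loop, so
    // waiting would deadlock and we should throw instead.
    static boolean wouldDeadlock(Transaction start) {
        Set<Transaction> seen = new HashSet<Transaction>();
        for (Transaction t = start; t != null; t = t.otherTransaction) {
            if (!seen.add(t)) {
                return true; // loop found: deadlock
            }
        }
        return false; // chain terminated: safe to wait
    }

    public static void main(String[] args) {
        Transaction a = new Transaction();
        Transaction b = new Transaction();
        a.otherTransaction = b;                        // a waits on b
        System.out.println(wouldDeadlock(a));          // prints false
        b.otherTransaction = a;                        // b waits on a: cycle
        System.out.println(wouldDeadlock(a));          // prints true
    }
}
```

The caller would run this check just before blocking, and throw the SandeshaStorageException when it returns true.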
When we detect a deadlock we should throw a SandeshaStorageException,
to try and get out of the mess. However, for this to be reliable we also
need:
b) Put in rollback capability for in-memory beans.
Again, quite simple. Just add two member variables for the value -
'clean' and 'dirty'. On update, write to the dirty value, and only copy
it into clean at commit time. However, I think that the nicest way to
implement this is to turn the current beans into interfaces, so that other
storage managers don't get this extra logic and overhead. As some of the
code currently creates beans with 'new' we'd have to add factory methods
onto the bean managers instead, so that we allocate the correct type of
bean.
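Something like the following is what I have in mind. It's only a sketch with a single made-up property, not the real bean shape:

```java
public class RollbackBeanSketch {

    // Illustrative rollback-capable bean: every property keeps a committed
    // ('clean') copy and an in-flight ('dirty') copy.
    static class SequenceBean {
        private String clean; // value as of the last commit
        private String dirty; // value including uncommitted updates

        void setValue(String v) { dirty = v; }      // updates only touch dirty
        String getValue()       { return dirty; }
        void commit()           { clean = dirty; }  // promote dirty to clean
        void rollback()         { dirty = clean; }  // discard uncommitted work
    }

    public static void main(String[] args) {
        SequenceBean bean = new SequenceBean();
        bean.setValue("v1");
        bean.commit();
        bean.setValue("v2");                 // uncommitted change
        bean.rollback();                     // back it out
        System.out.println(bean.getValue()); // prints v1
    }
}
```

With the beans turned into interfaces, only the in-memory implementation would carry the clean/dirty pair; other storage managers could return plain beans from the same factory methods.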
c) Step back a few paces
One of the reasons this is quite complex is that the state for a sequence
is distributed across quite a few 'beans', and each bean has to be locked
separately. I think it would be much simpler to have a single object with
several properties on it. That would give us a coarser-grained locking
story, and it would be much simpler to understand the code. There is no
reason why a back-end store couldn't split the data back into piecemeal
chunks if it chooses, so I don't think this would be too dramatic a
change.
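For illustration, option (c) might look roughly like this. The property names are invented for the sketch; the point is just that one monitor covers the whole sequence state:

```java
public class SequenceStateSketch {

    // Illustrative coarse-grained sequence object: all per-sequence state
    // lives behind one lock, so callers never have to take several bean
    // locks in a particular order (which is where the deadlocks come from).
    static class SequenceState {
        private long nextMessageNumber = 1;
        private boolean terminated = false;

        synchronized long allocateMessageNumber() {
            return nextMessageNumber++;
        }

        synchronized void terminate() {
            terminated = true;
        }

        synchronized boolean isTerminated() {
            return terminated;
        }
    }

    public static void main(String[] args) {
        SequenceState state = new SequenceState();
        System.out.println(state.allocateMessageNumber()); // prints 1
        System.out.println(state.allocateMessageNumber()); // prints 2
        state.terminate();
        System.out.println(state.isTerminated());          // prints true
    }
}
```

The trade-off is less concurrency across operations on the same sequence, but given how hard the fine-grained scheme is to get right, that seems a reasonable price.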
So, comments? I'm happy to get (a) and (b) done, but I'd quite like to do
(c) first and see where it takes us. I don't see much point doing (a)
without (b), or vice versa.
Thanks
Matt
---------------------------------------------------------------------
To unsubscribe, e-mail: sandesha-dev-unsubscribe@ws.apache.org
For additional commands, e-mail: sandesha-dev-help@ws.apache.org