Posted to dev@metamodel.apache.org by Ankit Kumar <ak...@gmail.com> on 2015/06/16 11:01:18 UTC

[DISCUSS] | Transaction Management at Service layer in MetaModel | like Spring transactions.

Hi All,

I would like to discuss a topic related to transaction control within
Apache MetaModel. Currently, Apache MetaModel does not allow transaction
control anywhere outside the DAOs. Most of us Java developers are used to
relying on a framework for transaction management, and we generally like to
have it controllable/defined at the service layer.

In the MetaModel API, UpdateCallback is the place where all transaction
control happens. In a stateless DAO architecture where one table maps to one
DAO class, this brings limitations, especially when we want to perform atomic
transactions across different tables and want to have transaction control at
the service layer.

Of course we can share the UpdateCallback between the different DAO
classes/methods, but then we would most likely need to instantiate the
UpdateCallback in the service layer code. This does not feel like good design.
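
The workaround looks roughly like this (just a sketch; dataContext is assumed
to be an UpdateableDataContext, and personDao, addressDao and the surrounding
variables are hypothetical):

// in the service layer - the service defines the update script and passes the
// callback down to both DAOs so that the two inserts form one atomic update
dataContext.executeUpdate(new UpdateScript() {
    @Override
    public void run(UpdateCallback callback) {
        personDao.insert(callback, person);
        addressDao.insert(callback, address);
    }
});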

I would like to ask you all whether you face similar issues while working
with Apache MetaModel, and whether you have found a nice solution for them.

In addition, would it be nice to have something like this on the Apache
MetaModel roadmap soon?


Regards
Ankit

Re: [DISCUSS] | Transaction Management at Service layer in MetaModel | like Spring transactions.

Posted by Kasper Sørensen <i....@gmail.com>.
Hi Ankit,

I personally am quite happy with the closure-inspired (single abstract
method interface) approach to managing update scripts. But that is maybe a
personal preference: I like to have my code organized around transactions so
that I can clearly see and understand when they are created and committed.
It also ensures that there cannot be any resource leaks, because the involved
resources (file readers/writers, JDBC transactions, network connections or
whatever is "behind" the datastore's update support) are managed by the
DataContext that runs the update script closure. It also allows the
DataContext to ensure things like isolation level even on datastores that do
not have this feature natively (such as file-based datastores, for which we
currently synchronize all updates).
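
For reference, this is roughly what that closure style looks like today (a
sketch only, assuming dataContext is an UpdateableDataContext and a "person"
table exists):

// the DataContext creates, runs and commits the transaction around the script
dataContext.executeUpdate(new UpdateScript() {
    @Override
    public void run(UpdateCallback callback) {
        callback.insertInto("person").value("name", "John Doe").execute();
        callback.deleteFrom("person").where("name").eq("Jane Doe").execute();
    }
});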

Your preference might be different and I don't want to judge that as either
right or wrong - I just want to state what my preference is.

So if I were to consider your proposal, here is how I could imagine it
working, if anybody wants to implement it:

1) In addition to the existing executeUpdate(...) method on DataContext, we
would need to introduce a method that starts a transaction without yet
committing it. I imagine it like this:

// in your aspect, filter, interceptor or whatever
CommitableUpdateCallback updateCallback = dataContext.startUpdate();


2) The above method would be executed by some Aspect in your application
code - if you use AOP, servlet filters or something like that.

3) Once this object is made, you would need to bind it to some context.
Maybe to a ThreadLocal variable or so. This would allow your DAO to access
it during any following operations.
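
A minimal holder could look something like this (a sketch only; the
CommitableUpdateCallback type is the hypothetical one from step 1):

// hypothetical thread-local holder that the aspect/filter would populate
public class UpdateCallbackHolder {

    private static final ThreadLocal<CommitableUpdateCallback> CURRENT =
            new ThreadLocal<CommitableUpdateCallback>();

    public static void set(CommitableUpdateCallback callback) {
        CURRENT.set(callback);
    }

    public static CommitableUpdateCallback get() {
        return CURRENT.get();
    }

    public static void clear() {
        CURRENT.remove();
    }
}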

4) Your dependency injection framework would have to be aware of the
context in which the object is bound. If it is thread-locally bound, for
instance, then you could probably make a custom scope in your DI framework
that injects it based on the ThreadLocal variable:

// in your dao class
@Inject
@ThreadLocalScope
CommitableUpdateCallback updateCallback;
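
If Spring happened to be the DI framework, such a custom scope could be
sketched roughly like this (purely hypothetical, building on the
UpdateCallbackHolder sketched under step 3):

// a rough sketch of a custom Spring scope backed by the ThreadLocal holder
import org.springframework.beans.factory.ObjectFactory;
import org.springframework.beans.factory.config.Scope;

public class ThreadLocalUpdateScope implements Scope {

    @Override
    public Object get(String name, ObjectFactory<?> objectFactory) {
        // hand out the callback bound to the current thread, if any
        Object callback = UpdateCallbackHolder.get();
        return callback != null ? callback : objectFactory.getObject();
    }

    @Override
    public Object remove(String name) {
        Object callback = UpdateCallbackHolder.get();
        UpdateCallbackHolder.clear();
        return callback;
    }

    @Override
    public void registerDestructionCallback(String name, Runnable callback) {
        // not needed for this sketch
    }

    @Override
    public Object resolveContextualObject(String key) {
        return null;
    }

    @Override
    public String getConversationId() {
        return Thread.currentThread().getName();
    }
}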


5) Once the operation is done (the aspect ends, the servlet filter ends, or
whatever), you invoke a commit operation on the object:

// in your aspect, filter, interceptor or whatever
updateCallback.commit();
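
Tying steps 1, 3 and 5 together in a servlet filter could look something like
this (still just a sketch; startUpdate(), CommitableUpdateCallback and the
UpdateCallbackHolder from step 3 are all hypothetical at this point):

// a hypothetical filter that starts the update before the request is handled
// and commits it afterwards
import java.io.IOException;
import javax.servlet.*;

public class MetaModelTransactionFilter implements Filter {

    private final UpdateableDataContext dataContext;

    public MetaModelTransactionFilter(UpdateableDataContext dataContext) {
        this.dataContext = dataContext;
    }

    @Override
    public void doFilter(ServletRequest request, ServletResponse response,
            FilterChain chain) throws IOException, ServletException {
        // step 1: start an update without committing it yet
        CommitableUpdateCallback updateCallback = dataContext.startUpdate();
        // step 3: bind it to the current thread so the DAOs can pick it up
        UpdateCallbackHolder.set(updateCallback);
        try {
            chain.doFilter(request, response);
            // step 5: commit once the whole operation has finished
            updateCallback.commit();
        } finally {
            UpdateCallbackHolder.clear();
        }
    }

    @Override
    public void init(FilterConfig filterConfig) {
    }

    @Override
    public void destroy() {
    }
}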


I hope that approach makes sense. It should provide some background on how
it can be achieved if we as a community decide this is a direction we want
to go.

Best regards,
Kasper
